
C231 Introduction to AI

To introduce what this part of the course is going to cover, let’s pause and look again at the meaning of the term “Artificial Intelligence”. Intelligence can be defined in many ways, and if you look around at various definitions you will see that it surely includes the ability to understand, learn, acquire and use knowledge to solve given tasks, to think abstractly and reason about problems in order to come up with the best solutions and plans, to communicate, and to perceive the external environment. Artificial intelligence, on the other hand, is the theory and development of computer software (or systems) able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. In essence, it is the study and development of intelligent software agents: machines that act intelligently and manifest some of the capabilities that the human mind has (e.g., understand language, recognize pictures, solve problems, and learn). The wide field of Artificial Intelligence encompasses research areas and efforts in each of these different aspects of intelligence, and for each of these areas the focus is on understanding the principles that underpin such capabilities, as well as on developing computational mechanisms that enable the realisation of efficient and effective computational systems that perform tasks commonly viewed as requiring intelligence.
In this part of the course we will concentrate on some of the fields of Artificial Intelligence listed above, specifically on knowledge representation, reasoning, planning and problem solving. Fields such as learning, robotics, and neural networks will be covered in other courses in the third and fourth years of your degree. Some of the aspects that we cover in this course will provide you with the foundations for courses taught in the third and fourth years that are related to AI. For instance, knowledge representation and reasoning, and planning, will be useful in the logic-based learning and systems verification courses taught in the third year, as well as in the knowledge representation course taught in the fourth year.


Part of the material included in this second part of the course is also based on the textbook “Artificial Intelligence: Foundations of Computational Agents”.

To give you a brief overview of the history of AI, the roots of AI go back a long time (around 400 years) to when philosophers such as Hobbes started to study the notion of symbolic reasoning, that is, symbolic operations that capture the process of human thinking. In the early twentieth century, the advent of the Turing machine contributed further to the notion of symbolic operations as concrete computational processes. Different models of computation were developed, among which the Turing machine, an abstract machine that writes symbols on an infinitely long tape, and the lambda calculus of Church, a mathematical formalism for rewriting formulas. Once real computers were built (around 1950), some of the first applications of computers were AI programs, such as a checkers program (i.e. a program that learns to play checkers, or draughts) and the Logic Theorist, a program that discovers proofs in propositional logic. Together with high-level symbolic reasoning there was also some emphasis on low-level learning, which was initially inspired by the perceptron formal neuron model developed by Rosenblatt in 1958.
These early programs focused mainly on learning and search problems. It then became apparent that, to provide intelligent solutions to problems, it was essential to tackle the main problem of how to represent the knowledge needed to solve a given problem. In the 80s, AI research saw the proliferation of many proposals, from simple to more complex knowledge representation languages based on logic (see McCarthy and Hayes), which then naturally led to the first systems able to understand simple, controlled natural language interactions. An early example was Chat-80, a question-answering system designed to answer geographical questions. Expert systems then followed, where the aim was to capture a large body of knowledge of an expert in some domain so that a computer could carry out expert tasks. At the same time, further work was done in AI in the area of automated reasoning and languages, and systems such as Prolog became established in the AI community. From 1990 onwards, specific sub-disciplines of AI such as perception, probabilistic and decision-theoretic reasoning, planning, machine learning, and many other fields started to grow.
Reasoning and learning have since then been at the centre of AI research. You will see more about learning in subsequent courses in your third and fourth years. In the rest of this course we concentrate on the reasoning, knowledge representation and problem-solving aspects of AI.

We mentioned at the beginning that AI is, in essence, the study and development of intelligent computational agents. What we aim to cover in this course is the study of the principles that underlie intelligent computational agents. First of all, we should define what an intelligent computational agent is. Looking at the definition given in the textbook “Artificial Intelligence: Foundations of Computational Agents”, an intelligent agent is an agent that acts in an environment. At any time, it decides what actions to perform based on observations of the current environment, prior knowledge about the environment and the task that it has to accomplish, goals that it must try to achieve (possibly with preferences over states of the world), and of course its own abilities, that is, the primitive actions that it is capable of carrying out. In an intelligent agent, decisions may also be informed by past experience, that is, knowledge about previous actions and observations of the effects of these actions in given states of the environment. Past experience can be used by the agent to learn new knowledge that improves the execution of its task. Prior knowledge is, in general, knowledge that cannot be learned through experience.
We can have different types of agents. An agent could, for example, incorporate a computational engine with physical sensors and actuators and act in an environment which is a physical setting. This is the case of a robot agent. It could be an advice-giving computer (a digital assistant), where the environment the agent has to interact with is the human: the human provides the perceptual information and carries out the task. Or we could have a software agent, that is, an agent that acts in a purely computational environment (e.g., a smart meter controller).

Consider for instance the case of a robot agent that has to perform the task of carrying a ball from one place to another in a room environment (its goal). Its observations are the success and failure of its moving actions. Its past experiences are logs of previous actions performed in given states of the world, together with the success and failure of those actions. Its prior knowledge is the effects of each of its actions, and its abilities are the actions of picking up an object (“pickup”), moving (“move”) and putting down an object (“putdown”).
The agent in this case is intelligent because it is able to reason about the environment and construct plans, but also to learn better models of its environment based on observations and past experience, and to use this knowledge to decide what is the best action to perform at each time point, given the state in which the environment and the robot are. Past experience and observations, as the environment changes, provide new evidence to the robot, which it then uses to learn new knowledge about the environment. A rough sketch of how such prior knowledge might be encoded is given below.
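As a purely illustrative, hypothetical sketch (not the exact encoding shown on the slide), the robot’s abilities, initial state and action effects could be written in ASP-style syntax roughly as follows; all predicate names (at/3, do/2, holding/1, and so on) are invented for this example.

% A minimal, hypothetical sketch of the robot's prior knowledge.
time(0..3).
position(1..3).

% Abilities: the primitive actions the robot can perform.
action(pickup; move; putdown).

% Initial state: robot and ball both start at position 1.
at(robot,1,0).
at(ball,1,0).

% A fixed sequence of actions, just to make the sketch executable.
do(pickup,0).  do(move,1).  do(move,2).  do(putdown,3).

% Effect of pickup: the robot holds the ball at the next time point.
holding(T+1) :- do(pickup,T), at(robot,P,T), at(ball,P,T), time(T).

% Holding persists until the ball is put down (frame axiom via negation as failure).
holding(T+1) :- holding(T), not do(putdown,T), time(T).

% Effect of move: the robot advances one position; otherwise it stays put.
at(robot,P+1,T+1) :- do(move,T), at(robot,P,T), position(P+1), time(T).
at(robot,P,T+1)   :- at(robot,P,T), not do(move,T), time(T).

% The ball travels with the robot while held, otherwise it stays where it is.
at(ball,P,T+1) :- holding(T+1), at(robot,P,T+1), time(T).
at(ball,P,T+1) :- at(ball,P,T), not holding(T+1), time(T).

% Goal: the ball ends up at position 3.
goal :- at(ball,3,4).
#show goal/0.  #show at/3.

Fed to an ASP solver such as clingo, this sketch derives goal, confirming that the fixed action sequence achieves the task; in a real agent the do/2 facts would be chosen by a planner rather than fixed in advance.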

This is just an example of how prior (or domain) knowledge can be represented in the case of the robot agent moving in an environment. You don’t need to know this representation or understand the full details here. The important message is that the robot collects logs of executions over time, some of which lead to successful completion of the task and some of which do not (as, for example, when moving to position 3 in the environment causes the object to break). The knowledge learned captures which action is successful under which conditions, together with the likelihood of each of these learned rules.
The probabilities are learned so as to explain as much as possible of the collected observations.

The learned knowledge may represent a new model of the environment and of the actions to perform in it.

This is an example of how knowledge representation and AI solutions, such as symbolic machine learning, can be used to allow a robot agent to sustain human-machine dialogue and to learn common-sense or domain-specific knowledge through dialogue.
This work was done as a student project by an MSc student in the summer of 2016.

This slide describes the general framework for solving problems by computer. To solve a problem, we must first define the task and determine what constitutes a solution. This is normally done informally (see the top part of the diagram above). We then need to represent the problem in a language with which a computer can reason, and use the computer to compute an output. This can be an answer, which can be presented to a user (as is the case for a digital assistant), or a sequence of actions to be carried out in the environment (as is the case for a robot agent). The output then has to be interpreted as a solution to the problem.
Knowledge is the information needed to solve a problem in a given domain. So, as part of the design (or modelling) of a computational agent capable of solving a problem, we must define how the knowledge will be represented in the agent. There are different forms of representation of knowledge; these are referred to as knowledge schemas. You have, for instance, seen in the first part of the course the logic programming formalism. This is an example of a representation schema, whereby knowledge (or information) is stored in an agent as a set of clauses or Prolog rules. Many different forms of representation of knowledge can be used, e.g. symbolic, probabilistic, statistical, etc. Each of these forms may be more suited to certain tasks than to others. For instance, in domains of “certainty”, a symbolic, logic-based form of representation of knowledge is appropriate. The case of approximate domains and uncertainty is different: there, probabilistic forms of representation of knowledge might be more suitable. Once we fix a representation schema, the formalisation of a piece of knowledge in that schema is called a representation of that knowledge. Different schemas give rise to different representations of the knowledge related to a given problem. The representation of the entire knowledge that is stored in an agent is instead referred to as the knowledge base of the agent.

A representation should be:
• rich enough to express the knowledge needed to solve the problem. In the first part of this course, you have seen logic programming as a symbolic approach for representing knowledge. This is a reasonably expressive language where, for instance, common-sense notions (typically used in digital assistants) and default assumptions, used in all types of agents, can be expressed by means of negation as failure. In this second part of the course, we will look at a much more expressive language, called Answer Set Programming (ASP), that provides a truly declarative approach for representing knowledge, hard constraints, weak constraints, and optimisation statements (a small sketch of some of these constructs is given after this list).
• close to the problem. Declarative formalisms such as logic programming, ASP and logic in general are as close to a problem description as possible. ASP in particular is known to be more declarative than logic programming, as it does not mix, in its representation style, procedural aspects for solving a problem with declarative aspects for defining the problem. It is purely descriptive and can be used to describe a problem solution in terms of its properties and specification. Because of this, ASP representations are also easier to relate to a given problem domain than Prolog representations. They are also very compact and natural. We will see later on why this is the case.
• amenable to efficient computation, which usually means that it is able to express features of the problem that can be exploited for computational gain, and able to trade off accuracy and computation time. Depending on the type of formalism, accuracy of solutions may or may not be something that can be traded. For instance, in logic programming the representation is underpinned by a precise semantics, and solutions, when they exist, are always guaranteed to be 100% precise. In the case of ASP, instead, optimisation statements can be used to generate optimal and suboptimal solutions, so computation time can be traded off against accuracy: fast computations may look for sub-optimal solutions instead of “the” optimal solutions.
• able to be acquired from people, data and past experiences. This feature refers to the ability of an agent to learn knowledge automatically. In the logic-based learning course, which is taught in the third year, you will be able to see how you can automatically learn knowledge represented as ASP programs from given data and past experiences.
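To make the constructs mentioned above concrete, here is a small, hypothetical ASP sketch; it is only meant to show what a choice rule, a default via negation as failure, a hard constraint, a weak constraint and an optimisation statement look like, and the predicates room/1, visit/1 and skipped/1 are invented for this purpose.

% Candidate rooms a robot could visit (illustrative facts).
room(kitchen). room(office). room(lab).

% Choice rule: freely choose any subset of rooms to visit.
{ visit(R) } :- room(R).

% Default via negation as failure: a room is skipped unless it is visited.
skipped(R) :- room(R), not visit(R).

% Hard constraint: every answer set must include a visit to the office.
:- not visit(office).

% Weak constraint: visiting the lab is allowed but penalised.
:~ visit(lab). [1@1]

% Optimisation statement: among the remaining answer sets, visit as few rooms as possible.
#minimize { 1,R : visit(R) }.

#show visit/1.  #show skipped/1.

Running this with an ASP solver such as clingo, the optimal answer set visits only the office; dropping the hard constraint or the optimisation statement changes which answer sets are acceptable.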

Before representing the knowledge that is needed to solve a problem in a given domain, it is important to define what would constitute a solution to the given problem. Typically, there are four common classes of solutions:
Optimal solutions: these are the best solutions according to some criterion for measuring the quality of solutions. Such criteria can be qualitative (also called ordinal) measures (e.g. a robot that takes out “as much trash” as possible: the more trash, the better the solution), or quantitative (also called cardinal) measures (e.g. a delivery robot that has to deliver as many packages as possible while minimising the distance travelled). A small sketch contrasting optimal and satisficing solutions is given after this list.
Satisficing solutions: sometimes it is not necessary to find the best, optimal solution but just some solution. A satisficing solution is one that is good enough according to some notion of what an adequate solution is. For example, a person may tell a robot that it must take all of the trash out, or tell it to take out three items of trash. Or, in the case of natural language interaction with a digital personal assistant, a satisficing solution is a system that is able to answer most of the questions adequately.
Approximately optimal solution: these are solutions that are close enough to optimal solutions. Agents may not need optimal solutions to problems; they only need to get close enough. For example, a delivery robot may not need to travel the optimal distance to take out the trash but may only need to be within, say, 10% of the optimal distance.
Probable solution: A probable solution is one that, even though it may not actually be a solution to the problem, is likely to be a solution. This is one way to approximate, in a precise manner, a satisficing solution. For example, in the case where the delivery robot could drop the trash or fail to pick it up when it attempts to, we may need the robot to be 80% sure that it has picked up some items of trash.
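To illustrate the difference between satisficing and optimal solutions mentioned above, here is a small, hypothetical ASP sketch of a delivery robot; package/1, distance/2 and deliver/1 are invented predicates and the distances are arbitrary.

% Packages the robot could deliver, with the (made-up) distance each delivery adds.
package(p1). package(p2). package(p3).
distance(p1,4). distance(p2,7). distance(p3,2).

% Satisficing: any answer set that delivers at least two packages is "good enough".
2 { deliver(P) : package(P) }.

% Optimal: among the satisficing answer sets, minimise the total distance.
#minimize { D,P : deliver(P), distance(P,D) }.

#show deliver/1.

Without the #minimize statement, any model with at least two deliveries is returned (a satisficing solution); with it, the solver searches for the cheapest one, here deliver(p1) and deliver(p3) with total distance 6. Most ASP solvers can also report the suboptimal models found along the way, which is one practical way to obtain approximately optimal solutions.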

Once we have an idea of the nature of a solution, we can choose the level of representation of the problem so that a machine can solve the problem and compute the type of required solutions. The representation of the problem constitutes essentially a model of the world in which an intelligent agent has to operate to solve its task.
So, we can define a model to be a representation of the specifics of what is true and false in the world.
The world does not have to be modeled at the most detailed level to be useful. All models are abstractions; they represent only part of the world and leave out many of the details. An agent can have a very simplistic model of the world, or it can have a very detailed model of the world. A lower-level abstraction includes more details than a higher-level abstraction. An agent can have multiple, even contradictory, models of the world (see argumentation models in the fourth-year course).
In general, there isn’t a best level of abstraction, and the best way to judge whether a model is a good representation is whether it is useful for solving a given task.

Choosing an appropriate level of abstraction is difficult:
• a high-level description, such as the natural language example you have seen in the short video, used as a form of explanation that a system can give to a human, makes it easier for a human to understand the system’s solution and also easier for a human to describe the problem and give feedback. But it is harder for a machine to fully comprehend, due to the high level of ambiguity and also the common-sense knowledge that is often implicitly assumed by humans when they communicate in NL. Despite the tremendous progress witnessed so far in NLP, we are still far from representing a problem in NL directly, as machines are still not able to rigorously reason directly with NL.
• a low-level description can be more accurate and more predictive. Often high-level descriptions abstract away details that may be important for actually solving the problem. So certain types of problems can be solved only if detailed models are given, but
• the lower the level, the more difficult it is to reason with. This is because a solution at a lower level of detail involves more steps and many more possible courses of action exist from which to choose.
• you may not know the information needed for a low-level description, as it is too detailed and the environment changes at the micro level more than at the abstract level. Consider for instance the GPS location of a phone. In a low-level representation of the problem, GPS data change very frequently, whereas at a higher level of representation the state property of a human walking persists. So reasoning about human behaviors at a low level of representation is clearly harder than reasoning using the more abstract representation of similar concepts (e.g. a person walking).
The example given in this slide shows different levels of abstraction in the context of learning (privacy) policies about human behaviors with mobile phones. On the left, data are collected by sensors. The low-level representation is quite detailed: imagine, for instance, GPS coordinates versus GPS names. All these data can be used by an intelligent agent to learn user behaviors from contextual low-level data and the user’s past actions. The outcome is a high-level representation of policies that the system can communicate back to the user for validation purposes. These policies are then used by the system to self-adapt (at run-time). Models in these cases can have a mixed level of representation. It is often a good idea to model a given problem at multiple levels of abstraction. A rough sketch of this low-level to high-level mapping is given below.
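As a purely hypothetical sketch of how a low-level data stream could be abstracted into a higher-level property, the following ASP-style rules map made-up gps/3 readings to a walking/1 fluent; the predicates, units and thresholds are all invented for illustration.

% Low-level readings: gps(Time, X, Y), one sample per second, coordinates in metres.
gps(0,0,0).  gps(1,1,0).  gps(2,2,1).  gps(3,3,1).

% Derived speed between consecutive samples (Manhattan distance, in m/s).
speed(T, |X2-X1| + |Y2-Y1|) :- gps(T,X1,Y1), gps(T+1,X2,Y2).

% High-level abstraction: the person is walking at time T if the speed is
% between 1 and 2 m/s (thresholds purely illustrative).
walking(T) :- speed(T,S), S >= 1, S <= 2.

#show walking/1.

Reasoning, or learning policies, over walking/1 is then much easier than over the raw gps/3 stream, which is exactly the point about levels of abstraction made above.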

The term “reasoning” is used to denote the process carried out by a computational agent when searching for possible ways to determine how to complete its task. Depending on the task, these possible ways can be actions, as in the case of a delivery robot that has to choose the next physical action needed to complete its task, or computed (diagnostic) answers, as in the case of a diagnostic assistant that has to compute possible diagnoses for some given symptoms.
The reasoning process, in general, can be performed by a computational agent either “offline” or “online”.
Offline reasoning assumes that the background knowledge needed to solve the task is precomputed. The agent receives, at design time, this knowledge and (environment) data, and performs reasoning to solve its task. This is, for instance, the case of computational agents such as manufacturing robots. The knowledge is all precompiled so that the agent can perform simple offline reasoning and perform its task.
