
MACHINE INTELLIGENCE
LAB SESSIONS 1 & 2 Artificial Neural Networks & Fuzzy Logic
Professor A. Dehghani
a.dehghani@leeds.ac.uk

Contents
Lab session 1: Artificial Neural Networks
Task sheet
Lab session 2: Fuzzy Logic
Task sheet
Appendix
Online manuals:
Matlab Neural Network Toolbox
Matlab Fuzzy Logic Toolbox

School of Mechanical Engineering
Machine Intelligence and Control
LABORATORY SESSION 1: NEURAL NETWORK TRAINING
1. Introduction
MATLAB provides a very powerful toolbox for Neural Networks. A number of demos are available which demonstrate various aspects of neural networks. Programs can be developed using the relevant functions and saved in an M-file. Alternatively, the GUI (Graphical User Interface) can be used to design, train and test different types of neural networks.
2. Aim
The aim of this laboratory session is to explore neural network toolbox facilities for designing different neural networks. This will be achieved by:
i) Examining a series of demos;
ii) Designing, training and testing some neural networks within the scope of
the module.
3. Tasks
In this lab session the tasks are arranged to cover items i) and ii) shown above.
Running the Matlab software:
1. Type Matlab in the Start menu;
2. Select Matlab R2018a. Matlab will take a few seconds to start up. You will now see the Matlab prompt (>>).
3.1 Demos
In this part of the lab session some of the relevant demos in the neural networks toolbox of Matlab will be explored. Follow the instructions below:
• At the Matlab prompt (>>), type: nnd [enter]
• For some of the tasks (e.g. Task 4 and the tasks that follow):
  o In the search box in the top right-hand corner, type: Classification with a 2-input perceptron [enter]
a) Neurons
– Simple neurons and transfer functions
At the bottom of the app window, select Table of Contents.
Open One-input neuron from the Neuron Model & Network Architectures drop-down. After reading the on-screen instructions, do the following:
• Set w to 1.
• Set b to -2 and 2 and see what happens.
• Using the F drop-down menu, select different activation functions (see the sketch below).
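These steps simply evaluate the single-neuron equation a = f(wp + b). If you want to reproduce the same behaviour at the command line, a minimal sketch (using the values suggested above) is:

% One-input neuron: a = f(w*p + b)
p = -3:0.1:3;          % a range of input values
w = 1;                 % weight
b = -2;                % try b = -2 and b = 2
a = hardlim(w*p + b);  % swap hardlim for logsig, purelin, etc.
plot(p,a), xlabel('p'), ylabel('a')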

Close the window.
– Neuron with vector input
This time, select Two-input neuron from the drop-down menu and:
• Set w(1,1) to 0.5;
• Set w(1,2) to 0.5;
• Set b to -1;
• Using the F drop-down menu, select the 'Hardlim' activation function.
TASK 1: Fill in Task 1 table in the Task Sheet.
Close the window.
b) Perceptrons
– Decision boundaries
Select Decision boundaries from the Perceptron Learning Rule drop-down menu.
• Follow the instructions on the screen.
TASK 2: Consider white circles as '1/ON' and black circles as '0/OFF'. Identify the pattern. Graphically find two solutions and fill in the second part of the Task Sheet.
Close the window.
– Perceptron learning rule
Now click on Perceptron learning rule from the Perceptron Learning Rule drop-down menu.
• Select the Bias check box.
• Follow the instructions on the screen.
TASK 3: Graphically design an OR logic gate and train the perceptron to separate the pattern (the decision boundary should turn black once trained; the system may have to be trained more than once). Fill in the table in the Task Sheet.
Close the App.
– Classification with a 2-input perceptron
In the search box at the top right-hand corner, type 'Classification with a 2-input perceptron'. The objective of this demo is to become familiar with some of the parameters/functions/arguments used in the Matlab environment for neural network programs.
• Run the demo by clicking on 'Open Live Script' (top right-hand corner) and running the script.
TASK 4: Go through the program step by step and fill in the table in the Task Sheet.
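For reference, such a live script follows the usual perceptron workflow. The sketch below is not the demo itself (its data values are made up for illustration); it only shows the kind of functions you should be identifying:

% Each column of X is one 2-element input vector; T holds the class (0 or 1).
X = [-0.5 -0.5  0.3 -0.1;
     -0.5  0.5 -0.5  1.0];
T = [1 1 0 0];
plotpv(X,T);                 % plot the input vectors and their classes
net = perceptron;            % create an untrained perceptron
net = train(net,X,T);        % train with the perceptron learning rule
Y = net(X);                  % simulate the trained network
plotpc(net.IW{1},net.b{1});  % overlay the learned decision boundary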

– Linearly non-separable vectors
Now type 'Linearly non-separable vectors' in the search box and open the example.
Run the script.
c) Linear networks
– Pattern association showing error surface
Unfortunately, the demo related to this task, 'demolin1', has been removed from the current version of Matlab. You will therefore have to type the code into a Live Script manually.
• Click New in the top left-hand corner and create a Live Script.
• Type in only the code from the example 'demolin1' given in the appendix (without the comments).
• Run the script.
– Training a linear neuron
This demo has also been removed from Matlab.
• Click New in the top left-hand corner and create a Live Script.
• Type in only the code from the example 'demolin2' given in the appendix (without the comments).
• Run the script.
TASK 5: Go through the program ‘demolin2’ step by step and briefly describe what happens in terms of training the network.
d) Backpropagation
– Generalization
In this demo backpropagation training is used for function approximation (using the Neural Network Design app).
• Open the NND app by typing "nnd" in the command window and pressing Enter.
• Go to Chapters 10-13 in the Table of Contents.
• Select Generalization from the Backpropagation drop-down menu.
• Read the instructions and, by choosing different numbers of hidden neurons and difficulty indices, complete Task 6 (a command-line sketch of a comparable experiment is given below).
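The NND app trains the network for you. If you also want to repeat a similar experiment from the command line, a rough sketch (the data and target function below are assumptions, not the app's own) is:

% Function approximation with a small backpropagation network (assumed data)
x = linspace(-1,1,41);
t = sin(3*pi*x);             % stand-in for the app's target function
net = fitnet(5);             % 5 hidden neurons; vary this as in Task 6
net = train(net,x,t);
y = net(x);
plot(x,t,'o',x,y,'-')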
TASK 6: Complete the table using the NND app.
e) Radial Basis Network
– Radial Basis Approximation
• Open this from Help by searching for 'Radial Basis Approximation' (a command-line sketch is also given below).
• Run the demo by clicking on 'Open Live Script' and running the script.
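A minimal command-line sketch of radial basis approximation (the data and parameter values here are assumed, not taken from the demo) is:

% Radial basis approximation of a noisy 1-D function (assumed data)
X = -1:0.1:1;
T = sin(2*pi*X) + 0.1*randn(size(X));
net = newrb(X,T,0.02,1);     % error goal 0.02, spread 1
Y = net(X);
plot(X,T,'o',X,Y,'-'), legend('targets','network output')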

TASK 7: Go through the program step by step and fill in the table in the Task Sheet.
f) Self-organizing Networks
– Competitive learning
– One-dimensional self-organizing map
– Two-dimensional self-organizing map
• Try all the above demos from the Help window (a command-line sketch is given below).
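The same kinds of network can also be created from the command line; the sketch below (random data and an assumed map size) may help when filling in Task 8:

% One-dimensional self-organizing map over random 2-D data (assumed sizes)
x = rand(2,400);              % 400 random 2-element input vectors
net = selforgmap([10 1]);     % 10 neurons arranged along one dimension
net = train(net,x);
plotsompos(net,x);            % neuron positions drawn over the data
y = net(x(:,1))               % the winning neuron outputs 1, the rest 0
% competlayer(10) builds a purely competitive layer in a similar way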
TASK 8: Answer the questions relevant to these demos
3.2 Designing, training and testing neural networks (Perceptrons)
Designing and implementing a successful neural network depends on the problem we are dealing with. The number of inputs and outputs is usually known from the specific problem and system requirements. However, the number of layers and the number of neurons in each layer are usually found by a trial-and-error approach.
Writing your own program
In this part of the lab, you will write your own lines of code to design and train various neural networks. To start with, use Matlab Help to search for "Perceptron" and look at the example it gives for an OR gate.
Note – What are 'x' and 't'? How are the 'x' and 't' arrays laid out? How does this compare to a conventional truth table?
This example does not display the bias and weight values. Search Help to find out how to do this.
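As a starting point, a minimal sketch along the lines of the Help example (with the display of the weights and bias added at the end) looks like the following; note how the columns of x and t correspond to the rows of a conventional truth table:

% OR gate: each column of x is one input combination, t the matching target
x = [0 0 1 1;
     0 1 0 1];
t = [0 1 1 1];
net = perceptron;        % single-neuron perceptron
net = train(net,x,t);    % train until the pattern is classified
y = net(x)               % should reproduce t once trained
net.IW{1,1}              % trained weights
net.b{1}                 % trained bias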
TASK 9: Write your own code (in a script or on the command window) to design and train:
• a logic OR gate
• a logic AND gate
• a logic NOT gate
• the system shown in Q3, Problem sheet 1
• a logic XOR gate
Fill out the table in the Task sheet.
4. Lab report
There are two lab sessions, but only a single report should be submitted covering all the sessions.
This report should include the following three headings, plus all the task sheets:

• Introduction
The introduction should provide a brief background and should include the aims of the lab sessions.
• Tasks
This section should include a brief description of the tasks covered in each session. Task sheets could be referred to in this section.
• Conclusions
Some conclusions should be provided in this section referring to the activities of each session.
(Allow only a maximum of two pages for the above sections)
• Task sheets
The lab session activities and the lab report are worth 10% of the overall module mark.
From problem sheet 1, Artificial Neural Networks Q3
A neural network is to be used in a fault detection system for a rotary machine. Two transducers measure the vibration and speed of the machine; the voltages from these transducers are read into a computer and form the inputs to the neural network (u1 and u2 respectively). Certain combinations of input values are known to indicate faults, while other combinations have been observed when the machine is operating normally:
u1      u2      Faults
 0       0.5    no
 0.5    -0.5    no
 0.5    -1.0    yes
-1.5    -0.3    yes
-0.7     0.2    no
Determine whether the patterns are linearly separable by plotting the points in a pattern space. If they are linearly separable, design a single layer perceptron to classify the patterns.
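One way to check separability is simply to plot the five points with different markers for the two classes; a sketch using the values from the table above:

% Plot the (u1,u2) points in pattern space, marking faults differently
u1    = [ 0    0.5  0.5 -1.5 -0.7];
u2    = [ 0.5 -0.5 -1.0 -0.3  0.2];
fault = [ 0    0    1    1    0  ];    % 1 = fault, 0 = normal
plot(u1(fault==0),u2(fault==0),'ko', ...
     u1(fault==1),u2(fault==1),'kx')
xlabel('u1'), ylabel('u2'), legend('no fault','fault')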

School of Mechanical Engineering
Machine Intelligence and Control
LABORATORY 1: NEURAL NETWORK TRAINING TASK SHEET
Student Name:
Student Number:

TASK 1
Using the given pattern fill in the outputs of the single neuron.

Input 1    Input 2    Output
   0          0
   0          1
   1          0
   1          1

This is an example of a ……………. logic gate.

TASK 2
This pattern is an example of a ……………. logic gate. Give the weights and bias values for the two possible solutions in this table:

              W(1,1)    W(1,2)    b
Solution 1
Solution 2

TASK 3
Give the weights and bias values for your solution.

              W(1,1)    W(1,2)    b

TASK 4
Identify all parameters, functions and arguments used in the program which are relevant to the neural network.

Parameter or function          Description

TASK 5
Briefly explain the training process in this neural network.
TASK 6
Set the Number of Hidden Neurons and Difficulty Index as shown below, train the network and fill in the table.
For cases 4 and 5: what is the minimum number of hidden neurons needed to solve the problem?
Make some comments on designing a neural network.
Case    Number of hidden neurons    Difficulty Index    Comment on the output pattern
 1               1                         1
 2               1                         5
 3               1                         9
 4                                         5
 5                                         9
TASK 7
Identify all parameters, functions and arguments used in the program which are relevant to neural network.
Parameter or function          Description

TASK 8
Fill in the table below:

                                        No. of neurons    Arrangement    Output to vector input p
Competitive learning
One-dimensional self-organizing map
Two-dimensional self-organizing map

Briefly explain the one-dimensional self-organizing map demo:
TASK 9
Fill in the table below:

                          Weight/s    Bias
Logic AND gate
Logic OR gate
Logic NOT gate
Q3, problem sheet 1

Describe your neural network design to solve the XOR problem and give all the parameters.

School of Mechanical Engineering
Machine Intelligence and Control
LABORATORY 2: FUZZY LOGIC CONTROL
1. Introduction
MATLAB provides a toolbox for Fuzzy Logic systems development. As with the Neural Network toolbox, a number of demos are available. Although the Matlab command line can be used to develop a fuzzy logic system, it is much easier and quicker to use the GUI (Graphical User Interface), which is available for designing different types of fuzzy logic systems. In this lab session we start by going through a general example provided by Matlab to become familiar with the GUI, and then a number of fuzzy control systems will be designed and examined.
2. Aim
The aim of this laboratory session is to explore fuzzy logic toolbox facilities for designing different fuzzy logic control systems. This will be achieved by:
i) Developing a typical simple fuzzy logic system;
ii) Designing and developing a number of fuzzy logic control systems which have been covered during the lectures;
iii) Exploring some of the demo fuzzy control systems provided in Matlab.
3. Tasks
In this lab session the tasks are arranged to cover items i), ii) and iii) given above. Open Matlab.
3.1 An example
A general-purpose example is provided in Matlab Help that serves to introduce the GUI toolbox.
• Open Help.
• Search for Build Mamdani Systems Using Fuzzy Logic Designer and open it.
• Read through this page, which describes how fuzzy logic is used.
3.2 Designing and developing fuzzy logic control systems
In this part of the lab session, the fuzzy logic controllers discussed in the fuzzy logic control lectures will be examined and in some cases redesigned to make them more accurate. Ensure you record your completed systems (output plots, rules, etc.) as screenshots.

a) DC motor fuzzy logic controller
Design a simple one-input one-output fuzzy logic control system to maintain the speed of a dc motor at 2420 rpm. Follow the example given in the lecture notes.
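The design itself is done in the Fuzzy Logic Designer GUI. For orientation only, the sketch below shows how a comparable one-input one-output Mamdani system could be built from the command line. The set names, ranges and rules are assumptions for illustration, not the lecture design, and the functions used (mamfis, addInput, addMF, addOutput, addRule, evalfis) belong to newer toolbox releases (R2018b onwards); older releases use newfis, addvar, addmf and addrule instead.

% One-input, one-output Mamdani speed controller (all values assumed)
fis = mamfis('Name','dcmotor');
fis = addInput(fis,[2300 2500],'Name','speed');
fis = addMF(fis,'speed','trimf',[2300 2300 2420],'Name','low');
fis = addMF(fis,'speed','trimf',[2340 2420 2500],'Name','ok');
fis = addMF(fis,'speed','trimf',[2420 2500 2500],'Name','high');
fis = addOutput(fis,[0 10],'Name','voltage');
fis = addMF(fis,'voltage','trimf',[0 0 5],'Name','low');
fis = addMF(fis,'voltage','trimf',[2.5 5 7.5],'Name','medium');
fis = addMF(fis,'voltage','trimf',[5 10 10],'Name','high');
fis = addRule(fis,["speed==low => voltage=high (1)"; ...
                   "speed==ok => voltage=medium (1)"; ...
                   "speed==high => voltage=low (1)"]);
evalfis(fis,2410)     % control signal for an input speed of 2410 rpm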
TASK 1: Fill in the Task 1 Table in the Task Sheet.
b) Air conditioning fuzzy logic controller
Design a simple two-input one-output fuzzy logic control system for controlling the air conditioning of a room (just consider cooling control action). Follow the example given in the lecture notes. The following information is also available:
Temperature range (degrees C): 5 to 25 (low), 15 to 35 (high)
Humidity range (Relative Humidity %): 25 to 65 (low), 45 to 85 (high)
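Again, use the GUI for the actual design. Purely as a sketch of how the ranges quoted above might translate into input variables and membership functions (the overall ranges, set shapes and variable names are one possible reading, not the lecture design), the command-line equivalent would start along these lines:

% Two inputs with the ranges quoted above (one possible reading; names assumed)
fis = mamfis('Name','aircon');
fis = addInput(fis,[5 35],'Name','temperature');
fis = addMF(fis,'temperature','trimf',[5 5 25],  'Name','low');
fis = addMF(fis,'temperature','trimf',[15 35 35],'Name','high');
fis = addInput(fis,[25 85],'Name','humidity');
fis = addMF(fis,'humidity','trimf',[25 25 65],'Name','low');
fis = addMF(fis,'humidity','trimf',[45 85 85],'Name','high');
% ...output and rules are then added as in the dc-motor sketch above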
TASK 2: Fill in the Task 2 Table in the Task Sheet.
c) A fuzzy logic controller for washing machines
Design an accurate two-input one-output fuzzy logic control system for controlling the length of wash in washing machines. The purpose of accurate design is to ensure high efficiency in the system in terms of cleanliness of washed clothes and minimum possible power consumption. Follow the example given in the lecture notes as the starting point.
Also use five rules, and triangular and sigmoid membership functions for both of the inputs, with the following arrangements (the outputs should remain as singletons in all cases):
i) Three fuzzy sets using linear membership functions;
ii) Five fuzzy sets using linear membership functions;
iii) Five fuzzy sets using sigmoid membership functions.
Please show all the graphical representations, including the control surface, and conclude which of the above gives you the most accurate outcome.
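For arrangements ii) and iii) the only change is the number and type of membership functions. The fragment below (the input name 'dirtiness', its range and the breakpoints are assumed for illustration) shows five triangular sets; for arrangement iii) the same sets would be defined with 'sigmf' and two-parameter [slope centre] values instead:

% One input described with five triangular sets (arrangement ii)
fis = mamfis('Name','wash');
fis = addInput(fis,[0 100],'Name','dirtiness');
fis = addMF(fis,'dirtiness','trimf',[0 0 25],    'Name','very_low');
fis = addMF(fis,'dirtiness','trimf',[0 25 50],   'Name','low');
fis = addMF(fis,'dirtiness','trimf',[25 50 75],  'Name','medium');
fis = addMF(fis,'dirtiness','trimf',[50 75 100], 'Name','high');
fis = addMF(fis,'dirtiness','trimf',[75 100 100],'Name','very_high');
plotmf(fis,'input',1)    % compare the plots for the two MF types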
TASK 3: Fill in the Task 3 Table in the Task Sheet.
3.3 Exploring a demo fuzzy logic controller
An example of how a Fuzzy system is implemented can be found in Help.
Open and explore the Water Level Control in a Tank example. Note how the rules are presented and how the output changes depending on the water level.
There are other demos that you can have a look at.

4. Lab report
There are two lab sessions, but only a single report should be submitted covering all the sessions as mentioned in lab sheet one. Part II of your report (fuzzy logic control) will follow the same format.
This part of the report should include the fuzzy logic control task sheet and the print out of the designed control systems.

School of Mechanical Engineering
Machine Intelligence and Control
LABORATORY 2: FUZZY LOGIC CONTROL TASK SHEET
Student Name:
Student Number:
TASK 1
Choose five points (three points have already been put in the table) at the input to the controller and fill in the table below:
Input (speed, rpm)    Output (control signal, volts)
2410
2425
2437
TASK 2
Choose five inputs (three inputs have already been given in the table) to the controller and fill in the table below:
Input 1 (Temperature, degrees C)    Input 2 (Humidity, % RH)    Output (control signal)
15                                  70
20                                  30
30                                  80
TASK 3
Choose five inputs to the controller and fill in the table below:
Please include all the results covering the three possible solutions/results (You could use three Tables).
Input 1 (        )    Input 2 (        )    Output (Length of wash)
Show all the points on your input/output response surface 3D diagrams.

APPENDIX
‘demolin1’
%% Pattern Association Showing Error Surface
% A linear neuron is designed to respond to specific inputs with target outputs.
%
% Copyright 1992-2011 The MathWorks, Inc.
%%
% X defines two 1-element input patterns (column vectors). T defines the
% associated 1-element targets (column vectors).
X = [1.0 -1.2];
T = [0.5 1.0];
%%
% ERRSURF calculates errors for a neuron with a range of possible weight and
% bias values. PLOTES plots this error surface with a contour plot underneath.
% The best weight and bias values are those that result in the lowest point on
% the error surface.
w_range = -1:0.1:1;
b_range = -1:0.1:1;
ES = errsurf(X,T,w_range,b_range,'purelin');
plotes(w_range,b_range,ES);
%%
% The function NEWLIND will design a network that performs with the minimum
% error.
net = newlind(X,T);
%%
% SIM is used to simulate the network for inputs X. We can then calculate the
% neurons errors. SUMSQR adds up the squared errors.
A = net(X)
E = T - A
SSE = sumsqr(E)
%%
% PLOTES replots the error surface. PLOTEP plots the "position" of the network
% using the weight and bias values returned by NEWLIND. As can be seen from
% the plot, NEWLIND found the minimum error solution.
plotes(w_range,b_range,ES);
plotep(net.IW{1,1},net.b{1},SSE);
%%
% We can now test the associator with one of the original inputs, -1.2, and see
% if it returns the target, 1.0.
x = -1.2;
y = net(x)
displayEndOfDemoMessage(mfilename)
‘demolin2’
%% Training a Linear Neuron
% A linear neuron is trained to respond to specific inputs with target outputs.
%
% Copyright 1992-2012 The MathWorks, Inc.
%%
% X defines two 1-element input patterns (column vectors). T defines associated
% 1-element targets (column vectors). A single input linear neuron with a bias
% can be used to solve this problem.
X = [1.0 -1.2];
T = [0.5 1.0];
%%
% ERRSURF calculates errors for a neuron with a range of possible weight and
% bias values. PLOTES plots this error surface with a contour plot underneath.
% The best weight and bias values are those that result in the lowest point on
% the error surface.
w_range = -1:0.2:1;
b_range = -1:0.2:1;
ES = errsurf(X,T,w_range,b_range,'purelin');
plotes(w_range,b_range,ES);
%%
% MAXLINLR finds the fastest stable learning rate for training a linear network.
% For this example, this rate will only be 40% of this maximum. NEWLIN creates a
% linear neuron. NEWLIN takes these arguments: 1) Rx2 matrix of min and max
% values for R input elements, 2) Number of elements in the output vector, 3)
% Input delay vector, and 4) Learning rate.
maxlr = 0.40*maxlinlr(X,'bias');
net = newlin([-2 2],1,[0],maxlr);
%%
% Override the default training parameters by setting the performance goal.
net.trainParam.goal = .001;
%%
% To show the path of the training we will train only one epoch at a time and
% call PLOTEP every epoch. The plot shows a history of the training. Each dot
% represents an epoch and the blue lines show each change made by the learning
% rule (Widrow-Hoff by default).
% [net,tr] = train(net,X,T);
net.trainParam.epochs = 1;
net.trainParam.show = NaN;
h = plotep(net.IW{1},net.b{1},mse(T-net(X)));
[net,tr] = train(net,X,T);
r = tr;
epoch = 1;
while true
   epoch = epoch+1;
   [net,tr] = train(net,X,T);
   if length(tr.epoch) > 1
      h = plotep(net.IW{1,1},net.b{1},tr.perf(2),h);
      r.epoch = [r.epoch epoch];
      r.perf = [r.perf tr.perf(2)];
      r.vperf = [r.vperf NaN];
      r.tperf = [r.tperf NaN];
   else
      break
   end
end
tr=r;
%%
% The train function outputs the trained network and a history of the training
% performance (tr). Here the errors are plotted with respect to training
% epochs: The error dropped until it fell beneath the error goal (the black
% line). At that point training stopped.

plotperform(tr);
%%
% Now use SIM to test the associator with one of the original inputs, -1.2, and
% see if it returns the target, 1.0. The result is very close to 1, the target.
% This could be made even closer by lowering the performance goal.
x = -1.2;
y = net(x)
displayEndOfDemoMessage(mfilename)