
Access Control. Authorization II
CS 3IS3
Ryszard Janicki
Department of Computing and Software, McMaster University, Hamilton, Ontario, Canada
Acknowledgments: Material based on Information Security by Mark Stamp (Chapters 8.6-8.10)
Ryszard Janicki
Access Control. Authorization II 1/35

Covert Channel I
MLS is designed to restrict the legitimate channels of communication
May be other ways for information to flow
For example, resources shared at different levels could be used to “signal” information
Covert channel: a communication path not intended as such by system’s designers
Alice has TOP SECRET clearance, Bob has CONFIDENTIAL clearance
Suppose the file space is shared by all users
Alice creates file FileXYzW to signal "1" to Bob, and deletes the file to signal "0"
Once per minute, Bob lists the files
If file FileXYzW exists, Alice sent 1
If file FileXYzW does not exist, Alice sent 0
Alice can leak TOP SECRET information to Bob, one bit per minute
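A minimal sketch of this file-existence channel. The shared directory and the helper names are illustrative; a real channel would also synchronize on a clock (one bit per minute in the example above):

```python
import os
import tempfile

# Hypothetical file space visible to all clearance levels.
SHARED_DIR = tempfile.mkdtemp()
SIGNAL = os.path.join(SHARED_DIR, "FileXYzW")

def alice_signal(bit):
    """Create the file to send 1, delete it to send 0."""
    if bit:
        open(SIGNAL, "w").close()
    elif os.path.exists(SIGNAL):
        os.remove(SIGNAL)

def bob_sample():
    """Bob 'lists the files': the presence of FileXYzW decodes one bit."""
    return 1 if os.path.exists(SIGNAL) else 0

def leak(bits):
    """One covert-channel round per bit (Alice acts, then Bob samples)."""
    received = []
    for b in bits:
        alice_signal(b)
        received.append(bob_sample())
    return received
```

Note that nothing here violates the MLS rules: Alice only creates and deletes her own file, and Bob only lists a directory he is allowed to read.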

Covert Channel Example
Alice: create file, delete file, create file, (no action), delete file
Bob:   check file, check file, check file, check file, check file
Data:  1, 0, 1, 1, 0  (file present = 1, absent = 0; no action repeats the previous bit)

Covert Channel II
Other possible covert channels?
Print queue
ACK messages
Network traffic, etc.
Covert channels have long been used by magicians and their helpers, spies, criminals, conspirators, etc. They rely on how easy it is to encode YES and NO.
When does a covert channel exist?
1 Sender and receiver have access to a shared resource
2 Sender can vary some property of the resource that the receiver can observe
3 "Communication" between sender and receiver can be synchronized
Potential covert channels are everywhere
But, it's "easy" to eliminate covert channels:
"Just" eliminate all shared resources and all communication!
In practice, it is virtually impossible to eliminate covert channels in any useful information system

Covert Channel III
Consider a 100 MB TOP SECRET file
Plaintext stored in a TOP SECRET location
Ciphertext – encrypted with AES using a 256-bit key – stored in an UNCLASSIFIED location
Suppose we reduce the covert channel capacity to 1 bit per second
It would take more than 25 years to leak the entire document through the covert channel
But it would take less than 5 minutes to leak the 256-bit AES key through the same channel!
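A back-of-the-envelope check of the numbers above, assuming a 1 bit/second channel:

```python
# 100 MB file vs. a 256-bit key, leaked at 1 bit per second.
FILE_BITS = 100 * 2**20 * 8          # 100 MB in bits
KEY_BITS = 256                       # AES-256 key in bits
SECONDS_PER_YEAR = 365 * 24 * 3600

years_for_file = FILE_BITS / SECONDS_PER_YEAR    # about 26.6 years
minutes_for_key = KEY_BITS / 60                  # about 4.3 minutes
```

The lesson: even a very low-capacity covert channel is fatal when the secret is small, like a key.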

Inference Control (Example)
Suppose we query the following database:
name     position   age  salary
Celia    teacher     45  $40,000
Heidi    aide        20  $20,000
Holly    principal   37  $60,000
Leonard  teacher     50  $50,000
Matt     teacher     33  $50,000
What is the average salary of teachers younger than 40?
Answer: $50,000
How many teachers are younger than 40?
Answer: 1
Social engineering: we know that Matt is about 35 and is a teacher – so his salary must be $50,000
Specific information has leaked from responses to general questions!
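The leak above can be reproduced with two innocuous-looking statistical queries on the toy database:

```python
# The toy database from the slide, as (name, position, age, salary) tuples.
records = [
    ("Celia",   "teacher",   45, 40000),
    ("Heidi",   "aide",      20, 20000),
    ("Holly",   "principal", 37, 60000),
    ("Leonard", "teacher",   50, 50000),
    ("Matt",    "teacher",   33, 50000),
]

young_teachers = [r for r in records if r[1] == "teacher" and r[2] < 40]

count = len(young_teachers)                              # -> 1
avg_salary = sum(r[3] for r in young_teachers) / count   # -> 50000.0

# Since the count is 1, the "average" is one person's exact salary:
# anyone who knows Matt is the only teacher under 40 learns his salary.
```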

Inference Control
For example, medical records are private but valuable for research
How to make the data available for research while protecting privacy?
How to allow access to such data without leaking specific information?
Remove names from medical records?
It may still be easy to get specific info from such "anonymous" data
Removing names is not enough – as seen in our example
What more can be done?
Query set size control
Don't return an answer if the query set size is too small
Restrict the number of questions
Use statistical properties; do not release statistics if some parameters are not appropriate
Randomization – add a small amount of random noise to the data
Many other methods – none fully satisfactory
It has been proven that inference control with a 100% guarantee does not exist.
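A minimal sketch of query set size control. The threshold, record format, and helper name are illustrative choices, not from the slides:

```python
# Refuse statistical queries whose result set is smaller than a threshold.
MIN_SET_SIZE = 3  # hypothetical policy parameter

def avg_salary(records, predicate, min_size=MIN_SET_SIZE):
    """Answer an average-salary query only if enough records match."""
    matches = [r for r in records if predicate(r)]
    if len(matches) < min_size:
        return None  # query refused: the answer would be too specific
    return sum(r["salary"] for r in matches) / len(matches)

db = [
    {"name": "Celia",   "position": "teacher",   "age": 45, "salary": 40000},
    {"name": "Heidi",   "position": "aide",      "age": 20, "salary": 20000},
    {"name": "Holly",   "position": "principal", "age": 37, "salary": 60000},
    {"name": "Leonard", "position": "teacher",   "age": 50, "salary": 50000},
    {"name": "Matt",    "position": "teacher",   "age": 33, "salary": 50000},
]

# The leaking query from the earlier example is now refused:
refused = avg_salary(db, lambda r: r["position"] == "teacher" and r["age"] < 40)
# A broader query still gets an answer:
allowed = avg_salary(db, lambda r: r["position"] == "teacher")
```

As the slide warns, this alone is not satisfactory: an attacker can often recover a refused value by differencing two allowed queries.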

Something Better Than Nothing?
Robust inference control may be impossible
Is weak inference control better than nothing?
Yes: Reduces amount of information that leaks
Is weak covert channel protection better than nothing?
Yes: Reduces amount of information that leaks
Is weak cryptography better than no cryptography?
Probably not: encryption indicates important data
It may be easier to filter encrypted data

Turing Test and CAPTCHA
Turing Test, proposed by Alan Turing in 1950
Human asks questions to a human and a computer, without seeing either
If questioner cannot distinguish human from computer, computer passes
This is the gold standard in AI
No computer can pass this today – but some claim they are close to passing
CAPTCHA
Completely Automated Public Turing test to tell Computers and Humans Apart
Completely Automated – test is generated and scored by a computer
Public – program and data are public
Turing test – humans can pass the test, but machines cannot
Also known as HIP (Human Interactive Proof)
Like an inverse Turing test

CAPTCHA Uses and Rules
Free email services – spammers like to use bots to sign up for 1000s of email accounts
CAPTCHA employed so only humans get accounts
Sites that do not want to be automatically indexed by search engines
CAPTCHA would force human intervention
CAPTCHAs are easy for most humans to pass
Difficult or impossible for machines to pass
Even with access to the CAPTCHA software
From Trudy’s (attacker) perspective, the only unknown is a random number
Similar to Kerckhoffs’ Principle
Good to have different CAPTCHAs in case someone cannot pass one type
E.g., blind person could not pass visual CAPTCHA

CAPTCHAs
Do CAPTCHAs Exist?

Test: find the 2 words in the following (distorted image of two words)
Easy for most humans
A (difficult, so far) Optical Character Recognition (OCR) problem for a computer

Current types of CAPTCHAs
Visual – like the example, but also ‘which picture contains a bicycle’, etc.
Audio – distorted words or music
No text-based CAPTCHAs – maybe this is impossible
OCR is a challenging AI problem
Hardest part is the segmentation problem
Humans good at solving this problem
Distorted sound makes good CAPTCHA
Humans also good at solving this (but not all, definitely not me)

Firewalls
(figure: Internet — Firewall — Internal network)

Firewall decides what to let in to the internal network and/or what to let out
Access control for the network

Firewall as Secretary
A firewall is like a secretary
To meet with an executive:
First contact the secretary
Secretary decides if the meeting is important
So, the secretary filters out many requests
You want to meet the Dean of the Faculty? The secretary does some filtering
You want to meet the Premier of Ontario? The secretary does lots of filtering

Firewall Terminology
No standard firewall terminology
Types of firewalls:
Packet filter – works at the network layer
Stateful packet filter – transport layer
Application proxy – application layer
Lots of other terms are often used
E.g., "deep packet inspection"

Packet Filter I
Operates at the network layer

(figure: OSI stack – application, transport, network, link, physical – with the packet filter at the network layer)

Can filter based on...
Source IP (Internet Protocol) address
Destination IP address
Source port
Destination port
Flag bits (SYN, ACK, etc.)
Egress or ingress
Different filtering rules for incoming and outgoing packets

Advantages?
Speed

Disadvantages?
No concept of state
Cannot see TCP (Transmission Control Protocol) connections
Blind to application data, where viruses and other malware reside

Packet Filter II

Configured via Access Control Lists (ACLs) – but with a completely different meaning than in Lampson's access control matrix.

Action  Source IP  Dest IP  Source Port  Dest Port  Protocol  Flag Bits
Allow   Inside     Outside  Any          80         HTTP      Any
Allow   Outside    Inside   80           > 1023     HTTP      ACK
Deny    All        All      All          All        All       All

Q: Intention?
A: Restrict traffic to Web browsing
The ACL allows all outbound Web traffic, which should be destined to port 80.
Incoming packets are restricted to Web responses, which should have source port 80.
All other traffic is forbidden.
Knowledge of some details of the popular TCP protocol (discussed later) allows partially successful attacks – details in the textbook.
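A first-match evaluation of the ACL above can be sketched as follows. The rule encoding and packet representation are invented for illustration; real packet filters work on binary headers:

```python
# First-match ACL: (action, src, dst, sport, dport, flags), default deny.
ACL = [
    ("allow", "inside",  "outside", "any", "80",    "any"),
    ("allow", "outside", "inside",  "80",  ">1023", "ACK"),
    ("deny",  "all",     "all",     "all", "all",   "all"),
]

def _match(rule_val, pkt_val):
    """Match one rule field against one packet field."""
    if rule_val in ("any", "all"):
        return True
    if rule_val.startswith(">"):
        return int(pkt_val) > int(rule_val[1:])
    return str(rule_val) == str(pkt_val)

def filter_packet(src, dst, sport, dport, flags="none"):
    """Return the action of the first matching rule."""
    for action, r_src, r_dst, r_sp, r_dp, r_fl in ACL:
        if all(_match(r, p) for r, p in
               [(r_src, src), (r_dst, dst), (r_sp, sport),
                (r_dp, dport), (r_fl, flags)]):
            return action
    return "deny"
```

With these rules, an outbound request to port 80 and an inbound ACK reply from port 80 are allowed; everything else falls through to the final deny.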

Stateful Packet Filter
Adds state to the packet filter
Operates at the transport layer

(figure: OSI stack with the filter at the transport layer)

Remembers TCP connections, flag bits, etc.
Can even remember UDP packets (e.g., DNS requests)

Advantages?
Can do everything a packet filter can do, plus...
Keeps track of ongoing connections (e.g., prevents some attacks based on TCP weaknesses)

Disadvantages?
Cannot see application data
Slower than packet filtering

Application Proxy
A proxy is something that acts on your behalf

(figure: OSI stack with the proxy at the application layer)

Application proxy looks at incoming application data
Verifies that the data is safe before letting it in

Advantages?
Complete view of connections and application data
Filters bad data at the application layer (viruses, Word macros)

Disadvantages?
Speed

Firewalk: a Hacking Tool
Tool to scan for open ports through a firewall
Attacker knows the IP address of the firewall and the IP address of one system inside the firewall
Time-to-live (TTL) is a value in an Internet Protocol (IP) packet that tells routers whether the packet has been in the network too long and should be discarded; each router decrements it by one
Set TTL to 1 more than the number of hops to the firewall, and set the destination port to N
If the firewall allows data on port N through, the attacker gets a "time exceeded" error message from the first router beyond the firewall
Otherwise, no response
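A pure-Python simulation of the Firewalk idea (no real networking): routers decrement the TTL, and a packet filter at a known hop silently drops probes to closed ports. The hop count and open-port set are invented for illustration:

```python
# Simulated path: routers decrement TTL; the firewall sits at a fixed hop.
HOPS_TO_FIREWALL = 3
OPEN_PORTS = {80, 443}   # ports the simulated firewall lets through

def send_probe(dest_port, ttl):
    """Walk the path; return 'time exceeded' if the probe expires past the firewall."""
    hops = 0
    while ttl > 0:
        hops += 1
        ttl -= 1
        if hops == HOPS_TO_FIREWALL and dest_port not in OPEN_PORTS:
            return None                   # firewall silently drops the probe
        if ttl == 0:
            return "time exceeded"        # a router past the firewall replies
    return None

def firewalk(ports, hops_to_firewall=HOPS_TO_FIREWALL):
    """Ports that elicit 'time exceeded' are open through the firewall."""
    ttl = hops_to_firewall + 1            # expire one hop past the firewall
    return [p for p in ports if send_probe(p, ttl) == "time exceeded"]
```

The distinguishing signal is exactly the one on the slide: an ICMP-style "time exceeded" reply leaks which ports the firewall permits.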

Firewalk and Proxy Firewall
Firewalk and Proxy Firewall
Trudy
Router Router
Packet filter
Router
Dest port 12343, TTL=4 Dest port 12344, TTL=4 Dest port 12345, TTL=4 Time exceeded
 This will not work thru an application proxy (why?) This will not work thru an application proxy since (mainly) the
 The proxy creates a new packet, destroys old TTL proxy creates a new packet, destroys old TTL
Part 2  Access Control 137

Firewalls and Defense in Depth
Typical network security architecture:

(figure: Internet — Packet Filter — DMZ [Web server, FTP server, DNS server] — Application Proxy — Intranet with additional defense)

DMZ – demilitarized zone
DNS – the Domain Name System, a hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or a private network
An intranet is a private enterprise network, designed to support an organization's employees as they communicate, collaborate and perform their roles. It serves a broad range of purposes and uses, but at its core, an intranet is there to help employees.

Intrusion: Prevention and Detection
Want to keep the bad guys out
Intrusion prevention is a traditional focus of computer security
Authentication is meant to prevent intrusions
Firewalls are a form of intrusion prevention
Virus defenses are aimed at intrusion prevention
Like locking the door on your car
In spite of intrusion prevention, bad guys will sometimes get in
Intrusion detection systems (IDS)
Detect attacks in progress (or soon after)
Look for unusual or suspicious activity
IDS evolved from log file analysis
IDS is currently a hot research topic
How to respond when an intrusion is detected? Later...

Intrusion Detection Systems (IDS)
Who is the likely intruder?
May be an outsider who got through the firewall
May be an evil insider
What do intruders do?
Launch well-known attacks
Launch variations on well-known attacks
Launch new/little-known attacks
"Borrow" system resources
Use a compromised system to attack others, etc.
Intrusion detection approaches:
Signature-based IDS
Anomaly-based IDS
Intrusion detection architectures:
Host-based IDS
Network-based IDS
Various combinations of these categories of IDS are possible.
For example, a host-based system could use both signature-based and anomaly-based techniques, or a signature-based system might employ aspects of both host-based and network-based detection.

Host-Based IDS and Network-Based IDS
Host-Based IDS
Monitors activities on hosts for...
Known attacks
Suspicious behavior
Designed to detect many known attacks
Little or no view of network activities

Network-Based IDS
Monitors activity on the network for...
Known attacks
Suspicious network activity
Designed to detect attacks such as...
Denial of service
Network probes
Malformed packets, etc.
Some overlap with firewalls
Little or no view of host-based attacks
Can have both host and network IDS

Signature Detection Example
Failed login attempts may indicate a password cracking attack
IDS could use the rule "N failed login attempts in M seconds" as a signature
If N or more failed login attempts occur in M seconds, the IDS warns of an attack
Note that such a warning is specific:
The administrator knows which attack is suspected
Easy to verify the attack (or false alarm)
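The "N failed logins in M seconds" signature is a sliding-window count. A minimal sketch, where N, M, and the timestamp format are illustrative choices:

```python
from collections import deque

N, M = 5, 60.0  # hypothetical signature parameters

def make_detector(n=N, m=M):
    window = deque()          # timestamps of recent failed logins
    def failed_login(t):
        """Record a failed login at time t; return True if the signature fires."""
        window.append(t)
        while window and t - window[0] > m:
            window.popleft()  # drop events older than the M-second window
        return len(window) >= n
    return failed_login

detect = make_detector()
alarms = [detect(t) for t in [0, 10, 20, 30, 40]]  # 5 failures in 40 s
```

The fifth rapid failure fires the alarm, while an attacker pacing attempts more than M seconds apart never trips it, which is exactly the weakness discussed on the next slide.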

Signature Detection I
Suppose the IDS warns whenever N or more failed logins occur in M seconds
Set N and M so that false alarms are not common
Can do this based on "normal" behavior
But if Trudy knows the signature, she can try N − 1 logins every M seconds...
Then signature detection slows Trudy down, but might not stop her
Many techniques are used to make signature detection more robust
Goal is to detect "almost" signatures
For example, if "about" N login attempts occur in "about" M seconds:
Warn of a possible password cracking attempt
What are reasonable values for "about"?
Can use statistical analysis, heuristics, etc.
Must not increase the false alarm rate too much

Signature Detection II
Advantages of signature detection:
Simple
Detects known attacks
Knows which attack at time of detection
Efficient (if reasonable number of signatures)
Disadvantages of signature detection:
Signature files must be kept up to date
Number of signatures may become large
Can only detect known attacks
Variation on a known attack may not be detected

Anomaly Detection
Anomaly detection systems look for unusual or abnormal behavior
There are (at least) two challenges:
What is normal for this system?
How "far" from normal is abnormal?
No avoiding statistics here!
The mean (average) defines normal
The variance gives some "distance" from normal to abnormal

How to Measure Normal and Abnormal?
How to Measure Normal?
Must measure during "representative" behavior
Must not measure during an attack...
...or else the attack will seem normal!
Normal is the statistical mean/average (definitions may vary!)
Must also compute the variance to have any reasonable idea of abnormal

How to Measure Abnormal?
Abnormal is relative to some "normal"
Abnormal indicates a possible attack
Statistical discrimination techniques include:
Bayesian statistics
Linear discriminant analysis (LDA)
Quadratic discriminant analysis (QDA)
Neural nets, hidden Markov models (HMMs), etc.
Fancy modeling techniques are also used:
Artificial intelligence
Artificial immune system principles
Many, many, many others

Anomaly Detection – Very Simple Example I
Suppose we monitor the use of three commands: open, read, close
Under normal use we observe Alice:
open, read, close, open, open, read, close, ...
Of the nine possible ordered pairs of consecutive commands, we see that four pairs are normal for Alice:
(open,read), (read,close), (close,open), (open,open)
Can we use this to identify unusual activity?
We monitor Alice's use of the three commands
If the ratio of abnormal to normal pairs is "too high", warn of a possible attack
Could improve this approach by:
Also using the expected frequency of each pair
Using more than two consecutive commands
Including more commands/behaviors in the model
More sophisticated statistical discrimination

Anomaly Detection – Very Simple Example II
Over time, Alice has accessed file Fi at rate Hi
Recently, Alice has accessed file Fi at rate Ai

H0    H1    H2    H3        A0    A1    A2    A3
0.10  0.40  0.40  0.10      0.10  0.40  0.30  0.20

Is this normal use for Alice? We compute
S = (H0 − A0)² + (H1 − A1)² + (H2 − A2)² + (H3 − A3)² = 0.02
We consider S < 0.1 to be normal, so this is normal
How to account for use that varies over time?
To allow "normal" to adapt to a new "normal", we update the averages:
Hi = 0.2·Ai + 0.8·Hi, where 0.8 is a measure of "over time" and 0.2 is a measure of "recently"
New H0, H1 are unchanged, but new H2 = 0.2·0.3 + 0.8·0.4 = 0.38 and new H3 = 0.2·0.2 + 0.8·0.1 = 0.12, i.e.

H0    H1    H2    H3
0.10  0.40  0.38  0.12

Anomaly Detection – Very Simple Example III

The updated long-term averages and the newly observed rates are:

H0    H1    H2    H3        A0    A1    A2    A3
0.10  0.40  0.38  0.12      0.10  0.30  0.30  0.30

Is this normal use? Compute
S = (H0 − A0)² + ... + (H3 − A3)² = 0.0488
Since S = 0.0488 < 0.1, we consider this normal
And we again update the long-term averages: Hi = 0.2·Ai + 0.8·Hi
The starting averages were:

H0    H1    H2    H3
0.10  0.40  0.40  0.10

After 2 iterations, the averages are:

H0    H1    H2    H3
0.10  0.38  0.364 0.156

Statistics slowly evolve to match behavior; this reduces false alarms
But it also opens an avenue for attack...
Suppose Trudy always wants to access F3
With some luck, she may convince the IDS that this is normal for Alice!

Anomaly Detection

To make this approach more robust, we must incorporate the variances
Can also combine N different statistics Si as, say, T = (S1 + S2 + ... + SN)/N to obtain a more complete view of "normal"
A similar (but more sophisticated) approach is used in commercial IDSs
Usually anomaly and signature IDS are combined

Anomaly Detection Issues

Systems constantly evolve, and so must the IDS
A static system would place a huge burden on the administrator
But an evolving IDS makes it possible for an attacker to (slowly) convince the IDS that an attack is normal
The attacker may win simply by "going slow"
What does "abnormal" really mean?
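The arithmetic of the two worked examples above (the score S and the update Hi = 0.2·Ai + 0.8·Hi) can be reproduced directly:

```python
# Anomaly score S and exponential update of the long-term access rates Hi,
# using the numbers from the worked examples.
def score(H, A):
    """S = sum of squared differences between long-term and recent rates."""
    return sum((h - a) ** 2 for h, a in zip(H, A))

def update(H, A, recent=0.2):
    """New Hi = recent*Ai + (1 - recent)*Hi."""
    return [recent * a + (1 - recent) * h for h, a in zip(H, A)]

H = [0.10, 0.40, 0.40, 0.10]     # long-term access rates for F0..F3
A1 = [0.10, 0.40, 0.30, 0.20]    # first set of observed rates
S1 = score(H, A1)                # 0.02 < 0.1, so "normal"
H = update(H, A1)                # -> [0.10, 0.40, 0.38, 0.12]

A2 = [0.10, 0.30, 0.30, 0.30]    # second set of observed rates
S2 = score(H, A2)                # 0.0488 < 0.1, still "normal"
H = update(H, A2)                # -> [0.10, 0.38, 0.364, 0.156]
```

Note how H3 drifts from 0.10 to 0.156 in two rounds while each round stays under the 0.1 threshold: this is exactly the "going slow" attack the slides warn about.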
Indicates there may be an attack
Might not give any specific info about the "attack"
How to respond to such vague information?
In contrast, signature detection is very specific

Anomaly Detection Advantages and Disadvantages

Advantages?
A chance of detecting unknown attacks
Disadvantages?
Cannot use anomaly detection alone...
...it must be used with signature detection
Reliability is unclear
May be subject to attack
Anomaly detection indicates "something unusual", but lacks specific info on the possible attack