FIT5003 Software Security
Security Testing
1
Security Testing
Security testing != traditional functionality testing
Software can be correct without being secure
Security bugs are different from traditional bugs. Security testing addresses software failures that have security implications
Security testing requires a different mindset
2
Security Testing
Identify threats using threat modeling (STRIDE – still remember?)
Quantify the risk associated with each threat
Prioritize the risks for mitigation
Test the correctness of the mitigation code
Runtime inspection: “footprinting” is the process of discovering which system calls and system objects an application uses.
3
Security Testing (footprinting tools)
“Process Explorer” can view the handles a process holds, including files, processes, threads, semaphores, events, ports, timers, etc.
Tools such as “Process Monitor” can intercept file and registry APIs and record the parameters a program passes to the OS.
“OLEview” provides the footprint of an application’s “COM” interfaces (communications).
4
Security Testing (footprinting tools)
ps aux is one of the commands that provides useful information for footprinting all the current processes on the system
“lsof” – lists open files – similar to Process Explorer on Windows.
“ktrace” on macOS (“strace” on Linux) – traces system calls and their return codes; “kdump” dumps the trace
5
Security Testing
Universe of all possible system capabilities
Capabilities the system should NOT have (Security testing)
Capabilities the system should have (Functional testing)
Coding Errors
Side effects: the software did A (its intended function) while also doing B at the same time
6
Coding Errors (8 kingdoms of threats)
API abuse
Security features
Time and state
Error handling (or exception handling)
Code quality
Encapsulation
Environment
Input validation and representation
Classification based on Gary McGraw, Software Security: Building Security In
7
Coding Errors (8 kingdoms of threats)
API abuse
E.g. A network library (callee) expecting the caller to supply trustworthy DNS information in a library call.
8
Coding Errors (8 kingdoms of threats)
Security Features
If you use security features to protect software, understand clearly how they interact with the software security – getting them right is difficult.
9
Coding Errors (8 kingdoms of threats)
Time and State
Distributed computing: for modules (or processes) to communicate, state information must be shared.
10
Coding Errors (8 kingdoms of threats)
Error Handling
Error handlers represent a kind of contract with the programmer:
Handle errors correctly and safely, if possible.
If that is not possible, do not give out too much information (to possible attackers)
11
Coding Errors (8 kingdoms of threats)
Code Quality
For software to be secure, it must be reliable: poor code quality leads to unpredictable behaviour that attackers can exploit.
12
Coding Errors (8 kingdoms of threats)
Encapsulation
Drawing strong boundaries between objects and setting barriers between them
Boundaries make it easier to write secure code because each piece stays simple
Trust and trust models are required for boundary crossing
The most important boundaries are between classes with various methods.
13
Coding Errors (8 kingdoms of threats)
Environment
The physical environment and the people who manage and access system resources (programs, etc.), which must be controlled by a well-defined security policy.
14
Coding Errors (8 kingdoms of threats)
Input validation and representation
Generally, problems caused by metacharacters, alternative encodings, and numeric representations.
Major problems result from trusting input
15
Input Validation
Principle: inspect all incoming data and accept only valid input. Typically implemented with code that performs “white list” and/or “black list” validation.
White list: a list of valid inputs
Black list: a list of invalid inputs
Regular Expression
16
Regular Expression
A regular expression is a special text string for describing a search pattern
https://regex101.com
17
Integer Range Errors
Integer variables can overflow if one writes a value larger than the maximum (or smaller than the minimum) possible value for the integer type
Maliciously chosen integer overflows can also change program behaviour, and can be exploitable by attackers!
unsigned int : 32 bit
Possible values: [0, 2^32-1] (hex [0x00000000, 0xffffffff])
int : 32 bit
Possible values: [-2^31, 2^31-1] (hex [0x80000000, 0x7fffffff])
MS bit = sign bit: a negative number -2^31 <= x < 0 is represented as 2^32 + x, e.g. -1 is represented as 2^32-1 = 0xffffffff
short : 16 bit
Possible values: [-2^15, 2^15-1] (hex [0x8000, 0x7fff])
unsigned short : 16 bit
Possible values: [0, 2^16-1] (hex [0x0000, 0xffff])
Arithmetic on 32-bit integers wraps: results are reduced x (mod 2^32)
18
Integer Range Errors
unsigned short num = 0xffff;       // 0xffff = 65535 = max
num = num + 2;                     // addition overflow
printf("num = 0x%x\n", num);
Output: num = 0x1                  // 65537 mod 65536 = 1

unsigned short num_mul = 0x4000;   // 0x4000 = 65536/4
num_mul = num_mul * 4;             // multiplication overflow
printf("num = 0x%x\n", num_mul);
Output: num = 0x0                  // 65536 mod 65536 = 0
19
Integer Range Errors
int catvars(char *buf1, char *buf2, unsigned int len1,
            unsigned int len2) {
    char mybuf[256];
    if ((len1 + len2) > 256) {   /* check for buffer overflow – flawed
                                    because of arithmetic overflow! */
        return -1;
    }
    memcpy(mybuf, buf1, len1);
    memcpy(mybuf + len1, buf2, len2);
    do_some_stuff(mybuf);
    return 0;
}

len1 = 0x104 (= 256 + 4), len2 = 0xfffffffc (= 2^32 - 4)
(len1 + len2) mod 2^32 = 0x100 = 256, so the check passes.
Yet both len1 and len2 are larger than the buffer length and both memcpy() calls overflow the mybuf buffer!
20
Integer Range Errors
Prevention: check inputs to make sure arithmetic does not overflow integers!
if ( (len1 > 256) || (len2 > 256) ) {
    return -1;   /* error: integer overflow attempted! */
}
21
Integer Range Errors
Negative indexing (pos = -1):
int table[800];
int insert_in_table(int val, int pos)
{
    table[pos] = val;   /* pos = -1 writes before the table */
    return 0;
}

Off by one:
int kbuf[800];
int copy_something(int *buf)
{
    for (i = 0; i <= 800; i++) {   /* flawed limit on i! */
        kbuf[i] = buf[i];
    }
}
22
Security Testing (Methodology)
White Box:
Static Code Analysis
Black Box:
Fuzzing
23
Static Code Analysis
Goals:
Find common bugs quickly
Allow humans to focus on parts of code likely to be risky
Limitations
Cannot find design level vulnerabilities
Cannot make a judgement of importance of a found vulnerability
Only detect vulnerabilities in the tool’s “rule database”
Suffer from errors:
False positive: reported bugs are not really bugs
False negative: missed reporting a real bug
24
Code Analysis
Program Analysis
(as part of static analysis)
int foo() {
int x;
int* y;
printf(x+*y);
}
Who calls foo?
Who does foo call?
Is x ever initialized?
Can y ever be null?
What will foo print?
25
Static Analysis
control-flow graphs: representation of (possible) control-flow in functions.
call graphs: representation of (possible) function calls.
disassembly: turn raw executables into assembly code.
decompilation: turn raw assembly code into source code.
26
Control-Flow Graph (CFG)
A way to represent the possible flow of control inside a function.
Nodes: called basic blocks. Each block consists of straight-line code ending (possibly) in a branch.
Edges: An edge A → B means that control could flow from A to B.
There is one unique entry node and one unique exit node.
27
Control-Flow Graph (CFG)
int foo() {
    printf("Boo!");
}

CFG: ENTRY → [ printf("Boo!"); ] → EXIT
28
Control-Flow Graph (CFG)
int foo() {
    read(x);
    if (x>0)
        printf(x);
}

B0: ENTRY
B1: read(x); if (x>0) goto B2 (else goto B3)
B2: printf(x)
B3: EXIT
Edges: B0→B1, B1→B2, B1→B3, B2→B3
29
Control-Flow Graph (CFG)
int foo() {
    read(x);
    if (x>0)
        printf(x);
    else
        printf(x+1);
}

B0: ENTRY
B1: read(x); if (x>0) goto B2 (else goto B4)
B2: printf(x)
B4: printf(x+1)
B3: EXIT
Edges: B0→B1, B1→B2, B1→B4, B2→B3, B4→B3
30
Control-Flow Graph (CFG)
int foo() {
    read(x);
    while (x>0){
        printf(x);
        x = x - 1;
    }
}

B0: ENTRY; read(x)
B1: if (x<=0) goto B3
B2: printf(x); x = x - 1; goto B1
B3: EXIT
Edges: B0→B1, B1→B2, B1→B3, B2→B1
31
Control-Flow Graph (CFG)
Exercise!
read(X);
while (X<10){
    X ← X-1;
    A[X] ← 10;
    if (X=4)
        X ← X-2;
};
Y ← X+5;
32
Control-Flow Graph (CFG)
Exercise!
read(X);
while (X<10){
    X ← X-1;
    A[X] ← 10;
    if (X=4)
        X ← X-2;
};
Y ← X+5;

Solution:
B0: ENTRY; read(X)
B1: if (X>=10) goto B4
B2: X = X-1; A[X] = 10; if (X=4) goto B3 (else goto B1)
B3: X = X-2; goto B1
B4: Y = X+5
B5: EXIT
33
Static Code Analysis
Simple (usually free) search-based tools
Examples: FlawFinder, RATS, ITS4, …
Search source file for “dangerous functions” known to cause common vulnerabilities
e.g. strcpy(), gets() for buffer overflows
Produces a list of “hits” and ranks them by risk
Better than pure search:
Ignores commented code
Ignores strings
Some risk ranking
But little attempt to analyze relationships within the code
34
Static Code Analysis
Advanced Static Code Analyzers
Examples: Fortify, Coverity
Attempt to improve risk analysis and reduce false positive rate (less human fatigue) by deeper code analysis than simple analyzers, in particular:
Data Flow Analysis: Identifies user-controlled input that is involved in a dangerous operation (e.g. long user input data copied into fixed-size buffer)
Control Flow Analysis: Identifies dangerous operation sequences (e.g. checks that a file is configured properly before use).
35
Static Code Analysis
Let us look at a simple example (stackbuffer.c)
1  #include <stdio.h>
2  #include <string.h>
3
4  #ifdef _WIN32
5  #include <io.h>
6  #else
7  #include <unistd.h>
8  #endif
9
10 #define MAX_SIZE 128
11
12 void doMemCpy(char* buf, char* in, int chars) {
13     memcpy(buf, in, chars);
14 }
15
16 int main() {
17     char buf[64];
18     char in[MAX_SIZE];
19     int bytes;
20
21     printf("Enter buffer contents:\n");
22     read(0, in, MAX_SIZE-1);
23     printf("Bytes to copy:\n");
24     scanf("%d", &bytes);
25
26     doMemCpy(buf, in, bytes);
27
28     return 0;
29 }
36
Static Code Analysis
Simple Analyzer Results: FlawFinder
Flawfinder version 1.27, (C) 2001-2004 David A. Wheeler.
Number of dangerous functions in C/C++ ruleset: 160
Examining stackbuffer.c
stackbuffer.c:13: [2] (buffer) memcpy:
Does not check for buffer overflows when copying to destination. Make
sure destination can always hold the source data.
stackbuffer.c:17: [2] (buffer) char:
Statically-sized arrays can be overflowed. Perform bounds checking,
use functions that limit length, or ensure that the size is larger than the maximum possible length.
stackbuffer.c:18: [2] (buffer) char:
Statically-sized arrays can be overflowed. Perform bounds checking,
use functions that limit length, or ensure that the size is larger than the maximum possible length.
stackbuffer.c:22: [1] (buffer) read:
Check buffer boundaries if used in a loop.
Hits = 4
Lines analyzed = 29 in 0.54 seconds (766 lines/second)
Physical Source Lines of Code (SLOC) = 22
Hits@level       = [0]  0 [1]  1 [2]  3 [3]  0 [4]  0 [5]  0
Hits@level+      = [0+] 4 [1+] 4 [2+] 3 [3+] 0 [4+] 0 [5+] 0
Hits/KSLOC@level+ = [0+] 181.818 [1+] 181.818 [2+] 136.364 [3+] 0 [4+] 0 [5+] 0
Minimum risk level = 1
Not every hit is necessarily a security vulnerability.
There may be other security vulnerabilities; review your code!
37
Static Code Analysis
Advanced Analyzer Results: Fortify
38
Fuzz Testing
Automatically generate test cases
Many slightly anomalous test cases are input into a target interface
The application is monitored for errors
Inputs are generally either file based (.pdf, .png, .wav, .mpg)
Or network based… http, SNMP, SOAP
Or other…
e.g. crashme()
39
Fuzz Testing
Standard HTTP GET request:
GET /index.html HTTP/1.1
Anomalous requests:
AAAAAA…AAAA /index.html HTTP/1.1
GET ///////index.html HTTP/1.1
GET %n%n%n%n%n%n.html HTTP/1.1
GET /AAAAAAAAAAAAA.html HTTP/1.1
GET /index.html HTTTTTTTTTTTTTP/1.1
GET /index.html HTTP/1.1.1.1.1.1.1.1
40
Fuzz Testing
Mutation Based – “Dumb Fuzzing”
Generation Based – “Smart Fuzzing”
41
Fuzz Testing (Mutation)
Little or no knowledge of the structure of the inputs is assumed
Anomalies are added to existing valid inputs
Anomalies may be completely random or follow some heuristics (e.g. shift character forward)
Examples:
Taof, GPF, ProxyFuzz, FileFuzz, Filep, etc.
42
Fuzz Testing (Mutation)
Example: fuzzing a pdf viewer
Google for .pdf (about 1 billion results)
Crawl pages to build a corpus
Use a fuzzing tool (or script) to:
Grab a file
Mutate that file
Feed it to the program
Record if it crashed (and input that crashed it)
43
Fuzz Testing (Mutation)
Strengths
Super easy to set up and automate
Little to no protocol knowledge required
Weaknesses
Limited by initial corpus
May fail for protocols with checksums, those which depend on challenge response, etc.
44
Fuzz Testing (Generation-based)
Test cases are generated from some description of the format: RFC, documentation, etc.
Anomalies are added to each possible spot in the inputs
Knowledge of protocol should give better results than random fuzzing
45
Fuzz Testing (Generation-based)
Strengths
Completeness
Can deal with complex dependencies e.g. checksums
Weaknesses
Have to have spec of protocol
Often one can find good tools for existing protocols, e.g. http, SNMP
Writing a generator can be labor intensive for complex protocols
The spec is not the code
46
Pen Testing
Fuzz Testing:
Automated
Random test cases
Low coverage
False negatives
Pen Testing:
Manual or semi-automated
Targeted test cases
Both are forms of security testing.
A penetration test is an attack on a computer system, network, or web application with the intention of finding security weaknesses that an attacker could exploit, potentially gaining access to the system, its functionality, and its data.
48
Pen Testing
External vs. Internal: Penetration Testing can be performed from the viewpoint of an external attacker or a malicious employee.
Overt vs. Covert: Penetration Testing can be performed with or without the knowledge of the IT department of the company being tested.
49
Pen Testing
Rationale
Improving the security of your site by breaking into it
A localized and time-constrained attempt to breach the information security architecture using the attacker’s techniques
Localized -> definition of scope
Time-constrained -> a pen test does not last forever
Attempt -> not a full security audit
attacker’s techniques -> definition of attacker’s role
50
Pen Testing
Goal
To improve information security awareness
To assess risk
To mitigate risk immediately
To reinforce the information security process
To assist in decision-making processes
51
Pen Testing
Scope
Normal operational state
Weakest/strongest moment
Periodically, at a random date within limits
Before/after specific projects
52
Pen Testing
Methodology
Information Gathering
Information Analysis and Planning
Vulnerability Detection
Penetration
Attack/Privilege Escalation
Analysis and Reporting
Clean Up
53
Pen Testing (Methodology)
Information Gathering
• Organizational intelligence
• Access point discovery
• Network discovery
• Infrastructure fingerprinting
Pen Testing (Methodology)
Information Gathering
WHOIS (example lookup from the USA)
Pen Testing (Methodology)
Information Analysis and Planning
• Understanding of component relationships
• High level attack planning
• Target identification
• Time & effort estimation
• Alternative attacks
Pen Testing (Methodology)
Vulnerability Detection
• Automated vulnerability scanning
• Manual scanning
• In-house research
• Target acquisition
Pen Testing (Methodology)
Penetration Phase
• Known/available exploit selection
• Exploit customization
• Exploit development
• Exploit testing
• Attack
Pen Testing (Methodology)
Attack/Privilege Escalation Phase
• Final target compromise: SUCCESS!
• Intermediate target: full compromise
• Intermediate target: partial compromise
• Point of attack/attacker profile switching
• Back to information gathering phase
Pen Testing (Methodology)
Analysis and Reporting Phase
• Information gathering and consolidation
• Analysis and extraction of general conclusions and recommendations
• Generation of deliverables
• Final presentation
Pen Testing (Methodology)
Clean Up Phase
• Definition of specific clean up tasks
• Definition of specific clean up procedures
• Clean up execution
Pen Testing Tools
Metasploit is an open source platform for supporting vulnerability research, developing exploits and creating custom security tools
Kali Linux (Virtual Machine) is a Debian-derived Linux distribution, designed for digital forensics and penetration testing. Kali Linux is a supported platform of the Metasploit project’s Metasploit framework.
Pen Testing Tools
Maltego for information gathering
Hydra for brute force attack
Vega for vulnerability analysis of web apps
Nmap, OpenVAS, OWASP ZAP, and w3af