FIT3173: Security Testing
Dr Fariha, Department of Software Systems and Cybersecurity
Faculty of Information Technology
Learning Outcomes of This Lecture
Know what security testing is and why it differs from functionality testing
Understand security testing approaches
Risk-based security testing
Source code review (i.e. white-box testing)
Penetration testing (i.e. black-box testing) and grey-box testing
Know how to use testing tools
Analyse the testing of specific types of applications
Develop security test plans
Security testing requires a different mindset
Q: What kind of testing is appropriate to ensure software security?
Security Testing
Security testing != traditional functionality testing
Software can be correct without being secure
Security bugs are different from traditional bugs
Security testing addresses software failures that have security implications
Security Testing
Universe of all possible system capabilities:
Capabilities the system should NOT have (security testing)
Capabilities the system should have (functional testing)
source: Testing Web Security, p. 17
The aim is to discover the circumstances (the yellow area in the original figure) under which each of those functions can be performed.
Security Testing
Testing is done at three levels:
Source code review: "white-box" testing
Manual: human peer review
With a tool, e.g., static code analysis
Penetration testing: "black-box" testing, fuzz testing
Simulate adversarial behaviours to evaluate security
Risk-based security testing
Quantify the risk associated with each threat (as the ratio of the threat's impact to the ease of exploiting it)
Prioritise the risks for mitigation: decompose applications, identify component interfaces, and then identify vulnerabilities and risks
Risk-based Security Testing
Risk-based testing is a well-known approach to security testing
Identify threats using threat modelling (introduced later in Week 11), e.g., Microsoft Security Development Lifecycle (SDL) Threat Modelling:
Create a Data Flow Diagram (DFD) and identify the trust/security boundaries
Identify the types of threats and check whether code for any of the possible mitigation methods is incorporated
Source Code Review
Code walkthrough – also known as "white-box" testing
Look for bad programming practices (a short illustrative snippet follows this slide):
Structural issues such as hard-coded keys/passwords, leaking information, etc.
Data flow issues, e.g., lack of or improper data validation
Control flow issues – unreleased/unclosed resource streams, denial-of-service possibilities, matching lock/unlock operations, etc.
Source code review concentrates on language-based security
Requires strong experience with both software development and security
Should be performed by someone outside the development team
Deliverable: a report targeted at the developers who will fix the identified vulnerabilities
Executive summary
List of technical issues identified
Glossary, appendices, indexes, etc.
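To make the three categories concrete, here is an illustrative snippet (the function name, path and key value are made up) with one finding of each kind that a reviewer would note: a hard-coded secret, unvalidated input reaching sprintf, and a file stream left open on an error path.

    #include <stdio.h>

    #define API_KEY "sk-12345-secret"            /* structural: hard-coded secret */

    int load_user(const char *name) {
        char path[64];
        sprintf(path, "/data/%s.cfg", name);     /* data flow: no length check    */

        FILE *f = fopen(path, "r");
        if (f == NULL)
            return -1;

        char line[128];
        if (fgets(line, sizeof line, f) == NULL)
            return -1;                           /* control flow: f never closed  */
        fclose(f);
        return 0;
    }

    int main(void) {
        printf("using key %s\n", API_KEY);
        return load_user("alice") == 0 ? 0 : 1;
    }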
Code Review with a Tool
• Why a tool?
• Manually finding common bugs in huge pieces of code is too time-consuming / error-prone
• A tool can build in more common vulnerabilities than a human can keep in mind
• Humans are better at code understanding / insight than at manually searching for a pattern
• Static code analysers (FlawFinder, Fortify, …)
• Take code as input (static = no code execution)
• Search the code for common coding vulnerabilities
• Produce results in an output report file
Static Code Analysers
• Benefits
• Find common bugs quickly
• Allow humans to focus on parts of code likely to be risky
• Limitations
• Cannot find design-level vulnerabilities
• Cannot judge the importance of a found vulnerability
• Only detect vulnerabilities in the tool's "rule database"
• Suffer from errors:
• False positive: reported bugs are not really bugs
• False negative: missed reporting a real bug
Example: type checking
short s = 0;
int i = s;
short r = i;    // error: possible loss of precision
False positive: the analyser flags a possible loss of precision, but i was copied from a short, so its value always fits in r.
False Negative
A real bug can be missed when, for example:
some vulnerabilities (zero-day vulnerabilities) are not included in the "rule database" (patterns)
a call outside the analysed environment is unknown to the tool
other factors interfere, e.g., scheduling of multiple threads
Static Code Analysers
• Simple (usually free) search-based tools
• Examples: FlawFinder, RATS, ITS4, …
• Search source files for "dangerous functions" known to cause common vulnerabilities, e.g., strcpy(), gets() for buffer overflows (illustrated below)
• Produce a list of "hits" and rank them by risk
• Better than a pure text search
• Ignores commented code
• Risk ranking
• But little attempt to analyse relationships within the code
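As a hedged illustration of that pattern matching (the code below is deliberately unsafe and purely illustrative), a tool such as FlawFinder reports the strcpy() and sprintf() calls as hits and ranks them by risk, without checking whether they are actually exploitable.

    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[]) {
        char copy[8];
        char msg[16];

        if (argc > 1) {
            strcpy(copy, argv[1]);             /* hit: destination may overflow */
            sprintf(msg, "hello %s", copy);    /* hit: unbounded format output  */
            printf("%s\n", msg);
        }
        return 0;
    }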
Vulnerabilities Covered by FlawFinder
• Improper Input Validation
• Improper Limitation of a Pathname to a Restricted Directory ("Path Traversal")
• Improper Neutralisation of Special Elements used in an OS Command ("OS Command Injection")
• Improper Restriction of Operations within the Bounds of a Memory Buffer
• Buffer Copy without Checking Size of Input ("Classic Buffer Overflow")
• Buffer Over-read
• Uncontrolled Format String
• Execution with Unnecessary Privileges
• Use of a Broken or Risky Cryptographic Algorithm
• Concurrent Execution using Shared Resource with Improper Synchronisation ("Race Condition")
• Insecure Temporary File
• Use of Potentially Dangerous Function
Advanced Static Code Analysers
Attempt to improve risk analysis and reduce the false positive rate through deeper code analysis than the simple analysers
Data flow analysis: identifies user-controlled input that is involved in a dangerous operation (e.g., long user input copied into a fixed-size buffer); see the sketch below
Control flow analysis: identifies dangerous operation sequences (e.g., checking that a file is configured properly before use)
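A minimal sketch of a data-flow finding (an assumption for illustration, not the output of any particular tool): the user-controlled value argv[1] is tracked from the program's entry point into a dangerous operation.

    #include <stdio.h>

    int main(int argc, char *argv[]) {
        if (argc > 1)
            printf(argv[1]);    /* tainted input reaches a format-string sink */
        return 0;
    }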
Control Flow Integrity
Idea: observe the program's behaviour – is it doing what we expect it to? If not, it might be compromised
• Challenges
Define "expected behaviour"
Detect deviations from expectation efficiently
Avoid compromise of the control flow integrity detector
Control Flow Integrity
Define "expected behaviour": control flow graph (CFG)
Detect deviations from expectation efficiently: in-line reference monitor (IRM)
Avoid compromise of the detector: sufficient randomness, immutability
Call Graph
Which functions call other functions
Control Flow Graph
Break into basic blocks
Distinguish calls from returns
Monitor the control flow of the program and ensure that it only follows paths allowed by the CFG
CFI: Compliance with CFG
Compute the call/return CFG in advance
During compilation, or from the binary
Observation: direct calls need not be monitored
Assuming the code is immutable, their target address cannot be changed
Therefore: monitor only indirect calls
jmp, call, ret with non-constant targets
Control Flow Graph
Direct calls (always the same target)
Indirect transfers (call via register, or ret)
Implement the monitor in-line, as a program transformation
Insert a label just before the target address of each indirect transfer
Insert code to check the label of the target at each indirect transfer (a simplified sketch follows the constraints below)
Abort if the label does not match
The labels are determined by the CFG
In-line Monitor
• Constraints:
• return sites from calls to sort must share a label (L)
• call targets gt and lt must share a label (M)
• remaining label unconstrained (N)
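A minimal C sketch of the label check, assuming the CFG says lt and gt are the only legal targets of sort2's indirect call (label M). The real in-line reference monitor rewrites the binary and compares embedded labels; the lookup table here only simulates that idea.

    #include <stdio.h>
    #include <stdlib.h>

    typedef int (*cmp_fn)(int, int);

    static int lt(int a, int b) { return a < b; }   /* legal targets: label M */
    static int gt(int a, int b) { return a > b; }

    static const cmp_fn allowed_cmp[] = { lt, gt }; /* targets the CFG allows */

    static void cfi_check(cmp_fn target) {
        for (size_t i = 0; i < sizeof allowed_cmp / sizeof allowed_cmp[0]; i++)
            if (allowed_cmp[i] == target)
                return;                             /* label matches          */
        abort();                                    /* label mismatch: stop   */
    }

    static void sort2(int *a, int *b, cmp_fn cmp) {
        cfi_check(cmp);             /* inserted check before the indirect call */
        if (!cmp(*a, *b)) { int t = *a; *a = *b; *b = t; }
    }

    int main(void) {
        int x = 5, y = 3;
        sort2(&x, &y, lt);                          /* allowed target: passes */
        printf("%d %d\n", x, y);
        return 0;
    }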
Defeat CFI?
Inject code that has a legal label?
Will not work because we assume non-executable data
Modify code labels to allow the desired control flow?
Will not work because the code is immutable
CFI Assurances
CFI defeats control-flow-modifying attacks: remote code injection, etc.
But not data leaks or corruptions: Heartbleed would not be prevented
CFI can also be undermined by the presence of self-modifying code, or when the control modification is allowed by the graph
Code Review by Community
Example: selected code reviews and pull requests from OpenSSL on GitHub
Limits of White-Box Testing
The tester must have knowledge of the program and its implementation language (e.g., C for buffer overflows)
It might not cover realistic cases/conditions in the program
It focuses on the current implementation; missing security features cannot be discovered
Security Testing Approaches
Source code review – white-box testing
Penetration testing – black-box testing
Grey-box testing – a combination of both
Penetration testing evaluates the security of your application by simulating malicious attacks against its weaknesses
If we characterise functional testing as "testing for positives", then penetration testing is "testing for negatives".
Penetration Testing
It is an outside (rather than inside) testing approach
The art of testing an application without knowing its inner workings
Capabilities the system should NOT have (penetration testing) vs. capabilities the system should have (functional testing)
Penetration test – point of view of a hacker
Functional test – point of view of a normal user
It is a type of non-functional testing
For example, one can check whether anyone can hack the system or log in to the application without any authorisation.
Penetration Testing
Testing for a negative poses a greater challenge compared to positive.
If a negative test does not discover any fault(s) -> offers only proof that no fault can occur under that (particular) test conditions.
Penetration Testing
But does NOT prove that no fault exist.
Penetration Testing
Enumerate actions with the intention of producing a fault
May not be a sound approach for gaining confidence in software security
In practice it can be done well only by experienced "hackers"
Q: How can we enumerate (all) possible actions?
Penetration Testing
• Exploratory – manual
• Black-box security testing without the aid of a tool, guided by the tester's instinct and experience
• Systematic – manual
• Black-box testing without the aid of a tool, based on a pre-determined security test plan developed from requirements
• Fuzzing – automated (see the guest lecture on fuzz testing)
• Black-box security testing with the aid of a tool to help reduce the repetitive nature of testing tasks
Fuzz Testing
Fuzz testing tools submit malformed, malicious, and random data to a system's entry points in an attempt to uncover faults
Can be easily automated to try a large number of random / semi-random variations
Use combinations of "known-to-be-dangerous" values (known as fuzz vectors) and random data (see the sketch below)
for integers: zero, possibly negative or very big numbers (e.g., -1, 0, 0x100, 0x1000, 0x7ffffffe, 0x7fffffff)
for chars: escaped, interpretable characters / instructions (e.g., %s%p%x%d, %.1024d, %.2049d, %p%p%p%p, %x%x%x%x, %99999999999s)
Tools: Google AFL, Microsoft's SDL MiniFuzz File Fuzzer
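A minimal driver sketch of this idea, assuming a hypothetical function under test parse_length(): it first feeds the dangerous integer fuzz vectors from the slide, then a batch of random values.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Hypothetical function under test; a real harness would call into the
       target program's parsing code here.                                  */
    static void parse_length(long n) {
        (void)n;
    }

    int main(void) {
        /* "known-to-be-dangerous" integer fuzz vectors from the slide */
        const long vectors[] = { -1, 0, 0x100, 0x1000, 0x7ffffffe, 0x7fffffff };

        srand((unsigned)time(NULL));

        for (size_t i = 0; i < sizeof vectors / sizeof vectors[0]; i++)
            parse_length(vectors[i]);                 /* dangerous known values */

        for (int i = 0; i < 1000; i++)
            parse_length(rand() - RAND_MAX / 2);      /* random values          */

        return 0;
    }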
Security Testing Plan
Propose a security testing plan based on patterns
A pattern can be used many times but may not appear in exactly the same way
Design pattern: a description of a recurring problem and a well-defined description of the core solution to the problem
Software security test pattern: a description of a software vulnerability type that includes a template of a test case that exposes the vulnerability, typically by emulating what an attacker would do to exploit it
Pattern catalogue: a collection of related patterns that apply to the same domain and contain the same elements
Pattern: Input Validation Vulnerability Test
• Keywords: record, enter, update, create, capture, store, edit, modify, specify, indicate, maintain, customise, query, etc.
• Targeted vulnerabilities: SQL injection, path traversal, PHP file inclusion, etc.
• Test procedure template:
1. Authenticate as …
2. Open the user interface for …
Data Input Test Cases
Data mutation techniques (also apply to on-the-wire data): random, wrong type, wrong sign, out of bounds, special characters, script, escaped, HTML, slashes, quotes, out-of-sync, high volume, security data, zero length, short, exists / does not exist, valid + invalid, no access / restricted access
source: Howard and LeBlanc, Writing Secure Code, Fig 19-1, p. 576
Pattern: Malicious File Tests
• Keywords: file, save, upload, receive, image, document, scan
• Targeted vulnerabilities: unrestricted upload of a file with a dangerous type, download of code without integrity check
• Test procedure template:
1. Authenticate as …
2. Open the user interface for …
4. View or download the malicious file.
• Expected results template:
• The file should be rejected or should not be stored.
Pattern: Audit Tests
• Keywords: patient record, credit card, GPA, etc.
• Targeted vulnerabilities: insufficient logging
• Test procedure template:
1. Authenticate as …
2. Open the user interface for …
4. Authenticate as the security administrator.
5. Open the audit records for today's date.
• Expected results template:
• The audit records should show that …
Typical Security Testing Plan
Testing goals:
Describe the approach used by the testers during testing
Organise and implement the testing process
Define the test deliverables
Project (software) description:
Describe functionality, architecture, components, DFD, trust boundaries, …
Terminology
• Test plan – document that describes the what, why, who, when and how of a testing project
• Test item – software program being tested (what is being tested)
• Test – evaluation of a test item with a clearly defined objective (what it is being tested for)
• e.g., "check that no unneeded services are running on any of the system's servers"
• Test case – detailed description of a test (how it is being tested)
• a set of inputs, execution conditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement
• Test script – automated sequence of tasks
• Test procedure – manual sequence of tasks
• Test run – actual execution of test scripts and procedures
Terminology
• Defect (Bug) – nonconformance of the product to the functional/non-functional requirements of the specification.
• Audit – a review of a system in order to validate it. Generally, this either refers to code auditing or reviewing audit logs.
• Risk – a possibility of a negative or undesirable occurrence. There are two independent parts of risk: impact and likelihood.
• Impact – describes the negative effect that results from a risk being realized.
• Likelihood – describes the chance that a risk will be realized and the negative impact will occur.
Testing Objectives
Define security goals through understanding the security requirements of the application
Identify the security threats
Validate that the security controls operate as expected
Eliminate the impact of security issues on the safety and integrity of the product
Guarantee that the product will function correctly under malicious attacks
Roles and Responsibilities
• Team lead
• Test process set-up and adjustment
• Test plan creation and test activities tracking
• Test designer
• Security models creation
• Test cases and test suites creation and updating
• Test engineer
• Run test cases
• Test results analysis and report creation
An Example Test Plan
Test items: 1. XXX …
Test cases: …
Test scripts:
run XXX 10 0
run XXX 126
run XXX 127
…
run ZZZ A5c
run ZZZ 126
…
An Example Test Plan
Test items:
1. Calc.exe
2. …
Tests:
Test 1: Input must be numeric
Test 2: Check division by zero
Test 3: Check for fractional results
…
Test cases:
Case 1: input O, 5c, \n
Case 2: divide 10 by 0
Case 3: divide 0 by 10
Case 4: divide 5 by 3
…
Test scripts:
open results.txt
run calc O 5c
run calc 5c \n
run calc 10 0
run calc 0 10
run calc 5 3
close results.txt
(a small driver sketch of this script appears below)
Q: How many tests and test cases should we propose for this program?
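A hedged sketch of that test script as a small driver: it assumes a program named calc is on the PATH, and the cases simply mirror the slide.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const char *cases[] = {
            "calc O 5c",     /* Case 1: non-numeric input */
            "calc 10 0",     /* Case 2: divide 10 by 0    */
            "calc 0 10",     /* Case 3: divide 0 by 10    */
            "calc 5 3",      /* Case 4: fractional result */
        };

        FILE *log = fopen("results.txt", "w");        /* "open results.txt"  */
        if (log == NULL)
            return 1;

        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            int status = system(cases[i]);            /* "run calc ..."      */
            fprintf(log, "%s -> exit status %d\n", cases[i], status);
        }

        fclose(log);                                  /* "close results.txt" */
        return 0;
    }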
Categories of Software Attacks
Software-dependency attacks
The environment within which the application works
User-interface attacks
Including buffer overflows
Attacks against the application's design
Trapdoors, debugging, test instrumentation
Attacks against the design's implementation
Test Plan for Software-dependency Attacks
Block access to libraries and third-party modules (libraries, code, etc.)
Manipulate the application's registry values
Force the application to use corrupt files
Manipulate and replace files that the application creates, reads from, writes to, or executes
Force the application to operate in low-memory, low-disk-space and low-network-availability conditions
Test Plan for User-Interface Attacks
Overflow input buffers
Malformed user input
Examine all common switches and options (incorrect input)
Explore escape characters, character sets, and commands
Test error handling
Test Plan for Design Attacks
Vulnerabilities at the design level:
Try common default and test account names and passwords
Expose unprotected test APIs
Connect to all ports
Use alternate routes to accomplish the same task
Force the system to reset values
Create loop conditions in any application that interprets script, code, or other user-supplied logic
Test Plan for Implementation Attacks
Time of check vs. time of use (TOCTOU): the code checks the state of a resource before using it, but the state can change between the check and the use (see the sketch below)
Create files with the same name as files protected with a higher classification
Look for temporary files and screen their contents for sensitive information
Side-channel analysis
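A minimal POSIX sketch of that TOCTOU window (report.txt is an illustrative name): the file checked with access() can be swapped, e.g., for a symlink to a protected file, before open() runs.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        if (access("report.txt", W_OK) == 0) {        /* time of check       */
            /* ... window in which an attacker can replace the file ...      */
            int fd = open("report.txt", O_WRONLY);    /* time of use         */
            if (fd >= 0) {
                write(fd, "data\n", 5);
                close(fd);
            }
        }
        return 0;
    }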
Testing Specific Types of Applications
• COM, DCOM, ActiveX and RPC apps
• Exercise the remote function calls with suitably malformed argument data
• File-based apps
• Try malformed data in files
• Try malformed file names and attributes
• Command-line apps
• Malformed argument contents, length, number of arguments
• Consider argument types
• Web apps
• Conduct bypassing, escalation, and sensitive data disclosure techniques
• Brute force; password complexity
Take Away Message
Security testing is different from functional testing
A security tester can use one or more testing techniques, including penetration testing, source code reviews and automated tools
Base your test plans on threat analysis
Test for vulnerabilities specific to different types of applications