
FIT3173: Secure Software Development & Threat Modelling
Department of Software Systems and Cybersecurity Faculty of Information Technology

Learning Outcomes of This Lecture


• Understand the secure software development process
• Know the definition of threat modelling and how to apply threat modelling
• Identify threats using data flow diagrams
• Analyse design flaws and code errors during design and implementation
• Apply secure software development principles

Software Security (Recap)
• Software security is the idea of engineering software so that it continues to function correctly under malicious attack. – McGraw
• A program is said to be secure if the new states of the objects modified by the program are collectively in an acceptable state, even in an adversarial environment
• SAFE(TY) -> in acceptable state
• This has to happen every time you use that software
• DEPENDABILITY
• Or you need to guarantee that this will happen
• RELIABILITY

Software (in)Security – Facts
• No silver-bullet methods – crypto or special security features do not magically solve all problems
• software security ≠ security software
• “if you think your problem can be solved by cryptography, you do not understand cryptography and you do not understand your problem” –
• Software security is an emergent property of the entire system – quality software

Security is always a secondary concern
• The primary goal of software is to provide some functionality or services; managing the associated risks is a derived/secondary concern (a flawed approach)
• Add security once the functional requirements are satisfied
• Better approach: Build security in from the start Incorporate security- minded thinking into all phases of the development process
• There is often a trade-off/conflict between security, functionality & convenience, where security typically loses out

Functionality vs Security: Lost battles
• Operating systems (OS)
• huge, with a broad attack surface
• Programming languages
• with easy-to-use, efficient, but very insecure and error-prone mechanisms
• Web browsers
• with plug-ins (extensions) for various formats, javascript, ActiveX, VB Script…
• Email clients
• which automatically cope with all sorts of formats & attachments.

Problems are due to
• lack of awareness
• of threats, but also of what should be protected
• lack of knowledge
• of potential security problems, but also of solutions
• compounded by complexity
• software written in complicated languages, using large APIs, and running on huge infrastructure
• people choosing functionality over security

Vulnerability Cycle
1. Someone uncovers and discloses a new vulnerability in a piece of software.
2. Bad guys quickly analyse the information and use the vulnerability to launch attacks against systems or networks.
3. Simultaneously, good guys (we’ll include security folks who work for the vendor) start looking for a fix.
• They rally software development engineers in their respective organisations to analyse the vulnerability, develop a fix, test the fix in a controlled environment, and release the fix to the community of users who rely on the software.

Vulnerability Cycle
4. If the vulnerability is serious, or the attacks are dramatic, the various media make sure that the public knows that a new battle is underway.
• The software developers at the organization that produced the product (and the vulnerability!) are deluged with phone calls from the media, wanting to find out what is going on.
5. Lots of folks get very worried. Pundits, cranks, finger-pointers, and copycats do their thing.
6. If a knee-jerk countermeasure is available and might do some good, it will be deployed. (For example, CIOs may direct that all email coming into an enterprise be shut off.)
• This type of countermeasure results in numerous and costly business interruptions at companies that rely on the software for conducting their business operations.

Vulnerability Cycle
7. When a patch is ready, security folks who pay close attention to such matters obtain, test, and apply the patch.
• Everyday system administrators and ordinary business folks may get the word and follow through as well. Perhaps, for a lucky few, the patch will be installed as part of an automated update feature.
• But inevitably, many affected systems and networks will never be patched during the lifetime of the vulnerability – or will only receive the patch as part of a major version upgrade
• CERT Australia: the national computer emergency response team (https://www.cert.gov.au)
8. Security technicians examine related utilities and code fragments (as well as the new patch itself!) for similar vulnerabilities. At this point, the cycle can repeat.

Vulnerability Cycle

Software Development Process
• Common phases in software development:
• Requirements
• Design
• Implementation
• Testing/assurance
• Phases of development apply to the whole project, its individual components, and its refinements/iterations
• Where does security engineering fit in? All phases

Secure SW Development Cycle

Threat Modelling (Architectural Risk Analysis)

Threat Model
• The threat model makes explicit the adversary’s assumed powers
• Consequence: The threat model must match reality, otherwise the risk analysis of the system will be wrong
• The threat model is critically important
• If you are not explicit about what the attacker can do, how can you assess whether your design will repel that attacker?

Threat Modelling Basics
• What?
• A repeatable process to find and address all threats to your product
• When?
• The earlier you start, the more time to plan and fix
• Why?
• Requirement of secure software development
• How?

STRIDE Approach
• Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege
• Each threat maps to the security property it violates:

  Threat                   Security Property
  Spoofing                 Authentication
  Tampering                Integrity
  Repudiation              Non-repudiation
  Information disclosure   Confidentiality
  Denial of service        Availability
  Elevation of privilege   Authorisation

• To follow STRIDE, you decompose your system into relevant components, analyse each component for susceptibility to threats, and mitigate them

Example: Network User
• An (anonymous) user that can connect to a service via the network
• measure the size and timing of requests and responses
• run parallel sessions
• provide malformed inputs, malformed messages
• drop or send extra messages!
• Example attacks: SQL injection, XSS, CSRF, buffer overrun payloads, …
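A minimal sketch of the first attack on that list, assuming Python's standard sqlite3 module and an illustrative users table; the point is that the network user controls only a string, not the query structure, and parameterised queries keep it that way:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")

    def find_user_unsafe(username):
        # VULNERABLE: attacker-controlled input is spliced into the SQL text,
        # so input such as  ' OR '1'='1  changes the meaning of the whole query.
        return conn.execute(
            "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

    def find_user_safe(username):
        # Mitigation: a parameterised query keeps data separate from SQL code.
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

    print(find_user_unsafe("' OR '1'='1"))   # returns every row
    print(find_user_safe("' OR '1'='1"))     # returns nothing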

Example: Snooping User
• Internet user on the same network as other users of some service
• For example, someone connected to an unencrypted Wi-Fi network at a coffee shop
• Thus, can additionally
• Read/measure others’ messages,
• Intercept, duplicate, and modify messages
• Example attacks: Session hijacking (and other data theft), denial of service
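The standard mitigation for this attacker, which the Threat-driven Design slide below returns to, is an encrypted channel so the same-network snooper only ever sees ciphertext. A minimal sketch assuming Python's standard socket and ssl modules (the host name is illustrative):

    import socket, ssl

    # create_default_context() verifies the server certificate and host name by default
    context = ssl.create_default_context()

    def fetch_banner(host="example.org", port=443):
        with socket.create_connection((host, port)) as raw:
            with context.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
                return tls.recv(4096)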

Example: Co-located User
• Internet user on the same machine as other users of some service
• E.g., malware installed on a user’s laptop
• Thus, can additionally
• Read/write user’s files (e.g., cookies) and memory
• Snoop keypresses and other events
• Read/write the user’s display (e.g., to spoof)
• Example attacks: Password theft (and other credentials/secrets)

Threat-driven Design
• Network-only attackers implies message traffic is safe
• No need to encrypt communications
• This is what telnet remote login software assumed
• Snooping attackers means message traffic is visible
• So use encrypted communication protocols
• Co-located attacker can access local files, memory
• Cannot store unencrypted secrets, like passwords
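For the last point, a minimal sketch of keeping no unencrypted secrets on disk, assuming Python's standard hashlib, hmac and os modules (the iteration count is illustrative):

    import hashlib, hmac, os

    def hash_password(password):
        # Never store the plaintext secret: keep only a salted, slow hash.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password, salt, stored):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, stored)   # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))   # True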

Bad Model = Bad Security
• Any assumptions you make in your model are potential holes that the adversary can exploit!
• Mistaken assumptions
• Assumption: Encrypted traffic carries no information
• By analysing the size and distribution of messages, you can infer application state
• Assumption: Timing channels carry little information
• Timing measurements of earlier RSA implementations could eventually be used to reveal a remote SSL private key
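The timing assumption is also why secret comparisons in application code should not rely on ordinary equality. A minimal sketch using Python's hmac.compare_digest:

    import hmac

    def token_matches(supplied, expected):
        # Ordinary '==' returns as soon as a byte differs, so response times can
        # leak how much of the secret an attacker has guessed correctly.
        # hmac.compare_digest takes time independent of where the inputs differ.
        return hmac.compare_digest(supplied.encode(), expected.encode())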

Threat Modelling Process

Process Diagramming
• Use DFDs (Data Flow Diagrams)
• Include processes, data stores, data flows
• Include trust boundaries
• Diagrams per scenario may be helpful
• Update diagrams as product changes
• Enumerate assumptions, dependencies
• Number everything (if manual)

Diagrams: Trust Boundaries
• Add trust boundaries that intersect data flows
• Trust boundaries represent the border between trusted and untrusted elements
• Trust is complex. You might trust your mechanic with your car, your dentist with your teeth, and your banker with your money, but you probably don’t trust your dentist to change your spark plugs.
• Points/surfaces where an attacker can interject
• Machine boundaries, privilege boundaries, integrity boundaries are examples of trust boundaries
• Threads in a native process are often inside a trust boundary, because they share the same privileges, rights, identifiers and access
• Processes talking across a network always have a trust boundary
• May create a secure channel, but they’re still distinct entities
• Encrypting network traffic is an ‘instinctive’ mitigation
• But doesn’t address tampering or spoofing

Diagram Elements: Examples

Diagram Layers
• Context Diagram
• Very high-level; entire component / product / system
• Level 1 Diagram
• High level; single feature / scenario
• Level 2 Diagram
• Low level; detailed sub-components of features
• Level 3 Diagram
• More detailed: rare to need more layers, except in huge projects or when you’re drawing more trust boundaries

Context Diagram – the Highest Layer

Level 1 Diagram

Apply STRIDE to Each Element
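A small sketch (in Python, for illustration) of the per-element analysis; the element-to-threat mapping below is a commonly used convention and an assumption on my part, not something mandated by the STRIDE acronym itself:

    # Common STRIDE-per-element convention: which threat categories are usually
    # considered for each kind of DFD element (assumed mapping).
    STRIDE_PER_ELEMENT = {
        "external entity": ["Spoofing", "Repudiation"],
        "process":         ["Spoofing", "Tampering", "Repudiation",
                            "Information disclosure", "Denial of service",
                            "Elevation of privilege"],
        "data flow":       ["Tampering", "Information disclosure", "Denial of service"],
        "data store":      ["Tampering", "Repudiation",   # repudiation if it holds logs
                            "Information disclosure", "Denial of service"],
    }

    # Hypothetical Level 1 diagram elements, numbered as the process suggests.
    elements = [(1, "Browser", "external entity"),
                (2, "Web application", "process"),
                (3, "Browser -> Web application", "data flow"),
                (4, "Accounts database", "data store")]

    for number, name, kind in elements:
        for threat in STRIDE_PER_ELEMENT[kind]:
            print(f"{number}. {name}: consider {threat}")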

Mitigation Is the Point of Threat Modelling
• Mitigations are an area of expertise, such as networking, databases, or cryptography
• Amateurs make mistakes
• Mitigation failures will appear to work
• Until an expert looks at them
• We hope that expert will work for us
• When you need to invent mitigations, get expert help

Validating Threat Models
• Validate the whole threat model
• Does diagram match final code?
• Are threats enumerated?
• Minimum: STRIDE per element that touches a trust boundary
• Has Test / QA reviewed the model?
• Tester approach often finds issues with threat model or details
• Is each threat mitigated?
• Are mitigations done right?
• Did you check these before the Final Security Review?
• Shipping will be more predictable

Standard Mitigations

Tips for Finding a Good Model
• Compare against similar systems!
• What attacks does their design contend with?
• Understand past attacks and attack patterns!
• How do they apply to your system?
• Challenge assumptions in your design
• What happens if an assumption is untrue? What would a breach potentially cost you?
• How hard would it be to get rid of an assumption, allowing for a stronger adversary? What would that development cost?

Defining Security Requirements
• Many processes for deciding security requirements
• Example: General policy concerns
• Due to regulations/standards (HIPAA, GDPR, etc.)
• Due to organisational values (e.g., valuing privacy)
• Example: Policy arising from threat modelling
• Which attacks cause the greatest concern? Who are likely adversaries and what are their goals and methods?
• Which attacks have already occurred? Within the organisation, or elsewhere on related systems?

Abuse Cases
• Abuse cases illustrate security requirements
• Where use cases describe what a system should do, abuse cases describe what it should not do
• Example use case: The system allows bank managers to modify an account’s interest rate
• Example abuse case: A user is able to spoof being a manager and thereby change the interest rate on an account

Defining Abuse Cases
• Using attack patterns and likely scenarios, construct cases in which an adversary’s exercise of power could violate a security requirement
• Based on the threat model
• What might occur if a security measure was removed?
• Example: Co-located attacker steals password file and learns all user passwords
• Possible if password file is not encrypted
• Example: Snooping attacker replays a captured message, effecting a bank withdrawal
• Possible if messages have no nonce
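A minimal sketch of the missing mitigation for the second abuse case, assuming each request carries a fresh random nonce and the server remembers which nonces it has already accepted (names and in-memory storage are illustrative; a real system would also bound and persist this state):

    import secrets

    seen_nonces = set()   # illustrative only; a real server bounds and persists this

    def new_nonce():
        return secrets.token_hex(16)   # client attaches this to each withdrawal request

    def accept_withdrawal(nonce, amount):
        # A replayed capture carries a nonce the server has already seen, so the
        # snooping attacker cannot cause the withdrawal to happen twice.
        if nonce in seen_nonces:
            return False
        seen_nonces.add(nonce)
        return True

    n = new_nonce()
    print(accept_withdrawal(n, 100))   # True  (original message)
    print(accept_withdrawal(n, 100))   # False (replayed message rejected)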

Design Defects
• Recall that software defects consist of both flaws and bugs
• Flaws are problems in the design
• Bugs are problems in the implementation
• We avoid flaws during the design phase!
• According to McGraw, 50% of security problems are flaws!

Top Design Flaws
• Assume trust, rather than explicitly give it or award it
• Use an authentication mechanism that can be bypassed or tampered with
• Authorise without considering sufficient context
• Confuse data and control instructions, and process control instructions from untrusted sources
• Fail to validate data explicitly and comprehensively
• Fail to use cryptography correctly
• Fail to identify sensitive data and how to handle it
• Ignore the users
• Integrate external components without considering their attack surface
• Rigidly constrain future changes to objects and actors

Design vs. Implementation
• Many different levels of system design decisions
• Highest level: main actors (processes), interactions, and programming language(s) to use
• Next level: decomposition of an actor into modules/ components, identifying the core functionalities and how they work together
• Next level: how to implement data types and functions, e.g., purely functionally, or using parallelism, etc.

Principles in Secure Software Design
• Learn from mistakes
• E.g., how could it have been prevented?
• Minimise attack surface
• E.g., how many network connections are used?
• Use secure defaults
• E.g., password aging and complexity should be enabled by default, and should not be turned off (see the sketch after this list).
• Use defence in depth (Plan for failure)
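A minimal sketch of the secure-defaults idea in configuration code; the concrete numbers are illustrative assumptions, and the point is that protection is on unless someone deliberately weakens it:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PasswordPolicy:
        # Secure defaults: every field already has a protective value, so a caller
        # who configures nothing still gets aging and complexity checks enabled.
        min_length: int = 12
        max_age_days: int = 90
        require_mixed_classes: bool = True
        lockout_after_failures: int = 5

    policy = PasswordPolicy()          # safe without any extra configuration
    print(policy)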

Principles in Secure Software Design
• Use Least Privilege
• Give a program the rights necessary to do its job, but no more
• Malware injecting code into your program will have the same rights! (see the privilege-dropping sketch below)
• Separation of Privilege
• Backward compatibility is dangerous
• Assume external systems/entities are insecure
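A minimal sketch of least privilege on a POSIX system, assuming a service that needs root only at start-up (for example to bind a privileged port) and then permanently drops to an unprivileged account; the account name is an illustrative assumption:

    import os, pwd

    def drop_privileges(username="nobody"):
        # Give up root rights once they are no longer needed, so any code injected
        # into the process later runs with only the unprivileged identity.
        if os.getuid() != 0:
            return                      # already unprivileged
        entry = pwd.getpwnam(username)
        os.setgroups([])                # drop supplementary groups first
        os.setgid(entry.pw_gid)
        os.setuid(entry.pw_uid)         # order matters: gid before uid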

Principles in Secure Software Design
• Authorise after authentication
• Only receive control instructions from trusted sources
• Separate data and control instructions
• Ensure all data is validated
• Ensure cryptography is used correctly
• Identify all sensitive data and handle appropriately

Secure Software Design Principles
• Fail to a secure mode
• Don’t disclose more than you have to
• Don’t tell the attacker why the error occurred!
• Security Features != Secure Features
• E.g., Crypto in itself may be useless – it has to be used correctly and to protect the right things!
• Don’t depend on security via obscurity
• E.g., the Enigma encryption machine in WWII

Taxonomy of Software Coding Errors
• Input validation and representation
• API abuse
• Security features
• Time and state
• Error handling (or exception handling)
• Code quality
• Encapsulation
• Environment

Input Validation and Representation
• Generally, problems are caused by meta-characters, alternative encodings, and numeric representation.
• Major problems result from trusting input
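A minimal sketch of explicit validation: canonicalise alternative encodings first, then check the result against an allowlist instead of trying to enumerate bad meta-characters (the username rule is an illustrative assumption):

    import re, unicodedata

    USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

    def validate_username(raw):
        # Normalise first so the value that is checked is the value that is used,
        # closing the gap that alternative encodings would otherwise exploit.
        value = unicodedata.normalize("NFKC", raw).strip().lower()
        if not USERNAME_RE.fullmatch(value):
            raise ValueError("invalid username")
        return value

    print(validate_username("  Alice_01 "))   # 'alice_01'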

API Abuse
• An API is a contract between a caller and a callee.
• API abuses are caused by the caller failing to honour its end of this contract.
• E.g., A network library (callee) expecting the caller to supply trustworthy DNS information in a library call.

Security Features
• Software security is not the same as security software
• If you use security features to protect software, understand clearly how they interact with the software security – getting them right is difficult.

Time and State
• Distributed computing is about time and state.
• For modules (or processes) to communicate, state information must be shared.
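A classic time-and-state error is a check-then-use (TOCTOU) race on shared state such as the filesystem. A minimal sketch of the race and one way to make the check and the action a single atomic step (file names and modes are illustrative):

    import os

    def create_report_racy(path, data):
        # VULNERABLE: another process can create the file (or plant a symlink)
        # in the window between the existence check and the open.
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.write(data)

    def create_report_atomic(path, data):
        # Safer: O_CREAT | O_EXCL makes "create only if absent" one atomic call.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        with os.fdopen(fd, "wb") as f:
            f.write(data)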

Error Handling
• Want to try to break software? Just throw some junk input at the program and see what happens.
• Error handlers represent a class of contract with the programmer
• handle errors correctly and safely, if possible.
• if not safely, do not give out too much information (to possible attackers)
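A minimal sketch of that contract: handle the failure, record the detail internally, and tell the (possibly hostile) caller as little as possible; the request-processing function here is a hypothetical stand-in:

    import logging, uuid

    log = logging.getLogger("app")

    def process(request):
        raise RuntimeError("database connection refused")   # stand-in for real logic

    def handle_request(request):
        try:
            return process(request)
        except Exception:
            incident = uuid.uuid4().hex        # correlates the log entry and the reply
            log.exception("request failed, incident %s", incident)
            # The caller gets a generic message, not the exception text or traceback.
            return {"error": "internal error", "incident": incident}

    print(handle_request({"op": "withdraw"}))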

Code Quality
• Security is a subset of reliability
• If the software is completely reliable (free of defects), then it has to be secure too

Encapsulation
• is about drawing strong boundaries between objects, as well as setting barriers between them
• Boundaries allow us to write secure code because they keep things simple
• Trust and trust models are required for boundary crossing
• The most important boundaries are between classes with various methods

Environment
• The physical environment and the people who manage and access system resources such as programs, etc.
• Can be controlled with a well-defined security policy.
