
Static Program Analysis
Anders Møller and Michael I. Schwartzbach
February 10, 2022

Copyright © 2008–2021 Anders Møller and Michael I. Schwartzbach


Department of Computer Science
Aarhus University, Denmark
This work is licensed under the Creative Commons Attribution-NonCommercial- NoDerivatives 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Contents

1 Introduction
  1.1 Applications of Static Program Analysis
  1.2 Approximative Answers
  1.3 Undecidability of Program Correctness

2 A Tiny Imperative Programming Language
  2.1 The Syntax of TIP
  2.2 Example Programs
  2.3 Normalization
  2.4 Abstract Syntax Trees
  2.5 Control Flow Graphs

3 Type Analysis
  3.1 Types
  3.2 Type Constraints
  3.3 Solving Constraints with Unification
  3.4 Record Types
  3.5 Limitations of the Type Analysis

4 Lattice Theory
  4.1 Motivating Example: Sign Analysis
  4.2 Lattices
  4.3 Constructing Lattices
  4.4 Equations, Monotonicity, and Fixed Points

5 Dataflow Analysis with Monotone Frameworks
  5.1 Sign Analysis, Revisited
  5.2 Constant Propagation Analysis
  5.3 Fixed-Point Algorithms
  5.4 Live Variables Analysis
  5.5 Available Expressions Analysis
  5.6 Very Busy Expressions Analysis
  5.7 Reaching Definitions Analysis
  5.8 Forward, Backward, May, and Must
  5.9 Initialized Variables Analysis
  5.10 Transfer Functions

6 Widening
  6.1 Interval Analysis
  6.2 Widening and Narrowing

7 Path Sensitivity and Relational Analysis
  7.1 Control Sensitivity using Assertions
  7.2 Paths and Relations

8 Interprocedural Analysis
  8.1 Interprocedural Control Flow Graphs
  8.2 Context Sensitivity
  8.3 Context Sensitivity with Call Strings
  8.4 Context Sensitivity with the Functional Approach

9 Control Flow Analysis
  9.1 Closure Analysis for the λ-calculus
  9.2 The Cubic Algorithm
  9.3 TIP with First-Class Functions
  9.4 Control Flow in Object-Oriented Languages

10 Pointer Analysis
  10.1 Allocation-Site Abstraction
  10.2 Andersen's Algorithm
  10.3 Steensgaard's Algorithm
  10.4 Interprocedural Pointer Analysis
  10.5 Null Pointer Analysis
  10.6 Flow-Sensitive Pointer Analysis
  10.7 Escape Analysis

11 Abstract Interpretation
  11.1 A Collecting Semantics for TIP
  11.2 Abstraction and Concretization
  11.3 Soundness
  11.4 Optimality
  11.5 Completeness
  11.6 Trace Semantics

Index of Notation

Bibliography

Preface

Static program analysis is the art of reasoning about the behavior of computer programs without actually running them. This is useful not only in optimizing compilers for producing efficient code but also for automatic error detection and other tools that can help programmers. A static program analyzer is a program that reasons about the behavior of other programs. For anyone interested in programming, what can be more fun than writing programs that analyze programs?
As known from Turing and Rice, all nontrivial properties of the behavior of programs written in common programming languages are mathematically undecidable. This means that automated reasoning about software generally must involve approximation. It is also well known that testing, i.e., concretely running programs and inspecting the output, may reveal errors but generally cannot show their absence. In contrast, static program analysis can – with the right kind of approximations – check all possible executions of the programs and provide guarantees about their properties. One of the key challenges when developing such analyses is how to ensure enough precision and efficiency for them to be practically useful. For example, nobody will use an analysis designed for bug finding if it reports many false positives or if it is too slow to fit into real-world software development processes.
These notes present principles and applications of static analysis of programs. We cover basic type analysis, lattice theory, control flow graphs, dataflow analysis, fixed-point algorithms, widening and narrowing, path sensitivity, relational analysis, interprocedural analysis, context sensitivity, control flow analysis, several flavors of pointer analysis, and key concepts of semantics-based abstract interpretation. A tiny imperative programming language with pointers and first-class functions is subjected to numerous different static analyses illustrating the techniques that are presented.
We take a constraint-based approach to static analysis where suitable constraint systems conceptually divide the analysis task into a front-end that generates constraints from program code and a back-end that solves the constraints to produce the analysis results. This approach enables separating the analysis specification, which determines its precision, from the algorithmic aspects that are important for its performance. In practice when implementing analyses, we often solve the constraints on-the-fly, as they are generated, without representing them explicitly.
We focus on analyses that are fully automatic (i.e., not involving programmer guidance, for example in the form of loop invariants or type annotations) and conservative (sound but incomplete), and we only consider Turing-complete languages (like most programming languages used in ordinary software development).
The analyses that we cover are expressed using different kinds of constraint systems, each with their own constraint solvers:
• term unification constraints, with an almost-linear union-find algorithm (a minimal sketch of union-find follows this list),
• conditional subset constraints, with a cubic-time algorithm, and
• monotone constraints over lattices, with variations of fixed-point solvers.
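To give a flavor of the first of these, the following is a minimal sketch in Scala of union-find with path compression, the data structure behind unification-based solvers (Chapter 3). It is only an illustration, not the implementation from the accompanying web site, and a production version would also use union by rank or size to obtain the almost-linear bound.

import scala.collection.mutable

final class UnionFind[A] {
  private val parent = mutable.Map[A, A]()

  // Find the canonical representative of x, compressing the path along the way.
  def find(x: A): A = {
    val p = parent.getOrElseUpdate(x, x)
    if (p == x) x
    else {
      val root = find(p)
      parent(x) = root
      root
    }
  }

  // Merge the equivalence classes of x and y.
  def union(x: A, y: A): Unit = {
    val rx = find(x)
    val ry = find(y)
    if (rx != ry) parent(rx) = ry
  }
}

For example, after union("x", "y") and union("y", "z"), the three terms share a single representative, which is how equality constraints such as x = y are propagated by a unification solver.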
The style of presentation is intended to be precise but not overly formal. The readers are assumed to be familiar with advanced programming language concepts and the basics of compiler construction and computability theory.
The notes are accompanied by a web site that provides lecture slides, an implementation (in Scala) of most of the algorithms we cover, and additional exercises:
https://cs.au.dk/~amoeller/spa/

1 Introduction
Static program analysis aims to automatically answer questions about the possible behaviors of programs. In this chapter, we explain why this can be useful and interesting, and we discuss the basic characteristics of analysis tools.
1.1 Applications of Static Program Analysis
Static program analysis has been used since the early 1960s in optimizing compilers. More recently, it has proven useful also for bug finding and verification tools, and in IDEs to support program development. In the following, we give some examples of the kinds of questions about program behavior that arise in these different applications.
Analysis for program optimization Optimizing compilers (including just-in-time compilers in interpreters) need to know many different properties of the program being compiled, in order to generate efficient code. A few examples of such properties are:
• Does the program contain dead code, or more specifically, is function f unreachable from main? If so, the code size can be reduced.
• Is the value of some expression inside a loop the same in every iteration? If so, the expression can be moved outside the loop to avoid redundant computations.
• Does the value of variable x depend on the program input? If not, it could be precomputed at compile time.
• What are the lower and upper bounds of the integer variable x? The answer may guide the choice of runtime representation of the variable.
• Do p and q point to disjoint data structures in memory? That may enable parallel processing.

Analysis for program correctness The most successful analysis tools that have been designed to detect errors (or verify absence of errors) target generic correctness properties that apply to most or all programs written in specific programming languages. In unsafe languages like C, such errors sometimes lead to critical security vulnerabilities. In safer languages like Java, such errors are typically less severe, but they can still cause program crashes. Examples of such properties are:
• Does there exist an input that leads to a null pointer dereference, division- by-zero, or arithmetic overflow?
• Are all variables initialized before they are read?
• Are arrays always accessed within their bounds?
• Can there be dangling references, i.e., use of pointers to memory that has been freed?
• Does the program terminate on every input? Even in reactive systems such as operating systems, the individual software components, for example device driver routines, are expected to always terminate.
Other correctness properties depend on specifications provided by the programmer for the individual programs (or libraries), for example:
• Are all assertions guaranteed to succeed? Assertions express program specific correctness properties that are supposed to hold in all executions.
• Is function hasNext always called before function next, and is open always called before read? Many libraries have such so-called typestate correctness properties (a small illustration follows this list).
• Does the program throw an ActivityNotFoundException or a SQLiteException for some input?
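As a small illustration of a typestate property (this example uses the standard Scala Iterator API and is not part of the TIP setting used later in the notes), calling next without first checking hasNext violates the intended protocol:

val it = List(1, 2, 3).iterator

// Respects the protocol: next is only called after hasNext has succeeded.
while (it.hasNext)
  println(it.next())

// Violates the protocol: the iterator is now exhausted, so this call
// throws a NoSuchElementException at runtime.
println(it.next())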
With web and mobile software, information flow correctness properties have become extremely important:
• Can input values from untrusted users flow unchecked to file system operations? This would be a violation of integrity.
• Can secret information become publicly observable? Such situations are violations of confidentiality.
The increased use of concurrency (parallel or distributed computing) and event-driven execution models gives rise to more questions about program behavior:
• Are data races possible? Many errors in multi-threaded programs are caused by two threads using a shared resource without proper synchronization.
• Can the program (or parts of the program) deadlock? This is often a concern for multi-threaded programs that use locks for synchronization.

Analysis for program development Modern IDEs perform various kinds of program analysis to support debugging, refactoring, and program understanding. This involves questions such as:
• Which functions may possibly be called on line 117, or conversely, where can function f possibly be called from? Function inlining and other refactorings rely on such information.
• At which program points could x be assigned its current value? Can the value of variable x affect the value of variable y? Such questions often arise when programmers are trying to understand large codebases and during debugging when investigating why a certain bug appears.
• What types of values can variable x have? This kind of question often arises with programming languages where type annotations are optional or entirely absent, for example OCaml, JavaScript, or Python.
1.2 Approximative Answers
Regarding correctness, programmers routinely use testing to gain confidence
that their programs work as intended, but as famously stated by Dijkstra [Dij70]:
“Program testing can be used to show the presence of bugs, but never to show their absence.” Ideally we want guarantees about what our programs may do for all possible inputs, and we want these guarantees to be provided automatically, that is, by programs. A program analyzer is such a program that takes other programs as input and decides whether or not they have a certain property.
Reasoning about the behavior of programs can be extremely difficult, even for small programs. As an example, does the following program code terminate on every integer input n (assuming arbitrary-precision integers)?
while (n > 1) {
  if (n % 2 == 0)   // if n is even, divide it by two
    n = n / 2;
  else              // if n is odd, multiply by three and add one
    n = 3 * n + 1;
}
In 1937, Collatz conjectured that the answer is “yes”. As of 2020, the conjecture has been checked for all inputs up to 2⁶⁸, but nobody has been able to prove it for all inputs [Roo19].
Even straight-line programs can be difficult to reason about. Does the following program output true for some integer inputs?
x = input; y = input; z = input;
output x*x*x + y*y*y + z*z*z == 42;

This was an open problem from 1954 until 2019, when the answer was found after over a million hours of computing [BS19].
Rice’s theorem [Ric53] is a general result from 1953 which informally states that all interesting questions about the input/output behavior of programs (written in Turing-complete programming languages, which are the only languages we consider from this point on) are undecidable. This is easily seen for any special case. Assume for example the existence of an analyzer that decides if a variable in a program has a constant value in any execution. In other words, the analyzer is a program A that takes as input a program T, one of T’s variables x, and some value k, and decides whether or not x’s value is always equal to k whenever T is executed.
We could then exploit this analyzer to also decide the halting problem by using as input the following program where TM(j) simulates the j’th Turing machine on empty input:
x = 17;
if (TM(j)) x = 18;
Here x has a constant value 17 if and only if the j’th Turing machine does not halt on empty input. If the hypothetical constant-value analyzer A exists, then we have a decision procedure for the halting problem, which is known to be impossible [Tur37].
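The reduction can also be phrased as a purely hypothetical piece of Scala code; the analyzer alwaysEquals below cannot actually be implemented, which is exactly the point, and all names in this sketch are illustrative rather than taken from the notes:

// The impossible constant-value analyzer A: does `variable` always equal
// `value` whenever `program` is executed?
def alwaysEquals(program: String, variable: String, value: Int): Boolean =
  ??? // cannot exist

// If A existed, the halting problem would become decidable:
def tmHaltsOnEmptyInput(j: Int): Boolean = {
  val t = s"x = 17; if (TM($j)) x = 18;" // the program from above
  !alwaysEquals(t, "x", 17)              // TM(j) halts exactly when x is not always 17
}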
At first, this seems like a discouraging result. However, this theoretical limit does not prevent approximative answers. While it is impossible to build an analysis that would correctly decide a property for any analyzed program, it is often possible to build analysis tools that give useful answers for most realistic programs. As the ideal analyzer does not exist, there is always room for building more precise approximations (which is colloquially called the full employment theorem for static program analysis designers).
Approximative answers may be useful for finding bugs in programs, which may be viewed as a weak form of program verification. As a case in point, consider programming with pointers in the C language. This is fraught with dangers such as null dereferences, dangling pointers, leaking memory, and unintended aliases. Ordinary compilers offer little protection from pointer errors. Consider the following small program which may perform every kind of error:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

int main(int argc, char *argv[]) {
  if (argc == 42) {
    char *p,*q;
    printf("%s",p);
    q = (char *)malloc(100);
    p = (char *)malloc(100);
    p = (char *)malloc(100);
    strcat(p,q);
    assert(argc > 87);
  }
}
Standard compiler tools such as gcc -Wall detect no errors in this program. Testing might also miss the errors (for this program, no errors are encountered unless we happen to have a test case that runs the program with exactly 42 arguments). However, if we had even approximative answers to questions about null values, pointer targets, and branch conditions, then many of the above errors could be caught statically, without actually running the program.
Exercise 1.1: Describe all the pointer-related errors in the above program.
Ideally, the approximations we use are conservative (or safe), meaning that all errors lean to the same side, which is determined by our intended application. As an example, approximating the memory usage of programs is conservative if the estimates are never lower than what is actually possible when the programs are executed. Conservative approximations are closely related to the concept of soundness of program analyzers. We say that a program analyzer is sound if it never gives incorrect results (but it may answer maybe). Thus, the notion of soundness depends on the intended application of the analysis output, which may cause some confusion. For example, a verification tool is typically called sound if it never misses any errors of the kinds it has been designed to detect, but it is allowed to produce spurious warnings (also called false positives), whereas an automated testing tool is called sound if all reported errors are genuine, but it may miss errors.
Program analyses that are used for optimizations typically require soundness. If given false information, the optimization may change the semantics of the program. Conversely, if given trivial information, then the optimization fails to do anything.
Consider again the problem of determining if a variable has a constant value. If our intended application is to perform constant propagation optimization, then the analysis may only answer yes if the variable really is a constant and must answer maybe if the variable may or may not be a constant. The trivial solution is of course to answer maybe all the time, so we are facing the engineering challenge of answering yes as often as possible while obtaining a reasonable analysis performance.
[Figure: a program analyzer answers either “yes, definitely!” or “maybe, don’t know” to the question “Is the value of variable x always equal to k when T is executed?”]
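This yes/maybe discipline can be made concrete with a toy example. The following sketch (my own illustration in Scala, much simpler than the constant propagation analysis developed in Chapter 5) conservatively decides whether a small arithmetic expression always evaluates to a constant:

sealed trait Expr
case class Num(n: Int) extends Expr
case class Add(a: Expr, b: Expr) extends Expr
case object Input extends Expr // value unknown until runtime

sealed trait Answer
case class Yes(k: Int) extends Answer // definitely always equal to k
case object Maybe extends Answer      // don't know

def constValue(e: Expr): Answer = e match {
  case Num(n) => Yes(n)
  case Input  => Maybe
  case Add(a, b) =>
    (constValue(a), constValue(b)) match {
      case (Yes(x), Yes(y)) => Yes(x + y)
      case _                => Maybe // conservative: never answer yes unsoundly
    }
}

Here constValue(Add(Num(17), Num(25))) answers Yes(42), whereas constValue(Add(Num(17), Input)) must answer Maybe, even though a particular run might happen to produce 42.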
In the following chapters we focus on techniques for computing approximations that are conservative with respect to the semantics of the programming language. The theory of semantics-based abstract interpretation presented in Chapter 11 provides a solid mathematical framework for reasoning about analysis soundness and precision. Although soundness is a laudable goal in analysis design, modern analyzers for real programming languages often cut corners by sacrificing soundness to obtain better precision and performance, for example when modeling reflection in Java [LSS+15].
1.3 Undecidability of Program Correctness
(This section requires familiarity with the concept of universal Turing machines; it is not a prerequisite for the following chapters.)
The reduction from the halting problem presented above shows that some static analysis problems are undecidable. However, halting is often the least of the concerns programmers have about whether their programs work correctly. For example, if we wish to ensure that the programs we write cannot crash with null pointer errors, we may be willing to assume that the programs do not also have problems with infinite loops.
Using a diagonalization argument we can show a very strong result: it is impossible to build a static program analysis that can decide whether a given program may fail when executed. Moreover, this result holds even if the analysis is only required to work for programs that halt on all inputs.
