
Modeling and Verifying Security Protocols with the Applied Pi Calculus and ProVerif
This article may be used only for the purpose of research, teaching, and/or private study. Commercial use or systematic downloading (by robots or other automatic processes) is prohibited without explicit Publisher approval.

Contents

1 Introduction
  1.1 Verifying security protocols
  1.2 Structure of ProVerif
  1.3 Comparison with previous surveys
2 The Protocol Specification Language
  2.1 Core language: syntax and informal semantics
  2.2 An example of protocol
  2.3 Core language: type system
  2.4 Core language: formal semantics
  2.5 Extensions
3 Verifying Security Properties
  3.1 Adversary
  3.2 Secrecy
  3.3 Correspondences
  3.4 Equivalences
  3.5 Usage heuristics
4 Link with the Applied Pi Calculus
5 Applications
  5.1 Case studies
  5.2 Extensions
  5.3 ProVerif as back-end
6 Conclusion
Acknowledgments
Appendices
A Proof of Theorem 3.5
B Proofs for Chapter 4
  B.1 Proof of Proposition 4.1
  B.2 Proof of Propositions 4.2 and 4.3
  B.3 Relating definitions of observational equivalence
References

Modeling and Verifying Security Protocols with the Applied Pi Calculus and ProVerif
Bruno Blanchet1
1INRIA Paris, France; Bruno.Blanchet@inria.fr
ABSTRACT
ProVerif is an automatic symbolic protocol verifier. It supports a wide range of cryptographic primitives, defined by rewrite rules or by equations. It can prove various security properties: secrecy, authentication, and process equivalences, for an unbounded message space and an unbounded number of sessions. It takes as input a description of the protocol to verify in a dialect of the applied pi calculus, an extension of the pi calculus with cryptography. It automatically translates this protocol description into Horn clauses and determines whether the desired security properties hold by resolution on these clauses. This survey presents an overview of the research on ProVerif.
Bruno Blanchet (2016), “Modeling and Verifying Security Protocols with the Applied Pi Calculus and ProVerif”, Vol. 1, No. 1, pp 1–135. DOI: 10.1561/3300000004.

1 Introduction
1.1 Verifying security protocols
The verification of security protocols has been an active research area since the 1990s. This topic is interesting for several reasons. Security protocols are ubiquitous: they are used for e-commerce, wireless networks, credit cards, e-voting, among others. The design of security protocols is notoriously error-prone. This point can be illustrated by attacks found against many published protocols. For instance, a famous attack was discovered by Lowe, 1996 against the Needham-Schroeder public-key protocol (Needham and Schroeder, 1978) 17 years after its publication. Attacks are also found against many protocols used in practice. Important examples are SSL (Secure Sockets Layer) and its successor TLS (Transport Layer Security), which are used for https connections. The first version dates back to 1994, and since then many attacks were discovered, fixed versions were developed, and new attacks are still regularly discovered (Beurdouche et al., 2015; Adrian et al., 2015). Moreover, security errors cannot be detected by functional testing, since they appear only in the presence of a malicious adversary. These errors can also have serious consequences. Hence, the formal verification or proof of protocols is particularly desirable.

1.1.1 Modeling security protocols
In order to verify protocols, two main models have been considered:
• In the symbolic model, often called the Dolev-Yao model and due to Needham and Schroeder, 1978 and Dolev and Yao, 1983, cryptographic primitives are considered as perfect blackboxes, modeled by function symbols in an algebra of terms, possibly with equations. Messages are terms on these primitives and the adversary can compute only using these primitives. This is the model usually considered by formal method practitioners.
• In contrast, in the computational model, messages are bitstrings, cryptographic primitives are functions from bitstrings to bitstrings, and the adversary is any probabilistic Turing machine. This is the model usually considered by cryptographers.
The symbolic model is an abstract model that makes it easier to build automatic verification tools, and many such tools exist: AVISPA (Armando et al., 2005), FDR (Lowe, 1996), Scyther (Cremers, 2008), Tamarin (Schmidt et al., 2012), for instance. The computational model is closer to the real execution of protocols, but the proofs are more difficult to automate; we refer the reader to (Blanchet, 2012a) and to Chapter 6 for some information on the mechanization of proofs in the computational model.
Most often, the relations between cryptographic primitives given in the symbolic model also hold in the computational model.¹ In this case, an attack in the symbolic model directly leads to an attack in the computational model, and a practical attack. However, the converse is not true in general: a protocol may be proved secure in the symbolic model, and still be subject to attacks in the computational model. For this reason, the computational soundness approach was introduced: it proves general theorems showing that security in the symbolic model implies security in the computational model, modulo additional assumptions. However, since the two models do not coincide, this approach typically requires strong assumptions on the cryptographic primitives (for instance, encryption has to hide the length of the messages) and on the protocol (for instance, absence of key cycles, in which a key is encrypted under itself; correctly generated keys, even for the adversary). This approach was pioneered by Abadi and Rogaway, 2002. This work triggered much research in this direction; we refer to (Cortier et al., 2011) for a survey.

¹Sometimes, one may also overapproximate the capabilities of the adversary in the symbolic model.
Even though the computational model is closer to reality than the symbolic model, we stress that it is still a model. In particular, it does not take into account side channels, such as timing and power consumption, which may give additional information to an adversary and enable new attacks. Moreover, one often studies specifications of protocols. New attacks may appear when the protocol is implemented, either because the specification has not been faithfully implemented, or because the attacks rely on implementation details that do not appear at the specification level.
In this survey, we focus on the verification of specifications of protocols in the symbolic model. Even though it is fairly abstract, this level of verification is relevant in practice as it enables the discovery of many attacks.
1.1.2 Target security properties
Security protocols can aim at a wide variety of security goals. The main security properties can be classified into two categories, trace properties and equivalence properties. We define these categories and mention two particularly important examples: secrecy and authentication. These are two basic properties required by most security protocols. Some protocols, such as e-voting protocols (Delaune et al., 2009), require more complex and specific security properties, which we will not discuss.
Trace and equivalence properties
Trace properties are properties that can be defined on each execution trace (each run) of the protocol. The protocol satisfies such a property when it holds for all traces. For example, the fact that some states are unreachable is a trace property.
Equivalence properties mean that the adversary cannot distinguish
two processes (that is, protocols). For instance, one of these processes can be the protocol under study, and the other one can be its specification. Then, the equivalence means that the protocol satisfies its specification. Therefore, equivalences can be used to model many subtle security properties. Several variants exist (observational equivalence, testing equivalence, trace equivalence) (Abadi and Gordon, 1999; Abadi and Gordon, 1998; Abadi and Fournet, 2001). Observational equivalence provides compositional proofs: if a protocol P is equivalent to P′, P can be replaced with P′ in a more complex protocol. However, the proof of equivalences is more difficult to automate than the proof of trace properties: equivalences cannot be expressed on a single trace, they require relations between traces (or processes).
Secrecy
Secrecy, or confidentiality, means that the adversary cannot obtain some information on data manipulated by the protocol. Secrecy can be formalized in two ways:
• Most often, secrecy means that the adversary cannot compute exactly the considered piece of data. In this survey, this property will simply be named secrecy, or when emphasis is needed, syntactic secrecy.
• Sometimes, one uses a stronger notion, strong secrecy, which means that the adversary cannot detect a change in the value of the secret (Abadi, 1999; Blanchet, 2004). In other words, the adversary has no information at all on the value of the secret.
The difference between syntactic secrecy and strong secrecy can be illustrated by a simple example: consider a piece of data for which the adversary knows half of the bits but not the other half. This piece of data is syntactically secret since the adversary cannot compute it entirely, but not strongly secret, since the adversary can see if one of the bits it knows changes. Syntactic secrecy cannot be used to express secrecy of data chosen among known constants. For instance, talking about syntactic secrecy of a boolean true or false does not make sense, because the adversary knows the constants true and false from the start. In this case, one has to use strong secrecy: the adversary must not be able to distinguish a protocol using the value true from the same protocol using the value false. These two notions are often equivalent (Cortier et al., 2007), for atomic data (data that cannot be split into several pieces, such as nonces, which are random numbers chosen independently at each run of the protocol) and for probabilistic cryptographic primitives. Syntactic secrecy is a trace property, while strong secrecy is an equivalence property.
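In ProVerif's concrete input language, the two notions are requested with different queries. A minimal sketch, assuming a free name s declared as private:

```proverif
free s: bitstring [private].

(* Syntactic secrecy: can the adversary compute s? *)
query attacker(s).

(* Strong secrecy: can the adversary detect a change in the value of s? *)
noninterf s.
```

ProVerif proves syntactic secrecy by showing that the fact attacker(s) is not derivable, and strong secrecy by a noninterference analysis.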
Authentication
Authentication means that, if a participant A runs the protocol apparently with a participant B, then B runs the protocol apparently with A, and conversely. One often requires that A and B also share the same values of the parameters of the protocol.
Authentication is generally formalized by correspondence properties (Woo and Lam, 1993; Lowe, 1997), of the form: if A executes a certain event e1 (for instance, A terminates the protocol with B), then B has executed a certain event e2 (for instance, B started a session of the protocol with A). There exist several variants of these properties. For instance, one may require that each execution of e1 corresponds to a distinct execution of e2 (injective correspondence) or, on the contrary, that if e1 has been executed, then e2 has been executed at least once
(non-injective correspondence). The events e1 and e2 may also include more or fewer parameters depending on the desired property. These properties are trace properties.
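In ProVerif, correspondences are written as queries over declared events. A sketch with hypothetical events beginA and endB, parameterized by the protocol data to be matched:

```proverif
event beginA(bitstring).
event endB(bitstring).

(* Non-injective: if endB(x) has been executed,
   then beginA(x) has been executed at least once *)
query x: bitstring; event(endB(x)) ==> event(beginA(x)).

(* Injective: each execution of endB(x) corresponds
   to a distinct execution of beginA(x) *)
query x: bitstring; inj-event(endB(x)) ==> inj-event(beginA(x)).
```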
1.1.3 Symbolic verification
Basically, to verify protocols in the symbolic model, one computes the set of terms (messages) that the adversary knows. If a message does not belong to this set, then this message is secret. The difficulty is that this set is infinite, for two reasons: the adversary can build terms of unbounded size, and the considered protocol can be executed any number of times. Several approaches can be considered to solve this problem:
• One can bound the size of messages and the number of executions of the protocols. In this case, the state space is finite, and one can apply standard model-checking techniques. This is the approach taken by FDR (Lowe, 1996) and by SATMC (Armando et al., 2014), for instance.
• If we bound only the number of executions of the protocol, the state space is infinite, but under reasonable assumptions, one can show that the problem of security protocol verification is decidable: protocol insecurity is NP-complete (Rusinowitch and Turuani, 2003). Basically, the non-deterministic Turing machine guesses an attack and polynomially checks that it is actually an attack against the protocol. There exist practical tools that can verify protocols in this case, using for instance constraint solving as in Cl-AtSe (Turuani, 2006) or extensions of model checking as in OFMC (Basin et al., 2005).
• When the number of executions of the protocol is not bounded, the problem is undecidable (Durgin et al., 2004) for a reasonable model of protocols. Hence, there exists no automatic tool that always terminates and solves this problem. However, there are several approaches that can tackle an undecidable problem:
– One can rely on help from the user. This is the approach taken for example by Isabelle (Paulson, 1998), which is an interactive theorem prover, Tamarin (Schmidt et al., 2012), which just requires the user to give a few lemmas to help the tool, or Cryptyc (Gordon and Jeffrey, 2004), which relies on typing with type annotations.
– One can have incomplete tools, which sometimes answer “I don’t know” but succeed on many practical examples. For instance, one can use abstractions based on tree-automata to represent the knowledge of the adversary (Monniaux, 2003; Boichut et al., 2006).
– One can allow non-termination, as in Maude-NPA (Meadows, 1996; Escobar et al., 2006).
The symbolic protocol verifier ProVerif represents protocols by Horn clauses, in the line of ideas by Weidenbach, 1999: Horn clauses are first order logical formulas, of the form F1 ∧ · · · ∧ Fn ⇒ F , where F1, . . . , Fn, F are facts. This representation introduces abstractions. It is still more precise than tree-automata because it keeps relational information on messages. However, using this approach, termination is not guaranteed in general.
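For instance, the capability of the adversary to decrypt a ciphertext when it knows the corresponding key is reflected by a clause of the following shape (a sketch; the precise clause generation is the subject of Chapter 3):

```proverif
attacker(senc(m, k)) ∧ attacker(k) ⇒ attacker(m)
```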
Let us compare ProVerif with some other tools that verify protocol specifications in the symbolic model. AVISPA (Armando et al., 2005) is a platform that offers four different protocol verification back-ends: SATMC (Armando et al., 2014) for bounded attack depth (which implies bounded sessions and messages), Cl-AtSe (Turuani, 2006) and OFMC (Basin et al., 2005; Mödersheim and Viganò, 2009) for bounded sessions, and TA4SP (Boichut et al., 2006) for unbounded sessions. In contrast, ProVerif focuses only on the case of unbounded sessions, and the Horn-clause abstraction it uses is more precise than the tree-automata abstraction of TA4SP, as mentioned above. SATMC supports basic cryptographic primitives that can be defined by rewrite rules. Cl-AtSe additionally supports exclusive or, Diffie-Hellman exponentiation (including equations of the multiplicative group modulo p), and associative concatenation. OFMC supports cryptographic primitives defined by finite equational theories (theories under which every term has a finite equivalence class) and subterm convergent theories (theories generated by rewrite rules that are convergent, that is, terminating and confluent, and whose right-hand side is either a subterm of the left-hand side or a constant). However, in order to guarantee termination, it bounds the number of instantiations of variables. TA4SP handles algebraic properties of exponentiation and exclusive or. ProVerif supports cryptographic primitives defined by rewrite rules and by equations that satisfy the finite variant property (Comon-Lundh and Delaune, 2005), which excludes associativity. AVISPA focuses on trace properties, while ProVerif can also verify some equivalence properties.
Maude-NPA (Meadows, 1996; Escobar et al., 2006) relies on narrowing in rewrite systems. It is fully automatic and supports an unbounded number of sessions, but in contrast to ProVerif, it does not make any abstraction. Hence, it is sound and complete, but may not terminate. It supports cryptographic primitives defined by convergent rewrite rules plus associativity and commutativity (Escobar et al., 2007), as well as homomorphic encryption (Escobar et al., 2011), while ProVerif does not support associativity nor homomorphic encryption. It initially focused on reachability properties and was recently extended to prove some equivalences (Santiago et al., 2014), using the same idea as ProVerif (see §3.4).
Scyther (Cremers, 2008) is fully automatic, always terminates, and can provide three different results: verification for an unbounded number of sessions, attack, or verification for a bounded number of sessions. It supports only a fixed set of cryptographic primitives (symmetric and asymmetric encryption and signatures). It proves secrecy and authentication properties. A version named scyther-proof generates Isabelle proofs of security of the verified protocols (Meier et al., 2010).
Tamarin (Schmidt et al., 2012) verifies protocols for an unbounded number of sessions, but often relies on the user to provide some lemmas in order to guide the proof. It initially proved trace properties expressed in temporal first-order logic, and was recently extended to prove some equivalences (Basin et al., 2015), using the same idea as ProVerif. It supports cryptographic primitives defined by subterm convergent equations, Diffie-Hellman exponentiation, bilinear pairings, and associative and commutative operators (Schmidt et al., 2014). It also supports mutable state and loops; the lemmas provided by the user basically give loop invariants. Protocols in Tamarin are specified as multiset rewriting systems; Kremer and Künnemann, 2014 wrote a translator to Tamarin from an extension of the applied pi calculus with state.
The rest of this survey focuses on ProVerif. We refer the reader to (Blanchet, 2012b) for a more complete survey of security protocol verification.

1.2 Structure of ProVerif

Figure 1.1: Structure of ProVerif. (Diagram: the protocol, written in the pi calculus with cryptography, and the properties to prove, such as secrecy and authentication, are fed to an automatic translator that produces Horn clauses and derivability queries; resolution with selection then either finds no derivation, in which case the property is true, or finds a derivation, that is, an attack at the Horn clause level; attack reconstruction then either produces an attack at the pi calculus level, showing that the property is false, or ends with a false attack or the answer “I don’t know”.)

The structure of ProVerif is represented in Figure 1.1. ProVerif takes as input a model of the protocol in an extension of the pi calculus with cryptography, similar to the applied pi calculus (Abadi and Fournet, 2001; Abadi et al., 2016) and detailed in the next chapter. It supports a wide variety of cryptographic primitives, modeled by rewrite rules or by equations. ProVerif also takes as input the security properties that we want to prove. It can verify various security properties, including secrecy, authentication, and some observational equivalence properties. It automatically translates this information into an internal representation by Horn clauses: the protocol is translated into a set of Horn clauses, and the security properties to prove are translated into derivability queries on these clauses. ProVerif uses an algorithm based on resolution
with free selection to determine whether a fact is derivable from the clauses. If the fact is not derivable, then the desired security property is proved. If the fact is derivable, then there may be an attack against the considered property: the derivation may correspond to an attack, but it may also correspond to a “false attack”, because the Horn clause representation makes some abstractions. These abstractions are key to the verification of an unbounded number of sessions of protocols.
Chapter 2 presents the protocol specification language of ProVerif. Chapter 3 explains how ProVerif verifies the desired security properties. Chapter 4 relates the protocol specification language of ProVerif to the applied pi calculus (Abadi and Fournet, 2001; Abadi et al., 2016). Finally, Chapter 5 summarizes some applications of ProVerif and Chapter 6 concludes.
1.3 Comparison with previous surveys
Previous surveys on ProVerif (Blanchet, 2011; Blanchet, 2014) focus only on secrecy. The general protocol verification survey (Blanchet, 2012b) also outlines the verification of secrecy in ProVerif. Previous journal papers present individual features of the tool: secrecy (Abadi and Blanchet, 2005a), correspondences (Blanchet, 2009), and equivalences (Blanchet et al., 2008). Our habilitation thesis (Blanchet, 2008b), in French, presents a general survey of ProVerif that includes secrecy, correspondences, and equivalences.
This survey is the first one to present all these features in English, in a common framework. Moreover, it includes features that never appeared in previous surveys: the extended destructors of (Cheval and Blanchet, 2013), the proof of equivalences using swapping (Blanchet and Smyth, 2016), as well as the link with the applied pi calculus
(Chapter 4), which was never published before.

2 The Protocol Specification Language
This chapter presents the protocol specification language used by ProVerif, by giving its syntax and semantics. Our presentation is based on earlier ones (Abadi and Blanchet, 2005a; Blanchet, 2009; Blanchet, 2014), with some features added: extended destructors (Cheval and Blanchet, 2013), support for equations and phases (Blanchet et al., 2008), enriched terms, pattern-matching, and table of keys (Blanchet et al., 2016).
2.1 Core language: syntax and informal semantics
Figure 2.1 gives the syntax of terms (data, messages), expressions (computations on terms), and processes (programs) of the input language of ProVerif. Terms are typed. The types T include channel for channels, bool for boolean values, and bitstring for bitstrings. The user may declare other types. ProVerif checks that processes given by the user are well-typed. This is helpful in order to detect mistakes in protocol specifications. However, by default, it ignores types for the verification of security properties. This behavior allows the adversary to bypass the type system, therefore enabling the detection of type-flaw attacks (Heather et al., 2000). For this reason, we remove types in the definition of the formal semantics and in the verification of security properties. If desired, ProVerif can also be configured to take into account types in the verification of security properties. We focus on the default configuration in this survey.

M, N ::=                         terms
    x, y, z                      variable
    a, b, c, k, s                name
    f(M1, . . . , Mn)            constructor application

D ::=                            expressions
    M                            term
    h(D1, . . . , Dn)            function application
    fail                         failure

P, Q ::=                         processes
    0                            nil
    out(N, M); P                 output
    in(N, x : T); P              input
    P | Q                        parallel composition
    !P                           replication
    new a : T ; P                restriction
    let x : T = D in P else Q    expression evaluation
    if M then P else Q           conditional

Figure 2.1: Syntax of the core language
The identifiers a, b, c, k, and similar ones range over names, and x, y, and z range over variables. Names represent atomic data, such as keys and nonces. Variables can be substituted by terms. Names and variables are declared with their type. The syntax also assumes a set of function symbols for constructors and destructors; we often use f for a constructor, g for a destructor, and h for a constructor or a destructor. Function symbols are declared with types: h(T1, . . . , Tn) : T means that the function h takes n arguments of types T1, . . . , Tn respectively, and returns a result of type T.
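In ProVerif's concrete input syntax, such a declaration might be written as follows (a sketch; the names key and senc are the ones used in the examples below):

```proverif
type key.

(* senc takes a bitstring and a key and returns a bitstring,
   i.e., senc(bitstring, key) : bitstring *)
fun senc(bitstring, key): bitstring.
```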
Constructors are used to build terms. Therefore, terms are variables, names, and constructor applications of the form f(M1, . . . , Mn). For instance, symmetric encryption is typically represented by a constructor senc, and the term senc(c, k) represents the encryption of c under the key k. On the other hand, destructors do not appear in terms, but manipulate terms in expressions. They are functions on terms that processes can apply, via the expression evaluation construct. A destructor g is defined by a finite ordered list of rewrite rules def(g) of the form g(U1, . . . , Un) → U where U1, . . . , Un, U are may-fail terms, that is, terms M, the constant fail, which represents the failure of a computation, or may-fail variables u. Hence, we have two kinds of variables: ordinary variables, or simply variables, which can be substituted by terms, and may-fail variables, which can be substituted by may-fail terms. In the rewrite rules, U1, . . . , Un, U do not contain names, and the variables of U also occur in U1, . . . , Un. The types of U1, . . . , Un must be the types of the arguments of g, and the type of U must be the type of the result of g. Destructors occur in expressions D, which evaluate either to a term M or to the special constant fail. To evaluate g(D1, . . . , Dn), we first evaluate D1, . . . , Dn, which yields U1, . . . , Un, where each Ui is either a term M or the constant fail. Then, we consider the first rewrite rule g(U1′, . . . , Un′) → U′ in def(g). If there exists a substitution σ such that σUi′ = Ui for all i ∈ {1, . . . , n}, then this rewrite rule applies and g(D1, . . . , Dn) evaluates to σU′. Otherwise, the first rewrite rule does not apply, and we consider the second one, and so on. If no rewrite rule applies, the evaluation of g(D1, . . . , Dn) fails: it evaluates to the constant fail.
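In ProVerif's concrete syntax, a destructor and its ordered list of rewrite rules are declared with reduc, later rules being introduced by otherwise. A sketch, with a hypothetical destructor eq (the text defines a similar destructor equal below):

```proverif
(* First rule: eq(x, x) rewrites to true.
   The otherwise rule is tried only when the first rule does not apply. *)
reduc forall x: bitstring;
        eq(x, x) = true
otherwise forall x: bitstring, y: bitstring;
        eq(x, y) = false.
```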
Using constructors and destructors, we can represent data structures, such as tuples, and cryptographic operations, for instance as follows:
• tupleT1,...,Tn(M1, . . . , Mn) is the tuple of the terms M1, . . . , Mn, where tupleT1,...,Tn is a constructor that takes arguments of types T1, . . . , Tn and returns a result of type bitstring, that is, the type declaration of tupleT1,...,Tn is tupleT1,...,Tn(T1, . . . , Tn) : bitstring. (We often abbreviate tupleT1,...,Tn(M1, . . . , Mn) to (M1, . . . , Mn).) The n projections are destructors ithT1,...,Tn for i ∈ {1, . . . , n}, defined by

ithT1,...,Tn(tupleT1,...,Tn(x1, . . . , xn)) → xi .
• senc(M, N) is the symmetric (shared-key) encryption of the message M under the key N, where senc(bitstring, key) : bitstring is a constructor. The corresponding destructor sdec(bitstring, key) : bitstring is defined by

sdec(senc(x, y), y) → x .

Thus, sdec(M′, N) returns the decryption of M′ if M′ is a message encrypted under N. Otherwise, it fails.
• In order to represent asymmetric (public-key) encryption, we may use two constructors pk(skey) : pkey and aenc(bitstring,pkey) : bitstring, where skey is the type of secret keys and pkey is the type of public keys: pk(M) builds a public key from a secret key M and aenc(M,N) encrypts M under the public key N. The corresponding destructor adec(bitstring, skey) : bitstring is defined by
adec(aenc(x, pk(y)), y) → x . (2.1)
It decrypts the ciphertext aenc(x,pk(y)) using the secret key y corresponding to the public key pk(y) used to encrypt this ciphertext.
• Cryptographically, a secure asymmetric encryption scheme must be probabilistic. We can represent a probabilistic encryption scheme by adding a third argument to the constructor aenc to represent the randomness. This constructor becomes aenc(bitstring, pkey, rand) : bitstring and the rewrite rule for adec becomes
adec(aenc(x, pk(y), z), y) → x . (2.2)
For protocols that do not test equality of ciphertexts, for secrecy and authentication, one can use the simpler, deterministic model of encryption (Cortier et al., 2006). However, for equivalence properties, or for protocols that test equality of ciphertexts, using probabilistic encryption does make a difference. Probabilistic versions can also be defined for symmetric encryption (above) and for digital signatures (below) (Blanchet et al., 2016, §4.2.5); we omit such models here.
• For digital signatures, we may use two constructors pk(skey) : pkey as above and sign(bitstring,skey) : bitstring, where sign(M, N) represents the message M signed with the signature key N. The corresponding destructors check(bitstring, pkey) : bitstring and getmess(bitstring) : bitstring are defined by:
check(sign(x, y), pk(y)) → x , (2.3)
getmess(sign(x, y)) → x . (2.4)
The destructor check verifies that the signature sign(x,y) is a correct signature under the secret key y, using the public key pk(y). When the signature is correct, it returns the message that was signed. The destructor getmess always returns the message that was signed. (This encoding of signatures assumes that the message that was signed can be recovered from the signature.)
• We may represent a one-way hash function by the constructor h(bitstring) : bitstring. There is no corresponding destructor; so we model that the term M cannot be retrieved from its hash h(M).
• Boolean constants true : bool and false : bool are nullary constructors, and equality between terms may be defined as a destructor equal(T, T) : bool with rewrite rules:

equal(x, x) → true ,
equal(x, y) → false . (2.5)

The second rewrite rule applies only when the first one does not apply, that is, when x ̸= y. We will write M = N for equal(M, N). Disequality as well as boolean operations (negation not, conjunction &&, disjunction ||) may also be defined as destructors. For instance, conjunction can be defined as a destructor and(bool, bool) : bool with rewrite rules:

and(true, u) → u ,
and(x, u) → false .
This destructor returns its second argument when its first argument is true, and false when its first argument is a term different from true. (We consider all boolean terms that are not true as false.) The variable u is a may-fail variable, so that the rewrite rules apply even if the second argument fails. Therefore, and(false, fail) evaluates to false. Hence, and may not fail even if its second argument fails. Intuitively, the second argument need not be evaluated when the first argument is not true. If we replaced u with an ordinary variable y, and would fail when one of its arguments fails, corresponding to the intuition that we would evaluate both arguments before computing the conjunction. We write M && N for and(M, N) for readability.
We can also define an if-then-else destructor ifthenelse(bool, T, T) : T with rewrite rules:

ifthenelse(true, u, u′) → u ,
ifthenelse(x, u, u′) → u′ .

This destructor returns its second argument when its first argument is true. It returns its third argument when its first argument is a term different from true. Otherwise, that is, when its first argument fails, it fails. Similarly to the conjunction above, the variables u and u′ are may-fail variables, so that the rewrite rules apply even if the second or third argument fails. As a consequence, ifthenelse(true, M, fail) and ifthenelse(false, fail, M) both evaluate to M. Hence, ifthenelse may not fail even if its second or third argument fails. Intuitively, the third argument need not be evaluated when the first argument is true, and the second argument need not be evaluated when the first argument is not true.
• Type-converter functions are unary functions, whose only goal is to convert a term from one type to another, so that it has the right type to be passed to other functions. Type-converter functions are declared with the annotation typeConverter so that ProVerif can recognize them. Since it ignores types for the verification of security properties, ProVerif removes type-converter functions at this stage.
For instance, we may define a type-converter function k2b(key) : bitstring, which converts terms from type key to type bitstring, so that keys can be encrypted or signed by the functions defined above. The inverse function b2k(bitstring) : key is defined by the rewrite rule
b2k(k2b(x)) → x .
The function k2b always succeeds: all keys are bitstrings; while the function b2k succeeds only when its argument is of the form k2b(M), that is, the terms k2b(M) are the only terms of type bitstring that are keys and can be converted to type key.
Thus, destructors defined by rewrite rules support many of the opera- tions common in security protocols. This way of specifying cryptographic primitives also has limitations, though: for example, modular exponen- tiation cannot be directly represented in this framework. To overcome this limitation, ProVerif supports equations, as explained in §2.5.1.
Most constructs of processes in the syntax of Figure 2.1 come from the pi calculus (Milner et al., 1992).
• The nil process 0 does nothing.
• The input process in(N,x : T);P inputs a message on channel N,
and executes P with x bound to the input message.
• The output process out(N,M);P outputs the message M on the
channel N and then executes P.
Here, we use an arbitrary term N to represent a channel: N can be a name, a variable, or a constructor application. The calculus is monadic (in that the messages are terms rather than tuples of terms), but a polyadic calculus can be simulated since tuples are terms. It is also synchronous (in that a process P is executed after the output of a message). We may omit P when it is 0.

• The process P | Q is the parallel composition of P and Q.
• The replication !P represents an unbounded number of copies of P in parallel. It makes it possible to represent an unbounded number of executions of the protocol.
• The restriction new a : T;P creates a new name a of type T, and then executes P . It can model the creation of a fresh key or nonce.
• The process let x : T = D in P else Q tries to evaluate D; if D evaluates to a term M, then x is bound to M and P is executed; if the evaluation of D fails, then Q is executed. The type T may be omitted when it can be determined from D, that is, when D is not fail.
• The conditional if M then P else Q executes P if M is true (or is a variable bound to true); it executes Q if M is different from true.
We may omit an else branch when it consists of 0.
The name a is bound in the process new a : T;P. The variable x is
bound in P in the processes in(N, x : T); P and let x : T = D in P else Q. Processes are considered equal modulo renaming of bound names and variables. We write fn(P) and fv(P) for the sets of names and variables free in P, respectively. A process is closed if it has no free variables; it may have free names. The protocol to verify is represented by a closed process P0. Free names are declared with their type. They may be public or private: the adversary initially has access to the public free names, but not to the private ones. We write Npub for the set of public free names and Npriv for the set of private free names, so we have fn(P0) ⊆ Npub ∪ Npriv. (One may declare free names not present in P0, but used for instance in security queries.) Similarly, function symbols may be public or private: the adversary has access to the public function symbols, but not to the private ones.
We write {M1 /x1 , . . . ,Mn /xn } for the substitution that replaces x1, . . . , xn with M1, . . . , Mn, respectively. When D is some term, expression, or process, we write D{M1/x1,…,Mn /xn} for the result of applying this substitution to D, but we write σD when the substitution is

20 The Protocol Specification Language
simply denoted σ. Except when stated otherwise, substitutions always map variables (not names) to terms. Furthermore, substitutions never substitute fail or a may-fail variable for an ordinary variable.
The calculus of ProVerif resembles the applied pi calculus (Abadi and Fournet, 2001; Abadi et al., 2016). Both calculi are extensions of the pi calculus with (fairly arbitrary) functions on terms. However, there are also important differences between these calculi. The first one is that ProVerif uses destructors in addition to the equational theories of the applied pi calculus, but does not support all equational theories. (§2.5.1 explains how ProVerif handles equations.) The second difference is that ProVerif has a built-in error-handling construct (the else branch of the expression evaluation), whereas in the applied pi calculus the error-handling must be done “by hand”. For instance, in ProVerif, the process let x = sdec(M, k) in P else out(c, error) outputs the error message when the decryption of M fails. In the applied pi calculus, decryption always succeeds, but may return junk, and one has to perform an additional test if one wants to check that M is a ciphertext under the key k. Chapter 4 provides a more detailed comparison with the applied pi calculus.
2.2 An example of protocol
We use as a running example a simplified version of the Denning- Sacco key distribution protocol (Denning and Sacco, 1981), omitting certificates and timestamps:
Message 1. A → B : aenc(sign(k2b(k),sskA),pkB) Message 2. B → A : senc(s, k)
This protocol is illustrated in Figure 2.2. It involves two principals A and B. The key sskA is the secret signature key of A and spkA its public signature-verification key. (We add an s prefix for signature keys to distinguish them from encryption keys.) The key skB is the secret decryption key of B and pkB its public encryption key. The key k is a fresh session key created by A. A sends this key, signed with its private key sskA, and encrypted under the public key of B, pkB. The key k is first converted to a bitstring by k2b, so that it has the right type for

Figure 2.2: An example of protocol. A (Alice) creates a fresh key k and sends aenc(sign(k2b(k), sskA), pkB) to B (Bob), who replies with senc(s, k).
signing it. When B receives this message, B decrypts it and assumes, seeing the signature, that the key k has been generated by A. Then B sends a secret s encrypted under k. Only A should be able to decrypt the message and get the secret s. (The second message is not really part of the protocol; we use it to check if the key k can be used to exchange secrets between A and B. In fact, there is an attack against this protocol (Abadi and Needham, 1996), so s will not remain secret. The attack is explained in Example 3.2.)
This protocol can be encoded by the following process:
P0 = new sskA : skey; new skB : skey; let spkA = pk(sskA) in let pkB = pk(skB) in out(c,spkA);out(c,pkB); (PA(sskA,pkB) | PB(skB,spkA))
PA(sskA,pkB) = ! new k : key; out(c,aenc(sign(k2b(k),sskA),pkB)); in(c, x : bitstring); let z = sdec(x, k) in 0
PB(skB,spkA) = ! in(c,y : bitstring);let y′ = adec(y,skB) in let xk = b2k(check(y′, spkA)) in out(c, senc(s, xk))
Such a process can be given as input to ProVerif. This process first creates the secret keys sskA and skB, computes the corresponding public keys spkA and pkB, and sends these keys on the public channel c, so that the adversary has these public keys. Then, it runs the processes PA and PB in parallel. These processes correspond respectively to the roles of A and B in the protocol. They both start with a replication, which makes it possible to model an unbounded number of sessions of the protocol.
The process PA executes the role of A: it creates a fresh key k, converts it to a bitstring by k2b, signs it with its secret key sskA, then encrypts this message under pkB, and sends the obtained message on channel c. PA then expects the second message of the protocol on channel c, stores it in x and decrypts it. If decryption succeeds, the result (normally the secret s) is stored in z.
The process PB receives the first message of the protocol on channel c, stores it in y, decrypts it with skB, and verifies the signature with spkA. If these verifications succeed, B believes that xk is a key shared between A and B, and it sends the secret s encrypted under xk. If the protocol is correct, s should remain secret. In this example, there are two free names, c and s; c is public and s is private, so Npub = {c} and Npriv = {s}.
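To make the message flow concrete, the following Python sketch replays one honest session symbolically. It is an illustration with assumed tuple-based terms, not ProVerif input and not how ProVerif executes processes; each destructor returns "fail" when its rewrite rule does not apply.

```python
# A symbolic run of one honest session of the simplified Denning-Sacco protocol.

FAIL = "fail"

def pk(sk): return ("pk", sk)
def sign(m, ssk): return ("sign", m, ssk)
def aenc(m, pub): return ("aenc", m, pub)
def senc(m, key): return ("senc", m, key)
def k2b(key): return ("k2b", key)

def adec(c, sk):                    # adec(aenc(x, pk(y)), y) -> x
    return c[1] if c[0] == "aenc" and c[2] == pk(sk) else FAIL

def check(sig, spk):                # check(sign(x, y), pk(y)) -> x
    return sig[1] if sig[0] == "sign" and pk(sig[2]) == spk else FAIL

def b2k(b):                         # b2k(k2b(x)) -> x
    return b[1] if b[0] == "k2b" else FAIL

def sdec(c, key):                   # sdec(senc(x, y), y) -> x
    return c[1] if c[0] == "senc" and c[2] == key else FAIL

sskA, skB, k, s = "sskA", "skB", "k", "s"

# Message 1: A signs the fresh key and encrypts it under B's public key.
msg1 = aenc(sign(k2b(k), sskA), pk(skB))

# B decrypts, verifies A's signature, converts back to a key ...
xk = b2k(check(adec(msg1, skB), pk(sskA)))
# ... and sends the secret s under the established key (message 2).
msg2 = senc(s, xk)

assert sdec(msg2, k) == s           # A recovers the secret
assert adec(msg1, sskA) == FAIL     # wrong decryption key: evaluation fails
```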
This model of the protocol is weak, because A and B talk only to each other: they do not interact with other, possibly dishonest participants. We can strengthen the model as follows. We replace the process PA with the following process:
PA(sskA,pkB) = ! in(c,xpkB : pkey);new k : key; out(c, aenc(sign(k2b(k), sskA), xpkB ));
in(c, x : bitstring); let z = sdec(x, k) in 0
This process PA first receives on the public channel c the key xpkB , which is the public key of A’s interlocutor in the protocol. This message is not part of the protocol; it allows the adversary to choose with whom A is going to execute a session. In a standard session of the protocol, this key is pkB, but the adversary can also choose another key, for instance one of his own keys, so that A can interact with the adversary playing the role of a dishonest participant. Then PA executes the role of A as before, with the key xpkB instead of pkB. The process PB does not need to be modified because B sends the second message senc(s, k) only if its interlocutor is the honest participant A. (Otherwise, the secret would obviously be leaked.) Hence the signature is verified with the key spkA of A and not with an arbitrary key chosen by the adversary.
The above model still assumes for simplicity that A and B each play only one role of the protocol. One could easily write an even more

general model in which they play both roles, or one could provide the adversary with an interface that allows it to dynamically create new protocol participants. We consider such a model in §2.5.5.
2.3 Core language: type system
Processes must be well-typed in the type system of Figure 2.3. For simplicity, all bound variables and names are renamed so that they are pairwise distinct and distinct from free names. The type system uses a type environment Γ that maps variables and names to their type. This type environment initially contains the types of the free names of the closed process under consideration. The type system defines three judgments: the judgment Γ ⊢ M : T means that the term M is well-typed of type T in the type environment Γ; the judgment Γ ⊢ D : T means that the expression D is well-typed of type T in the type environment Γ; and the judgment Γ ⊢ P means that the process P is well-typed in the type environment Γ.
The typing rules are straightforward. The type of variables and names is obtained from the type environment Γ. Function applications h(D1, . . . , Dn) are well-typed when their arguments are well-typed, of the type expected by the function h, and the type of h(D1, . . . , Dn) is the type of the result of h. The constant fail can be of any type. The rules require that input and output channels be of type channel, that output messages be well-typed, and that conditions be of type bool.
Example 2.1. It is easy to see that the process P0 of §2.2 is well-typed in the type environment Γ = c : channel, s : bitstring, that is, Γ ⊢ P0.
As mentioned in §2.1, ProVerif checks that processes given by the user are well-typed, but it ignores types and removes type-converter functions for the verification of security properties.
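The term typing rules of Figure 2.3 can be sketched as a small checker. This is a Python sketch with an assumed representation (variables and names are strings typed by Γ, applications are tuples); the signatures listed below are the ones suggested by this chapter's examples, not an official declaration.

```python
# A minimal checker for the judgment Γ ⊢ M : T (first rules of Figure 2.3).

SIGNATURES = {                       # f(T1, ..., Tn) : T
    "pk": (("skey",), "pkey"),
    "sign": (("bitstring", "skey"), "bitstring"),
    "aenc": (("bitstring", "pkey"), "bitstring"),
    "senc": (("bitstring", "key"), "bitstring"),
    "k2b": (("key",), "bitstring"),
}

def type_of(gamma, m):
    """Return T such that gamma ⊢ m : T, or None if m is ill-typed."""
    if isinstance(m, str):           # (x : T) ∈ Γ  implies  Γ ⊢ x : T
        return gamma.get(m)
    sig = SIGNATURES.get(m[0])
    if sig is None or len(m) - 1 != len(sig[0]):
        return None
    arg_types, result = sig
    for arg, expected in zip(m[1:], arg_types):
        if type_of(gamma, arg) != expected:
            return None
    return result

gamma = {"sskA": "skey", "k": "key", "c": "channel", "s": "bitstring"}
assert type_of(gamma, ("sign", ("k2b", "k"), "sskA")) == "bitstring"
assert type_of(gamma, ("senc", "k", "k")) is None   # first argument is a key
```

The second assertion shows why k2b is needed: without the conversion, a key cannot appear where a bitstring is expected.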
Example 2.2. With types and type-converter functions removed, the process P0 of §2.2 (version in which A may interact with dishonest participants) becomes:
P0 = new sskA;new skB;let spkA = pk(sskA) in let pkB = pk(skB) in out(c,spkA);out(c,pkB);(PA(sskA,pkB) | PB(skB,spkA))

Figure 2.3: Type system. Each rule is written premises ⟹ conclusion; rules without premises are axioms:

(x : T) ∈ Γ ⟹ Γ ⊢ x : T
(a : T) ∈ Γ ⟹ Γ ⊢ a : T
f(T1, …, Tn) : T, Γ ⊢ M1 : T1, …, Γ ⊢ Mn : Tn ⟹ Γ ⊢ f(M1, …, Mn) : T
h(T1, …, Tn) : T, Γ ⊢ D1 : T1, …, Γ ⊢ Dn : Tn ⟹ Γ ⊢ h(D1, …, Dn) : T
Γ ⊢ fail : T (axiom)
Γ ⊢ 0 (axiom)
Γ ⊢ N : channel, Γ ⊢ M : T, Γ ⊢ P ⟹ Γ ⊢ out(N, M); P
Γ ⊢ N : channel, Γ, x : T ⊢ P ⟹ Γ ⊢ in(N, x : T); P
Γ ⊢ P, Γ ⊢ Q ⟹ Γ ⊢ P | Q
Γ ⊢ P ⟹ Γ ⊢ !P
Γ, a : T ⊢ P ⟹ Γ ⊢ new a : T; P
Γ ⊢ D : T, Γ, x : T ⊢ P, Γ ⊢ Q ⟹ Γ ⊢ let x : T = D in P else Q
Γ ⊢ M : bool, Γ ⊢ P, Γ ⊢ Q ⟹ Γ ⊢ if M then P else Q
PA(sskA,pkB)=!in(c,xpkB);new k;
out(c, aenc(sign(k, sskA), xpkB )); in(c, x); let z = sdec(x, k) in 0
PB(skB,spkA) = ! in(c,y);let y′ = adec(y,skB) in let xk = check(y′, spkA) in out(c, senc(s, xk))
ProVerif verifies this process when it is given the process of §2.2.

2.4 Core language: formal semantics
The formal semantics of this language is defined in Figure 2.4. The definition proceeds in two steps. First, we define the semantics of

M ⇓ M
fail ⇓ fail
h(D1, …, Dn) ⇓ σU′j if and only if
    D1 ⇓ U1, …, Dn ⇓ Un,
    def(h) consists of the rewrite rules h(U′i,1, …, U′i,n) → U′i for i ∈ {1, …, k},
    σU′j,1 = U1, …, σU′j,n = Un, and
    for all i < j and all σ′, we do not have σ′U′i,1 = U1, …, σ′U′i,n = Un.

E, P ∪ { 0 } → E, P                                                  (Red Nil)
E, P ∪ { P | Q } → E, P ∪ { P, Q }                                   (Red Par)
E, P ∪ { !P } → E, P ∪ { P, !P }                                     (Red Repl)
(Npub, Npriv), P ∪ { new a; P } → (Npub, Npriv ∪ {a′}), P ∪ { P{a′/a} }
    where a′ ∉ Npub ∪ Npriv                                          (Red Res)
E, P ∪ { out(N, M); Q, in(N, x); P } → E, P ∪ { Q, P{M/x} }          (Red I/O)
E, P ∪ { let x = D in P else Q } → E, P ∪ { P{M/x} } if D ⇓ M        (Red Eval 1)
E, P ∪ { let x = D in P else Q } → E, P ∪ { Q } if D ⇓ fail          (Red Eval 2)
E, P ∪ { if true then P else Q } → E, P ∪ { P }                      (Red Cond 1)
E, P ∪ { if M then P else Q } → E, P ∪ { Q } if M ≠ true             (Red Cond 2)

Figure 2.4: Operational semantics

expressions: the relation D ⇓ U means that the closed expression D evaluates to the closed may-fail term U, which may be a closed term M or the constant fail. To ease the definition of this relation, we associate to each constructor f of arity n a list of rewrite rules def(f) that contains the identity rewrite rule
f(x1,…,xn) → f(x1,…,xn).
Furthermore, for each constructor or destructor h of arity n, we add
the following rewrite rule as final rewrite rule of def(h):
h(u1,…,un) → fail, (2.6)
where u1, . . . , un are may-fail variables. This rewrite rule expresses that the function h returns fail when no other rewrite rule applies. In particular, constructors return fail when one of their arguments fails. In Figure 2.4, the first two rules of the definition of ⇓ express that a closed term and fail evaluate to themselves; the third rule deals with function application. It first evaluates the arguments D1, …, Dn of the function to U1, …, Un respectively. Then, it applies the j-th rewrite rule of h, h(U′j,1, …, U′j,n) → U′j, instantiated with the substitution σ, so h(U1, …, Un) = h(σU′j,1, …, σU′j,n) reduces into σU′j. The last line checks that the rewrite rules before the j-th cannot be applied.
Example 2.3. For instance, we have

adec(aenc(sign(k, sskA), pk(skB)), skB) ⇓ sign(k, sskA)

by the rewrite rule (2.1). Hence

check(adec(aenc(sign(k, sskA), pk(skB)), skB), pk(sskA)) ⇓ k

by the rewrite rule (2.3). On the other hand,

adec(aenc(sign(k, sskA), pk(skB)), sskA) ⇓ fail

by the rewrite rule (2.6). (The rewrite rule (2.1) cannot be applied because the decryption key does not match the encryption key.) Hence,

check(adec(aenc(sign(k, sskA), pk(skB)), sskA), pk(sskA)) ⇓ fail

by the rewrite rule (2.6) again.
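The evaluation relation with ordered rewrite rules can be sketched as follows. This is a toy Python implementation for illustration, not ProVerif's; the term representation (nested tuples, rule variables as ("var", x), "fail" for the constant fail) is an assumption of this sketch.

```python
# Evaluation D ⇓ U with ordered rewrite rules and the implicit final fail rule.

FAIL = "fail"

def v(x):                     # ordinary rule variable (never matches fail)
    return ("var", x)

def match(pattern, term, subst):
    """Extend subst so that subst applied to pattern gives term, else None."""
    if isinstance(pattern, tuple) and pattern[0] == "var":
        if term == FAIL:
            return None       # ordinary variables do not match fail
        if pattern[1] in subst:
            return subst if subst[pattern[1]] == term else None
        return {**subst, pattern[1]: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def apply_subst(subst, term):
    if isinstance(term, tuple) and term[0] == "var":
        return subst[term[1]]
    if isinstance(term, tuple):
        return (term[0],) + tuple(apply_subst(subst, t) for t in term[1:])
    return term

# def(h) for the destructors used in Example 2.3.
DEFS = {
    "adec": [(("adec", ("aenc", v("x"), ("pk", v("y"))), v("y")), v("x"))],
    "check": [(("check", ("sign", v("x"), v("y")), ("pk", v("y"))), v("x"))],
}

def evaluate(term):
    """Evaluate the arguments, then apply the first matching rule of def(h)."""
    if not isinstance(term, tuple):
        return term
    h, args = term[0], [evaluate(t) for t in term[1:]]
    if h not in DEFS:                      # constructor: identity rule,
        return FAIL if FAIL in args else (h,) + tuple(args)  # fail on fail
    for lhs, rhs in DEFS[h]:
        subst = match(lhs, (h,) + tuple(args), {})
        if subst is not None:
            return apply_subst(subst, rhs)
    return FAIL                            # implicit final rule (2.6)

m1 = ("adec", ("aenc", ("sign", "k", "sskA"), ("pk", "skB")), "skB")
assert evaluate(m1) == ("sign", "k", "sskA")
assert evaluate(("check", m1, ("pk", "sskA"))) == "k"
bad = ("adec", ("aenc", ("sign", "k", "sskA"), ("pk", "skB")), "sskA")
assert evaluate(("check", bad, ("pk", "sskA"))) == FAIL
```

The three assertions reproduce the evaluations of Example 2.3.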

Second, we define the semantics of processes, by reduction of semantic configurations. A semantic configuration is a pair E, P where the environment E is a pair of two finite sets of names (Npub, Npriv) and P is a finite multiset of closed processes. The set Npub contains the public names, the set Npriv contains the private names, and the multiset of processes P contains the processes currently running.1 The configuration ({a1, . . . , an}, {b1, . . . , bm}), {P1, . . . , Pk} corresponds intuitively to the process new b1; . . . new bm; (P1 | . . . | Pk). (Recall that types are ignored, so we need not type the names b1, . . . , bm.) A configuration (Npub, Npriv), P is valid when Npub and Npriv are disjoint and fn(P) ⊆ Npub ∪ Npriv. We only consider valid configurations. The reduction relation → on semantic configurations is defined in Figure 2.4. The reduction rules define the semantics of each language construct. The rule (Red Nil) removes processes 0, since they do nothing. The rule (Red Par) expands parallel compositions. The rule (Red Repl) creates an additional copy of a replicated process; since this rule can be applied again on the resulting configuration, it allows creating an unbounded number of copies of the replicated process. The rule (Red Res) creates a fresh name a′, substitutes it for a, and adds it to the private names Npriv. The fresh name a′ is required to occur neither in Npriv, which contains the initial private free names as well as any fresh name created by a previous application of (Red Res), nor in Npub, which contains the public free names. The rule (Red I/O) allows communication between processes. The message M is sent by the output and received by the input in the variable x, provided the output and input channels are equal. The rules (Red Eval 1) and (Red Eval 2) define the semantics of expression evaluations. They evaluate D. In case of success,
(Red Eval 1) runs P with the result M of the evaluation substituted for x. In case of failure, (Red Eval 2) runs Q. The rules (Red Cond 1) and (Red Cond 2) define the semantics of conditionals. When M is true, (Red Cond 1) runs P. When M is different from true, (Red Cond 2) runs Q.
1 In (Blanchet, 2009), the environment E is simply a set of names, corresponding to Npub ∪ Npriv. Indeed, the distinction between public and private names is not essential for trace properties. It is useful for treating equivalences in §3.4.

The process P0, which represents the protocol to verify, is usually run in parallel with an adversary Q. In this case, the initial configuration is (Npub, Npriv), {P0, Q}.
Example 2.4. Let us consider the process P0 of Example 2.2. This process cannot run alone since it sends messages on channel c without any process to receive them. Hence, we must run P0 in parallel with an adversary Q. As an example, we consider the following adversary:
Q = in(c, xspkA ); in(c, xpkB ); out(c, xpkB ) ,
which receives the two public keys, and sends B’s public key to A, so that A runs a session with B. This system runs a correct session of the protocol between A and B as follows:
({c}, {s}), {P0, Q}
→∗ E1,{out(c,pk(sskA));out(c,pk(skB));
(PA(sskA,pk(skB)) | PB(skB,pk(sskA))),Q}
by (Red Res) twice and (Red Eval 1) twice
→∗ E1,{PA(sskA,pk(skB)),PB(skB,pk(sskA)),
out(c,pk(skB))} by (Red I/O) twice and (Red Par)
→∗ E1, P ∪ {PA′, PB′, out(c, pk(skB))}   by (Red Repl) twice

where

E1 = ({c}, {s, sskA, skB})
P = {PA(sskA, pk(skB)), PB(skB, pk(sskA))}
PA′ = in(c, xpkB); new k; out(c, aenc(sign(k, sskA), xpkB)); PA′′
PA′′ = in(c, x); let z = sdec(x, k) in 0
PB′ = in(c, y); let y′ = adec(y, skB) in
          let xk = check(y′, pk(sskA)) in out(c, senc(s, xk)) .

By (Red I/O), PA′ reduces with out(c, pk(skB)); by (Red Nil), we remove the omitted final 0. With E2 = ({c}, {s, sskA, skB, k}), we obtain

E1, P ∪ {new k; out(c, aenc(sign(k, sskA), pk(skB))); PA′′, PB′}
→ E2, P ∪ {out(c, aenc(sign(k, sskA), pk(skB))); PA′′, PB′}
      by (Red Res) (A creates the fresh key k)
→ E2, P ∪ {PA′′,
      let y′ = adec(aenc(sign(k, sskA), pk(skB)), skB) in
      let xk = check(y′, pk(sskA)) in out(c, senc(s, xk))}
      by (Red I/O) (A sends message 1 to B)
→∗ E2, P ∪ {PA′′, out(c, senc(s, k))}   by (Red Eval 1) twice
→ E2, P ∪ {let z = sdec(senc(s, k), k) in 0, 0}
      by (Red I/O) (B sends message 2 to A)
→ E2, P ∪ {0, 0}   by (Red Eval 1)
→∗ E2, P   by (Red Nil) twice
There exist other approaches to define the semantics of such a language. We discuss one such approach in Chapter 4.
2.5 Extensions
This section presents the main extensions of the core language imple- mented in ProVerif, and gives their operational semantics, by extending the semantics given in §2.4.
2.5.1 Equations
In addition to cryptographic primitives defined by rewrite rules, ProVerif also supports primitives defined by equations. We consider an equational theory E that consists of a finite set of equations M = N between terms M, N without names and of the same type. Equality modulo the equational theory E is obtained from these equations by reflexive, symmetric, and transitive closure, closure under application of function symbols, and closure under substitution of terms for variables. We write M =E N for an equality modulo E, and M ≠E N for a disequality modulo E. We only consider non-trivial equational theories, that is, for each type T, there exist terms M and N of type T such that M ≠E N.
As an example, the Diffie-Hellman key agreement (Diffie and Hell- man, 1976) can be modeled using equations. The Diffie-Hellman key

agreement relies on the following property of modular exponentiation: (g^a)^b = (g^b)^a = g^(ab) in a cyclic multiplicative subgroup G of Z*_p, where p is a large prime number and g is a generator of G, and on the assumption that it is difficult to compute g^(ab) from g^a and g^b, without knowing the random numbers a and b (computational Diffie-Hellman assumption), or on the stronger assumption that it is difficult to distinguish g^a, g^b, g^(ab) from g^a, g^b, g^c without knowing the random numbers a, b, and c (decisional Diffie-Hellman assumption). These properties are exploited to establish a shared key between two participants A and B of a protocol: A chooses randomly a and sends g^a to B; symmetrically, B chooses randomly b and sends g^b to A. A can then compute (g^b)^a, since it has a and receives g^b, while B computes (g^a)^b. These two values are equal, so they can be used to compute the shared key. The adversary, on the other hand, has g^a and g^b but not a and b, so by the computational Diffie-Hellman assumption, it cannot compute the key. (This exchange resists passive attacks only; to resist active attacks, we need additional ingredients, for instance signatures.) We can model the Diffie-Hellman key agreement by the equation (Abadi and Fournet, 2001; Abadi et al., 2007)
exp(exp(g, x), y) = exp(exp(g, y), x) (2.7)
where g : G is a constant and exp(G, Z) : G is modular exponentiation. Obviously, this is a basic model: it models the main functional equation but misses many algebraic relations that exist in the group G.
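A quick sanity check of equation (2.7) can be done by normalization. This Python sketch is a toy (it assumes all compared terms have the exact shape exp(exp(g, a), b), which is far from a general treatment of the theory): since the two exponents commute, sorting them yields a canonical form.

```python
# Equality modulo equation (2.7) for terms exp(exp(g, a), b).

def normalize(t):
    _, inner, b = t               # t = ("exp", ("exp", "g", a), b)
    a = inner[2]
    lo, hi = sorted([a, b])
    return ("exp", ("exp", "g", lo), hi)

def equal_mod_E(t1, t2):
    return normalize(t1) == normalize(t2)

g_ab = ("exp", ("exp", "g", "a"), "b")     # (g^a)^b
g_ba = ("exp", ("exp", "g", "b"), "a")     # (g^b)^a
g_ca = ("exp", ("exp", "g", "c"), "a")     # (g^c)^a

assert equal_mod_E(g_ab, g_ba)             # the Diffie-Hellman shared value
assert not equal_mod_E(g_ab, g_ca)
```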
We can also model a symmetric encryption scheme in which decryption always succeeds (but may return a meaningless message) by the equations
sdec(senc(x, y), y) = x
senc(sdec(x, y), y) = x     (2.8)
where senc(bitstring, key) : bitstring and sdec(bitstring, key) : bitstring are constructors. In this model, decryption always succeeds, because sdec(M, N) is always a term, even when M is not of the form senc(M′, N). The first equation is standard; the second one prevents the equality test senc(sdec(M, N), N) = M from revealing that M is a ciphertext under N: in the presence of the second equation, this equality always

holds, even when M is not a ciphertext under N. These equations are satisfied by block ciphers, which are bijective.
Equations allow one to model cryptographic primitives that cannot be modeled by destructors with rewrite rules. However, rewrite rules are easier to handle for ProVerif, so they should be preferred when they are sufficient to express the desired properties.
The formal semantics of the language is extended to equations as follows. We extend the evaluation of expressions by considering equality modulo E:
h(D1, …, Dn) ⇓ σU′j if and only if
    D1 ⇓ U1, …, Dn ⇓ Un,
    def(h) consists of the rewrite rules h(U′i,1, …, U′i,n) → U′i for i ∈ {1, …, k},
    σU′j,1 =E U1, …, σU′j,n =E Un, and
    for all i < j and all σ′, we do not have σ′U′i,1 =E U1, …, σ′U′i,n =E Un.
We also extend the reduction rules by considering equality modulo E:

E, P ∪ { out(N, M); Q, in(N′, x); P } → E, P ∪ { Q, P{M/x} }
    if N =E N′                                               (Red I/O′)
E, P ∪ { if M then P else Q } → E, P ∪ { P }
    if M =E true                                             (Red Cond 1′)
E, P ∪ { if M then P else Q } → E, P ∪ { Q }
    if M ≠E true                                             (Red Cond 2′)
To handle equations, ProVerif translates them into a set of rewrite rules associated to constructors. For instance, the equations (2.8) are translated into the rewrite rules
senc(x, y) → senc(x, y)          sdec(x, y) → sdec(x, y)
senc(sdec(x, y), y) → x          sdec(senc(x, y), y) → x     (2.9)
while the equation (2.7) is translated into
exp(x, y) → exp(x, y) exp(exp(g, x), y) → exp(exp(g, y), x) (2.10)
Intuitively, these rewrite rules allow one, by applying them exactly once for each constructor, to obtain the various forms of the terms modulo

the considered equational theory.2 These forms are named variants and, since the number of rewrite rules must be finite, ProVerif only supports equational theories that have the finite variant property (Comon-Lundh and Delaune, 2005). Constructors are then evaluated similarly to destructors. The only difference is that the rewrite rules that come from equations are not ordered: all applicable rewrite rules are applied. With Abadi and Fournet, we have formally defined when a set of rewrite rules models an equational theory, and designed algorithms that translate equations into rewrite rules that model them (Blanchet et al., 2008, §5). Then, each trace in the calculus with equational theory corresponds to a trace in the calculus with rewrite rules, and conversely (Blanchet et al., 2008, Lemma 1).3 Hence, we reduce to the calculus with rewrite rules. The main advantage of this approach is that resolution can still use ordinary syntactic unification (instead of having to use unification modulo the equational theory), and therefore remains efficient: it avoids the explosion of the number of clauses that occurs when there are many unifiers modulo the equational theory.
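As an illustration of how the variant rewrite rules (2.9) are used, here is a Python sketch. It is an assumption-laden toy: it applies one matching rule deterministically per constructor application, whereas ProVerif keeps all applicable variants.

```python
# Evaluating the constructors senc/sdec via the variant rewrite rules (2.9).

def sdec(x, y):
    # variant rule: sdec(senc(x', y), y) -> x'
    if isinstance(x, tuple) and x[0] == "senc" and x[2] == y:
        return x[1]
    # variant rule: sdec(x, y) -> sdec(x, y)   (decryption never fails)
    return ("sdec", x, y)

def senc(x, y):
    # variant rule: senc(sdec(x', y), y) -> x'
    if isinstance(x, tuple) and x[0] == "sdec" and x[2] == y:
        return x[1]
    # variant rule: senc(x, y) -> senc(x, y)
    return ("senc", x, y)

m, k = "m", "k"
assert sdec(senc(m, k), k) == m            # first equation of (2.8)
assert senc(sdec(m, k), k) == m            # second equation of (2.8)
assert sdec(m, k) == ("sdec", m, k)        # decryption succeeds, returns "junk"
```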
This approach still has limitations: associative operations, such as exclusive or, are not supported, because they would require an infinite number of rewrite rules. It may be possible to handle these operations using unification modulo the equational theory instead of syntactic unification, at the cost of a higher complexity. In the case of a bounded number of sessions, exclusive or is handled in (Comon-Lundh and Shmatikov, 2003; Chevalier et al., 2005) and a more complete theory of modular exponentiation is handled in (Chevalier et al., 2003). A unification algorithm for modular exponentiation is presented in (Meadows and Narendran, 2002). For an unbounded number of sessions, extensions of the Horn clause
2 The rewrite rules like sdec(x, y) → sdec(x, y) are necessary so that sdec always succeeds when its arguments succeed. Thanks to this rule, the evaluation of sdec(M, N) succeeds and leaves this term unchanged when M is not of the form senc(M′, N).
3 More precisely, the disequality tests in D ⇓ U and (Red Cond 2′) must still be performed modulo the equational theory, even in the calculus with rewrite rules. The impact on the performance of the resolution algorithm is limited because disequalities are less common than other facts and, when there are several unifiers modulo the equational theory in a disequality, that leads to several disequalities in a single clause, not to several clauses.

approach have been proposed. Küsters and Truderung, 2008 support exclusive or provided one of its two arguments is a constant in the clauses that model the protocol. Küsters and Truderung, 2009 support Diffie-Hellman key agreements with more detailed algebraic relations (including equations of the multiplicative group modulo p), provided the exponents are constants in the clauses that model the protocol. By extending that line of research, Pankova and Laud, 2012 support bilinear pairings, provided the exponents are constants in the clauses that model the protocol. These approaches proceed by transforming the initial clauses into richer clauses on which the standard resolution algorithm is applied. The tools Maude-NPA (Meadows, 1996; Escobar et al., 2006) and Tamarin (Schmidt et al., 2012) support equational theories that ProVerif does not support, at the cost of a more costly verification or by requiring user intervention. Maude-NPA supports equations defined by convergent rewrite rules plus associativity and commutativity (Escobar et al., 2007) (which includes Diffie-Hellman exponentiation and exclusive or), as well as homomorphic encryption (Escobar et al., 2011). Tamarin supports subterm convergent equations, Diffie-Hellman exponentiation (including equations of the multiplicative group modulo p), bilinear pairings, and associative and commutative operators (Schmidt et al., 2014).
2.5.2 Enriched terms
ProVerif allows all occurrences of terms M in processes (input and output channels, output messages, and conditions) to be replaced with expressions D, which may contain destructors, hence the syntax of processes becomes:
P, Q ::= …                      processes
    out(D, D′); P               output
    in(D, x : T); P             input
    if D then P else Q          conditional
Each such expression D is evaluated; when the evaluation fails, the process does nothing; when it succeeds with result M, it executes the

process with M instead of D. Hence the formal semantics can be defined
as follows:
E, P ∪ { out(D′, D); Q, in(D′′, x); P } → E, P ∪ { Q, P{M/x} }
    if D ⇓ M, D′ ⇓ M′, D′′ ⇓ M′′, and M′ =E M′′              (Red I/O′′)
E, P ∪ { if D then P else Q } → E, P ∪ { P }
    if D ⇓ M and M =E true                                   (Red Cond 1′′)
E, P ∪ { if D then P else Q } → E, P ∪ { Q }
    if D ⇓ M and M ≠E true                                   (Red Cond 2′′)
Alternatively, processes with such expressions can easily be encoded into the core calculus by introducing additional expression evaluations. For instance, in(D, x : T ); P can be encoded as
let y = D in in(y,x : T);P .
Furthermore, ProVerif allows expressions to contain some constructs from processes: restriction, expression evaluation, and conditional, so that the syntax of expressions is extended as follows:
D ::= …                              expressions
    new a : T; D                      restriction
    let x : T = D in D′ else D′′      expression evaluation
    if D then D′ else D′′             conditional
Processes that contain such expressions are handled by transforming them into processes without such expressions, by moving restrictions, expression evaluations, and conditionals from expressions to processes.
2.5.3 Pattern-matching
We enrich the input and expression evaluation constructs with pattern-matching as follows:

P, Q ::= …                      processes
    in(D, pat); P               input
    let pat = D in P else Q     expression evaluation

where patterns pat are defined by the following grammar:
pat ::= x : T                  variable
    =D                          equality test
    f(pat1, …, patn)            data constructor
The pattern x : T matches any term and stores it in variable x. The type T can be omitted when it can be inferred. The variables x inside a pattern must be pairwise distinct.
The pattern =D matches only terms that are equal to the result of the evaluation of D. When the evaluation of D fails, the pattern- matching fails.
The pattern f(pat1, . . . , patn) matches terms of the form f(M1, . . . , Mn), when pati matches Mi for all i ≤ n. This pattern can be used only when f is a data constructor. A data constructor is a constructor f of arity n that comes with associated destructors gi for i ∈ {1, . . . , n} defined by gi(f(x1, . . . , xn)) → xi. Data constructors are typically used for representing data structures. Tuples are examples of data constructors. Pattern-matching with data constructors allows one to extract the contents of the data structures. To be able to pattern-match on data constructors without considering the equational theory, we require that the equational theory satisfy the following condition: for all data constructors f, f(M1, . . . , Mn) =E M′ if and only if there exist M1′, . . . , Mn′ such that M′ = f(M1′, . . . , Mn′) and Mi =E Mi′ for all i ∈ {1, . . . , n}.
When the pattern-matching fails, the expression evaluation let pat = D in P else Q runs Q. For an input in(D, pat); P , the communication is executed even when the pattern-matching fails, but the receiver process does nothing after the input. In other words, in(D, pat); P is an abbreviation for in(D,x : T);let pat = x in P else 0 where T is the type of the pattern pat.
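The three pattern forms can be sketched as a small matcher. This Python sketch uses an assumed representation (patterns as tagged tuples ("pvar", x), ("peq", M), ("pcons", f, pat1, …, patn)) purely for illustration; it returns the variable bindings on success and None on failure.

```python
# A toy matcher for the patterns of §2.5.3.

def match_pattern(pat, term, bindings):
    kind = pat[0]
    if kind == "pvar":                      # x : T  -- matches any term
        bindings[pat[1]] = term
        return bindings
    if kind == "peq":                       # =M  -- matches terms equal to M
        return bindings if term == pat[1] else None
    # ("pcons", f, pat1, ..., patn): term must be f(M1, ..., Mn)
    f, subpats = pat[1], pat[2:]
    if not (isinstance(term, tuple) and term[0] == f
            and len(term) - 1 == len(subpats)):
        return None
    for p, m in zip(subpats, term[1:]):
        if match_pattern(p, m, bindings) is None:
            return None
    return bindings

# let (=pkB, xk : key) = ... : the pair is the data constructor "tuple2"
pkB = ("pk", "skB")
pat = ("pcons", "tuple2", ("peq", pkB), ("pvar", "xk"))
assert match_pattern(pat, ("tuple2", pkB, "k"), {}) == {"xk": "k"}
assert match_pattern(pat, ("tuple2", ("pk", "other"), "k"), {}) is None
```

The two assertions mirror the pattern used in Example 2.5: the first component must equal B's own public key, and the second is bound to xk.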
Example 2.5. We consider again the protocol of §2.2. We may add the public key of B inside the first message, which becomes
Message 1. A → B : aenc(sign((pkB,k),sskA),pkB)

Figure 2.5 (Modified example of protocol): A (Alice) generates a fresh k and sends aenc(sign((pkB, k), sskA), pkB) to B (Bob), who replies with senc(s, k).
The modified protocol is illustrated in Figure 2.5. The processes PA and PB are then modified accordingly:
PA(sskA, pkB) = ! in(c, xpkB : pkey); new k : key;
    out(c, aenc(sign((xpkB, k), sskA), xpkB));
    in(c, x : bitstring); let z = sdec(x, k) in 0

PB(skB, spkA) = ! in(c, y : bitstring); let y′ = adec(y, skB) in
    let (=pk(skB), xk : key) = check(y′, spkA) in out(c, senc(s, xk))
The process PA signs the pair (xpkB, k). When PB verifies the signature, it uses pattern-matching to verify that the signed message is a pair whose first component is its own public key pkB = pk(skB), and to store the second component in the variable xk. Because of the overloading of tuples, the type of xk, namely key, must be explicitly mentioned, so that ProVerif knows that the pair (_, _) stands for tuple_{pkey,key}(_, _).
Pattern-matching can be encoded into the core language using equality tests and the destructors associated to data constructors, as shown by the following example.
Example 2.6. The process PB of Example 2.5 can be encoded into the calculus without pattern-matching as follows:
PB(skB, spkA) = ! in(c, y : bitstring); let y′ = adec(y, skB) in
    let z = check(y′, spkA) in
    if 1th_{pkey,key}(z) = pk(skB) then
    let xk = 2th_{pkey,key}(z) in out(c, senc(s, xk))
This process uses the extension of §2.5.2 to allow destructors in the condition if 1th_{pkey,key}(z) = pk(skB) then. It can be encoded in the core calculus as follows:
PB(skB, spkA) = ! in(c, y : bitstring); let y′ = adec(y, skB) in
    let z = check(y′, spkA) in let z′ = (1th_{pkey,key}(z) = pk(skB)) in
    if z′ then let xk = 2th_{pkey,key}(z) in out(c, senc(s, xk))
The formal semantics of the calculus can also be extended with pattern-matching, as well as with the other extensions described in this section. We omit these formal details for simplicity.
2.5.4 Phases
Some situations can be modeled with several phases or stages (Blanchet et al., 2008, §8). For instance, the protocol may run in a first phase, and a long-term key may be compromised in a later phase. One may hope that the secret messages exchanged in the first phase remain secret even after the compromise of the key. This property is named forward secrecy. Voting protocols may also include several phases, for instance a registration phase, a voting phase, and a tallying phase (Kremer and Ryan, 2005). ProVerif includes a phase construct to model such situations:
P, Q ::= …                                    processes
    phase n; P                                phase
The phase construct acts as a global synchronization. The processes initially run in phase 0. Then at some point, phase 1 starts. All processes that did not reach a phase n construct with n ≥ 1 are discarded, and processes that start with phase 1 run. Phases continue being incremented in the same way.
Example 2.7. The following process
P0 = new sskA : skey; (P | phase 1; out(c, sskA)) ,
where P does not contain phases, models a protocol P that runs in phase 0, using the secret key sskA. This key is then compromised in phase 1, by sending it on the public channel c.
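The global synchronization can be sketched as follows (a toy simulation, not ProVerif's semantics: processes are just lists of (phase, event) pairs, and a process that has no event at phase ≥ n is discarded when phase n starts):

```python
def run_phases(processes, max_phase):
    """Return the events executed in each phase, discarding processes
    that cannot reach the current phase."""
    executed = {n: [] for n in range(max_phase + 1)}
    for n in range(max_phase + 1):
        # Discard processes with no remaining marker phase m >= n.
        processes = [p for p in processes if any(ph >= n for ph, _ in p)]
        for p in processes:
            for ph, event in p:
                if ph == n:
                    executed[n].append(event)
    return executed

# P runs in phase 0 using sskA; the key is compromised in phase 1.
P = [(0, "protocol step using sskA")]
leak = [(1, "out(c, sskA)")]
print(run_phases([P, leak], 1))
# {0: ['protocol step using sskA'], 1: ['out(c, sskA)']}
```

In this sketch, the process modeling the protocol contributes nothing to phase 1: once the phase changes, only the compromise of sskA remains, which is why forward secrecy asks whether phase-0 secrets survive that later output.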

2.5.5 Tables
Tables are often useful in the modeling of security protocols. For instance, an SSH client stores a list of names and keys of the servers it has contacted so far; this list can be modeled as a table. ProVerif supports specific constructs for modeling tables:
P, Q ::= …                                            processes
    insert tbl(D1, …, Dn); P                          insertion in a table
    get tbl(pat1, …, patn) suchthat D in P else Q     lookup in a table
The process insert tbl(D1,…,Dn);P inserts the record (M1,…,Mn) in the table tbl, where D1, . . . , Dn evaluate to M1, . . . , Mn respectively, then runs P . The process get tbl(pat1, . . . , patn) suchthat D in P else Q looks for a record (M1, . . . , Mn) in the table tbl that matches the patterns
(pat1, . . . , patn) and such that the condition D evaluates to true. When such a record is found, it runs P with the variables of pat1,…,patn bound to the corresponding values. Otherwise, it runs Q. The condition “suchthat D” can be omitted when D is true. The branch “else Q” can be omitted when Q is 0.
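The lookup semantics can be sketched in a few lines (an illustration under the assumption that patterns are restricted to variables and equality tests, mirroring the forms used in the examples below; not ProVerif code):

```python
def get(table, patterns, suchthat=lambda env: True):
    """Return the bindings of the first record matching the patterns and
    satisfying the condition, or None (the else branch)."""
    for record in table:
        if len(record) != len(patterns):
            continue
        env = {}
        ok = True
        for pat, field in zip(patterns, record):
            if pat[0] == "var":                # pattern x: bind the field
                env[pat[1]] = field
            elif pat[0] == "eq" and pat[1] != field:
                ok = False                     # pattern =M: field must equal M
                break
        if ok and suchthat(env):
            return env
    return None

keys = [("Alice", "pkA", "spkA"), ("Bob", "pkB", "spkB")]
# get keys(=Bob, pk2, yy): fix the identity, bind the two keys.
print(get(keys, [("eq", "Bob"), ("var", "pk2"), ("var", "yy")]))
# {'pk2': 'pkB', 'yy': 'spkB'}
```

The optional `suchthat` argument plays the role of the condition D: a record is retained only when the condition holds under the bindings collected from the patterns.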
Example 2.8. We consider again the protocol of §2.2. Instead of adding the public key of B to the first message, as in Example 2.5, we may add the name of B. The first message would then become
Message 1. A → B : aenc(sign((B,k),sskA),pkB).
However, with such a protocol, the participants need to relate the keys (e.g. pkB) to the identities (e.g. B). We establish this relation using certificates, in the same way as in the original Denning-Sacco key distribution protocol (Denning and Sacco, 1981): we use sign((e, B, pkB), sskS) as a certificate that pkB is the public encryption key of B and sign((s, A, spkA), sskS) as a certificate that spkA is the public signature verification key of A, where sskS is the signature key of a trusted certificate authority S (server). The protocol becomes:

Message 1. A → S : A, B
Message 2. S → A : cA, cB
Message 3. A → B : cA, cB, aenc(sign((B, k), sskA), pkB)
Message 4. B → A : senc(s, k)

Figure 2.6 (Protocol with certificates): A (Alice) obtains the certificates cA and cB from S (server), then sends message 3 to B (Bob), who replies with {s}k.
where cA = sign((s,A,spkA),sskS) and cB = sign((e,B,pkB),sskS). This protocol is illustrated in Figure 2.6. A first asks the server for the certificate for its own signature key spkA and for the encryption key pkB of B. The second certificate allows A to obtain the key pkB. Then it sends the message aenc(sign((B,k),sskA),pkB) with the certificates to B. B uses cA to obtain A’s signature verification key spkA. It decrypts and verifies the signature as before; it additionally verifies that B is the first component of the signed message.
A ProVerif model for this protocol is shown in Figure 2.7. This model considers two participants Alice and Bob, which can play both roles A and B of the protocol. The process P0 first generates the encryption and signature keys for Alice and Bob, and inserts them in a table keys, which contains triples (identity, encryption key, signature verification key) for each participant. It also generates the signature keys for the server S, and publishes all public keys by sending them on the public channel c. Finally, it runs the processes that model Alice, Bob, and

P0 = new sskA : skey; let spkA = pk(sskA) in
     new skA : skey; let pkA = pk(skA) in insert keys(Alice, pkA, spkA);
     new sskB : skey; let spkB = pk(sskB) in
     new skB : skey; let pkB = pk(skB) in insert keys(Bob, pkB, spkB);
     new sskS : skey; let spkS = pk(sskS) in
     out(c, (pkA, spkA, pkB, spkB, spkS));
     (PA(spkS, Alice, sskA, spkA) | PA(spkS, Bob, sskB, spkB) |
      PB(spkS, Bob, skB, pkB) | PB(spkS, Alice, skA, pkA) |
      PS(sskS) | PK)

PK = ! in(c, (h : host, pk : pkey, spk : pkey));
     if h ≠ Alice && h ≠ Bob then insert keys(h, pk, spk)

PS(sskS) = ! in(c, (h1 : host, h2 : host));
     get keys(=h1, xx, spk1) in get keys(=h2, pk2, yy) in
     out(c, (sign((s, h1, spk1), sskS), sign((e, h2, pk2), sskS)))

PA(spkS, A, sskA, spkA) = ! in(c, (cA : bitstring, cB : bitstring));
     if (s, A, spkA) = check(cA, spkS) then
     let (=e, B, pkB) = check(cB, spkS) in
     new k : key; out(c, (cA, cB, aenc(sign((B, k), sskA), pkB)));
     in(c, x : bitstring); let z = sdec(x, k) in 0

PB(spkS, B, skB, pkB) =
     ! in(c, (cA : bitstring, cB : bitstring, y : bitstring));
     if (e, B, pkB) = check(cB, spkS) then
     let (=s, A, spkA) = check(cA, spkS) in
     let y′ = adec(y, skB) in
     let (=B, xk : key) = check(y′, spkA) in
     if A = Alice || A = Bob then out(c, senc(s, xk))

Figure 2.7: ProVerif example with tables

S, as well as the process PK. This process allows the adversary to register its own keys for participants other than Alice and Bob by inserting these keys in the table keys. Therefore, the adversary can add any number of dishonest participants to the protocol. The process PS for the server receives two participant names h1 and h2, obtains their keys from the table keys, and outputs a certificate for the signature verification key of h1 and one for the encryption key of h2. The process PA(spkS, A, sskA, spkA) models a participant A with secret signature key sskA and public signature verification key spkA, running role A of the protocol. We suppose that the adversary sends the first message of the protocol itself. (This is possible since the participant names are public.) So this message does not appear explicitly in PA. The process PA receives message 2 containing the certificates, checks them, outputs message 3, and receives message 4. The process PB(spkS, B, skB, pkB) models a participant B with secret decryption key skB and public encryption key pkB, running role B of the protocol. It receives message 3, checks the certificates, and when its interlocutor A is honest (that is, when this interlocutor is Alice or Bob), it sends a secret s encrypted under the shared key xk as message 4. This message is only sent when the interlocutor is honest, because the adversary can of course decrypt the last message when the interlocutor of B is dishonest, so the secrecy of s would not be preserved in that case.
Tables can be encoded in the core calculus using private channels. ProVerif nevertheless provides specific constructs for tables, because they are frequently used and the dedicated syntax is easier for users to understand.

3
Verifying Security Properties
This chapter presents the method used by ProVerif for verifying protocols. This method is based on Horn clauses. The idea of using Horn clauses for verifying protocols was introduced by Weidenbach (1999). We extended his work by defining a systematic translation from a formal model of protocols to clauses (while he built the clauses manually) and by proving properties other than secrecy. We first formalize the notion of an adversary, then deal with the various security properties verified by ProVerif, starting with the simplest one, secrecy, then considering more complex ones, correspondences and equivalences. The main reference for the proof of secrecy and correspondences in ProVerif is (Blanchet, 2009), and for equivalences (Blanchet et al., 2008). For simplicity, this chapter only deals with the core calculus of §2.1. The results can be adapted to the extended calculus, and ProVerif supports that calculus.
3.1 Adversary
We assume that the protocol is executed in the presence of an adversary that can intercept all messages, compute on them, and send any message it has, following the Dolev-Yao model (Dolev and Yao, 1983). In our calculus,
an adversary can be represented by any process that has the set of public free names Npub in its initial knowledge. (Although the initial knowledge of the adversary contains only names in Npub, one can give any terms to the adversary by sending them on a channel in Npub.)
Definition 3.1. Let Npub be a finite set of names. The closed process Q is an Npub-adversary if and only if fn(Q) ⊆ Npub and all function symbols in Q are public. (The process Q is not necessarily well-typed.)
3.2 Secrecy
This section deals with the verification of secrecy, the most basic security property. We first define secrecy formally, then explain how to verify it by translating the protocol into Horn clauses and using resolution on these clauses.
3.2.1 Definition
Intuitively, a process P preserves the secrecy of M when M cannot be output on a public channel, in a run of P with any adversary. Formally, we define that a trace outputs M as follows:
Definition 3.2. We say that a trace Tr = (Npub, Npriv), P0 →∗ E′, P′ outputs M publicly if and only if Tr contains a reduction E, P ∪ {out(N, M); Q, in(N, x); P} → E, P ∪ {Q, P{M/x}} for some E, P, N, x, P, Q, with N ∈ Npub.
We can finally define secrecy:
Definition 3.3. The closed process P0 preserves the secrecy of the closed term M from Npub if and only if for some Npriv disjoint from Npub such that fn(M) ∪ fn(P0) ⊆ Npub ∪ Npriv, for any Npub-adversary Q, for any trace Tr = (Npub, Npriv), {P0, Q} →∗ E′, P′, the trace Tr does not output M publicly.
The choice of Npriv does not matter, provided the conditions of Definition 3.3 are satisfied.

3.2.2 From the pi calculus to Horn clauses
Given a closed process P0 in the language of Chapter 2 and a set of names Npub representing the initial knowledge of the adversary, ProVerif builds a set of Horn clauses, representing the protocol P0 in parallel with any Npub-adversary. This translation was originally given in (Abadi and Blanchet, 2005a); we extend it to the richer destructors of (Cheval and Blanchet, 2013). We suppose that the bound names of P0 are pairwise distinct and distinct from the free names and the names in Npub.
In these clauses, messages are represented by patterns p, defined by the following grammar:
p ::=                                         patterns
    x, y, z, i                                variable
    a[p1, …, pn]                              name
    f(p1, …, pn)                              constructor application
Patterns are terms; we use the word patterns to distinguish them from terms of the process calculus. Patterns differ from those terms by the representation of names. We assign a pattern a[p1, . . . , pn] to each name a of P0. We treat a as a function symbol, and write a[p1, . . . , pn] rather than a(p1, . . . , pn) only to distinguish functions that come from names from other functions. If a is a free name, then its pattern is a[ ]. To define the patterns of bound names, we first assign a distinct, fresh session identifier variable i to each replication of P0. (We will use a distinct value for i for each copy of the replicated process.) If a is bound by a restriction new a in P0, then its pattern takes as arguments the terms received as inputs and the session identifiers of replications above the restriction. For example, in the process !in(c, x); new a; P , each name created by new a is represented by a[i, x] where i is the session identifier for the replication and x is the message received as input in in(c, x). Session identifiers enable us to distinguish names created in different copies of processes. Hence, each name created in the process calculus is represented by a different pattern in the Horn clauses.
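The construction of name patterns can be illustrated by a small sketch (the representation below is an assumption of this example, not ProVerif's data structures): each bound name takes as arguments the session identifiers of the replications and the messages received by the inputs above its restriction, so names created in different runs get different patterns.

```python
def name_pattern(name, context):
    """Build the pattern a[...] for a bound name: context lists the
    session identifiers and inputs above the restriction, in order."""
    return (name, tuple(context))

# In !in(c, x); new a; P, the name a in session i with input x
# is represented by a[i, x]:
run1 = name_pattern("a", ["i=1", "x=msg1"])
run2 = name_pattern("a", ["i=2", "x=msg1"])
print(run1 == run2)   # False: distinct sessions give distinct patterns
```

Two copies of the replicated process receive distinct session identifiers, so even when they input the same message, the names they create are kept apart in the clauses.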
The evaluation of expressions may fail. We reflect that in the Horn clauses by introducing may-fail patterns, which can be the constant fail or a may-fail variable in addition to an ordinary pattern:

mp ::=                                        may-fail pattern
    p                                         pattern
    u                                         may-fail variable
    fail                                      failure
As in messages and may-fail messages, a may-fail variable u can be instantiated by a pattern or fail, whereas a variable x cannot be instan- tiated by fail.
The clauses use facts defined by the following grammar:
F ::=                                         facts
    attacker(mp)                              adversary knowledge
    message(p, p′)                            message on a channel
    ∀ṽ, (mp1, …, mpn) ≠ (mp′1, …, mp′n)       disequality

The fact attacker(mp) means that the adversary may have mp. The fact message(p, p′) means that the message p′ may appear on channel p. The fact ∀ṽ, (mp1, …, mpn) ≠ (mp′1, …, mp′n) means that (mp1, …, mpn) ≠ (mp′1, …, mp′n) for all values of the variables in ṽ, where ṽ may contain variables as well as may-fail variables. We write v for a variable or a may-fail variable. We omit ∀ṽ when ṽ contains no variable. We omit the parentheses around mp1, …, mpn and mp′1, …, mp′n when n = 1. We write m̃p as an abbreviation for (mp1, …, mpn).
The clauses are of the form F1 ∧ … ∧ Fn ⇒ F, where F1, …, Fn, F are facts. They comprise clauses for the adversary and clauses for the protocol, defined below. These clauses form the set R_{P0,Npub,Npriv}.
Clauses for the adversary
The abilities of the adversary are represented by the following clauses:

For each a ∈ Npub,   attacker(a[ ])    (Init)

attacker(b0[x])   where b0 occurs neither in P0 nor in Npub ∪ Npriv    (Rn)

attacker(fail)    (Rfail)
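Together with the clauses for public functions given next, these clauses let the adversary build and decompose terms. As a loose illustration of how such clauses derive adversary knowledge, here is a toy forward-chaining engine on ground facts (hypothetical helper names; ProVerif itself uses resolution with unification, not this naive saturation):

```python
def saturate(facts, rules):
    """Apply the rules until no new fact appears. Each rule maps the
    current fact set to newly derivable facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in list(rule(facts)):
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

# Ground instance of the clause for the destructor sdec(senc(x, y), y) -> x:
# attacker(senc(m, k)) and attacker(k) imply attacker(m).
def apply_sdec(facts):
    for f in list(facts):
        if isinstance(f, tuple) and f[0] == "senc" and f[2] in facts:
            yield f[1]

init = {("senc", "s", "k")}                         # adversary saw senc(s, k)
print("s" in saturate(init, [apply_sdec]))          # False: k is unknown
print("s" in saturate(init | {"k"}, [apply_sdec]))  # True: key compromised
```

The second query mirrors the intuition behind the translation: the secret is derivable exactly when the clauses allow the adversary to assemble the decryption key.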

For each public function h such that def(h) consists of the rewrite rules h(Ui,1, …, Ui,n) → Ui for i ∈ {1, …, k}, where the variables in these rewrite rules are renamed so that distinct rewrite rules use different variables, and ṽi = fv(Ui) ∪ ⋃_{l=1}^{n} fv(Ui,l),
for each i ∈ {1, …, k},

attacker(Ui,1) ∧ … ∧ attacker(Ui,n) ∧ ⋀_{j<i} ∀ṽj, (Ui,1, …, Ui,n) ≠ (Uj,1, …, Uj,n) ⇒ attacker(Ui)

If |I| > 1, then ∏_{i∈I} Pi = (∏_{i∈I1} Pi) | (∏_{i∈I2} Pi) for two disjoint non-empty sets I1 and I2 such that I = I1 ∪ I2. By (Abadi et al., 2016, Lemma B.21(1)), one of the following four cases holds:
1. ∏_{i∈I1} Pi →⋄ P′′ and P′ ≡ P′′ | (∏_{i∈I2} Pi) for some P′′;

2. ∏_{i∈I2} Pi →⋄ P′′ and P′ ≡ P′′ | (∏_{i∈I1} Pi) for some P′′;

3. ∏_{i∈I1} Pi −N(x)→⋄ P′′, ∏_{i∈I2} Pi −νx.N⟨x⟩→⋄ A′′, and P′ ≡ νx.(P′′ | A′′) for some P′′, A′′, x, and ground term N;

4. ∏_{i∈I1} Pi −νx.N⟨x⟩→⋄ A′′, ∏_{i∈I2} Pi −N(x)→⋄ P′′, and P′ ≡ νx.(P′′ | A′′) for some P′′, A′′, x, and ground term N.
In the first two cases, we conclude by induction hypothesis. In the last two cases, we conclude by P1.
By applying P2 to ∏_{i=1}^{n}[Pi] →⋄ P′, either there exist i ∈ {1, …, n} and P′′ such that [Pi] →⋄ P′′ and P′ ≡ P′′ | ∏_{j≠i}[Pj], or there exist i, j ∈ {1, …, n}, P′′, A′′, x, and a ground term N such that [Pi] −N(x)→⋄ P′′, [Pj] −νx.N⟨x⟩→⋄ A′′, and P′ ≡ νx.((P′′ | A′′) | ∏_{k≠i,j}[Pk]). In the first case, we obtain Prop(E, P, Pred, P′) with Pred = {Pi}, and in the second case, we obtain Prop(E, P, Pred, P′) with Pred = {Pi, Pj}.
As in (Abadi et al., 2016), we define the size of processes by induction on the syntax, such that size(!P) = 1 + 2 × size(P) and, when P is not a replication, size(P) is one plus the size of the immediate subprocesses of P. When P = {P1, …, Pn} is a multiset of processes, we define size(P) = size(P1) + · · · + size(Pn).
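As a quick sanity check of this measure, a sketch on a toy process syntax (the tuple representation is an assumption of this example). The factor 2 for replication is what lets the measure decrease when the proof unfolds !Q into two copies of Q:

```python
def size(proc):
    """size(!P) = 1 + 2*size(P); otherwise one plus the sizes of the
    immediate subprocesses. Processes are (tag, subprocess...)."""
    if proc[0] == "repl":                         # !P
        return 1 + 2 * size(proc[1])
    return 1 + sum(size(p) for p in proc[1:])     # other constructs

nil = ("nil",)
p = ("repl", ("par", nil, nil))                   # !(0 | 0)
print(size(p))   # 1 + 2*(1 + 1 + 1) = 7
```

Note that two copies of (0 | 0) have total size 6, strictly below size(!(0 | 0)) = 7, matching the induction on size(Pred) used below.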
Lemma B.3. If [E, P] →⋄ P′, then E, P →∗ E′, P′ and P′ ≡⋄ [E′, P′] for some E′, P′.

Proof. Let E = (Npub, {ã}) and P = {P1, …, Pn}. By Lemma B.2, there exists Pred such that Prop(E, P, Pred, P′). We proceed by induction on size(Pred).
• Case Pred = {Pi}, [Pi] →⋄ P′′, and P′ ≡ νã.(P′′ | ∏_{j≠i}[Pj]) for some i ∈ {1, …, n} and P′′. By (Abadi et al., 2016, Lemma B.21), we have the following cases:

1. [Pi] = Q1 | Q2 for some Q1 and Q2, and one of the following cases holds:
(a) Q1 →⋄ Q′1 and P′′ ≡ Q′1 | Q2 for some Q′1,
(b) Q1 −N(x)→⋄ A, Q2 −νx.N⟨x⟩→⋄ B, and P′′ ≡ νx.(A | B) for some A, B, x, and ground term N,
and two symmetric cases obtained by swapping Q1 and Q2.
By definition of [·], we have Pi = Q′′1 | Q′′2, Q1 = [Q′′1], and Q2 = [Q′′2] for some Q′′1 and Q′′2. By (Red Par), E, P → E, P′ where P′ = P \ {Pi} ∪ {Q′′1, Q′′2}. We have Prop(E, P′, P′red, P′) with P′red = {Q′′1} (case a), P′red = {Q′′2} (symmetric of case a), or P′red = {Q′′1, Q′′2} (case b and its symmetric), and size(P′red) < size(Pred) in all cases, so we conclude by induction hypothesis.

2. [Pi] = νn.Q, Q →⋄ Q′, and P′′ ≡ νn.Q′ for some n, Q, and Q′. We have Pi = new n; Q′′ and Q = [Q′′] for some Q′′. By (Red Res), E, P → E′, P′ where E′ = (Npub, {ã} ∪ {n′}), P′ = P \ {Pi} ∪ {Q′′{n′/n}}, and n′ ∉ Npub ∪ {ã}. We have [Q′′{n′/n}] = Q{n′/n} →⋄ Q′{n′/n}. (We show by induction on the derivation of Q →⋄ Q′ that, if Q →⋄ Q′, then σQ →⋄ σQ′ where σ is a bijective renaming.) So we have Prop(E′, P′, P′red, P′) with P′red = {Q′′{n′/n}} and size(P′red) < size(Pred), so we conclude by induction hypothesis.

3. [Pi] = !Q, Q | Q →⋄ Q′, and P′′ ≡ Q′ | !Q for some Q and Q′. We have Pi = !Q′′ and Q = [Q′′] for some Q′′. By (Red Repl) twice, E, P →∗ E, P′ where P′ = P ∪ {Q′′, Q′′}. Since Q | Q →⋄ Q′, by (Abadi et al., 2016, Lemma B.21), one of the following cases holds:
(a) Q →⋄ Q′1 and Q′ ≡ Q′1 | Q for some Q′1,
(b) Q −N(x)→⋄ A, Q −νx.N⟨x⟩→⋄ B, and Q′ ≡ νx.(A | B) for some A, B, x, and ground term N,
so we have Prop(E, P′, P′red, P′) with P′red = {Q′′} (case a) or P′red = {Q′′, Q′′} (case b), and size(P′red) < size(Pred) in all cases, so we conclude by induction hypothesis.

4. [Pi] = if M = N then Q1 else Q2 and either M =E N and P′′ ≡ Q1, or M ≠E N and P′′ ≡ Q2, for some M, N, Q1, and Q2. We have Pi = if M = N then Q′1 else Q′2, Q1 = [Q′1], and Q2 = [Q′2] for some Q′1 and Q′2. If M =E N, then by (Red Cond 1′′), E, P → E, P′ where P′ = P \ {Pi} ∪ {Q′1}. We have [E, P′] = new ã; ([Q′1] | ∏_{j≠i}[Pj]) = new ã; (Q1 | ∏_{j≠i}[Pj]) and P′ ≡ νã.(P′′ | ∏_{j≠i}[Pj]) ≡ [E, P′]. If M ≠E N, then by (Red Cond 2′′), E, P → E, P′ where P′ = P \ {Pi} ∪ {Q′2}. We have [E, P′] = new ã; ([Q′2] | ∏_{j≠i}[Pj]) = new ã; (Q2 | ∏_{j≠i}[Pj]) and P′ ≡ νã.(P′′ | ∏_{j≠i}[Pj]) ≡ [E, P′].

• Case Pred = {Pi, Pj}, [Pi] −N(x)→⋄ P′′, [Pj] −νx.N⟨x⟩→⋄ A′′, and P′ ≡ νã, x.((P′′ | A′′) | ∏_{k≠i,j}[Pk]) for some i, j ∈ {1, …, n}, P′′, A′′, x, and ground term N. By (Abadi et al., 2016, Lemma B.18) applied to the transition [Pi] −N(x)→⋄ P′′, we have the following cases:

1. [Pi] = Q1 | Q2 and either Q1 −N(x)→⋄ A′ and P′′ ≡ A′ | Q2, or Q2 −N(x)→⋄ A′ and P′′ ≡ Q1 | A′, for some Q1, Q2, and A′. We have Pi = Q′1 | Q′2, Q1 = [Q′1], and Q2 = [Q′2] for some Q′1 and Q′2. By (Red Par), E, P → E, P′ where P′ = P \ {Pi} ∪ {Q′1, Q′2}. We have Prop(E, P′, P′red, P′) with P′red = {Q′1, Pj} or P′red = {Q′2, Pj}, and size(P′red) < size(Pred)
By (Abadi et al., 2016, Lemma B.18) N(x) ′′ applied to the transition [Pi] −−−→⋄ P , we have the following cases: 1. [Pi]=Q1|Q2 andeitherQ1 −−−→⋄ A andP ≡A |Q2,or N(x) ′ ′′ ′ N(x)′ ′ ′ Q2 −−−→⋄ A andA≡Q1|A,forsomeQ1,Q2,andA. We have Pi = Q′1 | Q′2, Q1 = [Q′1], and Q2 = [Q′2] for some Q′1 and Q′2. By (Red Par), E,P → E,P′ where P′ = P \ {Pi}∪{Q′1,Q′2}. We have Prop(E,P′,P′ ,P′) with P′ = red red {Q′,P}orP′ ={Q′,P}andsize(P′ )