COMP9418: Advanced Topics in Statistical Machine Learning
Jointree Algorithm
Instructor: University of New South Wales
Introduction
§ In this lecture, we will study a variation of VE known as the jointree algorithm
§ Also known as clique-tree and tree-clustering algorithm
§ Jointree can be understood in terms of factor elimination
§ It improves VE complexity by answering multiple queries
§ It forms the basis of a class of approximate algorithms we will discuss later in this course
§ We will start by describing the idea of inference by factor elimination
§ Then, we formalise the jointree algorithm using these ideas
Introduction
§ Given a network we want to compute posterior marginals for each of its n variables
§ VE can compute a single marginal in 𝑂(𝑛 exp(𝑤)), where 𝑤 is the width of the elimination order
§ We can run VE n times, leading to a total complexity of O(n² exp(w))
§ The n² term can be problematic even when the treewidth is small
§ The jointree algorithm can avoid this complexity, leading to O(n exp(w)) time and space complexity
§ For Bayesian networks, it will compute the posterior marginals for all network families (a variable and its parents)
§ For Markov networks, it will provide posterior marginals for all cliques (clique-tree algorithm)
Factor Elimination
§ We want to compute the prior marginal over some variable Q
§ VE eliminates every other variable
§ Factor elimination will instead eliminate all factors except one that contains Q
§ The elimination of factor f_i from a set of factors S is a two-step process
§ Eliminate all variables V that appear only in factor f_i
§ Multiply the result Σ_V f_i by some other factor f_j in the set S
Factor Elimination Algorithm: FE1
Input: Network N, a variable Q in the network
Output: prior marginal P(Q)

S ← factors of network N
f_r ← a factor in S that contains variable Q
while S has more than one factor do
    remove a factor f_i ≠ f_r from set S
    V ← variables that appear in factor f_i but not in S
    f_j ← f_j Σ_V f_i for some factor f_j ∈ S
return project(f_r, Q)
• This algorithm makes use of the factor operation project(f, Q), which simply sums out all variables not in Q:

    project(f, Q) ≝ Σ_{vars(f)∖Q} f
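As a minimal sketch of FE1 in Python: the Factor class below, its dense dictionary tables, and all names are illustrative choices for this note, not part of any standard library. For simplicity, the eliminated factor is always multiplied into the factor containing Q, which is one of the valid choices discussed above.

from itertools import product

class Factor:
    """A toy factor: ordered variable names, per-variable domains, and a
    table mapping value tuples to numbers."""
    def __init__(self, vars, domains, table):
        self.vars = tuple(vars)
        self.domains = dict(domains)          # var -> list of values
        self.table = dict(table)              # tuple of values -> float

    def __mul__(self, other):
        vars = self.vars + tuple(v for v in other.vars if v not in self.vars)
        doms = {**self.domains, **other.domains}
        table = {}
        for vals in product(*(doms[v] for v in vars)):
            a = dict(zip(vars, vals))
            table[vals] = (self.table[tuple(a[v] for v in self.vars)]
                           * other.table[tuple(a[v] for v in other.vars)])
        return Factor(vars, doms, table)

    def project(self, Q):
        """project(f, Q): sum out every variable of f that is not in Q."""
        keep = [v for v in self.vars if v in Q]
        table = {}
        for vals, p in self.table.items():
            a = dict(zip(self.vars, vals))
            key = tuple(a[v] for v in keep)
            table[key] = table.get(key, 0.0) + p
        return Factor(keep, {v: self.domains[v] for v in keep}, table)

def fe1(factors, Q):
    """FE1 sketch: keep one factor f_r containing Q; repeatedly eliminate
    another factor f_i, always multiplying the result into f_r."""
    S = list(factors)
    f_r = next(f for f in S if Q in f.vars)
    S.remove(f_r)
    while S:
        f_i = S.pop()                                   # any choice is valid
        elsewhere = set(f_r.vars).union(*(set(g.vars) for g in S))
        # sum out the variables V that appear only in f_i ...
        f_i = f_i.project([v for v in f_i.vars if v in elsewhere])
        f_r = f_r * f_i                                 # ... then multiply in
    return f_r.project([Q])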
Factor Elimination: Correctness
§ Factor elimination is simply a variation of VE
§ While VE eliminates one variable at a time, FE eliminates a set of variables 𝑽 at once
§ As these variables appear only in f_i, we replace f_i by the new factor Σ_V f_i
§ We take an extra step and multiply the new factor into another factor f_j
§ At each iteration, the number of factors in S decreases by 1
§ After enough iterations, S will contain a single factor that contains Q
§ Eliminating all variables but Q from this factor provides the answer to the query
§ Two open choices remain
§ Which factor f_i to eliminate next
§ Which factor f_j to multiply by
§ Any set of choices will provide a valid answer, but some choices will be computationally better
Factor Elimination: Example
Suppose we want to answer 𝑃(𝐶) for this Bayesian network
A   f1 = Θ(A)
a   .6
ā   .4

A   B   f2 = Θ(B|A)
a   b   .9
a   b̄   .1
ā   b   .2
ā   b̄   .8

B   C   f3 = Θ(C|B)
b   c   .3
b   c̄   .7
b̄   c   .5
b̄   c̄   .5
Factor Elimination: Example
We start by eliminating f1. As A appears in both f1 and f2, V = ∅, so we simply multiply f1 into f2.
Factor Elimination: Example
The result is the product f1 f2 (f3 is unchanged):

A   B   f1 f2
a   b   .54
a   b̄   .06
ā   b   .08
ā   b̄   .32
Factor Elimination: Example
Now we eliminate the product f1 f2. This time V = {A}, since A appears in no other factor. Summing out A gives:

B   Σ_A f1 f2
b   .62
b̄   .38
Factor Elimination: Example
We multiply Σ_A f1 f2 into f3:

B   C   f3 Σ_A f1 f2
b   c   .186
b   c̄   .434
b̄   c   .190
b̄   c̄   .190
Factor Elimination: Example
Finally, we sum out B from the remaining factor to obtain the answer:

C   Σ_B f3 Σ_A f1 f2
c   .376
c̄   .624
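Running the fe1 sketch from above on this network reproduces these numbers. The 0/1 encoding of values below is a hypothetical choice (0 for a/b/c, 1 for the negated value):

dom = [0, 1]
f1 = Factor(['A'], {'A': dom}, {(0,): .6, (1,): .4})
f2 = Factor(['A', 'B'], {'A': dom, 'B': dom},
            {(0, 0): .9, (0, 1): .1, (1, 0): .2, (1, 1): .8})
f3 = Factor(['B', 'C'], {'B': dom, 'C': dom},
            {(0, 0): .3, (0, 1): .7, (1, 0): .5, (1, 1): .5})
print(fe1([f1, f2, f3], 'C').table)   # approx {(0,): 0.376, (1,): 0.624}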
Elimination Trees
§ In variable elimination, the elimination order was used to specify an elimination strategy
§ The amount of work performed by VE was determined by the quality (width) of the order
§ In factor elimination, we use trees to specify an elimination strategy
§ Each organization of factors into a tree structure represents a particular strategy
§ The quality of such trees (also called width) can be used to quantify the amount of work performed
§ The figure below shows one such tree
§ We call it an elimination tree
[Figure: an elimination tree with nodes 1..5 holding factors f1(A), f2(A,B), f3(A,C), f4(B,C,D), f5(C,E), and edges 1-2, 2-4, 3-4, 3-5]
Elimination Trees
§ An elimination tree for a set of factors 𝑺 is a pair (𝑇, 𝜙) where 𝑇 is a tree
§ Each factor in 𝑺 is assigned to exactly one node in 𝑇
§ We use φ_i to denote the product of factors assigned to node i in the tree
§ We also use vars(i) to denote the variables of factor φ_i
§ In this figure, the elimination tree (𝑇, 𝜙) has five nodes which are in one-to-one correspondence with the given factors
§ A node in an elimination tree may have multiple factors assigned to it or no factors at all
§ For many examples, there will be a one-to-one correspondence between factors and nodes in an elimination tree
Elimination Trees
§ When computing the marginal over variables 𝑸, we need to choose a special node 𝑟
§ This node 𝑟, called a root, must be chosen such that 𝑸 ⊆ 𝑣𝑎𝑟𝑠(𝑟)
§ For example, if Q = {C}, then nodes 3, 4, and 5 can act as roots
§ A root node is not strictly needed but will simplify the discussion
Elimination Trees
§ Given an elimination tree and a corresponding root 𝑟, our elimination strategy is
§ Eliminate node i only if it has a single neighbour j and i ≠ r
§ Sum out the variables V that appear in φ_i but not in the rest of the tree
§ Multiply the result Σ_V φ_i into the factor φ_j associated with its single neighbour j
Factor Elimination Algorithm: FE2
Input: Network N, a set of variables Q in the network, an elimination tree (T, φ), a root node r
Output: prior marginal P(Q)

while tree T has more than one node do
    remove a node i ≠ r having a single neighbour j from T
    V ← variables appearing in φ_i but not in remaining tree T
    φ_j ← φ_j Σ_V φ_i
return project(φ_r, Q)
• We still need to choose which node to remove, since more than one node i may satisfy the stated properties
• However, the choice made at this step does not affect the amount of work done by the algorithm
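A sketch of FE2 under the same toy Factor class. The tree representation, an adjacency dict {node: set of neighbours} with phi mapping each node to its factor, is an assumption of this sketch:

def fe2(tree, phi, r, Q):
    """FE2 sketch: repeatedly remove a leaf i != r, summing out the
    variables of phi_i that no longer appear in the remaining tree."""
    tree = {i: set(ns) for i, ns in tree.items()}     # work on copies
    phi = dict(phi)
    while len(tree) > 1:
        # any leaf i != r works; the choice does not change the work done
        i = next(n for n, ns in tree.items() if len(ns) == 1 and n != r)
        (j,) = tree.pop(i)
        tree[j].discard(i)
        remaining = set().union(*(set(phi[n].vars) for n in tree))
        keep = [v for v in phi[i].vars if v in remaining]
        phi[j] = phi[j] * phi[i].project(keep)
    return phi[r].project(Q)

On the A → B → C chain from the earlier example, fe2({1: {2}, 2: {1, 3}, 3: {2}}, {1: f1, 2: f2, 3: f3}, 3, ['C']) reproduces P(C).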
Elimination Trees: Example
Let us suppose we want to compute the marginal over variable 𝐶. We set 𝑟=3
Eliminate f1: f2 ← f2 Σ_∅ f1. Here V = ∅, so nothing is summed out; node 2's factor stays over A, B.
Elimination Trees: Example
Eliminate f2: f4 ← f4 Σ_∅ f2. Again V = ∅; node 4's factor grows from f(B,C,D) to f(A,B,C,D).
Elimination Trees: Example
Eliminate f4: f3 ← f3 Σ_{B,D} f4. Variables B and D appear nowhere else in the remaining tree, so they are summed out; node 3's factor is now over A, C.
Elimination Trees: Example
Eliminate f5: f3 ← f3 Σ_E f5, since E appears only in f5.
Elimination Trees: Example
The final factor is over variables 𝐴, 𝐶. Eliminating 𝐴 gives the desired result.
Elimination Trees and Runtime
§ With VE, any elimination order will lead to correct results
§ Yet a specific order may have a considerable effect on the amount of work
§ FE is similar
§ Any elimination tree will lead to correct results
§ Yet some trees will lead to less work
§ We will return to this topic later
[Figure: two different elimination trees over the same set of factors]
Separators
§ The separator of edge i − j in an elimination tree is the set of variables

    S_ij = vars(i, j) ∩ vars(j, i)

§ vars(i, j) are the variables that appear in factors on the i-side of edge i − j
§ vars(j, i) are the variables that appear in factors on the j-side of edge i − j
§ When variables V are summed out of factor φ_i before it is eliminated, the resulting factor is guaranteed to be over the separator S_ij
[Figure: the elimination tree with separators on its edges, e.g. S_34 = {A, C} and S_35 = {C}]
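These definitions translate directly into the running sketch (the helper names are made up for this note):

def side_vars(tree, phi, i, j):
    """vars(i, j): variables in factors on the i-side of edge i - j."""
    seen, stack, found = {j}, [i], set()
    while stack:
        n = stack.pop()
        seen.add(n)
        found.update(phi[n].vars)
        stack.extend(k for k in tree[n] if k not in seen)
    return found

def separators(tree, phi):
    """S_ij = vars(i, j) & vars(j, i), for every directed edge (i, j)."""
    return {(i, j): side_vars(tree, phi, i, j) & side_vars(tree, phi, j, i)
            for i in tree for j in tree[i]}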
Separator: Example
Eliminate f1: f2(A,B) ← f2(A,B) Σ_∅ f1(A). The eliminated factor Σ_∅ f1(A) is over the separator S_12 = {A}.
Separator: Example
Eliminate f2: f4(A,B,C,D) ← f4(B,C,D) Σ_∅ f2(A,B). The eliminated factor Σ_∅ f2(A,B) is over the separator S_24 = {A, B}.
Separator: Example
Eliminate f4: f3(A,C) ← f3(A,C) Σ_{B,D} f4(A,B,C,D). The eliminated factor Σ_{B,D} f4 is over the separator S_43 = {A, C}.
Separator: Example
Eliminate f5: f3(A,C) ← f3(A,C) Σ_E f5(C,E). The eliminated factor Σ_E f5 is over the separator S_35 = {C}.
Clusters
§ The cluster of a node i in an elimination tree is the set of variables

    C_i = vars(i) ∪ ⋃_j S_ij,   where j ranges over the neighbours of i

§ The width of an elimination tree is the size of its largest cluster − 1
§ The elimination tree from our running example has width = 3 (its largest cluster is {A, B, C, D})
§ For the same set of factors, a different elimination tree can have a smaller width
§ The alternative tree shown below has width = 2
[Figure: an alternative elimination tree for the same factors, with separators such as {B, C} and {C}, whose largest cluster has size 3]
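Continuing the sketch, clusters and width follow directly from the definitions above (reusing the separators helper):

def clusters(tree, phi, seps):
    """C_i = vars(i) | union of separators S_ij over neighbours j of i."""
    return {i: set(phi[i].vars).union(*(seps[(i, j)] for j in tree[i]))
            for i in tree}

def width(tree, phi):
    """Size of the largest cluster minus one."""
    seps = separators(tree, phi)
    return max(len(C) for C in clusters(tree, phi, seps).values()) - 1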
§ Two key observations about clusters
§ When we are about to eliminate node i, the variables of factor φ_i are exactly the cluster of node i, C_i
§ The final factor φ_r must be over the cluster of root r, C_r
§ Hence, FE2 can be used to compute the marginal over any subset of cluster C_r
§ These observations allow us to rephrase FE2
§ The new formulation takes advantage of both separators and clusters
Factor Elimination Algorithm: FE3
Input: Network N, a set of variables Q in the network, an elimination tree (T, φ), a root node r in T where Q ⊆ C_r
Output: prior marginal P(Q)

C_i is the cluster of node i in tree T
S_ij is the separator of edge i − j in tree T

while tree T has more than one node do
    remove a node i ≠ r having a single neighbour j from T
    φ_j ← φ_j project(φ_i, S_ij)
return project(φ_r, Q)
Message-passing Formulation
§ We will now rephrase our algorithm using a message passing paradigm
§ It will allow us to execute the algorithm without destroying the elimination tree in the process
§ This is important when computing multiple marginals
§ We can save intermediate computations and reuse them across different queries
§ This reuse is the key to achieving the desired complexity
§ Given an elimination tree of width 𝑤 we will be able to compute the marginal over every cluster in 𝑂(𝑚 exp(𝑤)) time and space, where 𝑚 is the number of nodes in the elimination tree
Message-passing Formulation
§ Given an elimination tree (𝑇 , 𝜙) with root 𝑟
§ For each node i ≠ r in the elimination tree, there is a unique neighbour of i that is closest to the root r
§ A node 𝑖 will be eliminated from the tree only after all its neighbors, except the one closest to the root, have been eliminated
§ When a node i is about to be eliminated, it will have a single neighbour j. Its current factor will have all variables outside the separator S_ij summed out, and the result will be multiplied into the factor of node j
Elimination tree with directed edges pointing to neighbour closest to root node 7
Message-passing Formulation
§ We now view the elimination of node i with a single neighbour j as passing a message M_ij from node i to node j
§ When j receives a message, it multiplies it into its current factor φ_j
§ Node i cannot send a message to j until it has received all messages from its neighbours k ≠ j
§ After i receives these messages, its current factor will be φ_i ∏_{k≠j} M_ki
§ The message i sends to j is therefore

    M_ij = Σ_{C_i∖S_ij} φ_i ∏_{k≠j} M_ki
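In the same sketch, a single message follows this formula (M is assumed to be a dict from directed edges to factors, and seps comes from the separators helper):

def message(i, j, tree, phi, seps, M):
    """M_ij = project(phi_i * prod of M_ki for neighbours k != j, S_ij)."""
    f = phi[i]
    for k in tree[i]:
        if k != j:
            f = f * M[(k, i)]           # all incoming messages except from j
    return f.project(seps[(i, j)])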
Message-passing Algorithm
§ We can now formulate FE as message-passing algorithm
§ To compute the marginal over some variables 𝑸
§ Select a root r in the elimination tree such that Q ⊆ C_r
§ Push messages towards the root 𝑟
§ When all messages into the root are available, we multiply them by φ_r and eliminate variables not in Q
§ If our elimination tree has 𝑚 nodes and 𝑚 − 1 edges, then 𝑚 − 1 messages need to be sent
10 messages are sent toward root node 7 in this 11-node elimination tree
Message-passing and Reuse
§ Suppose we want to compute the marginal over some other cluster C_i, i ≠ r
§ We choose 𝑖 as the new root and repeat the message-passing process
§ Some additional messages need to be passed, but not as many as 𝑚 − 1, assuming we saved the messages sent to 𝑟
§ In the figure, of the 10 messages sent toward node 6, eight were already computed when 7 was the root
Message-passing and Reuse
§ The key observation is that if we choose every node as a root, the total number of messages is 2(𝑚 − 1)
§ There are 𝑚 − 1 edges and two distinct messages per edge
§ These messages are usually computed in two phases
§ Inward: we direct messages toward a root 𝑟
§ Outward: we direct messages away from the root 𝑟
Outward phase with node 7 as root
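The two phases can be sketched with two tree traversals, building on the message helper above (recursion is used for brevity; an explicit stack would avoid recursion limits on deep trees):

def propagate(tree, phi, seps, r):
    """Inward (collect) then outward (distribute) passes; returns all
    2(m - 1) messages, one per directed edge."""
    M = {}
    def collect(i, parent):              # messages toward the root
        for k in tree[i]:
            if k != parent:
                collect(k, i)
                M[(k, i)] = message(k, i, tree, phi, seps, M)
    def distribute(i, parent):           # messages away from the root
        for k in tree[i]:
            if k != parent:
                M[(i, k)] = message(i, k, tree, phi, seps, M)
                distribute(k, i)
    collect(r, None)
    distribute(r, None)
    return M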
Message-passing: Complexity
§ A message from i to j is computed by multiplying a few factors and then summing out some variables
§ The factor that results from the multiplication must be over the cluster C_i
§ Hence, the complexity of both multiplication and summation is O(exp(w)), where w is the size of cluster C_i
§ The width of an elimination tree is the size of its maximal cluster − 1
§ Hence, if w is the width, then the cost of any message is O(exp(w))
§ Since we have 2(m − 1) messages, the total cost is O(m exp(w))
Joint Marginals and Evidence
§ Given some evidence e, suppose we want to use factor elimination to compute the joint marginal P(C_i, e) for each cluster C_i in the elimination tree. There are two ways to accommodate the evidence:
1. Reduce each factor f given the evidence e, leading to a set of reduced factors f^e
2. Introduce an evidence indicator λ_E for every variable E in evidence e. λ_E is a factor over variable E that captures the value of E in e: λ_E(e) = 1 if the value e is consistent with evidence e and λ_E(e) = 0 otherwise
e: {A = true, B = false}

A   λ_A        B   λ_B
a   1          b   0
ā   0          b̄   1
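As a sketch, an evidence indicator is just a one-variable factor in the toy representation used throughout:

def indicator(E, domain, observed):
    """lambda_E: 1.0 at the observed value of E, 0.0 elsewhere."""
    return Factor([E], {E: list(domain)},
                  {(v,): 1.0 if v == observed else 0.0 for v in domain})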
Joint Marginals and Evidence
§ The first method is more efficient if we plan to compute marginals with respect to only one piece of evidence 𝒆
§ The second method is more efficient when computing marginals with respect to multiple pieces of evidence e_1, …, e_k
§ while trying to reuse messages across the different pieces of evidence
§ This method is implemented by assigning the evidence indicator λ_E to a node i in the elimination tree while ensuring that E ∈ C_i
§ As a result, the clusters and separators of the elimination tree will remain intact and so will its width
Factor Elimination Algorithm: FE
Input: Network N, evidence e, an elimination tree (T, φ)
Output: joint marginal P(C_i, e) for each node i in the elimination tree

for each variable E in evidence e do
    i ← node in tree T such that E ∈ C_i
    λ_E ← evidence indicator for variable E    # λ_E(e) = 1 if e ~ e and λ_E(e) = 0 otherwise
    φ_i ← φ_i λ_E                              # entering evidence at node i
choose a root node r in the tree T
pull/collect messages towards root r
push/distribute messages away from root r
return φ_i ∏_k M_ki for each node i in the tree T    # joint marginal P(C_i, e)
• This algorithm uses the second method for accommodating evidence
• It computes joint marginals using two phases of message passing
• If we save the messages across different runs of the algorithm, then we can reuse these messages as long as they are not invalidated when the evidence changes
• When the evidence at node i changes, we need to invalidate all messages that depend on the factor at that node
• These messages happen to be the ones directed away from node i in the elimination tree
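Pulling the earlier sketches together, FE can be sketched as below. The choice of root and of the node receiving each indicator is arbitrary, as in the algorithm; for simplicity this sketch places each indicator at a node whose own factor mentions E, a special case of E ∈ C_i:

def fe(tree, phi, evidence, domains):
    """FE sketch: return the joint marginal P(C_i, e) for every node i."""
    phi = dict(phi)
    for E, value in evidence.items():                   # enter evidence
        i = next(n for n in tree if E in phi[n].vars)   # special case of E in C_i
        phi[i] = phi[i] * indicator(E, domains[E], value)
    seps = separators(tree, phi)
    r = next(iter(tree))                                # any root works
    M = propagate(tree, phi, seps, r)                   # two phases
    marginals = {}
    for i in tree:
        f = phi[i]
        for k in tree[i]:
            f = f * M[(k, i)]
        marginals[i] = f                                # P(C_i, e)
    return marginals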
Polytree Algorithm
§ When the network has a polytree structure
§ We can use an elimination tree that corresponds to the polytree structure
§ This special case of the algorithm is known as polytree algorithm or belief propagation algorithm. We will discuss them later
§ If 𝑘 is the maximum number of parents in any node in the polytree, then 𝑘 is also the width of the elimination tree
§ The time and space complexity are 𝑂(𝑛 exp(𝑘)), where 𝑛 is the number of nodes in the polytree
Jointrees
§ There are different methods for constructing elimination trees
§ But the method we discuss next is based on an influential tool known as a jointree
§ It is this tool that gives factor elimination its traditional name: the jointree algorithm
§ It is possible to phrase the factor elimination algorithm directly on jointrees, without explicit mention of elimination trees
§ This is indeed how the algorithm is classically described, and we provide such a description
§ We start by defining a jointree
§ A jointree for a network G is a pair (T, C) where T is a tree and C is a function that maps each node i in the tree T to a label C_i, called a cluster
§ The jointree must satisfy the following properties
§ The cluster C_i is a set of nodes (variables) from G
§ The variables of each factor in G must be contained in some cluster C_i
§ If a variable appears in two clusters C_i and C_j, it must appear in every cluster C_k on the path connecting nodes i and j in the jointree. This is known as the jointree or running intersection property
[Figure: a network G and a jointree for it, with clusters such as FDG and ACE]
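The running intersection property can be checked mechanically. In this sketch, tree is an adjacency dict and C maps nodes to cluster sets; the helper names are made up:

def path(tree, i, j, parent=None):
    """The unique path between nodes i and j in a tree (empty if none)."""
    if i == j:
        return [i]
    for k in tree[i]:
        if k != parent:
            p = path(tree, k, j, i)
            if p:
                return [i] + p
    return []

def running_intersection(tree, C):
    """True iff every shared variable appears all along the connecting path."""
    nodes = list(tree)
    return all(X in C[k]
               for a in nodes for b in nodes if a != b
               for X in C[a] & C[b]
               for k in path(tree, a, b))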
§ The separator of edge i − j in a jointree is denoted by S_ij and defined as C_i ∩ C_j
§ The width of a jointree is defined as the size of its largest cluster minus one
§ A jointree is minimal if it ceases to be a jointree for G once we remove a variable from one of its clusters
§ In the figure below, the left jointree is minimal but the right one is not
[Figure: two jointrees; the left has clusters FDG, ACE, AEF, ABD, EFH with separators DF, AE, AD, EF; the right has clusters FDGH, ACE, ADFH, ABD, EFH with separators DFH, AE, AD, EFH]
The Jointree Algorithm
§ The classical jointree algorithm is:
§ Construct a jointree (T, C) for the given network
§ Assign each factor φ to a cluster that contains vars(φ)
§ Assign each evidence indicator λ_X to a cluster that contains X
§ Propagate messages in the jointree between the clusters
§ After passing two messages per edge in the jointree, we can compute the marginals P(C, e) for every cluster C
[Figure: a network over A, …, H with factors φ(A), φ(B,A), φ(C,A), φ(D,A,B), φ(A,C,E), φ(F,A), φ(G,D,F), φ(H,E,F) assigned to the clusters of a jointree]
Message-passing: Example
Consider a jointree that is a chain of four clusters with assigned factors φ_1(A,B), φ_2(B,C), φ_3(C,D), φ_4(D,E):

1: A,B    2: B,C    3: C,D    4: D,E
Φ_1 = φ_1(A,B)   Φ_2 = φ_2(B,C)   Φ_3 = φ_3(C,D)   Φ_4 = φ_4(D,E)
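Putting the sketches together on this chain (with hypothetical random tables standing in for the CPTs), one inward and one outward pass yield all 2(m − 1) = 6 messages and hence all four cluster marginals at once:

from itertools import product
import random
random.seed(0)

dom = [0, 1]
def rand_factor(vars):                   # hypothetical stand-in tables
    return Factor(vars, {v: dom for v in vars},
                  {k: random.random() for k in product(dom, repeat=len(vars))})

tree = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
phi = {1: rand_factor(['A', 'B']), 2: rand_factor(['B', 'C']),
       3: rand_factor(['C', 'D']), 4: rand_factor(['D', 'E'])}
marginals = fe(tree, phi, evidence={}, domains={v: dom for v in 'ABCDE'})
print(sorted(marginals[2].vars))         # ['B', 'C'], the cluster of node 2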