COMP9418: Advanced Topics in Statistical Machine Learning
Graph Decomposition
Instructor: University of Wales
Introduction
§ In this lecture, we will study the relationships between variable elimination (VE) and jointrees
§ Both can be viewed as decomposing a network in a systematic way
§ VE removes one variable at a time, while factor elimination (FE) removes one factor at a time
§ We will provide more formal definitions for concepts seen in previous lectures about these topics
§ And cover polynomial-time algorithms to convert elimination orders to jointrees and vice versa
Moral Graph
§ An interaction graph for factors 𝜙_1, …, 𝜙_n is an undirected graph
§ Nodes correspond to the variables appearing in 𝜙_1, …, 𝜙_n
§ Edges connect variables that appear in the same factor
§ If the factors are CPTs of a Bayesian network with DAG 𝐺, the interaction graph can be obtained from 𝐺 by moralization
§ Add an undirected edge between every pair of nodes that share a common child
§ Convert every directed edge in 𝐺 into an undirected edge
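As a sketch of moralization (not from the lecture; Python, with the DAG given as a dict mapping each node to its list of parents, a representation we choose here for convenience):

```python
from itertools import combinations

def moralize(dag):
    """Moralize a DAG given as {node: list of parents}.

    Returns the moral graph as {node: set of neighbours}."""
    g = {v: set() for v in dag}
    for child, parents in dag.items():
        # Convert every directed edge parent -> child into an undirected edge
        for p in parents:
            g[child].add(p)
            g[p].add(child)
        # "Marry" the parents: connect every pair of nodes sharing this child
        for p, q in combinations(parents, 2):
            g[p].add(q)
            g[q].add(p)
    return g

# DAG used in the lecture's examples: A -> B, C; B -> D; C -> E; D, E -> F
dag = {'A': [], 'B': ['A'], 'C': ['A'], 'D': ['B'], 'E': ['C'], 'F': ['D', 'E']}
moral = moralize(dag)
# D and E become adjacent because they share the child F
```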
§ Treewidth is a central notion in the complexity analysis of inference algorithms
§ It is usually defined for undirected graphs
§ It can be extended to DAGs through their moral graphs
§ The treewidth of a DAG 𝐺 is defined as the treewidth of its moral graph
§ We will later define the treewidth for undirected graphs
Other Graph-theoretic Definitions
§ We adopt the following definitions for a graph 𝐺
§ A neighbor of a node 𝑋 is a node 𝑌 connected to 𝑋 by an edge in 𝐺. We also say 𝑋 and 𝑌 are adjacent
§ The degree of a node 𝑋 is the number of neighbors of 𝑋
§ A clique is a set of nodes in 𝐺 that are pairwise adjacent, i.e., every pair of nodes in the clique are connected by an edge
§ A maximal clique is a clique that is not strictly contained in another clique
§ When we say “graph 𝐺” in this lecture, we mean an undirected graph. Otherwise, we say “DAG 𝐺”
Elimination Order
§ An elimination order for a graph 𝐺 is a total ordering 𝜋 of the nodes of 𝐺, where 𝜋(𝑖) is the 𝑖-th node in the ordering
§ The result of eliminating node 𝑋 from graph 𝐺 is another graph obtained from 𝐺 by
§ Adding an edge between every pair of nonadjacent neighbors of 𝑋
§ Deleting node 𝑋
§ The edges added during the elimination process are called fill-in edges
[Figure: the moral graph and the filled-in graph for 𝜋 = 𝐹, 𝐴, 𝐵, 𝐶, 𝐷, 𝐸]
Node Elimination
𝜋 = 𝐹,𝐴,𝐵,𝐶,𝐷,𝐸
Graph and Cluster Sequence
§ The elimination of nodes from graph 𝐺 according to order 𝜋 induces:
§ A graph sequence 𝐺_1, …, 𝐺_n, where 𝐺_1 = 𝐺 and graph 𝐺_{i+1} is obtained by eliminating node 𝜋(𝑖) from graph 𝐺_i
§ A cluster sequence 𝑪_1, …, 𝑪_n, where 𝑪_i consists of node 𝜋(𝑖) and its neighbors in graph 𝐺_i
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
Elimination Width
§ Let 𝜋 be an elimination order for a graph 𝐺, and 𝑪_1, …, 𝑪_n the induced cluster sequence
§ The width of the elimination order 𝜋 is width(𝜋, 𝐺) ≝ max_{i=1,…,n} |𝑪_i| − 1
§ We extend this definition to DAGs: the width of 𝜋 for a DAG is the width of 𝜋 for the DAG’s moral graph
In this example, width = 2
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
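The cluster sequence and the width of an order can be computed directly from the definitions. A Python sketch (graph as a dict of neighbour sets; node elimination is inlined):

```python
from itertools import combinations

def cluster_sequence(g, order):
    """Clusters C_i = {pi(i)} union neighbours of pi(i) in G_i."""
    g = {v: set(nbrs) for v, nbrs in g.items()}   # work on a copy
    clusters = []
    for x in order:
        clusters.append({x} | g[x])
        for u, v in combinations(g[x], 2):        # add fill-in edges
            g[u].add(v)
            g[v].add(u)
        for u in g[x]:                            # delete x
            g[u].discard(x)
        del g[x]
    return clusters

def width(g, order):
    """Width of an elimination order: size of its largest cluster minus one."""
    return max(len(c) for c in cluster_sequence(g, order)) - 1

# Moral graph of the lecture's example network
g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
order = ['F', 'A', 'B', 'C', 'D', 'E']
print(width(g, order))   # -> 2
```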
§ The treewidth of a graph 𝐺 is treewidth(𝐺) ≝ min_𝜋 width(𝜋, 𝐺)
§ When the width of an elimination order 𝜋 equals the treewidth, we say 𝜋 is optimal
§ When an elimination order 𝜋 does not lead to any fill-in edges, we say 𝜋 is a perfect elimination order
§ Not every graph admits a perfect elimination order
This elimination order is optimal (why?), but not perfect
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
Elimination Heuristics
§ The computation of an optimal elimination order is NP-hard
§ Several greedy elimination heuristics have been proposed
§ These are ways to eliminate nodes based on local considerations
§ The two most common heuristics are
§ Min-degree: eliminate the node having the smallest number of neighbors
§ Min-fill: eliminate the node that leads to adding the smallest number of fill-in edges
§ Min-fill typically produces better results
§ Min-degree is known to be optimal for graphs with treewidth ≤ 2
Node 𝐶 has degree 4, and min-fill score of 1
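Both heuristics fit the same greedy loop; only the scoring function changes. A Python sketch (function names are our own):

```python
from itertools import combinations

def greedy_order(g, score):
    """Eliminate nodes greedily, always picking the node minimising `score`."""
    g = {v: set(nbrs) for v, nbrs in g.items()}   # work on a copy
    order = []
    while g:
        x = min(sorted(g), key=lambda v: score(g, v))
        order.append(x)
        for u, v in combinations(g[x], 2):        # add fill-in edges
            g[u].add(v)
            g[v].add(u)
        for u in g[x]:
            g[u].discard(x)
        del g[x]
    return order

def min_degree(g, x):
    return len(g[x])                              # number of neighbours

def min_fill(g, x):
    # number of fill-in edges that eliminating x would add
    return sum(1 for u, v in combinations(g[x], 2) if v not in g[u])

# Moral graph of the lecture's example network
g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
order = greedy_order(g, min_degree)
```

Breaking ties (here: alphabetically via `sorted`) is where the combined and stochastic variants discussed next would plug in.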
Elimination Heuristics
§ In practice, it is common to combine heuristics
§ Select nodes with min-fill and break ties with min-degree
§ Select nodes with min-degree and break ties using min-fill
§ Stochastic techniques can be powerful to combine heuristics
§ Use the same elimination heuristic to eliminate every node, but break ties stochastically
§ Use different heuristics to eliminate different nodes, where the choice of heuristic at each node is made stochastically
Optimal Elimination Prefixes
§ A prefix of elimination order 𝜋 is a sequence of variables 𝜏 that occurs at the beginning of 𝜋
§ For example if 𝜋 = 𝐴,𝐵,𝐶,𝐷,𝐸 then two prefixes are 𝜏 = 𝐴,𝐵,𝐶 and 𝜏 = 𝐴,𝐵
§ If 𝜏 is a prefix of some optimal elimination order, then 𝜏 is an optimal elimination prefix
§ It can be completed to yield an optimal order
§ The notion of width can be extended to prefixes by considering the cluster sequence of applying the prefix to a graph
§ We can generate optimal prefixes
§ There are preprocessing rules that eliminate a subset of variables
§ While guaranteeing that the eliminated variables form a prefix of some optimal elimination order
§ These rules are not complete, i.e., we cannot always use them to produce a complete elimination order
§ Yet, we can use these rules to eliminate as many nodes as possible before resorting to a heuristic
Optimal Elimination Prefixes
§ We apply the rules in any order
§ A lower bound (low) is maintained on the treewidth
§ Some rules update this bound
§ While others use it as a condition for applying the rule
§ If a graph 𝐺′ is the result of applying any of these rules to 𝐺, and low is updated accordingly, then
§ treewidth(𝐺) = max(treewidth(𝐺′), low)
§ Therefore, these rules reduce the computation of the treewidth of 𝐺 to the computation of the treewidth of a smaller graph 𝐺′
§ Moreover, they are guaranteed to generate only optimal elimination prefixes
§ A simplicial node is a node whose neighbors are pairwise adjacent, forming a clique
§ An almost simplicial node is one in which all but one of its neighbors form a clique
Optimal Elimination Rules
§ There are four rules
§ Simplicial rule: eliminate any simplicial node with degree 𝑑, updating low to max(low, 𝑑)
§ Almost simplicial rule: eliminate any almost simplicial node with degree 𝑑 as long as low ≥ 𝑑
§ Buddy rule: if low ≥ 3, eliminate any pair of nodes 𝑋 and 𝑌 that have degree 3 each and share the same set of neighbors
§ Cube rule: if low ≥ 3, eliminate any set of four nodes 𝐴, 𝐵, 𝐶, 𝐷 forming the structure in the figure
§ These rules are complete for graphs of treewidth ≤ 3
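A Python sketch of the two node properties these rules test for (graph as a dict of neighbour sets):

```python
from itertools import combinations

def is_clique(g, nodes):
    """True iff the given nodes are pairwise adjacent in g."""
    return all(v in g[u] for u, v in combinations(nodes, 2))

def is_simplicial(g, x):
    """All neighbours of x are pairwise adjacent."""
    return is_clique(g, sorted(g[x]))

def is_almost_simplicial(g, x):
    """All but one neighbour of x form a clique."""
    return any(is_clique(g, sorted(g[x] - {y})) for y in g[x])

# Moral graph of the lecture's example network
g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
# F is simplicial (its neighbours D and E are adjacent); A is not (B-C missing)
```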
Optimal Elimination Rules
§ Special cases for simplicial rule
§ Islet rule: eliminate nodes with degree 0
§ Twig rule: eliminate nodes with degree 1
§ Special cases for almost simplicial rule
§ Series rule: eliminate nodes with degree 2 if low ≥ 2
§ Triangle rule: eliminate nodes with degree 3 if low ≥ 3 and at least two of the neighbors are connected by an edge
Optimal Elimination Prefixes: Example
[Figure: step-by-step elimination of the moral graph]
𝜏 = 𝐹 (Simplicial rule), low = 2
𝜏 = 𝐹, 𝐴 (Series rule), low = 2
𝜏 = 𝐹, 𝐴, 𝐵 (Series rule), low = 2
𝜏 = 𝐹, 𝐴, 𝐵, 𝐶 (Series rule), low = 2
𝜏 = 𝐹, 𝐴, 𝐵, 𝐶, 𝐷 (Twig rule), low = 2
𝜏 = 𝐹, 𝐴, 𝐵, 𝐶, 𝐷, 𝐸 (Islet rule), low = 2
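As an illustrative sketch, the loop below applies only the simplicial and almost simplicial rules (which subsume the islet, twig, series, and triangle special cases); on the lecture's example graph it reproduces 𝜏 = 𝐹, 𝐴, 𝐵, 𝐶, 𝐷, 𝐸 with low = 2:

```python
from itertools import combinations

def is_clique(g, nodes):
    return all(v in g[u] for u, v in combinations(nodes, 2))

def eliminate(g, x):
    for u, v in combinations(g[x], 2):   # add fill-in edges
        g[u].add(v)
        g[v].add(u)
    for u in g[x]:
        g[u].discard(x)
    del g[x]

def optimal_prefix(g):
    """Exhaustively apply the simplicial and almost simplicial rules.

    Returns the prefix tau and the lower bound low."""
    g = {v: set(nbrs) for v, nbrs in g.items()}   # work on a copy
    tau, low = [], 0
    changed = True
    while changed:
        changed = False
        for x in sorted(g):
            nbrs = g[x]
            if is_clique(g, sorted(nbrs)):               # simplicial rule
                low = max(low, len(nbrs))
                tau.append(x)
                eliminate(g, x)
                changed = True
                break
            if (len(nbrs) <= low and                     # almost simplicial rule
                    any(is_clique(g, sorted(nbrs - {y})) for y in nbrs)):
                tau.append(x)
                eliminate(g, x)
                changed = True
                break
    return tau, low

g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
tau, low = optimal_prefix(g)
```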
Lower Bounds on Treewidth
§ Lower bounds are used to empower the pre-processing rules
§ We can use them to set low to a larger initial value
§ This may allow us to apply more pre-processing elimination rules
§ Two well-known, but typically weak, lower bounds are
§ If a graph 𝐺 has a clique of size 𝑛, then treewidth(𝐺) ≥ 𝑛 − 1
§ The degree of a graph is a lower bound for treewidth. The degree of a graph is the minimum number of neighbours attained by any of its nodes
§ The degeneracy of a graph is the maximum degree attained by any of its subgraphs
§ It is a better bound. It is based on two observations
§ First, the treewidth of any subgraph cannot be larger than the treewidth of the graph containing it
§ Second, the degree of a subgraph may be higher than the degree of the graph containing it
[Figure: a graph with degree 2 containing a subgraph with degree 3]
Lower Bounds on Treewidth
§ The degeneracy is also known as maximum minimum degree (MMD)
§ MMD is easily computed by generating a sequence of subgraphs
§ Start with the original graph and repeatedly remove a minimum-degree node
§ The MMD is the maximum degree attained by any of the generated subgraphs
[Figure: sequence of subgraphs obtained by removing minimum-degree nodes, with the degree attained at each step]
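A Python sketch of MMD (note that nodes are simply removed, without adding fill-in edges):

```python
def mmd(g):
    """Maximum minimum degree (degeneracy) lower bound on treewidth."""
    g = {v: set(nbrs) for v, nbrs in g.items()}   # work on a copy
    bound = 0
    while g:
        x = min(sorted(g), key=lambda v: len(g[v]))   # a minimum-degree node
        bound = max(bound, len(g[x]))                 # degree of current subgraph
        for u in g[x]:                                # remove x (no fill-in edges)
            g[u].discard(x)
        del g[x]
    return bound

# Moral graph of the lecture's example network
g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
print(mmd(g))   # -> 2 for this graph
```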
Optimal Elimination Order
§ We can use search methods to find an optimal elimination order
§ For instance, depth-first search to generate all possible elimination orders
§ The search space has size 𝑂(𝑛!), where 𝑛 is the number of variables
§ Therefore, such methods are limited to a small number of variables
§ However, we can exploit optimal elimination prefixes and lower bounds to prune the search space
[Figure: search tree over elimination orders]
Optimal Elimination Order: Pruning
§ Pruning is essential to improve the efficiency of these search methods
§ If you have a query, use the query pruning techniques from Lecture 7
§ Use optimal prefix rules to further reduce the graph size and provide an elimination prefix and lower bound
§ In a depth-first search, each time we reach a leaf node in the search tree, we obtain an elimination order
§ Suppose we have an elimination order 𝜋 with width 𝑤_𝜋 (our best-so-far width)
§ We are currently exploring a prefix 𝜏 with width 𝑤_𝜏
§ We also have a lower bound 𝑏 on the treewidth of the graph remaining after applying 𝜏
§ If max(𝑤_𝜏, 𝑏) ≥ 𝑤_𝜋, then we cannot improve on order 𝜋 by extending 𝜏
[Figure: search tree with best-so-far order 𝜋 of width 𝑤_𝜋 and current prefix 𝜏 of width 𝑤_𝜏]
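A minimal branch-and-bound sketch along these lines. It prunes only with the prefix width 𝑤_𝜏 against the best-so-far width; a fuller version would also prune with a lower bound 𝑏 (e.g. MMD) on the remaining graph:

```python
from itertools import combinations

def treewidth_bb(g):
    """Depth-first branch-and-bound over elimination orders.

    best[0] holds the best-so-far width w_pi; a branch is pruned as soon
    as the width w_tau of the current prefix reaches best[0]."""
    best = [len(g)]                        # trivial upper bound

    def dfs(g, w_tau):
        if not g:                          # leaf: a complete elimination order
            best[0] = min(best[0], w_tau)
            return
        for x in sorted(g):
            cw = max(w_tau, len(g[x]))     # prefix width after eliminating x
            if cw >= best[0]:
                continue                   # prune: cannot improve on best order
            g2 = {v: set(nbrs) for v, nbrs in g.items()}
            for u, v in combinations(g2[x], 2):   # add fill-in edges
                g2[u].add(v)
                g2[v].add(u)
            for u in g2[x]:
                g2[u].discard(x)
            del g2[x]
            dfs(g2, cw)

    dfs(g, 0)
    return best[0]

# Moral graph of the lecture's example network (treewidth 2)
g = {'A': {'B', 'C'}, 'B': {'A', 'D'}, 'C': {'A', 'E'},
     'D': {'B', 'E', 'F'}, 'E': {'C', 'D', 'F'}, 'F': {'D', 'E'}}
print(treewidth_bb(g))   # -> 2
```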
Jointree: Recap
§ A jointree for a network 𝐺 is a pair (𝑇, 𝑪), where 𝑇 is a tree and 𝑪 is a function that maps each node 𝑖 in the tree 𝑇 into a label 𝑪_i, called a cluster. The jointree must satisfy the following properties
§ The cluster 𝑪_i is a set of nodes from 𝐺
§ Each factor in 𝐺 must appear in some cluster 𝑪_i
§ If a variable appears in two clusters 𝑪_i and 𝑪_j, it must appear in every cluster 𝑪_k on the path connecting nodes 𝑖 and 𝑗 in the jointree. This is known as the jointree or running intersection property
[Figure: an example jointree]
Jointree: Recap
§ The separator of edge 𝑖 − 𝑗 in a jointree is a set of variables defined as follows
𝑺_ij = vars(𝑖, 𝑗) ∩ vars(𝑗, 𝑖)
where vars(𝑖, 𝑗) denotes the variables appearing in clusters on the 𝑖-side of edge 𝑖 − 𝑗
§ The width of a jointree is defined as the size of its largest cluster minus one
§ A jointree is also called a junction tree or a cluster tree
[Figure: an example jointree with its separators]
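A Python sketch of these two definitions, run on a small hypothetical chain jointree (the clusters and tree below are our own example, not the lecture's figure):

```python
def jointree_width(clusters):
    """Width of a jointree = size of its largest cluster minus one."""
    return max(len(c) for c in clusters.values()) - 1

def separator(tree, clusters, i, j):
    """S_ij = vars(i, j) & vars(j, i), where vars(i, j) is the union of
    the clusters reachable from i without crossing edge i-j."""
    def side_vars(a, b):
        seen, stack = {a}, [a]
        while stack:
            n = stack.pop()
            for m in tree[n]:
                if m not in seen and not (n == a and m == b):
                    seen.add(m)
                    stack.append(m)
        return set().union(*(clusters[n] for n in seen))
    return side_vars(i, j) & side_vars(j, i)

# Hypothetical chain jointree over clusters ABC - BCD - CDE
tree = {1: {2}, 2: {1, 3}, 3: {2}}
clusters = {1: {'A', 'B', 'C'}, 2: {'B', 'C', 'D'}, 3: {'C', 'D', 'E'}}
print(jointree_width(clusters))                   # -> 2
print(sorted(separator(tree, clusters, 1, 2)))    # -> ['B', 'C']
```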
Jointree Operations
§ The following transformations preserve all three properties of a jointree
§ Add variable: we can add a variable 𝑋 to a cluster 𝑪_i if 𝑪_i has a neighbour 𝑪_j that contains 𝑋
§ Merge clusters: we can merge two neighbouring clusters 𝑪_i and 𝑪_j into a single cluster 𝑪_k = 𝑪_i ∪ 𝑪_j, where 𝑪_k inherits the neighbours of 𝑪_i and 𝑪_j
§ Add cluster: we can add a new cluster 𝑪_j and make it a neighbour of an existing cluster 𝑪_i if 𝑪_j ⊆ 𝑪_i
§ Remove cluster: we can remove cluster 𝑪_j if it has a single neighbour 𝑪_i and 𝑪_j ⊆ 𝑪_i
Jointree Operations
§ These transformations have practical applications
§ The addition of variable 𝐷 allows the jointree to compute the marginal over variables 𝐶𝐷𝐸
§ This marginal will not be computed if the algorithm is applied to the left jointree
§ Merging two clusters eliminates the separator connecting them
§ As the Shenoy-Shafer algorithm creates factors over each separator, this transformation can be used to reduce space requirements
§ However, it will also typically increase the running time, as merging clusters can lead to larger clusters and the algorithm is exponential in cluster size
§ This transformation can be the basis for time-space trade-offs
Jointree to Elimination Orders
𝜋 = []
while not empty(𝑇) do
    remove a node 𝑖 from 𝑇 that has a single neighbour 𝑗
    append variables 𝑪_i \ (𝑪_i ∩ 𝑪_j) to 𝜋
append variables 𝑪_r to 𝜋, where 𝑟 is the remaining node in 𝑇
return 𝜋
[Figure: jointree with clusters FDG, ACE, EFH, ABD and assigned CPTs 𝜙(𝐴), 𝜙(𝐹,𝐴), 𝜙(𝐵,𝐴), 𝜙(𝐷,𝐴,𝐵), 𝜙(𝐶,𝐴), 𝜙(𝐸,𝐴,𝐶), 𝜙(𝐻,𝐸,𝐹), 𝜙(𝐺,𝐷,𝐹)]
Jointree to Elimination Orders
𝜋 = []
𝜋 = [𝐺, 𝐶, 𝐵]
𝜋 = [𝐺, 𝐶, 𝐵, 𝐻]
𝜋 = [𝐺, 𝐶, 𝐵, 𝐻, 𝐷]
𝜋 = [𝐺, 𝐶, 𝐵, 𝐻, 𝐷, {𝐴, 𝐸, 𝐹}]
[Figure: the jointree after each leaf-cluster removal]
Jointree to Elimination Orders
§ This algorithm simulates the process of factor elimination
§ It runs in polynomial time
§ It provides an elimination order of width no greater than the jointree width
§ The algorithm leaves a few choices undetermined
§ Choosing the node 𝑖 to remove next from the jointree
§ Choosing the order in which the variables 𝑪_i \ (𝑪_i ∩ 𝑪_j) are appended to 𝜋
§ Neither of these choices affects the guarantees provided by the algorithm
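A Python sketch of the algorithm above, again on a hypothetical chain jointree (our own small example):

```python
def jointree_to_order(tree, clusters):
    """Turn a jointree into an elimination order of no greater width."""
    tree = {i: set(nbrs) for i, nbrs in tree.items()}   # work on copies
    clusters = dict(clusters)
    pi = []
    while len(tree) > 1:
        # remove a node i that has a single neighbour j
        i = next(n for n in sorted(tree) if len(tree[n]) == 1)
        j = next(iter(tree[i]))
        pi.extend(sorted(clusters[i] - clusters[j]))    # C_i \ (C_i & C_j)
        del tree[i]
        tree[j].discard(i)
    r = next(iter(tree))                                # the remaining node
    pi.extend(sorted(clusters[r]))
    return pi

# Hypothetical chain jointree over clusters ABC - BCD - CDE
tree = {1: {2}, 2: {1, 3}, 3: {2}}
clusters = {1: {'A', 'B', 'C'}, 2: {'B', 'C', 'D'}, 3: {'C', 'D', 'E'}}
print(jointree_to_order(tree, clusters))   # -> ['A', 'B', 'C', 'D', 'E']
```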
Elimination Orders to Jointrees
§ We present a polynomial time, width-preserving algorithm for generating a jointree from an elimination order
§ The algorithm consists of two parts
§ Constructing the clusters
§ Assembling the jointree
Elimination Orders to Jointrees: Clusters
§ We show how to generate clusters with the following properties
§ The size of every cluster is ≤ width(𝜋, 𝐺) + 1
§ The clusters satisfy conditions 1 and 2 of the jointree definition
§ Let 𝑪_1, …, 𝑪_n be the cluster sequence that results from applying the elimination order 𝜋 to the undirected graph 𝐺 (or the moral graph of a DAG). Every family of 𝐺 is contained in some cluster in the sequence
§ Hence 𝑪_1, …, 𝑪_n satisfy the first two conditions of the jointree definition, and the size of each cluster is ≤ width(𝜋, 𝐺) + 1
Elimination Orders to Jointrees: Clusters
𝜋 = 𝐹,𝐴,𝐵,𝐶,𝐷,𝐸
[Figure: DAG over 𝐴, 𝐵, 𝐶, 𝐷, 𝐸, 𝐹 with CPTs 𝜙(𝐴), 𝜙(𝐵,𝐴), 𝜙(𝐶,𝐴), 𝜙(𝐷,𝐵), 𝜙(𝐸,𝐶), 𝜙(𝐹,𝐷,𝐸)]
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
Elimination Orders to Jointrees: Clusters
𝜋 = 𝐹,𝐴,𝐵,𝐶,𝐷,𝐸
𝑪_1 = 𝐷𝐸𝐹: 𝜙(𝐹,𝐷,𝐸)
𝑪_2 = 𝐴𝐵𝐶: 𝜙(𝐴), 𝜙(𝐵,𝐴), 𝜙(𝐶,𝐴)
𝑪_3 = 𝐵𝐶𝐷: 𝜙(𝐷,𝐵)
𝑪_4 = 𝐶𝐷𝐸: 𝜙(𝐸,𝐶)
𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
Running Intersection Property
§ Let 𝑪_1, …, 𝑪_n be the cluster sequence induced by applying elimination order 𝜋 to graph 𝐺
§ For every 𝑖 < 𝑛, the variables 𝑪_i ∩ (𝑪_{i+1} ∪ ⋯ ∪ 𝑪_n) are contained in some cluster 𝑪_j with 𝑗 > 𝑖
§ This is known as the running intersection property of the cluster sequence
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
𝑪_1 ∩ (𝑪_2 ∪ ⋯ ∪ 𝑪_6) = 𝐷𝐸
𝑪_2 ∩ (𝑪_3 ∪ ⋯ ∪ 𝑪_6) = 𝐵𝐶
𝑪_3 ∩ (𝑪_4 ∪ ⋯ ∪ 𝑪_6) = 𝐶𝐷
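A Python sketch checking the property, using the lecture's running example as the cluster sequence:

```python
def has_rip(clusters):
    """Check the running intersection property of a cluster sequence:
    C_i & (C_{i+1} | ... | C_n) must be contained in some later C_j."""
    for i in range(len(clusters) - 1):
        rest = set().union(*clusters[i + 1:])
        s = clusters[i] & rest
        if not any(s <= clusters[j] for j in range(i + 1, len(clusters))):
            return False
    return True

# Cluster sequence for pi = F, A, B, C, D, E on the lecture's moral graph
seq = [{'D', 'E', 'F'}, {'A', 'B', 'C'}, {'B', 'C', 'D'},
       {'C', 'D', 'E'}, {'D', 'E'}, {'E'}]
print(has_rip(seq))   # -> True
```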
Nonmaximal Clusters
§ Nonmaximal clusters are those contained in previous clusters of the sequence
§ We can remove a nonmaximal cluster from the sequence
§ But we must reorder the sequence to maintain the running intersection property (RIP)
§ Keeping the RIP is important: we will later use it to connect the clusters into a jointree
𝑪_1 = 𝐷𝐸𝐹, 𝑪_2 = 𝐴𝐵𝐶, 𝑪_3 = 𝐵𝐶𝐷, 𝑪_4 = 𝐶𝐷𝐸, 𝑪_5 = 𝐷𝐸, 𝑪_6 = 𝐸
§ Clusters 𝑪_5 = 𝐷𝐸 and 𝑪_6 = 𝐸 are nonmaximal (both are contained in 𝑪_1 = 𝐷𝐸𝐹)
§ Move 𝑪_1 to 𝑪_5’s position, then remove 𝑪_5 and 𝑪_6
§ After renumbering: 𝑪_1 = 𝐴𝐵𝐶, 𝑪_2 = 𝐵𝐶𝐷, 𝑪_3 = 𝐶𝐷𝐸, 𝑪_4 = 𝐷𝐸𝐹
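A sketch of the move-and-remove procedure described above: when 𝑪_j is contained in some earlier 𝑪_i, move 𝑪_i into 𝑪_j's position and drop 𝑪_j, which preserves the RIP. The exact final ordering depends on which containing cluster is chosen, so the test below only checks the set of surviving clusters:

```python
def remove_nonmaximal(clusters):
    """Remove nonmaximal clusters (those contained in an earlier cluster),
    moving the containing cluster into the removed cluster's position so
    that the running intersection property is preserved."""
    seq = [set(c) for c in clusters]
    changed = True
    while changed:
        changed = False
        for j in range(len(seq)):
            for i in range(j):
                if seq[j] <= seq[i]:       # C_j is nonmaximal
                    ci = seq.pop(i)        # move C_i to C_j's position ...
                    seq[j - 1] = ci        # ... overwriting (removing) C_j
                    changed = True
                    break
            if changed:
                break
    return seq

# Cluster sequence for pi = F, A, B, C, D, E on the lecture's moral graph
seq = [{'D', 'E', 'F'}, {'A', 'B', 'C'}, {'B', 'C', 'D'},
       {'C', 'D', 'E'}, {'D', 'E'}, {'E'}]
maximal = remove_nonmaximal(seq)
# the four maximal clusters ABC, BCD, CDE, DEF remain, in an RIP-preserving order
```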
Nonmaximal Clusters: Another Example
𝜋 = 𝐴,𝐵,𝐶,𝐷,𝐸
[Figure: graph sequence for 𝜋 = 𝐴, 𝐵, 𝐶, 𝐷, 𝐸]
𝑪_1 = 𝐴𝐶𝐷
Move 𝑪_1 to 𝑪_3’s position