
What is a graph?
• V: Set of vertices (singular: vertex), also called nodes
  ◦ We’ll use letters or natural numbers in our examples.


  ◦ Conceptually, can be anything. But may require extra work. (more later)
• E: Set of edges
  ◦ Edges connect vertices: sets of two vertices

Can represent…
• Road networks: vertices = cities, edges = roads
• Social graphs: vertices = people, edges = friendships
• Course prerequisites: vertices = courses, edges = prerequisites

V = {a,b,c,d,e,f,g,h,i,j,k,l,m,n,o}
E = {{a,b},{a,c},{a,d},{a,f},{b,d},{c,f},{c,h},{c,j},
     {d,g},{e,g},{e,i},{e,l},{f,g},{f,j},{g,j},{g,k},
     {h,i},{h,j},{i,j},{m,n},{m,o},{n,o}}
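
As a concrete aside, the same graph is easy to write down directly in Python. This is only a hedged sketch (the variable names V and E are ours, not part of any course library); each undirected edge is stored as a frozenset so that order doesn’t matter.

# Python sketch of the example graph above.
V = set("abcdefghijklmno")
E = {frozenset(e) for e in [
    "ab", "ac", "ad", "af", "bd", "cf", "ch", "cj",
    "dg", "eg", "ei", "el", "fg", "fj", "gj", "gk",
    "hi", "hj", "ij", "mn", "mo", "no",
]}

# Edges are two-element sets, so {a, b} and {b, a} denote the same edge:
assert frozenset("ab") in E and frozenset("ba") in E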

A directed graph
Edges are now pairs of vertices, not sets. Order matters!
Can have self edges
Can have cycles
V = {a,b,c,d,e,f}
E = {(a,b),(b,c),(c,d),(c,f),(d,d),(d,e),(e,a),(e,f),
     (f,a),(f,b),(f,d),(f,e)}
Directed graphs are also called digraphs

A weighted, directed graph
w is a mapping from edges to numbers
E.g., distances between cities
Can ask: shortest path from a to f
G = (V, E, w)
V = {a,b,c,d,e,f}
E = {(a,b),(b,c),(c,d),(c,f),(d,d),(d,e),(e,f),(f,e)}
w = {(a,b) ↦ 1, (b,c) ↦ 2, (c,d) ↦ 1, (c,f) ↦ 12, …}
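
In code, the weight function w naturally becomes a dictionary from ordered pairs to numbers. A minimal Python sketch of the example above (only the weights shown on the slide are filled in; the rest stay elided, as in the original):

# Weighted digraph sketch: E is a set of ordered pairs, w maps edges to weights.
V = {"a", "b", "c", "d", "e", "f"}
E = {("a", "b"), ("b", "c"), ("c", "d"), ("c", "f"),
     ("d", "d"), ("d", "e"), ("e", "f"), ("f", "e")}
w = {("a", "b"): 1, ("b", "c"): 2, ("c", "d"): 1, ("c", "f"): 12}  # remaining weights elided

print(w[("a", "b")])  # 1 -- the weight of the directed edge from a to b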

A directed acyclic graph (DAG)
A particular kind of directed graph. Can be weighted or not.
Some algorithms only work on DAGs, e.g., dependency resolution.
Could represent course prereqs or software package dependencies.
Cycle = can’t satisfy dependency!

A tree is (just) a connected graph with no cycles:
But hold on, that doesn’t look like what I think of as a tree…

Graph theory tree
Colloquial CS tree

What is a tree, really?
Usually, tree in CS ≡ rooted, ordered tree in graph theory:
• Rooted: one node (the root) is special, and you can only
reach the other nodes by going through it.
• Ordered: the “children” of a node are in a particular order;
we can distinguish a left child vs a right child, a first vs second vs third vs last child, etc.

What are trees good for?
Main one: representing hierarchical data:
• Department org charts
• Geographical structure (country, state, county, city, etc.)
• Family trees
Can be used for the representation of other data structures:
• Path trees (see graph search lecture)
• Binary heaps (see SSSP lecture)
• Binary search trees (see BST lecture)
• etc.

How to Recognise Different Types of Trees From Quite a Long Way Away
Obscurity rating: 7/10
One distinction we can make:
• k children per node maximum (fewer ok): k-ary tree
• Unlimited children per node: rose tree
  ◦ Named after the common rhododendron (or rose tree), which has a similar shape. The more you know!
(figures: a binary (2-ary) tree and a ternary (3-ary) tree)

A little bit of graph terminology

Some graph definitions
If {v,u} ∈ E then v and u are adjacent
If {v0,v1},{v1,v2},…,{vk−1,vk} ∈ E then there is a path from v0 to vk , and we say v0 and vk are connected

Connected components
A subgraph is a graph that has a subset of the nodes of some other graph, and a subset of the edges between them.
A maximal subgraph of nodes all connected to each other is a connected component; here we have two.

The degree of a vertex is the number of adjacent vertices:
    degree(v, G) = |{u ∈ V : {u, v} ∈ E}|   where G = (V, E)
The degree of a graph is the maximum degree of any vertex:
    degree(G) = max{degree(v, G) : v ∈ V}   where G = (V, E)
Sometimes we will refer to the degree as d, such as when we say that a particular operation is O(d).
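
These definitions translate directly into set comprehensions. A hedged Python sketch, assuming the graph is given as a vertex set V and an edge set E of two-element frozensets (as in the earlier sketch):

# degree(v, G): how many vertices are adjacent to v.
def degree(v, V, E):
    return len({u for u in V if frozenset({u, v}) in E})

# degree(G): the maximum degree over all vertices.
def graph_degree(V, E):
    return max(degree(v, V, E) for v in V)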

Some digraph definitions
If (v, u) ∈ E, v is the direct predecessor of u and u is the direct successor of v
If (v0,v1),(v1,v2),…,(vk−1,vk) ∈ E then there is a path from v0 to vk ; we say that vk is reachable from v0
If vk and v0 are mutually reachable (each is reachable from the other), they are strongly connected

Your turn!
Is E reachable from A? no
Are A and B strongly connected? yes

Degree, revisited
Digraphs distinguish between in-degree and out-degree:
    in-degree(v, G)  = |{u ∈ V : (u, v) ∈ E}|   where G = (V, E)
    out-degree(v, G) = |{u ∈ V : (v, u) ∈ E}|   where G = (V, E)
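
The same idea works for digraphs, where E is now a set of ordered pairs (again a hedged sketch, not library code):

def in_degree(v, V, E):
    return len({u for u in V if (u, v) in E})   # edges that come into v

def out_degree(v, V, E):
    return len({u for u in V if (v, u) in E})   # edges that leave v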

Strongly connected components
Yellow and pink are SCCs.
Can get from yellow to pink. But not the other way around. So the whole graph is not SC.
In a digraph, a subgraph of vertices all strongly connected to each other is a strongly connected component; here we have a connected graph with two SCCs.

Dense versus sparse
Qualitative observation on the number of edges relative to the number of vertices.
Not a hard line between the two.
Still a useful notion when comparing graphs, and when considering how different graph representations perform on different graphs.
Q: Maximum number of edges for a graph with n nodes?
A: O(n²). The exact number depends on the kind of graph.
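To pin that down: a simple undirected graph on n vertices has at most n(n − 1)/2 edges (one per unordered pair of distinct vertices), a digraph without self edges has at most n(n − 1), and a digraph that allows self edges has at most n²; all of these are O(n²).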

Some tree definitions
The parent of a node is the node that is one step closer to the root. The root has no parent.
The children of a node are the nodes for whom it is the parent.
Here, a is the parent of b. c and d are children of b.

Some tree definitions
A forest is a group of trees.

Programming with graphs

A graph ADT
Abstract values: look like G = (V, E), as earlier.
We will use the natural numbers 0 … n − 1 as our vertices.

Operations:
interface GRAPH:
def new_vertex(self) -> nat?
def add_edge(self, u: nat?, v: nat?) -> NoneC
def has_edge?(self, u: nat?, v: nat?) -> bool?
def get_vertices(self) -> SetC[nat?]
def get_neighbors(self, v: nat?) -> SetC[nat?]
Invariants:
• V = {0, 1, …, |V| − 1}
• ⋃E ⊆ V (i.e., every endpoint of every edge is a vertex)

Graph ADT laws
{ g = (V, E) }                  g.new_vertex() ⇒ n          { g = (V ∪ {n}, E) ∧ n = max(V) + 1 }
{ g = (V, E) ∧ n, m ∈ V }       g.add_edge(n, m) ⇒ None     { g = (V, E ∪ {{n, m}}) }
{ g = (V, E) ∧ {n, m} ∈ E }     g.has_edge?(n, m) ⇒ True    { }
{ g = (V, E) ∧ {n, m} ∉ E }     g.has_edge?(n, m) ⇒ False   { }
{ g = (V, E) }                  g.get_vertices() ⇒ V        { }
{ g = (V, E) }                  g.get_neighbors(n) ⇒ { m ∈ V : {m, n} ∈ E }   { }
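
One straightforward way to satisfy this interface and these laws is an adjacency-set representation. The sketch below is plain Python rather than DSSL2 (so has_edge? is spelled has_edge), and the class name is illustrative, not the homework’s:

class Graph:
    """Undirected graph over vertices 0 .. |V|-1 (illustrative sketch)."""

    def __init__(self):
        self._adj = []                    # _adj[v] is the set of v's neighbors

    def new_vertex(self):
        self._adj.append(set())
        return len(self._adj) - 1         # vertices are handed out as 0, 1, 2, ...

    def add_edge(self, u, v):
        self._adj[u].add(v)               # record the edge in both directions,
        self._adj[v].add(u)               # since {u, v} is unordered

    def has_edge(self, u, v):
        return v in self._adj[u]

    def get_vertices(self):
        return set(range(len(self._adj)))

    def get_neighbors(self, v):
        return set(self._adj[v])          # return a copy to protect the representation

g = Graph()
a, b = g.new_vertex(), g.new_vertex()
g.add_edge(a, b)
assert g.has_edge(b, a)                   # undirected: argument order doesn't matter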

A digraph ADT
Abstract values: look like G = (V, E), where E is now a set of pairs of vertices.

Operations:
interface DIGRAPH:
def new_vertex(self) -> nat?
def add_edge(self, src: nat?, dst: nat?) -> NoneC
def has_edge?(self, src: nat?, dst: nat?) -> bool?
def get_vertices(self) -> SetC[nat?]
def get_succs(self, v: nat?) -> SetC[nat?]
def get_preds(self, v: nat?) -> SetC[nat?]
Invariants:
• V = {0, 1, …, |V| − 1}
• ∀(v, u) ∈ E. v ∈ V ∧ u ∈ V

Digraph ADT laws
{ g = (V, E) }                  g.new_vertex() ⇒ n          { g = (V ∪ {n}, E) ∧ n = max(V) + 1 }
{ g = (V, E) ∧ n, m ∈ V }       g.add_edge(n, m) ⇒ None     { g = (V, E ∪ {(n, m)}) }
{ g = (V, E) ∧ (n, m) ∈ E }     g.has_edge?(n, m) ⇒ True    { }
{ g = (V, E) ∧ (n, m) ∉ E }     g.has_edge?(n, m) ⇒ False   { }
{ g = (V, E) }                  g.get_vertices() ⇒ V        { }
{ g = (V, E) }                  g.get_succs(n) ⇒ { m ∈ V : (n, m) ∈ E }   { }
{ g = (V, E) }                  g.get_preds(n) ⇒ { m ∈ V : (m, n) ∈ E }   { }
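
A directed version can keep two adjacency structures, one for successors and one for predecessors, so that get_preds does not have to scan every vertex. Again a hedged Python sketch, not the course’s library:

class Digraph:
    """Directed graph over vertices 0 .. |V|-1 (illustrative sketch)."""

    def __init__(self):
        self._succs = []                  # _succs[v]: the vertices v points to
        self._preds = []                  # _preds[v]: the vertices that point to v

    def new_vertex(self):
        self._succs.append(set())
        self._preds.append(set())
        return len(self._succs) - 1

    def add_edge(self, src, dst):
        self._succs[src].add(dst)         # every edge is recorded twice, once per direction
        self._preds[dst].add(src)

    def has_edge(self, src, dst):
        return dst in self._succs[src]

    def get_vertices(self):
        return set(range(len(self._succs)))

    def get_succs(self, v):
        return set(self._succs[v])

    def get_preds(self, v):
        return set(self._preds[v])

Keeping the predecessor sets doubles the bookkeeping per edge but makes get_preds as cheap as get_succs; dropping them saves space at the cost of scanning every vertex’s successors.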

A weighted digraph ADT
Abstract values: look like G = (V, E, w), as earlier.

Operations:
let weight? = OrC(num?, inf)
interface WDIGRAPH:
def new_vertex(self) -> nat?
def set_edge(self, src: nat?, w: weight?,
dst: nat?) -> NoneC
def get_edge(self, src: nat?, dst: nat?) -> weight?
def get_vertices(self) -> SetC[nat?]
def get_succs(self, v: nat?) -> SetC[nat?]
def get_preds(self, v: nat?) -> SetC[nat?]
We’ll use a weight of ∞ to represent absent edges: infinitely far. This avoids False/None as a special case, and we can use < or > directly.
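
One simple way to realize “absent edge = ∞” is a dictionary keyed by (src, dst) whose lookup defaults to infinity. A hedged Python sketch (class and field names are ours):

import math

class WDigraph:
    """Weighted digraph sketch: missing edges report a weight of infinity."""

    def __init__(self):
        self._n = 0                            # number of vertices created so far
        self._w = {}                           # (src, dst) -> weight, present edges only

    def new_vertex(self):
        self._n += 1
        return self._n - 1

    def set_edge(self, src, w, dst):
        if w == math.inf:
            self._w.pop((src, dst), None)      # setting the weight to infinity removes the edge
        else:
            self._w[(src, dst)] = w

    def get_edge(self, src, dst):
        return self._w.get((src, dst), math.inf)

g = WDigraph()
a, b = g.new_vertex(), g.new_vertex()
g.set_edge(a, 3, b)
assert g.get_edge(a, b) < math.inf             # present edges compare directly with <
assert g.get_edge(b, a) == math.inf            # absent edge: infinitely far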

Weighted digraph ADT laws, Part 1
{ g = (V, E, w) }                       g.new_vertex() ⇒ n           { g = (V ∪ {n}, E, w) ∧ n = max(V) + 1 }
{ g = (V, E, w) ∧ n, m ∈ V ∧ a < ∞ }    g.set_edge(n, a, m) ⇒ None   { g = (V, E ∪ {(n, m)}, w ∪ {(n, m) ↦ a}) }
{ g = (V, E, w) ∧ n, m ∈ V }            g.set_edge(n, ∞, m) ⇒ None   { g = (V, E \ {(n, m)}, w \ {(n, m)}) }

Weighted digraph ADT laws, Part 2

{ g = (V, E, w) ∧ (n, m) ∈ E }   g.get_edge(n, m) ⇒ w(n, m)   { }
{ g = (V, E, w) ∧ (n, m) ∉ E }   g.get_edge(n, m) ⇒ ∞         { }
{ g = (V, E, w) }                g.get_vertices() ⇒ V         { }
{ g = (V, E, w) }                g.get_succs(n) ⇒ { m ∈ V : (n, m) ∈ E }   { }
{ g = (V, E, w) }                g.get_preds(n) ⇒ { m ∈ V : (m, n) ∈ E }   { }

Even for a specific graph ADT, different languages / libraries provide different operations. Here are some you may see.
• Fixed vs extensible set of vertices
  ◦ Here: can add new vertices
  ◦ Homework 4: number of vertices is fixed at graph creation
• Access to set of edges
  ◦ Here: have to get each vertex’s edges
  ◦ Homework 4: get_all_edges to get them all at once
• Direct access to predecessors
  ◦ Vs indirect: iterating over all possible preds (i.e., all nodes)
A lot of variations in practice.
• But all follow the same idea! Vertices, edges, etc.
• Missing ops can (usually) be written in terms of the others.
  ◦ Not always efficiently, though.

I am not a number, I am a free man!

We’ve been using natural numbers as our vertices, which is great for simplicity. But graphs that we care about may have other data as vertices!
• Cities
• People
• etc.
Turns out we can get the best of both worlds. Any ideas how?

Dictionaries to the rescue!

Keep a dictionary mapping node ids to, e.g., people! Keys are naturals 0–n? Direct addressing, here I come!
{0: Alice, 1: Bob, 2: Charlie, 3: Diane, 4: Eve, 5: Fred, …}
And a reverse mapping to, e.g., find the names of Alice’s friends.
If one ADT doesn’t 100% solve your problem, try two!

Graph Representations

Two graph representations

There are two common data structures to represent graphs when programming:
• adjacency list
• adjacency matrix
Others exist, and can be used for special types of graphs, but the above two are the most common. (A code sketch of both appears after the space-complexity aside below.)

Adjacency list

Array of lists of neighbors (or successors) for each vertex:
(figure: e.g., 0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2])
Can think of it as a dictionary from node # to list of neighbors.
Keys are 0 … n − 1 → direct addressing!
If the graph is undirected, it will be symmetric.

Adjacency matrix

Store a |V|-by-|V| matrix of booleans indicating where edges are present.
The matrix is usually represented as an array of arrays.

A directed adjacency matrix example

(figure: 6×6 boolean matrix with rows and columns 0–5)
Directed → not necessarily symmetric!

With weights

(figure: the same matrix with numeric weights in place of booleans)

Asymptotic Space Complexity

We’ve talked about asymptotic complexity in terms of time
• I.e., how time grows in relation to (some aspect of) the input
The same principles apply to the space required by representations.
Example: we have a set of n numbers, of which we (only) want to know the maximum element:
• Can store the whole set in a vector: space O(n)
• Can store only the maximum element itself: space O(1)
We can also speak of the space complexity of an operation, meaning the auxiliary space it needs (i.e., aside from the data structure itself).
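
As promised above, a minimal Python sketch of both representations for an unweighted digraph with a fixed number of vertices (the helper names are illustrative, not Homework 4’s API):

# Adjacency list: one collection of successors per vertex (a set here; a list works too).
def make_adj_list(n):
    return [set() for _ in range(n)]

# Adjacency matrix: an n-by-n grid of booleans, stored as a list of lists.
def make_adj_matrix(n):
    return [[False] * n for _ in range(n)]

n = 4
adj_list = make_adj_list(n)
adj_matrix = make_adj_matrix(n)

for src, dst in [(0, 1), (0, 2), (2, 3)]:
    adj_list[src].add(dst)          # list: add dst to src's collection
    adj_matrix[src][dst] = True     # matrix: flip one boolean

assert 1 in adj_list[0] and adj_matrix[0][1]            # edge 0 -> 1 is present
assert 3 not in adj_list[1] and not adj_matrix[1][3]    # edge 1 -> 3 is absent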

Space Comparison: Adjacency List

Adjacency list: a list for each vertex
(figure: the adjacency list 0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2])
• |V|-element vector (“spine”)
• Total length of all the lists (“ribs”) = # of edges (or 2× for undirected)
  ◦ 4 edges → 8 list elements total
• O(|V| + |E|)

Space Comparison: Adjacency Matrix

Adjacency matrix: a |V|-by-|V| matrix
(figure: 6×6 boolean matrix with rows and columns 0–5)
• Matrix size is determined only by |V|
• Regardless of how many edges are actually present
• O(|V|²)

Q: For a sparse graph, which is more space-efficient?
A: The adjacency list. Sparse → |E| is much smaller than |V|².

Time Comparison

                 list                                matrix
add/set_edge     O(d)          (traverse one list)   O(1)     (2 vector accesses)
get/has_edge     O(d)          (traverse one list)   O(1)     (2 vector accesses)
get_succs        O(d)          (copy one list)       O(|V|)   (traverse one row)
get_preds        O(|V| + |E|)  (traverse all lists)  O(|V|)   (traverse one column)

Oh, but what if my graphs are trees?

Our two graph representations work perfectly fine with trees!
• After all, trees are a kind of graph!
• But there are better representations.
• More restrictions → can cut corners!
Let’s look at two tree-specific representations. More exist!
• See path trees in the graph search lecture
• See binary heaps in the SSSP lecture

Structs for k-ary trees

(figure: ternary tree nodes as structs with fields region, left, mid, right)
With structs, we can store data directly! No need for a separate dictionary. (Unlike general graphs.)

Rose trees using arrays

(figure: rose tree nodes as structs with a name field and a subdirs array)
Variable number of children → need to use arrays!
(A sketch of both tree representations follows below.)

Next time: graph search
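
As noted above, here is a hedged Python sketch of the two tree-specific representations; the field names (region, left, mid, right, name, subdirs) just echo the figures and are not a prescribed API:

from dataclasses import dataclass, field
from typing import List, Optional

# Ternary (3-ary) tree node as a struct: the data plus one field per possible child.
@dataclass
class TernaryNode:
    region: str
    left: Optional["TernaryNode"] = None
    mid: Optional["TernaryNode"] = None
    right: Optional["TernaryNode"] = None

# Rose tree node: the data plus an array (list) of arbitrarily many children.
@dataclass
class RoseNode:
    name: str
    subdirs: List["RoseNode"] = field(default_factory=list)

root = RoseNode("home", [RoseNode("alice"), RoseNode("bob", [RoseNode("photos")])])
assert root.subdirs[1].subdirs[0].name == "photos"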