CSC373H Lecture 10
Dan Zingaro
November 21, 2016
Knapsack Approximation
Recall from last time that we want a fast approximation algorithm for the 0-1 knapsack problem
Assume that each item i has wi ≤ W
Our simple technique of taking highest to lowest vi/wi could be infinitely bad as an approximation algorithm
But a small addition to the algorithm can fix that…
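To see why density-only greedy can be arbitrarily bad, consider a hypothetical two-item instance (not from the lecture): one tiny item of slightly higher density, and one big item that fills the knapsack exactly. A small sketch:

```python
# Density-only greedy can be arbitrarily bad (hypothetical instance).
# Item A: value 2, weight 1 (density 2); Item B: value W, weight W (density 1).
# Greedy takes A first; B no longer fits, so greedy gets value 2 while the
# optimal solution takes B alone for value W. The ratio W/2 is unbounded.

def density_greedy(items, W):
    """Take items in decreasing value/weight order whenever they fit."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_v, cap = 0, W
    for v, w in items:
        if w <= cap:
            total_v += v
            cap -= w
    return total_v

W = 1000
items = [(2, 1), (W, W)]
print(density_greedy(items, W))  # 2, but the optimal value is 1000
```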
Greedy Algorithm 2 for Knapsack
Greedy algorithm, take 2.
sort the items by decreasing v_i/w_i
try two ways to fill the knapsack:
solution 1)
for each item:
if the item fits in the knapsack, add it
otherwise, stop the algorithm
solution 2)
add only the most-valuable item that fits in the knapsack
return solution 1) or 2), whichever is more valuable
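The two-part greedy above can be transcribed directly into Python (a sketch following the pseudocode, not code from the lecture); it assumes items are (value, weight) pairs, each with weight ≤ W:

```python
def greedy_knapsack_2approx(items, W):
    """items: list of (value, weight) pairs, each with weight <= W.
    Returns a total value within a factor of 2 of the optimal
    0-1 knapsack value."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    # Solution 1: take items in density order; stop at the first misfit.
    sol1, cap = 0, W
    for v, w in items:
        if w > cap:
            break
        sol1 += v
        cap -= w

    # Solution 2: the single most valuable item that fits on its own.
    sol2 = max((v for v, w in items if w <= W), default=0)

    return max(sol1, sol2)
```

On the bad instance for density-only greedy (a tiny dense item plus one big item), solution 2 rescues the answer: the big item is taken by itself.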
Greedy Algorithm: Approximation Ratio Proof
Theorem: the algorithm is a 2-approximation algorithm for knapsack.
Think about what would happen if we could use a fraction of the first item k + 1 that doesn’t fit in the knapsack
That is, we take the first k items that fit, and then some proportion of item k + 1
Call this the greedy fractional solution
e.g. Say we have 7 weight left in the knapsack, and item k + 1 has weight 10 and value 20
Then we take 7/10 of item k + 1, and get 7/10 ∗ 20 = 14 value from it
Claim (think about the proof!): the greedy fractional solution is at least as good as any optimal solution
Greedy Algorithm: Approximation Ratio Proof…
Let G be the value of the greedy solution
By solution 1), G is at least as good as the total value of the first k items
By solution 2), G is at least as good as the value of item k + 1
2G ≥ total value of first k + 1 items ≥ total value of fractional greedy ≥ value of optimal solution, so G is at least half of optimal
Approximation Schemes
Can we do better than claims like “the algorithm is no worse than twice as bad as the optimal”?
An approximation scheme for a problem lets us trade-off time and accuracy
An approximation scheme takes an instance of the problem to solve, plus a parameter ε > 0
ε specifies how close we’d like to be to optimal
e.g. if ε = 0.5, then we are asking for a 1.5-approximation algorithm
As ε decreases, our approximation gets better, but our runtime gets slower
Approximation Scheme for Knapsack
Recall that dynamic-programming works well for knapsack when W is small
We will use a different dynamic-programming formulation here that works well when the item values are small
The strategy is to transform the item values so that they are small, then solve that easier instance of the problem
Our answer will not be correct — we’re not solving the original problem anymore — but we will be close
Knapsack: New Dynamic Programming Algorithm
Let vm be the maximum item value
Define 2d array N[0..n, 0..n ∗ vm]
N[i,j] stores the minimum weight of a knapsack required to get a value of at least j using the first i items
N[i, j] =
0, if j ≤ 0
∞, if i = 0 and j > 0
min{N[i−1, j], N[i−1, j−vi] + wi}, if i > 0 and j > 0
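A direct Python transcription of this recurrence (a sketch, not code from the lecture): fill the table bottom-up, then report the largest value j whose minimum required weight fits in W.

```python
import math

def knapsack_by_value(values, weights, W):
    """N[i][j] = minimum weight needed to reach value >= j using the
    first i items. The table has n * vm + 1 value columns, where vm is
    the maximum item value, so the runtime depends on vm, not on W."""
    n = len(values)
    vmax = max(values, default=0)
    V = n * vmax  # largest total value any knapsack could hold
    N = [[0] * (V + 1) for _ in range(n + 1)]
    for j in range(1, V + 1):
        N[0][j] = math.inf  # no items but a positive target: impossible

    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for j in range(1, V + 1):
            skip = N[i - 1][j]
            # N[i-1][j'] = 0 for j' <= 0, so clamp the index at 0
            take = N[i - 1][max(j - v, 0)] + w
            N[i][j] = min(skip, take)

    # best achievable value: largest j whose minimum weight fits in W
    return max(j for j in range(V + 1) if N[n][j] <= W)
```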
How to Use the Algorithm
For some positive integer m, divide each vi by m
Then, run the new dynamic programming algorithm on these transformed values
The larger we take m, the more “off” are our vi values, so the more error we will have in our solution
On the other hand, a larger m value corresponds to smaller transformed vi values, so our runtime is better
We won’t do the analysis here, but the final piece is to decide how to set m to get the required (1 + ε)-approximation!
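The whole scheme can be sketched in Python. The scale m = ε ∗ vm/n used below is one common textbook choice, which these notes do not derive; note that we report the original values of the items the DP selects, not the scaled values.

```python
import math

def knapsack_fptas(values, weights, W, eps):
    """Scale values down by m, solve the value-indexed DP exactly on
    the scaled instance, and return the ORIGINAL value of the chosen
    items. m = eps * vmax / n is one common choice of scale (the
    analysis setting it is not done in these notes)."""
    n = len(values)
    m = eps * max(values) / n
    sv = [math.floor(v / m) for v in values]  # smaller, transformed values

    V = sum(sv)  # largest achievable scaled value
    # N[j] = (min weight to reach scaled value >= j, items used)
    N = [(0, frozenset())] + [(math.inf, frozenset())] * V
    for i in range(n):
        for j in range(V, 0, -1):  # descending j: each item used once
            reach = max(j - sv[i], 0)
            if N[reach][0] + weights[i] < N[j][0]:
                N[j] = (N[reach][0] + weights[i], N[reach][1] | {i})

    best = max(j for j in range(V + 1) if N[j][0] <= W)
    return sum(values[i] for i in N[best][1])
```

A smaller eps means a smaller m, hence larger scaled values and a bigger DP table: the accuracy/runtime trade-off described above.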