
CS502 Fundamentals of Algorithms GDB Solution Spring 2013

As both Dynamic Programming and the Greedy Strategy are used to solve optimization problems, and both rely on the optimal-substructure property, it is possible to attempt a greedy solution when Dynamic Programming is in fact the right approach, or vice versa. With this background, the Knapsack problem (either 0/1 or Fractional) is a classic optimization problem to investigate.

You are required to support or contradict the statement given below with proper justification.

“Dynamic Programming yields optimal solution for 0/1 Knapsack problem while Greedy
approach does not. In addition, Greedy approach yields optimal solution for
Fractional Knapsack problem”.

Solution: A greedy algorithm focuses on making the locally optimal choice at each step, while dynamic programming searches for the globally optimal solution. Greedy picks the “current” best option; dynamic programming examines all relevant subproblems and therefore reaches the true optimum.

Both dynamic programming and greedy algorithms can be used on problems that exhibit “optimal substructure” (which CLRS defines by saying that an optimal solution to the problem contains within it optimal solutions to subproblems). However, in order for the greedy solution to be optimal, the problem must also exhibit what they call the “greedy-choice property”; i.e., a globally optimal solution can be arrived at by making locally optimal (greedy) choices. In contrast, dynamic programming is good for problems that exhibit not only optimal substructure but also overlapping subproblems. (This means that a particular subproblem can be reached in multiple ways. The dynamic programming algorithm calculates the value of each subproblem once and then can reuse these every time the algorithm revisits them.)
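The contrast can be made concrete with a small sketch. The instance below (values, weights, capacity) is an assumed example chosen so that greedy-by-ratio fails on the 0/1 knapsack while the DP table finds the optimum:

```python
def knapsack_01_dp(values, weights, capacity):
    """Bottom-up DP: dp[c] = best value achievable with capacity c."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

def knapsack_01_greedy(values, weights, capacity):
    """Greedy by value/weight ratio; NOT optimal for the 0/1 variant."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0
    for v, w in items:
        if w <= capacity:      # whole item or nothing
            capacity -= w
            total += v
    return total

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
print(knapsack_01_dp(values, weights, capacity))      # 220 (items 2 and 3)
print(knapsack_01_greedy(values, weights, capacity))  # 160 (items 1 and 2)
```

Greedy takes the two highest-ratio items (60 and 100) and then cannot fit the third, while DP recognizes that skipping the highest-ratio item yields more total value.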


Greedy Algorithms

Many computational problems can be cast in the following format: You are given a system consisting of a number of components, each of which can be set in various ways. Your goal is to set them in such a way that a given objective is optimized. A greedy approach to such a problem iterates over the components in a well-chosen order, and sets each one in a way that is locally optimal (according to a well-chosen metric), rather than trying to figure out what will work long-term. Most objectives are too complex for such a myopic approach to yield a global optimum. But if it does, the resulting algorithm is often very efficient.

Designing a greedy algorithm boils down to determining the order in which to consider the components, and figuring out the right metric. In doing so, it is important to verify that this combination leads to the correct solution – ideally by reasoning about the algorithm, or else by developing appropriate test cases.

Examples of problems for which a greedy approach works include: shortest paths in graphs with nonnegative weights (Dijkstra’s algorithm, which makes its decisions based on minimum distance) and minimum spanning trees (Prim’s algorithm, which makes its decisions based on minimum edge weight).
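The Fractional Knapsack problem from the statement above is another case where greedy works: sorting by value/weight ratio and taking as much as fits is optimal. A minimal sketch (the item values, weights, and capacity are an assumed example instance):

```python
def knapsack_fractional(values, weights, capacity):
    """Greedy by value/weight ratio; optimal for the fractional variant,
    because we may take a fraction of the last item that fits."""
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for v, w in items:
        if capacity <= 0:
            break
        take = min(w, capacity)        # whole item, or the fitting fraction
        total += v * (take / w)
        capacity -= take
    return total

print(knapsack_fractional([60, 100, 120], [10, 20, 30], 50))  # 240.0
```

Intuitively, the greedy-choice property holds here because splitting items means the highest-ratio item can never be "wasted": filling capacity with the best ratio first is always at least as good as any alternative.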

Dynamic Programming

Dynamic programming is another classical programming paradigm, closely related to divide-and-conquer. Both are based on recursion: an instance of a problem is reduced to one or more simpler instances of the same problem. A divide-and-conquer approach implements this recursion in a straightforward way, whereas in dynamic programming special care is given to make sure the number of different subproblems that arise in the recursion remains small, and that each of them is solved at most once. Keeping the number of subproblems small often involves discerning some additional structure in the instances that the recursion generates. If the straightforward divide-and-conquer strategy takes too much time, you should try to find additional structure and give dynamic programming a shot.

To ensure that no subproblem is solved more than once, you can use memoization: keep track of a list of solved subproblems, and before solving one, first check whether it appears in the list. Alternatively, you can often arrange the subproblems in order from simpler to more complicated, and solve all of them in that order up to the one you are interested in. Many dynamic programming approaches use an array to store the solutions to possible subproblems; the array can be 1D, 2D, or more complex. The latter (bottom-up) approach is often slightly faster than memoization, but memoization is useful when the space of possible subproblems is large and it is difficult to predict or organize the subproblems needed. Either approach typically requires a lot of memory, more so than the straightforward divide-and-conquer approach, but avoids the potential duplication of work in divide-and-conquer.
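As a sketch of the memoized (top-down) style, the 0/1 knapsack can be written as a recursion over (item index, remaining capacity) pairs, with each pair solved at most once (the instance is an assumed example):

```python
from functools import lru_cache

def knapsack_01_memo(values, weights, capacity):
    """Top-down DP: memoize on (item index, remaining capacity)."""
    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == len(values):
            return 0
        skip = best(i + 1, cap)            # leave item i out
        if weights[i] > cap:
            return skip
        # or take item i, reducing the remaining capacity
        return max(skip, values[i] + best(i + 1, cap - weights[i]))
    return best(0, capacity)

print(knapsack_01_memo((60, 100, 120), (10, 20, 30), 50))  # 220
```

Here the recursion mirrors the problem statement directly, and `lru_cache` plays the role of the "list of solved subproblems" described above.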

If you’re only interested in the optimal value of the objective function (rather than an actual solution that realizes that value), the code for dynamic programming is often simpler and the memory overhead can be reduced.

A classic, simple example of the difference between divide-and-conquer and dynamic programming is computing Fibonacci numbers. Just recursively computing F(n) using the formula F(n) = F(n-1) + F(n-2) takes exponential time, but if we keep an array of all values F(1), F(2), F(3), and so on, the time to compute F(n) drops to linear, since it takes only one addition to compute F(n) once we know the previous values. This also has the advantage that we then know all the earlier Fibonacci numbers, so multiple queries take only as much time as computing the largest one.
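The bottom-up version of this idea is a few lines:

```python
def fib_dp(n):
    """Bottom-up table of Fibonacci values: O(n) additions,
    versus exponential time for the naive recursion."""
    f = [0, 1]
    for i in range(2, n + 1):
        f.append(f[i - 1] + f[i - 2])
    return f[n]

print(fib_dp(10))  # 55
```

After one call, the table `f` holds every earlier Fibonacci number, so further queries up to n cost only a lookup.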
