*ECE-250 Course Slides -- Algorithm design techniques (cmoreno/ece250/2012-03-30...)*


Algorithm design techniques

Carlos Moreno cmoreno @ uwaterloo.ca

EIT-4103

https://ece.uwaterloo.ca/~cmoreno/ece250

Algorithm design techniques

Standard reminder to set phones to silent/vibrate mode, please!

Algorithm design techniques ● During today's lesson, we'll

● Go over some of the important algorithm design techniques, or algorithmic paradigms. In particular:
– Divide-and-conquer
– Greedy algorithms
– Backtracking algorithms
– Dynamic programming

Algorithm design techniques ● Divide-and-conquer:

● Break the problem into smaller sub-problems
● Solve each of the sub-problems
● Combine the solutions to obtain the solution to the original problem
● Key detail:
● We keep breaking the sub-problems into smaller and smaller pieces, until the problem is transformed into something entirely different.
– At this point, we have “conquered” the original problem.

Algorithm design techniques ● Divide-and-conquer:

● Example: In merge sort, we have the (complicated) task of sorting a bunch of values.
– However, sorting a set of one value is a completely different problem (a trivial one, in this case).
– Sorting a set of two values is not as trivial, but it's still a completely different problem: one that can be described as one conditional swap.
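The three steps above map directly onto merge sort. As a quick illustration (a minimal Python sketch, not the course's reference implementation):

```python
def merge_sort(values):
    # Base case: a set of one (or zero) values is the "entirely
    # different", trivial problem -- it is already sorted.
    if len(values) <= 1:
        return values
    # Divide: break the problem into two smaller sub-problems.
    mid = len(values) // 2
    left = merge_sort(values[:mid])
    right = merge_sort(values[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```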

Algorithm design techniques ● Divide-and-conquer:

● An interesting example that we have not covered earlier in the course: the Karatsuba multiplication algorithm:
– Useful for multiplying polynomials and integer numbers (based on a binary representation)
● Originally proposed as an efficient technique to implement binary multiplication in hardware (i.e., an efficient design of the multiplication circuitry in an ALU)

Algorithm design techniques ● Divide-and-conquer:

● The key detail in this case: multiplying two numbers of n bits is a complicated problem, but multiplying two single-bit numbers is just a logical AND gate.
● The algorithm is nowadays used in cryptographic applications (among others), where arithmetic with very large numbers (on the order of up to thousands of bits) is required:
– In this context, the difference is: multiplying a number of n bits where n > CPU register width vs. multiplying two register-size numbers.

Algorithm design techniques ● Divide-and-conquer:

● The idea is that instead of the “schoolbook” approach to multiplication (requiring O(n²)), we break the number (or the polynomial) into two halves, and use a trick to express the result in terms of three multiplications of half-size operands:
– Example shown on the board (feel free to take notes if you want, but I'll post an expanded version of these slides within the next few days)

Algorithm design techniques ● Karatsuba multiplication algorithm:

● Say that we have to multiply two n-bit numbers A and B (to simplify the example, we'll assume that n is a power of 2, so that we can always divide into two)
● The result is clearly a 2n-bit number
– It cannot be more than 2n bits, and can require up to 2n bits to represent.
– n bits can represent values up to 2^n − 1, and (2^n − 1) × (2^n − 1) = 2^(2n) − 2^(n+1) + 1 < 2^(2n), and thus the product takes up to 2n bits to represent.
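The 2n-bit claim is easy to sanity-check numerically; a small Python sketch (using Python's arbitrary-precision integers):

```python
# The largest n-bit value is 2^n - 1; its square is the largest
# possible product of two n-bit numbers.
for n in [2, 8, 32]:
    max_n_bit = 2**n - 1
    product = max_n_bit * max_n_bit       # = 2^(2n) - 2^(n+1) + 1
    assert product < 2**(2 * n)           # fits in 2n bits
    assert product.bit_length() == 2 * n  # and needs all 2n of them
```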

Algorithm design techniques ● Karatsuba multiplication algorithm:

● We split each of the two numbers into two halves, AH, AL, and BH, BL. Clearly, we have that:

A = AH · 2^(n/2) + AL
B = BH · 2^(n/2) + BL

Thus:

A·B = AH·BH · 2^n + (AH·BL + AL·BH) · 2^(n/2) + AL·BL
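The split (and the expanded product) can be checked with shifts and masks; a small Python sketch with made-up example operands:

```python
n = 8                              # operand width in bits (even)
A, B = 0b10110101, 0b01101110      # two arbitrary 8-bit numbers

half = n // 2
mask = (1 << half) - 1
AH, AL = A >> half, A & mask       # A = AH * 2^(n/2) + AL
BH, BL = B >> half, B & mask       # B = BH * 2^(n/2) + BL

assert A == (AH << half) + AL
assert B == (BH << half) + BL
# The expanded product from the slide:
assert A * B == (AH * BH << n) + ((AH * BL + AL * BH) << half) + AL * BL
```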

Algorithm design techniques ● Karatsuba multiplication algorithm:

● At first glance, the equation looks like we need four multiplications of half-size operands (and if we do the math, we quickly realize that that leads to quadratic run time).
● The clever detail proposed by Karatsuba was that the operations can be re-arranged to trade one multiplication for a few additions (and additions are inexpensive: they're linear time!)

Algorithm design techniques ● Karatsuba multiplication algorithm:

● From this equation (labeling the three products D2, D1, and D0):

A·B = D2 · 2^n + D1 · 2^(n/2) + D0,  where D2 = AH·BH, D1 = AH·BL + AL·BH, D0 = AL·BL

● We notice that:

(AH + AL)·(BH + BL) = AH·BH + AL·BL + AH·BL + AL·BH

● The left-hand side involves one multiplication of half-size operands, plus two additions, and the result is the term D1 with two additional terms; but these additional terms are D0 and D2, which can be reused, since they need to be computed anyway!

Algorithm design techniques ● Karatsuba multiplication algorithm:

● Summarizing, we compute the following three multiplications of half-size operands:

D0 = AL·BL
D2 = AH·BH
D1 = (AH + AL)·(BH + BL) − D0 − D2

● And the result is obtained as:

A·B = D2 · 2^n + D1 · 2^(n/2) + D0
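Putting the three half-size multiplications together, a complete recursive version might look like the sketch below. This is an illustrative Python version (not the hardware design the algorithm was proposed for); it recurses on the actual bit length rather than assuming n is a power of 2:

```python
def karatsuba(a, b):
    # Base case: tiny operands are multiplied directly (for single
    # bits this is just the AND of the two bits).
    if a < 2 or b < 2:
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    mask = (1 << half) - 1
    ah, al = a >> half, a & mask        # a = ah * 2^half + al
    bh, bl = b >> half, b & mask
    d0 = karatsuba(al, bl)              # low * low
    d2 = karatsuba(ah, bh)              # high * high
    # One multiplication gives the cross terms, reusing d0 and d2:
    d1 = karatsuba(ah + al, bh + bl) - d0 - d2
    return (d2 << (2 * half)) + (d1 << half) + d0
```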

Algorithm design techniques ● Karatsuba multiplication algorithm:

● We recall that multiplying by a power of 2 simply means left-shifting the bits; so, the expression D2 · 2^n + D1 · 2^(n/2) + D0 is simply obtained by adding the three values at the appropriate positions:

[Diagram: D2 placed at bit offset n, D1 at bit offset n/2, and D0 at offset 0, with the three shifted values added together.]

Algorithm design techniques ● Divide-and-conquer:

● The run time is given by the following recurrence relation:

T(n) = 3T(n/2) + Θ(n)

● With this, the run time comes down to O(3^(lg n)) = O(n^(lg 3)) ≈ O(n^1.585) (sub-quadratic time)
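One way to see where the 3^(lg n) comes from: ignore the linear-time additions and count only how many base-case (single-bit) multiplications the recurrence generates. A small sketch:

```python
import math

def count_mults(n):
    # T(n) = 3 T(n/2): three half-size multiplications per level,
    # with one real multiplication at the base case n = 1.
    if n == 1:
        return 1
    return 3 * count_mults(n // 2)

for n in [2, 16, 1024]:
    lg_n = int(math.log2(n))
    assert count_mults(n) == 3 ** lg_n                # 3^(lg n) leaves...
    # ...which equals n^(lg 3) ~ n^1.585, up to floating-point error:
    assert abs(count_mults(n) - n ** math.log2(3)) < 1e-6 * count_mults(n)
```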

Algorithm design techniques ● Next, let's take a look at Greedy algorithms...

Algorithm design techniques ● These are iterative algorithms in which, at each iteration, the criterion used is to maximize some “gain” or some objective function.

● The term “greedy” refers to the fact that the algorithms do this in a “short-sighted” way; they try to maximize immediate gain, disregarding the big picture (“get the most I can get now”).
– For this reason, they can fail to determine a global maximum or minimum (they could “fall in a trap” and converge to some local maximum)
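A classic way to see this failure mode (an illustration of my own, with made-up denominations): making change by always taking the largest coin that fits is locally optimal at every step, yet can lose to the global optimum.

```python
def greedy_change(amount, coins):
    # Greedy criterion: at each step, take the largest coin that
    # still fits -- maximize the immediate "gain".
    used = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            amount -= c
            used.append(c)
    return used if amount == 0 else None

# With {1, 5, 10, 25}, greedy happens to be optimal: 30 = 25 + 5.
assert greedy_change(30, [1, 5, 10, 25]) == [25, 5]
# With {1, 10, 25}, greedy gives 6 coins (25 + five 1s) and misses
# the 3-coin optimum 10 + 10 + 10: a local choice that traps it.
assert greedy_change(30, [1, 10, 25]) == [25, 1, 1, 1, 1, 1]
```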

– Example: find