Most people know just one way to multiply two large numbers by hand. Typically, they learned it in elementary school. They’re often surprised to find that there are a variety of ways to do multiplications, and each such algorithm has advantages and disadvantages. Moreover, grade-school multiplication can be far from the best method available in certain contexts.

Slight differences in the efficiency of multiplication algorithms can make a huge difference when calculators or computers do the work. Computers worldwide perform enormous numbers of multiplications each day. In most computers, each operation consumes mere microseconds, but multiplied by the number of computations performed, the differences in time taken can be significant. So, the general question of how quickly two *n*-bit numbers can be multiplied has not only great theoretical importance but also practical relevance.

Indeed, when it comes to multiplying two numbers, the best (or fastest) way to do it is often far from obvious.

One particularly intriguing and efficient multiplication algorithm was developed in the late 1950s by Anatolii Alexeevich Karatsuba, now at the Steklov Institute of Mathematics in Moscow.

Karatsuba’s “divide-and-conquer” multiplication algorithm has its roots in a method that Carl Friedrich Gauss (1777-1855) introduced involving the multiplication of complex numbers.

A complex number is an expression of the form *a* + *bi*, where *a* and *b* are real numbers, and *i* has the property that *i*^{2} = –1. The real number *a* is called the real part of the complex number, and the real number *b* is the imaginary part. When the imaginary part *b* is 0, the complex number is just the real number *a*.

Suppose that you want to multiply two complex numbers, *a* + *bi* and *c* + *di*. To do so, you use the following rule:

(*a* + *bi*)(*c* + *di*) = [*ac* – *bd*] + [*ad* + *bc*]*i*.

For example: (2 + 3*i*)(1 + 2*i*) = [2 – 6] + [4 + 3]*i* = (–4 + 7*i*).

Expressed in terms of a program, you would input *a*, *b*, *c*, and *d* and output *ac* – *bd* and *ad* + *bc*.
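As a sketch of such a program (in Python; the function name `complex_mult` is illustrative, not from any library), the four-multiplication rule looks like this:

```python
# Direct complex multiplication: four multiplications and two additions.
def complex_mult(a, b, c, d):
    """Return the (real, imaginary) parts of (a + bi)(c + di)."""
    return a * c - b * d, a * d + b * c

# (2 + 3i)(1 + 2i) = -4 + 7i
print(complex_mult(2, 3, 1, 2))  # (-4, 7)
```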

Computationally, multiplying two numbers is much more costly than adding them. Suppose then that multiplying two real numbers costs $1 and adding (or subtracting) them costs a penny. To obtain *ac* – *bd* and *ad* + *bc* requires four multiplications and two additions, for a total of $4.02.

Is there a cheaper way to obtain the output from the input? The Gauss optimization algorithm offers an alternative approach. Here’s how the computation can be done for $3.05, with three multiplications and five additions.

x1 = a + b

x2 = c + d

x3 = x1 * x2 = ac + ad + bc + bd

x4 = ac

x5 = bd

x6 = x4 – x5 = ac – bd

x7 = x3 – x4 – x5 = bc + ad

So, Gauss optimization saves one multiplication out of four.
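The same computation can be sketched in Python (the name `complex_mult_gauss` is ours), trading one multiplication for three extra additions and subtractions:

```python
# Gauss's trick: three multiplications (x3, x4, x5) instead of four.
def complex_mult_gauss(a, b, c, d):
    """Return the (real, imaginary) parts of (a + bi)(c + di)."""
    x1 = a + b
    x2 = c + d
    x3 = x1 * x2  # ac + ad + bc + bd
    x4 = a * c
    x5 = b * d
    return x4 - x5, x3 - x4 - x5  # (ac - bd, ad + bc)

print(complex_mult_gauss(2, 3, 1, 2))  # (-4, 7)
```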

Karatsuba’s divide-and-conquer multiplication algorithm takes advantage of this saving.

Consider a multiplication algorithm that parallels the way multiplication of complex numbers works. Roughly speaking, the idea is to divide a given multiplication problem into smaller subproblems, solve them recursively, then glue the subproblem answers together to obtain the answer to the larger problem.

Suppose that you want to compute 12345678 * 21394276.

Break each number into two 4-digit parts.

*a* = 1234, *b* = 5678, *c* = 2139, *d* = 4276

Find the products *bd*, *bc*, *ad*, and *ac*.

*bd* = 5678 * 4276

*bc* = 5678 * 2139

*ad* = 1234 * 4276

*ac* = 1234 * 2139

Break each of the resulting numbers into 2-digit parts, and repeat the calculations with these parts. For example, break 1234 * 2139 into 12, 34, 21, and 39. Find the appropriate products.

12 * 21, 12 * 39, 34 * 21, 34 * 39

Repeat the same steps with these 2-digit numbers, breaking each one into 1-digit parts and finding the appropriate products. For example, 12 * 21 gives the multiplications 1 * 2, 1 * 1, 2 * 2, and 2 * 1.

In this divide-and-conquer algorithm, given two *n*-digit decimal numbers *x* and *y*, where *x* = *a*10^{n/2} + *b* and *y* = *c*10^{n/2} + *d*, you have *xy* = *ac*10^{n} + (*ad* + *bc*)10^{n/2} + *bd*.

So, for *n* = 2, *ac* = 1 * 2 = 2, *ad* = 1 * 1 = 1, *bc* = 2 * 2 = 4, *bd* = 2 * 1 = 2, you obtain:

12 * 21 = 2 * 10^{2} + (1 + 4) * 10^{1} + 2 = 252

Similarly, for 12 * 39, you get:

1 * 3 = 3, 1 * 9 = 9, 2 * 3 = 6, 2 * 9 = 18

12 * 39 = 3 * 10^{2} + 9 * 10^{1} + 6 * 10^{1} + 18 * 1 = 468

You get 12 * 21 = 252, 12 * 39 = 468, 34 * 21 = 714, 34 * 39 = 1326.

This allows you to compute 1234 * 2139.

252 * 10^{4} + 468 * 10^{2} + 714 * 10^{2} + 1326 * 1 = 2639526.

Similarly, you obtain:

1234 * 4276 = 5276584

5678 * 2139 = 12145242

5678 * 4276 = 24279128

Hence, 12345678 * 21394276

= 2639526 * 10^{8} + 5276584 * 10^{4} + 12145242 * 10^{4} + 24279128 * 1

= 264126842539128.
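The whole divide-conquer-and-glue procedure above can be sketched as a short recursive program. This is an illustrative Python version (assuming the digit count *n* is a power of 2), not yet Karatsuba's algorithm — it still makes four recursive multiplications at each step:

```python
def multiply(x, y, n):
    """Divide-and-conquer product of two n-digit numbers, n a power of 2."""
    if n == 1:
        return x * y  # single-digit base case
    half = n // 2
    p = 10 ** half
    a, b = divmod(x, p)  # x = a*10^(n/2) + b
    c, d = divmod(y, p)  # y = c*10^(n/2) + d
    ac = multiply(a, c, half)
    ad = multiply(a, d, half)
    bc = multiply(b, c, half)
    bd = multiply(b, d, half)
    return ac * 10 ** n + (ad + bc) * p + bd

print(multiply(1234, 2139, 4))          # 2639526
print(multiply(12345678, 21394276, 8))  # 264126842539128
```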

Karatsuba’s insight was to apply Gauss optimization to this divide-conquer-and-glue approach, replacing some multiplications with extra additions. For large numbers, decimal or binary, Karatsuba’s algorithm is remarkably efficient. It’s the sort of thing that your computer might be doing behind the scenes to get an answer to you a split second faster.
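A minimal sketch of that insight in Python (the name `karatsuba` and the power-of-2 digit count are assumptions of this illustration): the two cross products *ad* and *bc* are never computed separately — their sum comes out of a single extra multiplication plus some subtractions.

```python
def karatsuba(x, y, n):
    """Karatsuba product of two n-digit numbers (n a power of 2):
    three recursive multiplications instead of four."""
    if n == 1:
        return x * y
    half = n // 2
    p = 10 ** half
    a, b = divmod(x, p)  # x = a*10^(n/2) + b
    c, d = divmod(y, p)  # y = c*10^(n/2) + d
    ac = karatsuba(a, c, half)
    bd = karatsuba(b, d, half)
    # Gauss's trick: (a + b)(c + d) - ac - bd = ad + bc
    cross = karatsuba(a + b, c + d, half) - ac - bd
    return ac * 10 ** n + cross * p + bd

print(karatsuba(12345678, 21394276, 8))  # 264126842539128
```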

It’s not the fastest known multiplication algorithm, but it’s a significant improvement on grade-school multiplication. Indeed, properly implemented, Karatsuba multiplication beats grade-school multiplication even for 16-digit numbers and is far better for 32-digit numbers.

And we probably haven’t heard the last word on multiplication. More innovations in computer arithmetic may yet lie ahead.
