New technique produces real randomness

‘Extractor’ removes predictability from computer-generated numbers

NO DICE For computers, generating random numbers is more complex than a simple roll of the dice. Computer scientists have devised an easier way for computers to harvest randomness from their environment.

James Bowe/Flickr (CC BY 2.0)

Ask a computer to pick a random number and you’ll probably get a response that isn’t completely unpredictable. Because they are deterministic automatons, computers struggle to generate numbers that are truly random. But a new advance on a method known as a randomness extractor makes it easier for machines to roll the dice, generating truly random numbers by harvesting randomness from the environment.

The method improves on previous randomness extractors because it requires only two sources of randomness, and those sources can be very weak. “It’s a big breakthrough on a fundamental problem,” says computer scientist Dana Moshkovitz of MIT. “It’s a huge improvement over anything that was done before.”

Computer scientists Eshan Chattopadhyay and David Zuckerman of the University of Texas at Austin will present the new randomness extractor June 20 in Cambridge, Mass., at the Symposium on the Theory of Computing.

For computers, random numbers are a precious resource, essential for encrypting sensitive information, modeling complex systems like the Earth’s climate and selecting unbiased samples from data. But computers typically fail at generating truly random numbers. Many computer applications instead rely on pseudorandom numbers, which appear to be random but aren’t. These are produced by a deterministic algorithm from a starting value called a seed, so the same seed reproduces the same sequence every time.
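
To see why a pseudorandom sequence is not truly unpredictable, consider a minimal sketch in Python of a linear congruential generator, one of the simplest pseudorandom schemes (the constants below are for illustration only, not a recommendation for real use):

    # Minimal linear congruential generator: every "random" value is
    # computed from the previous one, so the seed fixes the whole sequence.
    def lcg(seed, count, a=1664525, c=1013904223, m=2**32):
        values = []
        x = seed
        for _ in range(count):
            x = (a * x + c) % m   # purely deterministic update
            values.append(x)
        return values

    print(lcg(seed=42, count=3))  # same seed, same "random" numbers...
    print(lcg(seed=42, count=3))  # ...every single time

Anyone who learns the seed can reproduce every number that follows, which is exactly the kind of predictability that matters for security.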

Crucially, random numbers are used to produce encryption keys that garble private data, such as credit card numbers and bank account details, keeping them safe from prying eyes. But if the numbers used for encryption are predictable, a hacker may be able to crack the code.

Any deviation from true randomness can therefore create a security hole. “A common way for hackers to break into systems is to exploit the fact that people don’t use high-quality randomness,” Zuckerman says.

To sidestep computers’ predictable natures, computer scientists have devised ways of harvesting randomness from the environment, using input from the mouse or the keyboard, for example. The computer might sample the mouse’s coordinates at several points in time and convert these values into a string of numbers. But this still falls short of being truly random. If the mouse is on the left of the screen one moment, it’s less likely to be all the way on the right in the following instant. So successive numbers could be correlated or biased toward certain values, making them only weakly random.
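
A rough sketch of why such environmental input is only weakly random: the toy model below (hypothetical, and using Python’s own pseudorandom module merely to stand in for hand movements) samples a drifting mouse x-coordinate and turns each sample into a bit. Because successive positions stay close together, the bits come out correlated and biased rather than independent:

    import random

    # Toy model of a weak source: a mouse x-coordinate that drifts in small
    # steps. Bits derived from successive samples are correlated and biased.
    def weak_bits(n, width=1920):
        x = width // 2
        bits = []
        for _ in range(n):
            x = max(0, min(width - 1, x + random.randint(-20, 20)))  # small move
            bits.append(1 if x >= width // 2 else 0)  # right half -> 1, left -> 0
        return bits

    sample = weak_bits(1000)
    print(sum(sample) / len(sample))  # varies widely from run to run instead of sitting near 0.5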

Randomness extractors excavate the randomness from these weak sources, throwing away the predictable junk to create a truly random number. “Randomness is a resource — it’s just like gold that you mine,” says Moshkovitz. “You take the sources that you have and you just purify the gold out of them.”

The new randomness extractor combines two independent sources of weakly random numbers into one set that is nearly random, with only minor deviations. Then the researchers use a “resilient function,” a method of combining information, to turn the string of numbers into one truly random bit — a 1 or 0.
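
The Chattopadhyay-Zuckerman construction itself is far more elaborate, but a much older and simpler two-source extractor, the inner product modulo 2 studied by Chor and Goldreich in the 1980s, gives the flavor of how two independent weak strings can be boiled down to a single output bit (a sketch for illustration only, not the new method):

    # Classic two-source extractor: inner product mod 2. Given two bit strings
    # from independent sources with enough entropy, the output bit is nearly
    # unbiased. The new extractor tolerates far weaker sources than this.
    def inner_product_extractor(bits_a, bits_b):
        assert len(bits_a) == len(bits_b)
        result = 0
        for a, b in zip(bits_a, bits_b):
            result ^= a & b   # multiply the bit pair, add modulo 2
        return result

    # Two independent weakly random 8-bit strings -> one output bit.
    print(inner_product_extractor([1, 0, 1, 1, 0, 0, 1, 0],
                                  [0, 1, 1, 0, 1, 0, 0, 1]))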

Resilient functions combine information in a way that can withstand a certain amount of bias. For example, in an election, a small number of malicious voters might collude to sway the result in one direction. A resilient function can protect the honest voters. Rather than taking a simple majority vote, election officials could group voters into threes and take the majority within each group, then group those results into threes and take the majority again, and so on. This method allows the election to tolerate a larger number of bad apples, and in the case of random number generation, it can filter out the effect of the biased numbers.
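
Here is a minimal sketch in Python of that recursive majority-of-threes idea (a toy version of the voting scheme described above, not the precise resilient function used in the new extractor):

    # Recursive majority of threes: group values in threes, replace each group
    # by its majority, and repeat until a single value remains.
    def majority3(a, b, c):
        return 1 if a + b + c >= 2 else 0

    def recursive_majority(bits):
        # Assumes len(bits) is a power of 3: 3, 9, 27, 81, ...
        if len(bits) == 1:
            return bits[0]
        next_round = [majority3(bits[i], bits[i + 1], bits[i + 2])
                      for i in range(0, len(bits), 3)]
        return recursive_majority(next_round)

    # 27 honest "votes" of 0, with three colluding voters flipped to 1:
    votes = [0] * 27
    for i in (2, 11, 20):
        votes[i] = 1
    print(recursive_majority(votes))  # -> 0: the biased votes are filtered out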

Compared with the previous state-of-the-art randomness extractors, which required input that was already very close to random, the new method can mine sources that are “much, much, much, much weaker,” says computer scientist Avi Wigderson of the Institute for Advanced Study in Princeton, N.J. The new extractor is a “substantial improvement over the previous results, and it’s very close to the best you can hope for.”

Physics writer Emily Conover has a Ph.D. in physics from the University of Chicago. She is a two-time winner of the D.C. Science Writers’ Association Newsbrief award.
