Department of Mathematical Sciences
Director of the Algebraic Cryptography Center
Stevens Institute of Technology
When you use an automated teller machine or shop online, private information about your bank and credit card accounts is sent over the Internet and over telephone lines. Public key cryptosystems are used to protect this information from eavesdroppers and keep it private.
A public key system includes an algorithm for generating random pairs of public and private keys. Anyone can use the public key to encrypt data, but only the owner of the corresponding private key should be able to decrypt.
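This asymmetry can be illustrated with a toy RSA-style example (not remotely secure, and using the tiny modulus 91 that appears below; the exponents 5 and 29 are chosen here purely for illustration):

```python
# Toy public-key encryption sketch (illustrative only, NOT secure).
n = 91   # public modulus: 91 = 7 * 13
e = 5    # public exponent (assumed for this illustration)
d = 29   # private exponent: phi(91) = 6 * 12 = 72, and 5 * 29 = 145 = 2*72 + 1

message = 10
ciphertext = pow(message, e, n)    # anyone holding (n, e) can encrypt
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

Real systems use moduli far too large to factor, together with padding schemes and other safeguards; only the encrypt-with-public, decrypt-with-private shape is the point here.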
The security of a public key system depends on the difficulty of an underlying computational problem. Each random public–private key pair corresponds to a random instance of this computational problem. For the system to be secure, each random instance of the computational problem should be, with overwhelming probability, too difficult to solve in any reasonable amount of time (“very difficult” for short).
For the widely employed RSA system the underlying computational problem may be taken to be factorization of integers. Each public key includes an integer, called the modulus, which is the product of two distinct prime numbers. If you can factor the modulus, you recover these primes, which can be used to compute the private key and decrypt messages.
For example, if the modulus is 91, an easy factorization produces the primes 7 and 13. Thus 91 is not a satisfactory modulus. As computers and factoring techniques improve, the length of secure moduli for RSA increases. Nowadays they are thousands of bits (hundreds of digits) long.
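The attack just described can be sketched in a few lines: factor the modulus by naive trial division, recompute the private exponent from the factors, and decrypt. (The public exponent 5 and the message 10 are assumed for illustration.)

```python
def trial_division(n):
    """Return a nontrivial factor pair of n by naive trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

n, e = 91, 5                  # toy public key
p, q = trial_division(n)      # recovers 7 and 13
phi = (p - 1) * (q - 1)       # Euler's totient of n = (7-1)*(13-1) = 72
d = pow(e, -1, phi)           # private exponent: e * d = 1 (mod phi)

ciphertext = pow(10, e, n)    # an intercepted message
assert pow(ciphertext, d, n) == 10   # the eavesdropper reads it
```

Trial division takes on the order of the square root of the modulus steps, so its running time grows exponentially in the number of digits; even the best known factoring algorithms remain superpolynomial, which is why large enough moduli are believed safe.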
Below is a 220-digit public key modulus published by RSA Laboratories in 1991 and designated RSA-220. RSA-220 was published, along with several other RSA numbers, as a challenge to the cryptographic community, and the disk drive containing the factorizations of these RSA numbers was destroyed. The challenge was withdrawn in 2007, but to date no one knows what the factors of RSA-220 are (more precisely, no one admits to knowing).
[The 220-digit integer RSA-220 appears here.]
There is substantial heuristic evidence that the factorization problem is very difficult, but there is no proof. This situation is typical of public key systems in use today. In each case there is an underlying computational problem, of which the instances used for public keys are supposed to be very difficult. However, there is no suitable way to determine the degree of difficulty for sure. Standard computational complexity techniques from computer science are not well adapted to this task.
Researchers at Stevens’ Algebraic Cryptography Center (ACC) want to better understand what makes a computational problem uniformly difficult. It is known that by using methods of recursive function theory one can construct problems that are provably uniformly difficult, but so far at least, these problems are not natural and are not suitable for cryptography.
The ACC is systematically developing a notion of complexity, generic case complexity, which is designed to be useful for investigations into the nature of hard problems. Although generic case complexity is still in its infancy, it has led to several surprising insights and motivated a new line of research in recursive function theory. In addition, heuristic assumptions about the generic case complexity of various computational problems have produced successes in cryptanalysis.
The ACC research program involves computer experiments on a wide variety of computational problems to get an idea of how hard they are in practice and how their difficult instances are distributed. Combinatorial group theory, i.e., the theory of groups given by generators and relations, has a well-developed history of computation going back to 1911 and is a fruitful source of examples.
The work of ACC researchers is currently supported by grants from the National Science Foundation and the National Security Agency.