A difficulty with symmetric key systems is the key distribution problem, i.e. the problem of getting the common secret key to A and B in the first place. This is eloquently expressed by Professor Lomonaco when he writes about the Catch-22 of cryptography [Lom98].
Catch‐22 of Cryptography:
To communicate in secret, we must first communicate in secret.
For symmetric encryption, the common secret key has to be given secretly to each entity, whereas the public keys for each entity can be published openly, for example in a public directory.
We have already spoken of the assumption that the encryption algorithms for public key cryptography are mathematical one-way functions. This means that enciphering has the property that its values are easily computed by a computer (i.e. in polynomial time), yet deciphering without the appropriate private key cannot be done in a reasonable amount of time, even on a computer. In other words, the problem of deciphering the cipher text is intractable.
Of course, we emphasize again that the existence of such mathematical one‐way functions is still in doubt since nobody has discovered a mathematical function that is provably one‐way.
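To make the "easy in one direction, hard in the other" idea concrete, here is a small Python sketch (an illustration only, not evidence that one-way functions exist): multiplying two primes takes essentially no time, while recovering them from the product by naive trial division already takes noticeably longer, and the gap grows rapidly with the size of the primes. The particular primes and the trial-division search are arbitrary choices made for the example.

```python
import time

# Forward direction: multiplying two primes is fast.
p, q = 1_000_003, 1_000_033        # small illustrative primes; real keys are far larger
start = time.perf_counter()
n = p * q
print(f"multiply: n = {n}  ({time.perf_counter() - start:.6f} s)")

# Reverse direction: recovering a factor of n by trial division is much slower,
# and becomes hopeless long before the primes reach cryptographic sizes.
start = time.perf_counter()
factor = next(d for d in range(2, n) if n % d == 0)
print(f"factor:   {factor} x {n // factor}  ({time.perf_counter() - start:.6f} s)")
```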
But one‐way functions abound in the physical world, many of them related to time. For example, as Beutelspacher [Beu94] points out, most people are not getting any younger, i.e. the aging function is one‐way!
Another analogy is the telephone directory of any big city. Here each name gets enciphered to the corresponding telephone number. The deciphering algorithm starts with a number and tries to find the corresponding name, a much more daunting task.
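The same asymmetry is easy to mimic with an ordinary dictionary data structure: looking up a number by name is a single step, while going from a number back to a name means examining every entry. The tiny directory below is invented purely for illustration.

```python
# A toy "telephone directory": forward lookup is immediate,
# the reverse lookup must scan the whole book.
directory = {"Alice": "555-0187", "Bob": "555-0134", "Carol": "555-0629"}

# Enciphering direction: name -> number (one dictionary access).
print(directory["Bob"])

# Deciphering direction: number -> name (search every entry).
target = "555-0134"
name = next(n for n, number in directory.items() if number == target)
print(name)
```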
One can also make physical analogies concerning the two kinds of cryptography.
For public key cryptography, consider the following scenario: A wants to send a secret message to B. A number of mailboxes are available. A knows that the mailbox for B is number 3 (3 is the public key for B). All A has to do is to drop the message into mailbox number 3 for B. Then B (but nobody else) can recover the message since B has the key to mailbox number 3.
Another system for public key algorithms is as follows: A wants to send a secret message to B. He goes to the hardware store and buys a box and a combination lock marked B (this corresponds to the public key of B). Then A puts the message in the box, locks the box with the combination lock and mails it to B. When the box arrives, B opens it, and gets the message because he knows the combination for the lock. Nobody else can open the box to get the message.
The following is an analogy for symmetric encryption: If A wants to send a secret message to B, we can imagine A putting the message in a strong box, locking the box with a key and mailing it to B. When B receives the box, he opens it with his key and gets the message. Only A and B have a key for the box, so nobody else can get the secret message.
3.6 Summary of Encryption
We have seen what encryption is and the difference between public key and symmetric cryptography. Public-key algorithms such as RSA yield computational security, which can be breached given sufficient time and computing resources. With RSA, security is weaker for a text message encoded in ASCII than for a message that is a random binary string. This is also true for some symmetric encryption systems. This reduction in security is attributed to the fact that consecutive characters in a text message are dependent upon each other.
The only mathematical way to assess the security of symmetric systems is through information theory, which is discussed later on in the book. The security depends on the uncertainty pertaining to the key and the uncertainty pertaining to the message. Roughly speaking, the longer the key the more secure the message. One reason for discussing historical ciphers, such as the Vigenère cipher, in this book is to furnish examples of how this works.
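As a rough worked example of this idea, consider a Vigenère key of length m chosen uniformly at random from the 26-letter alphabet: there are 26^m equally likely keys, so the uncertainty about the key is m · log2(26), roughly 4.7m Shannon bits. The short sketch below (with arbitrarily chosen key lengths) simply evaluates this count.

```python
import math

# Uncertainty (in Shannon bits) of a uniformly chosen Vigenere key of length m:
# there are 26**m equally likely keys, so the entropy is log2(26**m) = m * log2(26).
for m in (1, 5, 10, 20):
    number_of_keys = 26 ** m
    bits = m * math.log2(26)
    print(f"key length {m:2d}: {number_of_keys} keys, about {bits:.1f} bits of uncertainty")
```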
With public key algorithms, only one message can fit with a given cipher text. The keys have to be made longer and longer to withstand brute-force attacks. What the proper length should be is a matter of conjecture, and it is one of the "hot-button" issues in modern cryptography. On the one hand, a financial institution using public key algorithms may not be in a hurry to report that its system has been broken. On the other hand, a successful intruder may not want to report success.
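To see why key lengths keep growing, it helps to count: a k-bit key chosen at random admits 2^k possibilities, so each additional bit doubles the work of an exhaustive search (for public key systems the best known attacks, such as factoring, are faster than exhaustive search, which is one reason their keys must be longer still). The sketch below estimates exhaustive-search time for a few key lengths under the purely illustrative assumption of an attacker testing 10^12 keys per second.

```python
# Exhaustive key search: a k-bit key has 2**k possibilities, so every extra
# bit doubles the attacker's work. The testing rate is an arbitrary assumption
# for illustration, not a claim about real hardware.
TESTS_PER_SECOND = 10 ** 12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for k in (40, 56, 80, 128):
    years = 2 ** k / TESTS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{k:3d}-bit key: about {years:.2e} years to try every key")
```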
With symmetric systems, one can (at least in theory!) quantify the security, which can be measured in Shannon bits. In certain situations, it can be measured exactly; in other situations, it can be estimated experimentally (e.g. with text messages encoded in binary). In general, it may be the case that many messages will fit with a given cipher text so that, in the end, the determination of the message may still be a matter of guesswork. Nowadays, RSA keys should be at least 2048 bits long, which corresponds to several hundred (roughly 617) decimal digits.
We should also point out that in some situations, whether dealing with symmetric or public key algorithms, it may be easier or faster to try to guess the message than to guess the key and then to get the message. Furthermore, there is a strong probabilistic component running through all of cryptography, exacerbated by the possibility of transmission errors.
We have not touched on several practical issues here such as message compression, transmission errors, and checking for key equality. These will be dealt with in Parts II and III of the book when the appropriate machinery has been built up.
3.7 The Diffie–Hellman Key Exchange
This is one of the most mathematically elegant algorithms in cryptography. Communicating parties A and B want to agree on a common secret key, even though they can exchange messages only over an insecure channel that an eavesdropper may be monitoring.
The DH key exchange may proceed in the following way:
Participants A and B begin by publicly agreeing on a large prime p and a generator g of the multiplicative group of nonzero integers modulo p. A then chooses a secret integer a and sends g^a (mod p) to B, while B chooses a secret integer b and sends g^b (mod p) to A. Each party can now compute the common secret key g^(ab) (mod p): A raises B's transmission to the power a, and B raises A's transmission to the power b. An eavesdropper who sees only p, g, g^a and g^b apparently has to solve the discrete logarithm problem to obtain the same key.
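A minimal Python sketch of the exchange is given below. The prime and generator are deliberately tiny so that the numbers are readable, and the secret exponents are drawn with Python's secrets module purely for illustration; a real deployment would use a prime of a few thousand bits (or an elliptic-curve variant) and carefully validated parameters.

```python
import secrets

# Publicly agreed parameters: a prime p and a generator g of the nonzero
# residues modulo p. These values are tiny, for readability only.
p = 2_147_483_647                  # the Mersenne prime 2**31 - 1
g = 7                              # an illustrative generator modulo p

# Each party picks a private exponent and publishes g**exponent mod p.
a = secrets.randbelow(p - 2) + 1   # A's secret exponent
b = secrets.randbelow(p - 2) + 1   # B's secret exponent
A_sends = pow(g, a, p)             # transmitted from A to B over the open channel
B_sends = pow(g, b, p)             # transmitted from B to A over the open channel

# Each party combines the other's public value with its own secret exponent.
key_at_A = pow(B_sends, a, p)      # (g**b)**a = g**(a*b) mod p
key_at_B = pow(A_sends, b, p)      # (g**a)**b = g**(a*b) mod p

assert key_at_A == key_at_B        # both now hold the same shared secret
print("shared secret:", key_at_A)
```

Note that only A_sends and B_sends ever cross the channel; the secret exponents a and b never leave their owners.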