Researchers at Google Brain, Google's deep learning project, have demonstrated a way for artificial neural networks to learn to create their own encryption schemes.
In a research paper by Martín Abadi and David G. Andersen, "Learning to Protect Communications with Adversarial Neural Cryptography," the researchers trained three AIs together: two that attempt to communicate secretly (Alice and Bob) and a third that tries to spy on that communication (Eve).
According to the paper, Alice's job is to encrypt messages using a scheme of her own devising, and Bob's is to learn to decrypt them. On the other side of the divide, Eve eavesdrops on the channel between Alice and Bob and tries to read each message that is sent.
The objective of the entire process is to have Alice and Bob come up with a communication scheme that Eve cannot easily break, all without teaching Alice, Bob or Eve any particular encryption scheme.
The only thing shared between Alice and Bob was a pre-arranged cryptographic key, to which Eve had no access. From there, Alice iterated through ways of encrypting a message and sending it to Bob. How well Bob decrypted each message, and how poorly Eve read it, determined how the networks' parameters were adjusted for the next attempt.
Each message was only 16 bits long. That's not much of a message, but it was enough for the simple encryption learning the researchers wanted to demonstrate.
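The feedback loop described above can be sketched in terms of the losses each network tries to minimize. The sketch below is a simplification of the paper's formulation, not the authors' exact code: the function names are illustrative, errors are measured per bit, and the key idea is that Alice and Bob are rewarded both for Bob decrypting correctly and for Eve doing no better than random guessing (half the bits wrong).

```python
import numpy as np

N_BITS = 16  # message length used in the paper


def reconstruction_error(guess, plaintext):
    """Mean absolute per-bit error between a guessed and a true message."""
    return np.mean(np.abs(guess - plaintext))


def eve_loss(eve_guess, plaintext):
    """Eve simply tries to minimize her reconstruction error."""
    return reconstruction_error(eve_guess, plaintext)


def alice_bob_loss(bob_guess, eve_guess, plaintext):
    """Alice and Bob want Bob to decrypt accurately while pushing Eve's
    error toward chance level, i.e. about half the bits wrong.

    The squared-deviation penalty on Eve's error is modeled on the
    paper's objective: it is zero exactly when Eve guesses at chance.
    """
    bob_err = reconstruction_error(bob_guess, plaintext)
    eve_bits_wrong = reconstruction_error(eve_guess, plaintext) * N_BITS
    eve_term = ((N_BITS / 2 - eve_bits_wrong) ** 2) / ((N_BITS / 2) ** 2)
    return bob_err + eve_term
```

During training, Eve's network is updated to lower `eve_loss` while Alice and Bob are updated to lower `alice_bob_loss`, alternating between the two, which is what makes the setup adversarial.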