Homomorphic Encryption
End-to-end encryption has a major drawback: the backend is completely blind to the contents of the encrypted entities. That means no indexing, no search, no comparison, no training on them. No server-side processing of the encrypted payloads at all.
Homomorphic Encryption is an answer to this: you keep true end-to-end encryption, but the backend can still do some computation on the encrypted entities. Lately the push seems to be around LLMs. One of the big problems with LLMs is the risk of exposing your company data to the LLM vendor's servers. When you ask ChatGPT to generate some code based on an input, you have no guarantee OpenAI isn't keeping that input. A lot of modern codegen tools work by taking the entire codebase as context, so you're effectively exposing your whole codebase to OpenAI. Not cool if you work on some sensitive stuff. They're also probably ingesting, e.g., the secrets in your codebase.
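To make the core idea concrete, here is a toy sketch of computing on ciphertexts, using the fact that textbook RSA is multiplicatively homomorphic. This is purely illustrative (tiny primes, no padding, nowhere near a real scheme like BFV or CKKS): the point is only that the server multiplies two ciphertexts without ever seeing the plaintexts, and the client decrypts the correct product.

```python
# Toy demo of homomorphic computation via textbook RSA's
# multiplicative homomorphism: E(a) * E(b) mod n == E(a * b).
# NOT secure -- tiny fixed primes, no padding -- illustration only.

# Key generation (client side).
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent, coprime with phi
d = pow(e, -1, phi)          # private exponent (modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

# Client encrypts two values and sends only the ciphertexts.
c1, c2 = encrypt(7), encrypt(6)

# Server multiplies the ciphertexts -- it never learns 7 or 6.
c_product = (c1 * c2) % n

# Client decrypts and recovers 7 * 6 = 42.
print(decrypt(c_product))  # 42
```

Real homomorphic encryption schemes generalize this so the server can evaluate additions, multiplications, or (with fully homomorphic encryption) arbitrary circuits over encrypted data.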
The same security and intellectual property risk applies to all other knowledge work that benefits greatly from AI.
Companies doing things in Homomorphic Encryption: