As AI personal assistants become more common, privacy concerns are growing just as fast. Many popular tools require users to share deeply personal information, which is often retained, analyzed, or used to train models owned by large tech companies. With advertising already entering the AI space, critics warn that chatbot conversations could soon be treated like social media data—tracked, monetized, and exploited.
A new project by Signal co-founder Moxie Marlinspike aims to change that narrative. Launched in December, Confer is a privacy-first AI assistant designed to feel like ChatGPT or Claude, while fundamentally rethinking how user data is handled. Built with open-source principles similar to Signal’s, Confer ensures conversations are never stored, accessed, or used for training or advertising.
Marlinspike says the motivation comes from how intimate AI chat interfaces have become. He describes them as technologies that “actively invite confession,” often learning more about users than any previous digital tool. Combining that level of intimacy with advertising, he argues, would be like paying someone to manipulate your most private thoughts for profit.
To prevent that outcome, Confer relies on multiple layers of security. Messages are encrypted with keys tied to WebAuthn passkeys, and all AI inference runs inside a Trusted Execution Environment (TEE). Remote attestation lets clients verify that the server software has not been tampered with, and open-weight foundation models process queries without exposing user data to the host system at any point.
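Confer has not published its implementation details, so the sketch below shows only one plausible way to bind message encryption to a passkey: the WebAuthn PRF extension yields a per-credential secret, which the Web Crypto API expands into an AES-GCM key used to encrypt a message on the device before it is sent. The function names (deriveMessageKey, encryptMessage), the salt, and the HKDF label are illustrative assumptions, not Confer’s actual scheme.

```ts
// Hypothetical sketch: derive a symmetric key from a passkey via the WebAuthn
// PRF extension, then encrypt a message with AES-GCM before it leaves the device.

async function deriveMessageKey(
  credentialId: ArrayBuffer,
  salt: Uint8Array,
): Promise<CryptoKey> {
  // Ask the authenticator to evaluate its PRF over the given salt. The result
  // is a secret bound to this specific passkey and never visible to the server.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      allowCredentials: [{ id: credentialId, type: "public-key" }],
      extensions: { prf: { eval: { first: salt } } } as any,
    },
  })) as PublicKeyCredential;

  const prf = (assertion.getClientExtensionResults() as any).prf?.results?.first;
  if (!prf) throw new Error("Authenticator does not support the PRF extension");

  // Expand the PRF output into a non-extractable AES-GCM key via HKDF.
  const ikm = await crypto.subtle.importKey("raw", prf, "HKDF", false, ["deriveKey"]);
  return crypto.subtle.deriveKey(
    {
      name: "HKDF",
      hash: "SHA-256",
      salt: new Uint8Array(0),
      info: new TextEncoder().encode("confer-message-key"), // illustrative label
    },
    ikm,
    { name: "AES-GCM", length: 256 },
    false,
    ["encrypt", "decrypt"],
  );
}

async function encryptMessage(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext }; // only ciphertext ever leaves the client
}
```

Under a scheme like this, decryption requires the user’s passkey, so the service operator never holds a key that could unlock stored conversations; whatever Confer actually does, that is the property its passkey-plus-TEE design is meant to guarantee.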
The approach is more complex and more expensive than standard AI setups, but it delivers on Confer’s core promise: private conversations stay private. The service offers a free tier limited to 20 messages per day, while a $35 monthly subscription unlocks unlimited chats, advanced models, and personalization. That is pricier than mainstream AI plans, but for users who prioritize privacy, Confer positions itself as a clear and deliberate alternative.
source: TechCrunch
