Newsletter #85 - FTX Debacle Part III
What AI tools can teach us about the dangers of outsourcing trust. Are we already living in this world?
This post is part of a 4-part series on FTX that I wrote for the Bankless DAO Writers Cohort. The posts were written in real-time as the FTX news unfolded. My hope is that in the future this series will be a prescient warning for how to stay safe from crypto frauds and fraudsters.
Read the rest of the posts in the series:
Hi friends,
As a recovering Type-A Control Freak, I spent the first 28 years of my life grinding hard to make sure every letter was perfectly placed, every word was chosen carefully, and every sentence was strung together with maniacal precision.
And I have spent the last 5 years unlearning all of those bad habits to become more comfortable with the natural state of chaos.
Given my modus operandi for so long, it was difficult for me to trust an AI that promised to help me write. Nonetheless, I decided to try out Lex – the GPT-3 powered AI writing assistant from the team at Every.
I was surprised to realize that Lex was not the first AI writing tool I had used. Surprised because, until Lex, I had never used a tool that actually felt like what I thought an AI tool should feel like. Never had I used a tool that helped me to think.
Grammarly, a popular “AI” tool that has been on the market since 2009, merely fixes grammar and spelling mistakes (at least in the free version, the only version I have ever used), whereas Lex helped me research and come up with compelling messaging.
Lex’s deep learning algorithms suggested words and phrases that seamlessly fit into the context of my writing, so I didn’t need to expend energy on word-smithing, freeing up mental bandwidth to focus on content, flow, and revisions.
What I love about OpenAI, the research lab that created GPT-3, the language model that powers Lex, is that it releases tools like ChatGPT for anyone to try for free!
For all its “magic,” GPT-3 has one major flaw: it can tell lies, spinning up misinformation and conspiracy theories convincing enough to swindle someone out of money or persuade them to join a cult!
This is really serious.
The best example of this came from Marc Andreessen, who tweeted about his use of ChatGPT, the latest AI tool to come out of OpenAI, and one that has grabbed the attention of many of the people I follow on Twitter.
Unlike Lex, ChatGPT is able to craft messaging based on a user-supplied prompt. For example, Marc entered this prompt:
Write a script for a psychotherapist coaching a client. The client sees how smart ChatGPT is and is tempted to give up on life.
Voila! Within seconds, the AI wrote an entire script that was (initially) very helpful, reassuring the client that they are a unique and valuable person, only to quickly devolve into an open invitation to join a cult! The therapist ends up asking the client to contribute a significant portion of their income in order to unlock the secrets of the universe….
Take a look for yourself at the script below.
Perhaps we are about to enter a scary, dystopian future where people outsource trust to chat interfaces that pose as authority figures. In this world, AI tools like ChatGPT will be used to create malicious chatbots and marketing campaigns that swindle innocent victims; and AI writing tools like Lex will be used to create even more compelling conspiracy theories and misinformation.
Perhaps we already are, in some ways, living in this world of outsourced trust, and perhaps we already have, in some ways, stopped thinking for ourselves.
Let me know what you think.
Take good care,
Rika