The “Luddite” Label and AI in Social Settings
The term “Luddite,” originally referring to the 19th-century English workers who destroyed machinery, is now often used derogatorily for anyone who opposes new technology. While I reject the blanket anti-technology sentiment the label implies, I am viscerally opposed to the widespread use of AI and machine learning in modern social situations.
Human cognition seems to be rapidly, almost aggressively, eroded by technological tools, chief among them Large Language Models (LLMs). While valuable applications exist, using this technology in peer-to-peer social contexts isn’t just a slippery slope; it feels like an existential threat to authentic human connection. The logical extreme seems to be implanted chips and the atrophy of genuine social skills.
Early Signs: Gemini and Match Group
Google’s Gemini language model has been marketed partly as a tool for awkward social situations; one ad shows teens asking it for excuses to cancel plans while they are out with friends. The implication is that you should turn to AI for advice whenever interacting with other people feels uncomfortable – a use case I find deeply problematic for human-to-human interaction.
Another example is Match Group, owner of numerous dating apps. Its CEO, Bernard Kim, stated in late 2024 that AI is “revolutionary for dating,” envisioning it influencing “everything from profile creation to matching and connecting for dates, literally everything.” The dangers of AI constructing profiles, writing bios, and sending messages on a user’s behalf seem obvious, at least to some of us.
Enter Cluely: “We Want to Cheat on Everything”
Beyond these examples lies Cluely, a product whose stated purpose and design seem intended to replace human thinking in social situations, under the catchphrase “We want to cheat on everything.”
The Origin Story: Roy Lee and Interview Coder
Understanding Cluely requires understanding its founder, Roy Lee. Brilliant yet seemingly prone to defying authority, Lee previously created “Interview Coder,” a program for automating answers to “Leetcode”-style interview problems.
Leetcode interviews are a controversial software hiring practice where applicants solve algorithmic challenges under time pressure. Many, including Lee, view them as obsolete or unfair, motivating the desire to “cheat” them.
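For readers unfamiliar with the format, here is a minimal sketch of the kind of puzzle these interviews revolve around: the classic “two sum” exercise, shown in Python purely as an illustration (it is not taken from any specific interview, nor tied to Interview Coder itself):

```python
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return the indices of two numbers in nums that add up to target.

    The expected interview answer is this single-pass hash-map solution,
    which runs in O(n) time instead of the naive O(n^2) double loop.
    """
    seen: dict[int, int] = {}          # value -> index where it was seen
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:         # the matching half already appeared
            return seen[complement], i
        seen[value] = i
    return None                        # no pair sums to target


if __name__ == "__main__":
    print(two_sum([2, 7, 11, 15], 9))  # -> (0, 1)
```

Solving puzzles like this from memory, live and under a timer, is exactly the performance that tools like Interview Coder are built to fake.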
In February 2025, Lee interviewed with Amazon, deliberately using Interview Coder to cheat. He posted the interview publicly after successfully getting the position. Amazon issued a copyright takedown, and someone from the Amazon team contacted Columbia University, resulting in Lee’s suspension for a year.
After his suspension in March 2025, Lee posted on social media, “I just got kicked out of Columbia for taking a stand against Leetcode interviews.” One month later, on April 20th, Cluely was announced.
Cluely’s Manifesto: A Race to the Bottom?
Cluely was launched by Lee and a co-founder (reportedly also expelled from Columbia) on the back of a reported $5.3 million in venture capital. Its website manifesto reads:
“We want to cheat on everything. Yep, you heard that right. Sales calls. Meetings. Negotiations. If there’s a faster way to win — we’ll take it. We built Cluely so you never have to think alone again. It sees your screen. Hears your audio. Feeds you answers in real time. While others guess — you’re already right. And yes, the world will call it cheating. But so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it’s normal.”
While the comparison to calculators and spellcheck holds some truth (we adapt to tools), Cluely isn’t just a calculator. It aims to replace integral social behaviors, not just supplement calculation or grammar.
Deceit as Marketing: The Cluely Trailer
The product’s trailer shows a young man using Cluely on a date, waiting for the AI to feed him lines, including a fake age, to systematically deceive the woman. He relies on the AI for basic facts about his supposed personality (“art guy”) to “cheat” on the date, manufacturing shallow interaction likely aimed at a physical outcome. Is it just rage-bait marketing?
The manifesto suggests otherwise: “Why memorize facts, write code, research anything when a model can do it in seconds? The best communicator, the best analyst, the best problem-solver is now the one who knows how to ask the right question. The future won’t reward effort. It’ll reward LEVERAGE. So, start cheating. Because when everyone does… no one is”
This “if everyone cheats, no one is” logic is a race to the bottom. Applying it elsewhere reveals the flaw. In video games, ubiquitous cheating wouldn’t just end meritocracy (reducing it to who has the best cheat program); it would prevent the development of skills like reflexes, problem-solving, and critical thinking that high-level play fosters. The cheat does the thinking and aiming *instead* of the player.
Degrading Social Skills and Authenticity
Similarly, facing adverse social situations (dates, interviews) where you don’t know the answers strengthens your ability to adapt. It’s like exercise: discomfort builds future strength. Widespread use of Cluely would degrade the ability to navigate basic human interactions.
Sure, AI glasses might feed you facts to impress someone, but what happens without them? The counter-argument (“you always have your phone/glasses”) misses the point: you’re no longer getting to know people, nor are they getting to know *you*. You become a passenger spitting out algorithmically generated lines, replacing real connection with calculated responses – a pathetic substitute for genuine interaction.
The Slippery Slope to Implants?
This “feed me the answer” mentality requires undetectability. No one wants to date or hire ChatGPT masquerading as a person. If external devices like glasses are prohibited, the drive to maintain the crutch while avoiding detection creates a slippery slope, potentially leading toward implanted chips – all to let a program replace your ability to connect authentically.
This mindset feels deeply corrosive, potentially even “evil” in its intent to supplant genuine human connection.
AI as Replacement, Not Enhancement
While AI can be a “force multiplier” for productivity, Cluely represents AI as a replacement for human personality. Using it, or similar tools like Gemini for social excuses, replaces a core part of being human.
The justifications of “progress” and “value” seem built on fear: fear of being wrong, fear of being disliked, fear of failure despite honest effort. Relying on AI mitigates these fears by turning the user into a soulless intermediary. Rejection doesn’t feel as personal if you were just relaying AI answers – it wasn’t *you* being rejected.
It is discomfort-avoidance taken to an extreme, seemingly born from the founder’s spite after being disciplined for cheating an interview process. “You say I’m cheating? Fine, let’s cheat on everything.”
The Real Cost: Atrophied Social Skills
Worse, this is more than childish spite: it is a real product with funding and visibility. People will use it, thinking they’re smart for having the answers, while missing out on developing basic social skills – skills already underdeveloped in parts of the population.
Products like Cluely contribute directly to the collapse of human thinking and connection. They don’t enhance the human experience; they replace it. If opposing this makes me a “Luddite,” I accept the label.
There are beneficial AI advancements for socialization (like real-time translation earbuds). But Cluely represents technological nihilism, packaged for the lazy, fearful, and insecure.