Cluely is the New AI Assistant for Cheating—Should We Be Worried?

Photo by RDNE Stock project/Pexels

A new AI tool promises to help you ace interviews, exams, and more — but at what cost?

Imagine you’re in a high-stakes job interview, nerves kicking in, mind blanking — and then, like magic, answers begin to appear on your screen in real time. Not from a colleague, not from a script — but from a stealth AI assistant feeding you exactly what to say. That’s not science fiction anymore. It’s Cluely, a controversial new AI startup that’s pushing the limits of what “help” really means.

In recent weeks, Cluely has made headlines not just for its slick technology, but for the firestorm of ethical questions it’s kicked up. Backed by millions in VC funding, yet born from a suspension-worthy scandal, it’s become a symbol of both the promise and peril of AI in everyday life. So what exactly is Cluely? Why is it riling up universities and tech companies alike? And what does it mean for the future of fairness in a world increasingly shaped by artificial intelligence?

Let’s break it down.

What Is Cluely, and Why Is Everyone Talking About It?

Cluely is an AI-powered desktop assistant designed to help users in real time during virtual interactions such as job interviews, exams, and even sales calls. Unlike traditional tools, Cluely operates in stealth mode. It watches your screen, listens to the conversation through your mic, and responds through a concealed browser window, allowing users to discreetly receive AI-generated prompts and suggested answers.

The app was developed by Chungin “Roy” Lee, a 21-year-old former Columbia University student, and Neel Shanmugam. Their earlier product, Interview Coder, aimed to assist candidates during technical interviews, but Columbia found it crossed a line, leading to Lee’s suspension. That didn’t stop him. Instead, he doubled down with Cluely, which recently secured $5.3 million in seed funding from high-profile investors, including Abstract Ventures and Susa Ventures.

The Ethical Dilemma: Innovation or Integrity?

The launch of Cluely has reignited the age-old debate: where do we draw the line between tools that enhance performance and those that cheat the system?

On one hand, proponents compare it to using a calculator or Grammarly—tools that once drew skepticism but are now ubiquitous. They argue that Cluely democratizes access to support, helping candidates who might not have expensive coaching or perfect English fluency.

On the other hand, critics say it blatantly undermines trust. Job interviews and exams are supposed to assess the individual, not their ability to follow AI prompts. Big companies like Meta and Amazon have started reevaluating their interview processes, even considering returning to in-person evaluations to weed out AI-assisted cheating.

As artificial intelligence becomes more capable, the line between assistance and deception is getting blurrier—and more consequential.

Real-World Performance: Hype vs. Reality

Despite its futuristic pitch, Cluely isn’t quite foolproof. Users report noticeable lag, awkward timing, and generic answers that can actually backfire if not edited on the fly. One tester described it as more distracting than helpful, noting that keeping up with both the conversation and the AI suggestions proved more stressful than going in cold.

There are also privacy concerns. The tool needs access to your screen and microphone to function, raising red flags about how data is handled and whether sensitive information could be exposed.

At this stage, it’s still unclear whether Cluely offers a genuine edge or is just another Silicon Valley overpromise.

What Does This Mean for the Future?

Cluely is part of a larger wave of AI tools reshaping how we learn, work, and interact. The controversy surrounding it highlights a growing need for institutions—whether academic, corporate, or governmental—to rethink how we evaluate performance in a world where AI can step in at any moment.

This might mean designing assessments that are AI-resistant, such as open-ended, in-person tasks or real-world simulations. Or it could mean leaning into the shift—training people to work effectively with AI, and building standards around transparency and acceptable use.

Either way, Cluely is a wake-up call: AI-assisted deception isn’t a looming threat. It’s already here.

Final Thoughts: Navigating the New Normal

Cluely didn’t just launch an app—it sparked a debate that’s only going to grow louder. Can we trust what people say or do in virtual settings? What happens when machines become our silent co-pilots? And who gets to decide where help ends and cheating begins?

As we navigate this new normal, one thing is clear: the conversation around AI and ethics can’t wait until the next app goes viral. It’s happening now. And the decisions we make today will shape how we define fairness, skill, and authenticity for years to come.