Back in college, I had a friend named Alex. He was the kind of guy who couldn’t start his day without a fresh dose of BBC headlines and would quote The Wall Street Journal over lunch like it was scripture. “I’d rather miss dinner than miss the full context,” he used to say. But lately, even die-hard news junkies like him have been leaning more on AI-powered tools like Perplexity to get their daily information fix.
And that’s where things start to get messy.
The BBC recently issued a stern legal warning to Perplexity, accusing the AI search engine of reproducing its content verbatim. According to a letter sent to Perplexity CEO Aravind Srinivas, the broadcaster claims Perplexity’s AI models were trained on BBC content without permission. The BBC is demanding that Perplexity stop scraping its site, delete any copies of BBC content it already holds, and propose financial compensation, or face legal action.
This might sound like just another copyright dispute, but it's really about something much bigger: the survival of journalism in the age of generative AI.
I remember reading a Financial Times analysis a few months back that noted Perplexity often summarizes articles with impressive precision, sometimes reproducing sentences word for word without proper attribution. For users like Alex, that efficiency is a lifesaver in an era of information overload. But for legacy media companies like the BBC, it’s a serious threat to their core value proposition: trust, context, and traffic.
The BBC isn’t just worried about content theft. It claims that 17% of Perplexity’s answers are deeply flawed, citing factual errors, poor sourcing, and missing context. As users come to rely on AI-generated responses, answers that are inaccurate or misleading stop being merely an IP problem and become a credibility crisis.
Perplexity, for its part, responded with defiance. The company accused the BBC of misunderstanding how the internet, AI, and intellectual property law work, and went so far as to say the broadcaster was “trying to protect Google’s illegal monopoly” out of self-interest.
But this isn’t Perplexity’s first run-in with publishers. Over the past year, The Wall Street Journal, The New York Times, the New York Post, and Forbes have all taken issue with how the company handles their content. An investigation by Wired even alleged that Perplexity ignored the Robots Exclusion Protocol (robots.txt), scraping pages from sites that had explicitly blocked its crawler.
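For readers curious what that barrier looks like in practice: robots.txt is a plain-text file in which a site tells crawlers which pages they may fetch. Here is a minimal sketch of the check a well-behaved crawler performs, using only Python’s standard library; the bot name and URLs are placeholders, not anyone’s actual infrastructure.

```python
# Minimal sketch of a robots.txt check, standard library only.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://www.example.com/robots.txt")
robots.read()  # fetch and parse the site's crawl rules

url = "https://www.example.com/news/some-article"  # placeholder page
user_agent = "ExampleBot"  # hypothetical crawler name

# The Robots Exclusion Protocol is advisory: the crawler itself decides
# whether to honor the answer it gets here.
if robots.can_fetch(user_agent, url):
    print(f"robots.txt permits {user_agent} to fetch {url}")
else:
    print(f"robots.txt disallows {user_agent} from fetching {url}")
```

The key point is that nothing technically enforces this check; honoring robots.txt is a social contract, which is exactly why publishers who feel it has been broken are turning to lawyers.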
To be fair, Perplexity has made some effort to play nice: last year it launched a revenue-sharing program with outlets including Time, Der Spiegel, and The Texas Tribune. Still, the friction with major publishers shows no sign of easing.
And it’s no wonder. The media industry is in survival mode. Ad revenues are drying up, attention is migrating to social apps, and local newsrooms are vanishing at an alarming rate. Since 2005, the U.S. has lost 2,900 local newspapers, according to Northwestern University. Meanwhile, OpenAI—the maker of ChatGPT—is valued at $300 billion, and Perplexity has shot up to $14 billion, with backing from heavyweights like SoftBank, Nvidia, Amazon, and even Jeff Bezos.
This is the uncomfortable paradox: the platforms profiting from content are growing exponentially, while the creators of that content are fighting to stay alive.
Just the other day, Alex told me he was using Perplexity to look up “counterarguments to climate change.” The result seemed well-structured, but when he clicked through, the source was an unverified personal blog. For the first time, he hesitated—realizing that sometimes, a slick summary just isn’t enough.
This isn’t just about Perplexity, or even the BBC. It’s about whether society will continue to value original, verified journalism or trade it for algorithmically generated convenience. The battle between media and AI is more than a legal dispute; it’s a fight for the soul of information itself.
And like Alex, we all need to start asking: in a world where AI can answer anything, who’s making sure those answers are actually right?
