When Julia had a brilliant idea recently, she chose not to send it as a simple one-liner. Instead, she used ChatGPT to transform it into a comprehensive email for the leadership team. What started as a paragraph ballooned into a lengthy exposition complete with made-up market analysis, generalized implementation strategies, and various unrealistic scenarios. Her skip-level manager, overwhelmed by the sheer length of the email, used Outlook’s AI plugin to condense it into a TL;DR. The irony? Julia could have saved time, a few kilojoules of energy, some trees, and other resources by hitting send on her original one-paragraph email instead.
If you haven’t experienced this directly, you’ve definitely witnessed an explosion of longer, but not necessarily better, content across emails, documents, and presentations. BetterUp Labs, in collaboration with Stanford, has coined the term “workslop” for this very phenomenon, and it is everywhere! Defined as AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task, it’s polished on the surface but hollow at its core, often creating more work for colleagues who must sift through the excess to find the valuable content.
Their research paints a concerning picture. Of the 1,150 desk workers surveyed, 41% reported encountering such AI-generated output. The cost? Nearly two hours of rework per instance, translating to about $186 per person per month in lost productivity. For a mid-sized company, this could mean over $9 million annually in productivity losses. Even more troubling, people perceive those pushing out “workslop” as less creative and less reliable, creating downstream issues in productivity, trust, and collaboration. Unsurprisingly, professional services and technology sectors are disproportionately impacted, given their early and enthusiastic adoption of AI tools.
Social media hasn’t been spared either; feeds are increasingly clogged with low-quality AI-generated posts, often referred to as “AI slop.” Spotify made news last month when they revealed that over the last 12 months, they’ve had to delete over 75 million ‘spammy’ tracks – that’s almost three fake songs for every four real ones. In their own words: “At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push ‘slop’ into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors.”
The financial consequences of relying too heavily on AI-generated content can be severe too. Take Deloitte’s recent AI blunder: the firm ended up refunding part of a $440,000 consultancy fee to the Australian government after admitting that a report it delivered, which included AI-generated content, was riddled with serious errors. I am willing to bet it would have been more cost-effective for Deloitte to simply rely on traditional human expertise from the start.
The irony is impossible to miss. In a time when organizations are hoping to get leaner through GenAI and Agentic AI, many may instead end up hiring additional staff just to manage AI-generated problems. From updating labels and building protection rules to spotting hallucinations and fact-checking erroneous data, we’re creating new roles to clean up AI’s mess.
Before we move to solutions, let’s acknowledge something important. Whenever we push for rapid adoption of new technology, there will be stumbles along the way. Some “workslop” is perhaps an inevitable cost of innovation and learning. However, just as we learned to move beyond using PowerPoint for every communication or adding everyone to every email thread, we can develop better practices for AI usage.
So how does one fix this?
AI discernment is the new digital literacy. It starts with understanding when to use AI and when not to. In Julia’s case, while she could have used ChatGPT to bounce ideas around, it would have been more helpful to simply send a crisp email instead. It didn’t have to be perfect as long as it got the message across. Perhaps what we need is a reality check – imagine if every time someone used GenAI, it displayed a warning: ‘this query cost two trees’ or ‘you just killed ten trees’ before generating a response. Such stark reminders might make us think twice about using AI to merely make things ‘prettier’ rather than truly more valuable. Indiscriminate imperatives yield indiscriminate usage.
Training needs to move beyond basic “how-to” sessions. Teams should learn to spot AI-generated fluff – those telltale signs like overly formal language, redundant information, or generic insights that could apply to any situation. Regular peer reviews comparing effective and ineffective AI usage can help develop this critical eye. Some organizations are finding success with designated AI champions who help teams optimize their AI usage, ensuring technology enhances rather than complicates our work.
But even with these guardrails in place, we need to fundamentally rethink how we measure success. Stop measuring productivity by output volume or how polished it all looks: longer emails and lengthier presentations don’t equal better work. Instead, focus on impact and efficiency. Did this work move the project forward? Did it save others time or create more work? Would a shorter version have been more effective? The goal should be to measure the value we create, not the volume of content we generate.
The goal isn’t to abandon AI – it’s to use it purposefully. Technology should amplify human insight, not bury it under layers of artificial fluff. Sometimes the most innovative use of AI is knowing when not to use it at all.