TikTok is making waves: the app has introduced a feature that lets users control how much AI-generated content appears in their feed – and if you're anything like me, waking up, scrolling, and wondering how much of what you're seeing was made by a human, this could be big news.
The move was revealed at TikTok's European Trust and Safety Summit in Dublin, with the company stating that it has already marked over 1.3 billion videos as AI-generated.
Users will soon see a new toggle under "Manage Topics" in Content Preferences that lets them reduce (or increase) the amount of AI-created content they're shown.
Here's where it gets interesting (and a bit messy): the move reflects a growing sense that people may actually want less algorithmic overload, not more.
We've written so much about how social media platforms push us down endless rabbit holes that I sometimes picture everyone on the planet, smartphone in hand, chasing the white rabbit like Alice in Wonderland.
Now TikTok is offering us a little ladder to climb out. It comes amid growing concern about what happens when feeds become saturated with "slop" – low-quality content churned out at speed.
TikTok is also taking steps toward labeling transparency, adding much-needed labels that indicate when a video is AI-generated (not just CGI or a deepfake), applied to content made with its own tools or flagged through the industry-wide C2PA content-provenance effort.
The goal: to let viewers know what they're watching without having to dig into metadata.
From my perspective, it's a smart move – but there's also a broader story behind it.

Blurring the Lines

As AI-generated content floods our social media feeds, it's becoming increasingly hard to tell apart from human-made work.
This matters not only for authenticity, but also for trust, mental health, and our sense of agency online.
Do recommendation systems still work for us or increasingly against us?
Researchers warn that filter bubbles and personalization reinforce the familiar rather than exposing us to the new.
What I want to know next: How effective will this control be?
Will limiting AI videos have a significant impact on our experiences, or will it just be cosmetic?
And how can small creators – people, not machines – be protected as we recalibrate algorithms and monetization models?
If platforms hand power to viewers, they also have a moral obligation to protect the creators whose work fills those feeds.
In other words: the toggle TikTok just added is part of a broader shift that lets the user, not just the algorithm, shape their own experience.
It won't solve everything – doomscrolling, engagement traps, dopamine loops – but it gives us a little more control over what we see.
And in a time when so much feels automated, that's not nothing.