Boni reached audiences across multiple platforms in Afan Oromo, including a YouTube channel she co-hosted with her husband, JOSY & BONI. Screenshot from the YouTube video ‘YEROO DHIYOOTTI’ by JOSY & BONI. Fair use.
Content notice: This article contains mentions of suicide and depression, which some readers may find disturbing.
Boni, an Ethiopian content creator who spoke to her audience on TikTok in the Afan Oromo language, died by suicide on April 29, 2026. That day, she posted a farewell message in Afan Oromo on her Facebook account, “Nagaatti Yaa biyya lafa” (Goodbye, world), followed five minutes later by a video on TikTok and Facebook, captioned “I said it, it is over,” in which she stated her intent on camera with imagery of the means visible in frame. That video has since been viewed and shared millions of times through stitches (videos in which a creator reacts to another’s video, creating new content), reposts, and reactions on TikTok and other platforms; both the farewell post and the video remain on Facebook at the time of writing. Neither platform’s automated detection systems flagged the content before Boni’s death.
In the days since, much of the public conversation across Ethiopian TikTok has focused on whether her audience or her circumstances were to blame for her death. However, the question that has received less attention — arguably the more pressing one — is what TikTok and Meta, the company that owns Facebook and Instagram, did and did not do at each stage of a sequence their own published policies were written to interrupt.
A creator with a community, on a platform with rules
Boni had built a following speaking primarily in Afan Oromo, one of Ethiopia’s most widely spoken languages, on a platform that markets multilingual moderation as a core capability. She was pregnant at the time of her death. According to creators familiar with her work, in the months before her death, she had become a target of recurring online harassment.
That harassment was not a vague phenomenon. It was content, hosted on TikTok. In one widely circulated video, a younger creator — visible only as a voice off-camera — prompts an elderly man, on camera, to repeat his wishes against Boni. He participates willingly: wishes for her death, excitement over her eventual funeral, a hope that her pregnancy ends in miscarriage, and other curses. Boni herself drew attention to the video by responding to it on her own account. Visibly tired, she told her followers she had no power to stop this kind of hate and asked them to watch the original and judge for themselves. The content was a clear violation of TikTok’s Hate and Harassment policy, which prohibits “wishing death on a named user.” Yet it remained online.
Boni’s response was itself a user-safety signal — a creator publicly surfacing content that violated platform policy, in plain language, while showing distress and resignation. TikTok still did not act on the underlying videos.
What TikTok’s policies promised, and what they didn’t deliver
In Boni’s case, none of the detection mechanisms the platforms advertise produced a result before harm occurred. The black-background farewell text was not flagged. Her farewell speech in Afan Oromo was not flagged. The visible imagery — exactly the kind of signal TikTok’s and Meta’s image classifiers are explicitly trained to detect — was not flagged. The combination of text, speech, and imagery, which both platforms market as the strongest signal of intent, was not flagged. The five-minute window between her goodbye post and the announcement video was a textbook escalation pattern, during which a functional safety system was supposed to intervene. Neither platform’s system did.
After Boni’s death, TikTok removed the announcement videos from her account. That removal is, in effect, an admission: the content violated policy all along. The classifiers that took the videos down after she died were available before she died.
But removal after death has not contained the harm. Across TikTok, other accounts continue to stitch, react to, and re-upload the same videos that TikTok removed from Boni’s page; the cumulative reach across all these copies runs into the millions. The recommendation system continues to deliver this content — Boni’s voice, her image, her last moments — into the “For You” feeds of users who never followed her, including, in all likelihood, her family and friends. TikTok has used industrial hash-matching technology for years to suppress terrorism content, copyrighted music, and child sexual abuse material at upload time. Either it is choosing not to use that technology in this context, or its deployment is silently failing.
The harassment videos that targeted her, meanwhile, remain available.
The same video, hosted on Facebook
While TikTok has at least reacted post hoc, Meta has not. As of this writing, both Boni’s farewell post and the announcement video she posted five minutes later — content TikTok now classifies as too harmful to host — remain on Facebook. Meta’s systems did not act before her death, and Meta has not acted since.
Meta’s Community Standards on Suicide, Self-Injury, and Eating Disorders contain a prohibition near-identical to TikTok’s. Meta has spent years publicly emphasizing proactive AI detection in dozens of languages. Both companies participate in industry forums for sharing signals on suicide-related content across competing platforms. None of these mechanisms produced a removal. The same content, in the same language, with the same visual signals, is now hosted on the products of two different corporations, both of which have published rules saying they would not allow it.
When platforms expand into a market and harvest users’ attention without staffing the moderation pipeline at parity, harm intensifies and risk becomes concrete. Ethiopia, with more than 100 million people speaking multiple major languages, is one of those markets: one that social media giants are happy to profit from but unwilling to invest in effective moderation for.
Moving beyond the platforms
Ethiopian TikTok has not been silent. Major creators have weighed in publicly. Adonay, one of the largest creators in this space, has placed responsibility on bullies, haters, and jealousy among Boni’s audience. Other widely circulated response videos, including from creators @thisday013 and @abebayehuassefatkursew, have pushed back: don’t blame the audience; consider her circumstances, her unmet expectations, her life.
These two camps appear to be at odds, but they share a hidden assumption: that Boni’s death was caused by one side of the screen or the other, either her followers or the disappointments in her own life. In neither version do the platforms appear as actors. Yet the harassment Adonay describes was hosted on TikTok in violation of TikTok’s own rules; the “circumstances” the response creators ask viewers to consider include a platform that ignored a distressed creator’s stitch, missed her announcement, removed her content too late, and then allowed it to be re-uploaded across other accounts.
Online bullying, jealousy, and rivalry are not new and not specific to TikTok. They existed long before the internet. What is specific to TikTok and Meta is the system that decides which of those human behaviors gets amplified, recommended, and monetized — and which gets caught and removed. That system is not a constant. It is a corporate choice. It is staffed and funded at corporate discretion. It is accountable, where human nature is not.
Boni was a person, not a content category. The conversation Ethiopian TikTok is having about who failed her is the wrong argument with the wrong opponent. The actor who wrote the policy, deployed the technology it markets, profited from her audience, and did not enforce its own rules is not in any of the videos. It should be.