Governments around the world are rushing to regulate children’s access to social media. From Australia’s social media ban for minors to growing concerns around artificial intelligence, predators, privacy, and screen addiction, there is little global consensus on where the line should be drawn — or who should be responsible for drawing it.
At a Rest of World virtual event, Jules Polonetsky, CEO of the Future of Privacy Forum, discussed what keeping children safe online actually looks like. He reflected on why blanket bans may backfire, how AI could improve moderation, and why parents are increasingly overwhelmed by digital life.
The conversation has been edited for length and clarity. A recording of the live Rest of World event is available here.

Why is there a need to protect minors on the internet and on social media?
Parents are overwhelmed by digital challenges. There's nobody to guide them. They know that simply saying no to access is not an answer. We all want guidance, but the norms aren't yet established, so we're looking for help, and we're frustrated.
There are no easy answers, because every culture is different, every family is different, and kids mature at different ages. We want the companies or the law to solve what is a complicated challenge. But some of it is education for parents, like having good alternatives for how kids spend their time.
Several governments are contemplating Australia-like social media bans. What do you think about this trend?
Banning is the easiest thing to legislate. But it’s the hardest to make work.
We are beginning to see what this ban looks like in Australia, but I don't think we can draw conclusions about its impact yet. What we do know is that the number of VPN users there seems to be skyrocketing. Kids are smart enough to know how to get around the ban. Maybe regulators will tighten the loopholes and figure out how to keep kids out.
However, I think this is a dangerous direction.
For many folks around the world, access to a mobile phone is the way to access the internet. If you don't have access, it impacts learning opportunities and opportunities to be in touch for safety. So, this could be a troubling roadblock that affects different people differently. Affluent parents, or those who want their kids to have access, will ensure they do. But the parents who aren't engaged or savvy, or who don't have the time, won't, and their kids will end up with digital redlining.
Do blanket social media bans work, and are we ready for the trade-offs that come with them?
Australia is a giant experiment, and we'll have better data in the future. But Australia is not representative of the entire world; other jurisdictions will have different outcomes.
From what we're hearing anecdotally about Australia so far, kids are finding alternative spaces. They are managing to sign on as adults. I worry that we might end up pushing them to places where there's less oversight.
We are monitoring big social media services, but there’s a big world out there of all sorts of services that change and grow quickly. Other services might not have the tools, oversight, or monitoring. That doesn’t mean that big players should be off the hook. But I just don’t see how a ban is a long-term solution.
If blanket bans don’t work, what is the alternative?
I think we're rushing a little too quickly, without recognizing that there are a lot of pieces to this puzzle. We need to do this in a risk-based way, with privacy trade-offs that are more proportionate.
If you are a service that both kids and adults use, you need to ensure that those communities are separate so that we won’t need to worry about predators.
However, when we start putting those gates in place, we’re creating a forced identification system. We live in a world where there are real concerns about law enforcement and government access to data. So mandating an identification system for general services ends up being a privacy trade-off.
We need to use logic. If you're an adult going into a kindergarten, there's no reason you get to be anonymous, right? We understand that these are youth-oriented environments, and even though that burdens some adults, we can't just walk into a school and roam around. We need to share who we are: Are we a parent? We need to identify ourselves.
But that puts a burden on adults, especially when there’s such a wide variety of IDs, and in some countries, not everybody has hard government-backed IDs that can be used easily.
Do you think AI could play a role in creating safer spaces for kids online?
When AI first started being used as a tool for moderation, I was very skeptical, because even human moderators have not always been successful at understanding context and making decisions.
However, I’ve seen a lot of evidence that AI is increasingly able to have a good sense of the context of what is going on.
Take the example of a chat room: When someone reports a line or a paragraph, how can a human moderator know whether it's really inappropriate? They need to understand what kind of chat room it is, and sometimes read the conversation before and after that line to understand whether it was appropriate. It's complicated.
I think AI will be a game-changer for some oversight and protection. It’ll take a while, but it should allow some players to offer better protection at scale than they’ve been able to so far.
What do you think is the role of the platforms in keeping all their users — minors and adults — safe?
Here’s what I really want platforms to do: Keep it simple.
When we were working on parental controls at AOL in the beginning, it was fairly simple: Just allow a webpage or don't allow a webpage. As the internet became big and interactive, it got complicated. It wasn't that parents lacked options. Actually, the more options there are, the more complex it gets.
With the tools and options that are available today, we can't expect even the most tech-savvy parents to stay in control. It's like sitting in an air traffic control booth all day, navigating, controlling, and overseeing everything kids do.
We need to let parents do this without a huge burden. And that's where there's really been something of a failure. Platforms need to make these tools dummy-proof and built for busy parents. It needs to be so easy that we can do it without having to sit down and become experts.
What can we do to keep our children safe online?
For people with young children: They need your help. So you initially sit with them and show them how to use something. Then perhaps you review and say, “Okay, you can use that” or “Okay, I’m going to buy you a phone and set up parental controls.”
When they get a bit older, maybe 16 or 17, they deserve a lot more privacy. You're not really as much in their business, and that's perhaps when they are most at risk. They might be approached by adults online, or they can really get into trouble. They can travel to meet somebody, and so on. And that's when we've often retreated, because they want their privacy.
It's about building that trusted relationship so that you are still part of the conversation.
We have had all these dangers to kids before, too; now they're happening online. The answer isn't always digital. It's all of us figuring out how to grapple with the ugly parts of life, whether they're physical or digital. If something is happening on the playground, do you barge in to hear what's going on? You have the same opportunities to step in online. So, without invading their privacy or being overbearing, this has to be a partnership between you and your kid.




