The Embodied Internet: How to Create Safe Spaces

Cortney Harding
3 min read · Aug 5, 2022

The famously calm and balanced UK media has recently been all aflutter about the dangers of the embodied internet. If you can believe it, women on the embodied internet are, gasp, being harassed. The way the coverage portrays it, this seems to happen only on the embodied internet, so I guess when some dude loudly commented on the length of my dress a few days ago, I was actually in a simulation.

I don’t mean to make light of the harm and trauma that come from harassment on the embodied internet. It is a serious problem that needs to be managed, and this piece will offer several ideas for how that could be done. But I also want to make sure we place everything in the proper context: just like Soylent Green, the embodied internet is people. And people, whether in the real world, on social media, or on the embodied internet, can be absolutely awful.

But given how new the space is, we have the ability to build something better from the ground up. We can look at the lessons of web2 and apply them toward making a better web3. A big part of the problem in web2 is that the huge companies that own the social spaces we share didn’t build with long-term intentionality in mind. Now that those spaces have turned toxic, the companies are afraid they’ll lose money if they impose rules. In web3 we can plan ahead for the problems we’ve seen arise in web2 and set up the proper parameters so that users can enjoy a better experience.

Luckily, it will be much harder to create bots on the embodied internet, given that voice is the primary way we’ll be interacting with one another. Somewhere down the road, AI may get good enough to impersonate humans in real conversation, but that won’t happen for a while. Until then, it will be pretty easy to tell whose voice sounds real and who sounds like a robot, and you can’t cut and paste an entire spoken interaction the way you can a social post.

There’s a school of thought that people won’t say aloud the same awful things they type, but as someone who has existed as a woman in the world for more than a few decades and had plenty of vitriol spewed directly into my face…yeah, people are still going to be awful. One thing we can do is create rules so that moderators can ban people whose actions fall outside community guidelines. And given the newness of the space, we should set moderation guidelines from the start rather than playing catch-up.

We could also institute ranking systems in which people who don’t contribute to conversations in a meaningful way are docked points and eventually banned. This isn’t meant to discourage respectful disagreement, but it would go a long way toward tamping down the behavior we see on social media today, where people are rewarded for fear-mongering and radicalism.
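To make that concrete, here’s a minimal sketch of what a point-based ranking system could look like. Everything here is hypothetical: the class name, the point values, and the ban threshold are illustrations of the idea, not any platform’s actual system, and a real one would need tuning and an appeals process.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a point-based reputation system: users start
# with a score, moderation events adjust it, and hitting the threshold
# means a ban. All names and numbers here are illustrative.

@dataclass
class ReputationTracker:
    starting_score: int = 100
    ban_threshold: int = 0
    scores: dict = field(default_factory=dict)

    def record(self, user_id: str, delta: int) -> None:
        """Apply a moderation event, e.g. -10 for an upheld guideline strike."""
        current = self.scores.get(user_id, self.starting_score)
        self.scores[user_id] = current + delta

    def is_banned(self, user_id: str) -> bool:
        return self.scores.get(user_id, self.starting_score) <= self.ban_threshold


tracker = ReputationTracker()
tracker.record("user_123", -10)  # harassment report upheld by a moderator
tracker.record("user_123", +2)   # constructive contribution
print(tracker.is_banned("user_123"))  # False until the score reaches the threshold
```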

Allowing users to create and control guardians around their personal space is another straightforward solution, and one several platforms are already implementing. The guardian should be on by default, and a user can switch it off when they want to interact with someone up close. You could also simply make a user disappear from your world if they are being inappropriate, something we unfortunately can’t do in the real world (Sarah Jacobson fever dreams aside).
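Here’s a rough sketch of that guardian behavior, assuming a simple radius check in world coordinates. The radius, the per-user allowlist, and the “disappear” mechanic are my own illustrative assumptions, not any particular platform’s API.

```python
import math

# Hypothetical sketch of a personal-space guardian: on by default,
# with a per-user opt-out for closer interaction, plus a blocklist
# that removes a user from your world entirely.

class Guardian:
    def __init__(self, radius: float = 1.2):
        self.enabled = True   # on by default
        self.radius = radius  # meters of personal space
        self.allowed = set()  # users you've chosen to let in close
        self.blocked = set()  # users removed from your world entirely

    def allow(self, user_id: str) -> None:
        self.allowed.add(user_id)

    def block(self, user_id: str) -> None:
        self.blocked.add(user_id)

    def is_visible(self, user_id: str) -> bool:
        return user_id not in self.blocked

    def may_approach(self, user_id: str, my_pos, their_pos) -> bool:
        """Can this user occupy that position relative to me?"""
        if user_id in self.blocked:
            return False
        if not self.enabled or user_id in self.allowed:
            return True
        return math.dist(my_pos, their_pos) > self.radius


g = Guardian()
g.block("troll_42")
print(g.is_visible("troll_42"))                              # False: gone from my world
print(g.may_approach("stranger_7", (0, 0, 0), (0.5, 0, 0)))  # False: inside my bubble
```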

If we all contribute, we will be able to build an embodied internet that we want to spend time in and enjoy, rather than dealing with the stressful and often unpleasant web2 experience we seem to be stuck with today.
