This attitude and behavior pattern seems to be carrying over into the Metaverse. After a couple of reported assault cases during a Metaverse preview, Facebook’s response was to say that the people in question should have enabled the Safe Zone feature. That is the equivalent of telling an assault victim they should not have been there, or should not have been wearing that. Instead of acknowledging that an assault happened and working to deter that behavior, Facebook deflected ownership of the issue and turned it back on the user. Just as in real life, people should be able to use a platform like Facebook, Instagram, or any VR, AR, or XR app without the fear of being assaulted.
The internet as a communications platform gave many people a feeling of empowerment. That empowerment also let more aggressive and socially unacceptable behaviors spill out. The troll was reborn as someone who felt safe enough to spew hate and vitriol in online conversations. This migrated into more aggressive forms of behavior like cyberbullying, cyberstalking, and the exploitation of targets through social media connections (including the sending and receiving of illegal material and messages).
With this kind of history, how in the world does anyone think that Meta can properly police and prevent this behavior in their Metaverse world? If anything, the promise of a more engaging and complete experience might drive people to the platform looking for their fix. That seems to be backed up by at least one report from early Horizon Worlds testing in December 2021.
The user in question reported the incident in the official Horizon Worlds Facebook group, saying, “Not only was I groped last night, but there were other people there who supported this behavior which made me feel isolated in the Plaza.” Facebook’s response, as mentioned before, was that the user was not using the right safety tool. Personally, I am not sure how any of this was the assaulted user’s fault. The fault lies 100% with the people performing the action. Meta has a responsibility here and must take ownership of it. When this gets pushed out to more and more people, children are going to gain access, and things will only go downhill from there.
I would love to believe that Facebook/Meta will get the message and make their platform safe, instead of pushing the burden back on users to protect themselves from bad actors. We know that they can and will step up to police posts about specific topics, including dropping dubious “fact checks.” However, their history with this kind of thing does not instill confidence in me. If anything, it tells me that this behavior will not go away, and that the Metaverse may become an even more accessible platform for it.