Law and order in the metaverse? 'Possible but difficult,' says Sydney-based innovation lawyer
- Enforcing real-world laws in virtual worlds is complex, as it raises issues regarding jurisdiction, sovereignty, and the application of laws to virtual spaces. Roblox recently brought an action against a content creator, alleging the individual had been engaging in harassing behaviour in violation of the Roblox terms of service.
- Interpol said earlier this year that it is looking at policing metaverse crimes, but it’s difficult to define a “crime” in the metaverse. It will be a complex area and may require some global coordination by relevant authorities.
- In a decentralized and largely unregulated space like the metaverse, it can be difficult to enforce copyright law. One solution is to use blockchain technology to create a system of digital rights management for metaverse assets.
Nick Abrahams is the Global Leader of Technology & Innovation for Norton Rose Fulbright (NRF) Australia. NRF has more than 4,000 lawyers in more than 50 offices around the world. Abrahams is a speaker on future trends and innovation, and he created the world's first AI-enabled privacy chatbot, Parker. He is the founder of the online legal site LawPath, and is on the boards of Integrated Research, a software company; the Garvan Foundation, the global leader in genomic research; the Vodafone Foundation; and the Sydney Film Festival. Joe Kornik, Editor-in-Chief of VISION by Protiviti, caught up with Abrahams to discuss law in the metaverse.
Kornik: What do you—and Norton Rose Fulbright—see as the biggest legal issues related to the metaverse? Are many of these Web 3 issues just extensions of what the legal system has been dealing with in Web 2?
Abrahams: The metaverse could be a truly transformative technological advancement, one that revolutionizes personal and commercial life. Many brands have already taken advantage of this and are using the metaverse as a medium for engaging employees and customers. However, operating in the metaverse comes with certain risks, a number of which are new and unique to Web 3. Some of the legal issues associated with the metaverse and Web 3 include:
- Data security and privacy: Metaverse projects may result in an increased collection of data, including personal information. Such information is at risk of exploitation, particularly where that information is transferred between platform operators, or between applications within a platform. Platform operators will need to ensure they have robust security measures in place in order to govern data transfers, maintain information security standards, and ensure compliance.
- Conduct: In Australia, the eSafety Commission fosters online safety by exercising powers under the Online Safety Act 2021. The Online Safety Act expanded and strengthened Australia's online safety laws, giving the eSafety Commission improved powers to help protect all Australians from online harm. Metaverse projects may involve collaboration between end users and represent another forum for online harassment, which raises a safety concern. As an example, Roblox recently brought an action against a content creator, alleging the individual had been engaging in harassing behaviour against other users in violation of the Roblox terms of service, as well as local fraud and abuse laws.
- Land ownership: Property laws govern physical real estate, and owners usually hold a deed that gives them an incontestable claim to their land. No equivalent claim exists in the metaverse. There is also the risk of theft: it is not uncommon for crypto assets to be stolen from an owner’s digital wallet by manipulating authentication steps. The owner then loses any practical claim to the digital property, and law enforcement agencies are unlikely to be able to retrieve the stolen assets given the decentralized and largely unregulated nature of the metaverse. Finally, we do not know how long metaverse platforms will keep operating. Investing thousands of dollars in a platform with no guarantee it will continue to exist is a real risk. Both Decentraland’s and The Sandbox’s terms and conditions state that the companies accept no liability if the platforms cease to operate, at each company’s sole and exclusive discretion.
- Intellectual property: Many companies have filed patent applications covering technologies that use biometric data to power what users see in the metaverse and to animate their digital avatars realistically. Many of these IP applications have been filed for brand protection, which is similar to, or an extension of, what we have experienced in Web 2.
Kornik: You touched on this earlier, but since no one entity owns the metaverse, and it’s largely unregulated, can we enforce real-world laws in virtual worlds? And are the consequences the same?
Abrahams: The question of enforcing real-world laws in virtual worlds is complex, as it raises issues regarding jurisdiction, sovereignty, and the application of laws to virtual spaces. In general, the enforcement of real-world laws in virtual worlds is possible but difficult. The lack of clear legal frameworks for virtual spaces means that it can be challenging to determine which laws apply and who has the authority to enforce them. Additionally, the fact that virtual worlds are created and operated by private entities further complicates matters, as these companies may have their own terms of service and community guidelines that users are expected to follow. For example, Interpol said earlier this year that it is looking at policing metaverse crimes, but it’s difficult to define a “crime” in the metaverse. It will be a complex area and it may require some global coordination by relevant authorities.
Kornik: I think one legal issue that pops to mind for a lot of people thinking about virtual worlds revolves around personal interactions in the metaverse. We’ve already heard about harassment, so how are we to think about these interactions from a legal perspective?
Abrahams: It is important to note that the legal landscape surrounding the metaverse is still evolving, and lawmakers, policymakers and industry stakeholders around the world are engaged in discussing these issues. Interactions in the metaverse obviously transcend physical boundaries, since they can involve participants from around the world. Given that different countries have different laws, enforcement can be complicated, especially when applying existing harassment laws to the metaverse. Platform operators, as we’ve noted, carry an additional layer of responsibility to take appropriate measures to protect users from, and prevent, anti-social behaviour online. Again, we think coordinated action will be required on a global scale, in particular establishing frameworks to share information, coordinate investigations and enforce laws across borders to help combat crimes in the metaverse.
Kornik: Some of the biggest success stories in the metaverse are from a branding or marketing perspective. What are the potential pitfalls there from a legal perspective? What should executives be aware of when entering the metaverse?
Abrahams: From a legal perspective, there are several potential pitfalls, including:
- Trademark and copyright infringement: In the metaverse, it's possible for users to create avatars, buildings and other digital assets and content that could potentially infringe on existing IP protections.
- Privacy issues: In the metaverse, users may share personal information or engage in behaviours that could raise privacy concerns.
- User-generated content: There is a potential for inappropriate or illegal content to be created by users in the metaverse, and it is important for companies to take steps to monitor and moderate user-generated content.
- Regulatory compliance: The metaverse is a rapidly evolving space, and clear regulatory frameworks may not yet be in place to govern certain activities. In-house legal counsel should ensure compliance with applicable laws and regulations.
Kornik: There’s been talk about regulation of the metaverse for a variety of reasons, including to keep checks on the private sector. What are your views of regulation in the metaverse and how could any regulation be enforced?
Abrahams: Existing data protection laws and regulations, such as Australia’s Privacy Act 1988, and regulatory bodies like the Office of the Australian Information Commissioner (OAIC) and the eSafety Commissioner offer a basic level of protection for users, but further consideration should perhaps be given to enacting specific laws and regulations, given the novel challenges and evolving risks posed by the private sector. Certainly, the Australian government is considering the creation of an AI Safety Commissioner (as recommended by the AHRC’s 2021 Human Rights and Technology Final Report). The European Union and the United Kingdom have proposed regulatory models that could serve as examples for how Australia might regulate AI and the metaverse. The European Commission has suggested implementing an AI framework that would take a risk-based approach to regulating, and even prohibiting, certain aspects of AI systems. The UK, on the other hand, has opted for a more flexible approach that emphasizes collaboration between government, regulators and business rather than specific legislation. Although these models are still in their developmental phase, they could help inform how Australia, and perhaps the rest of the world, decides to implement its own reforms and protections.