The promise of the Metaverse is inclusivity, delivered through its decentralised nature. But there is a challenge: can the AI that powers the Web3 experience and shapes our Metaverse interactions be trusted?
It is well documented that systems built with AI show discriminatory behaviour. The problem is that AI models are trained on data generated by human activity, and because humans are often biased, the AI inherits that prejudice. Part of our thinking at CreDA is that it is not a lack of data alone that stifles lending; within current systems for evaluating creditworthiness, the data itself is often infected with bias.
So here are four principles:
Create technology to address bias in the data – Awareness of the danger of bias means it can be countered. Algorithms can be shaped to identify the parts of the training data that cause bias and then remove or modify their impact. This step is important but has limitations: first, it is hard to make the modified AI accurately recognise the 'biased' data; second, even when that data is accurately identified, the complexity of the bias is often beyond the AI's ability to resolve.
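To make this concrete, here is a minimal sketch of one well-known approach: measuring the gap in favourable outcomes between two groups (the "disparate impact" ratio) and then reweighing records so that group membership and outcome become statistically independent. The group labels, data shape, and thresholds are illustrative assumptions, not CreDA's actual pipeline.

```python
from collections import Counter

def disparate_impact(outcomes):
    """Ratio of favourable-outcome rates between two illustrative
    groups "A" and "B". outcomes: list of (group, approved) pairs.
    A ratio well below 1.0 suggests the data disadvantages group B."""
    counts, approvals = Counter(), Counter()
    for group, approved in outcomes:
        counts[group] += 1
        approvals[group] += int(approved)
    return (approvals["B"] / counts["B"]) / (approvals["A"] / counts["A"])

def reweigh(outcomes):
    """Assign each record a weight so group and outcome become
    independent (the classic 'reweighing' mitigation):
    weight = P(group) * P(outcome) / P(group, outcome)."""
    n = len(outcomes)
    group_p = Counter(g for g, _ in outcomes)
    outcome_p = Counter(a for _, a in outcomes)
    joint_p = Counter(outcomes)
    return [
        (group_p[g] / n) * (outcome_p[a] / n) / (joint_p[(g, a)] / n)
        for g, a in outcomes
    ]
```

After reweighing, the weighted approval rates of the two groups match exactly, which is precisely the limitation the paragraph above notes: the technique only corrects bias the algorithm can see in the labels it was told to look at.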
Don’t fully rely on the AI – The limits of the technological answer lead to the principle that we should never fully rely on the AI, but should also build human supervision into our systems. The obvious contradiction is that the concern about bias stems from human sources in the first place. Here the principles of a decentralised community come into focus. A diverse community can define protocols that become guard rails against the natural propensity for bias in both people and technology. This places great responsibility on the ethics driving the protocols of the community, DAO or Web3 network. Yet if these bottom-up guard rails are in place and managed with integrity, a hybrid model against bias can be created.
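The hybrid model above can be sketched as a simple routing rule: the model decides only when it is confident, and everything else escalates to a diverse review panel whose verdict needs a quorum. The function names, thresholds, and quorum size are hypothetical placeholders for whatever a community's protocol actually specifies.

```python
def route_decision(score, confidence, review_threshold=0.8):
    """Hybrid gate: let the model act only when confident; otherwise
    escalate to human review under the community's protocol.
    (Threshold of 0.8 is an illustrative assumption.)"""
    if confidence >= review_threshold:
        return "auto_approve" if score >= 0.5 else "auto_decline"
    return "human_review"

def panel_verdict(votes, quorum=3):
    """A reviewed case is final only when at least `quorum` panel
    members agree, mimicking DAO-style guard rails against any one
    reviewer's personal bias."""
    approvals = sum(votes)
    declines = len(votes) - approvals
    if approvals >= quorum:
        return "approve"
    if declines >= quorum:
        return "decline"
    return "escalate"
```

The design point is that neither side acts alone: low-confidence cases never auto-resolve, and no single human can overrule the panel.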
Build the capability to verify decisions – This community model works not only through people and protocols; additional data sources and visualisation can also be added to the system. For example, with credit4good the bias in lending to farmers is addressed via geo-tagging data that overcomes assumptions about farms as collateral. By combining data in this way, a bigger picture can be built to create a ‘data’ metaverse that questions an AI decision, addressing both ignorance and assumptions through protocols and smart contracts.
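A rough sketch of that cross-check, assuming a decline driven by collateral assumptions can be tested against an independent geo-tagged registry record (all field names here are hypothetical, not credit4good's actual schema):

```python
def verify_decision(decision, geo_record):
    """Cross-check a model decline against independent geo-tagged
    collateral data. If the registry confirms the farm exists and its
    surveyed value covers the loan, flag the decline for review
    rather than letting the model's assumption stand."""
    if decision["outcome"] != "decline":
        return decision  # only declines are second-guessed here
    if geo_record.get("verified") and geo_record.get("surveyed_value", 0) >= decision["loan_amount"]:
        return {
            **decision,
            "outcome": "flag_for_review",
            "reason": "geo-tagged collateral contradicts model assumption",
        }
    return decision
```

In a Web3 setting the same check could sit inside a smart contract, so the challenge to the AI's decision is enforced by protocol rather than goodwill.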
Give the Metaverse a soul – The exciting aspect of smart contracts is that these immutable agreements and their associated identities can build a soul into the system. The idea of Soul Bound Tokens (SBTs) is just this kind of system. At the moment the SBT focuses on creating a consolidated identity; longer term, however, the SBT will create a platform for managing the broader integrity of people’s experience, beginning the process of managing a soul for the Metaverse.
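The defining property of an SBT is that, unlike an ordinary token, it cannot be transferred away from the identity it was minted to. Real SBTs live on-chain as smart contracts; this Python class is only an off-chain illustration of that one invariant, with made-up addresses and credential names.

```python
class SoulBoundToken:
    """Illustrative sketch of the 'soulbound' invariant: once minted
    to an owner, the token cannot move. This is not an on-chain
    implementation, just the core rule expressed in Python."""

    def __init__(self, owner, credential):
        self._owner = owner          # bound permanently at mint time
        self.credential = credential  # e.g. a credit-history attestation

    @property
    def owner(self):
        return self._owner

    def transfer(self, new_owner):
        # The whole point of an SBT: identity cannot be sold or lent.
        raise PermissionError("soulbound tokens are non-transferable")
```

It is this non-transferability that lets a consolidated identity accumulate reputation worth trusting.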
