Scientists use Wi-Fi signals to track human movement for the metaverse


A team of researchers from Nanyang Technological University in Singapore recently unveiled a new method for tracking human movement for the metaverse. 

One of the key features of the metaverse is the ability to represent real world objects and people in the digital world in real time. In virtual reality, for example, users can turn their heads to change their viewpoints or manipulate physical controllers in the real world to affect the digital environment.

The status quo for capturing human activity in the metaverse uses device-based sensors, cameras, or a combination of both. However, as the researchers write in their preprint research paper, both of these modalities have immediate limitations.


A device-based sensing system, such as a hand-held controller with a motion sensor, “only captures the information at one point of the human body and thus cannot model very complex activity,” write the researchers. Meanwhile, camera-based tracking systems struggle with low-light environments and physical obstructions.

Enter WiFi sensing

Scientists have used Wi-Fi sensors to track human movement for years. Much like radar, the radio signals used to send and receive Wi-Fi data can also be used to sense objects in space.

Wi-Fi sensors can be fine-tuned to pick up heartbeats, track breathing and sleeping patterns, and even sense people through walls.

Metaverse researchers have experimented with combining traditional tracking methods with Wi-Fi sensing to varying degrees of success in the past.

Enter artificial intelligence

Wi-Fi tracking requires the use of artificial intelligence models. Unfortunately, training these models has proven difficult for researchers.

Per the Singaporean team’s paper:

“Existing solutions using Wi-Fi and vision modalities rely on massive, labeled data that are very cumbersome to collect. … We propose a novel unsupervised multimodal HAR solution, MaskFi, that leverages only unlabeled video and Wi-Fi activity data for model training.”

In order to train the necessary models required to experiment with Wi-Fi sensing for human activity recognition (HAR), scientists have to build a library of training data. The datasets used to train AI can contain thousands or even millions of data points depending on the aims of the particular model.

Often, labeling these datasets can be the most time-consuming part of conducting these experiments.

Enter MaskFi

The team from Nanyang Technological University built “MaskFi” to overcome this challenge. It uses AI models built using a method called “unsupervised learning.”

In the unsupervised learning paradigm, an AI model is pretrained on unlabeled data, learning the structure of its inputs through repeated iterations until it can predict output states with a satisfactory level of accuracy. This allows researchers to focus their energy on the models themselves instead of the painstaking effort it takes to build robust, hand-labeled training datasets.
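To make the idea concrete, here is a minimal toy sketch of the general masked, self-supervised training pattern: part of each unlabeled sample is hidden, and a model is fit to reconstruct the hidden part from what remains, so no human-provided labels are needed. All names and the synthetic data below are illustrative assumptions, not the authors' actual MaskFi implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for unlabeled sensor data: 500 samples,
# 8 correlated channels driven by 3 shared latent factors plus noise.
basis = rng.normal(size=(8, 3))
latent = rng.normal(size=(500, 3))
X = latent @ basis.T + 0.05 * rng.normal(size=(500, 8))

# "Mask" the last two channels; the model must reconstruct them
# from the six visible channels -- the labels come from the data itself.
visible, hidden = X[:, :6], X[:, 6:]

# A linear "model": least-squares reconstruction of the masked channels.
W, *_ = np.linalg.lstsq(visible, hidden, rcond=None)

# Low reconstruction error shows the model picked up the shared
# structure from unlabeled data alone.
pred = visible @ W
mse = float(np.mean((pred - hidden) ** 2))
print(f"masked-channel reconstruction MSE: {mse:.4f}")
```

Real systems swap the linear solve for a deep network and the synthetic matrix for raw video and Wi-Fi frames, but the training signal is the same: reconstruct what was hidden.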

Source: Yang, et al., 2024

According to the researchers, the MaskFi system achieved about 97% accuracy across two related benchmarks. This suggests that, with further development, the system could serve as the catalyst for an entirely new metaverse modality: a metaverse that provides a 1:1 real-world representation in real time.

Source: Cointelegraph