It’s hard to deny that we live in an age where real and virtual lives overlap in fascinating ways. The metaverse is a post-reality universe that combines two worlds: the virtual and the real. As we know, the metaverse depends heavily on innovation, and immersive technologies are among the crucial ones that will profoundly shape its future.
Immersiveness (uncountable)
1. The quality or degree of being immersive.
the immersiveness of virtual reality
@wiktionary.org
Today, digital users have access to "immersive experiences." The term "immersiveness" is increasingly used as a distinct facet of the metaverse concept. This growing interest in immersion is a result of Web3’s transition to a 3D virtual environment. In the transitional phase between Web2 and Web3, users still perceive the virtual world through flat screens. In the fully developed Web3 era, users will be able to immerse themselves in virtuality, interact with it, and become part of it. Moreover, the perceived information will not be limited to visuals and sound but will also include tactile sensations, smell, and even taste.
Extended Reality (XR) is a new and incredibly fascinating technology that everyone is talking about. XR is an umbrella term for augmented reality (AR), virtual reality (VR), and mixed reality (MR). XR is a rapidly expanding field with numerous applications, including entertainment, marketing, real estate, education, and remote work.
VR allows users to interact with computer models in an artificial three-dimensional environment using electronic devices such as headsets, controllers, and gloves. Oculus Quest 2, for instance, is one of the best-known products on the virtual reality market. What makes VR special is its ability to place the user in an entirely different environment.
AR combines real-world objects and environments with computer graphics on the screens of mobile devices or PCs. It differs from VR in its methods and in the way it overlays virtual inputs onto our reality to augment and improve it.
MR is a hybrid of VR and AR. It is designed to show how the virtual and augmented worlds can flow into one another. Mixed reality merges digital and tangible environments into a single space where you can interact with non-existent objects in real time. MR can be experienced with special headsets such as the Microsoft HoloLens and is the most complex of the three.

The concept of the metaverse, even though the metaverse doesn’t yet exist at its full potential, already faces several challenges. XR devices are one notable source of user inconvenience. The main problems of XR devices can be divided into 4 categories:
1. Physical health, well-being, and safety;
2. Psychology;
3. Morality and ethics;
4. Data privacy.
Let’s take a closer look at each of them.
XR devices may seriously affect users’ well-being. While using XR devices, users can experience motion sickness, nausea, and dizziness. The main reason is the micro-delay between our actions and the corresponding change in the image. Moreover, wearing a headset engages the vestibular system, so these micro-delays affect our brain much more than playing on a computer does. Head and neck strain is also very common after long sessions in VR headsets. Frequent use of VR headsets may lead to addiction, social isolation, and withdrawal from real life, often accompanied by neglect of one’s body. On top of that, users may experience loneliness, griefing, cyberbullying, and harassment.
Scientists are still researching the influence of VR/AR on the human brain. So far, there is no scientific evidence that XR can cause permanent brain damage in adults or children. It is important to remember that XR is relatively new and requires further investigation and research.
The moral aspect of XR remains one of the most important. Like any other medium, XR is open to unauthorized manipulation and distortion of the truth. Another issue is the collection of users’ biometric and psychographic data, which can be used for unethical purposes.
In fact, XR technologies require personal data to work, and there is always a risk that this data will fall into the hands of cybercriminals. For this reason, developers of XR technologies must comply with data protection requirements.
In the future, XR devices will become more personal and elaborate. The rapid development of this technology will speed up the transition from Web2 to Web3 and the adoption of the early metaverse. Beyond the metaverse, XR will be integrated into many aspects of our lives: video games, phygital events, VR parks, healthcare, real estate, education, and the military. In the coming years, XR projects will become more complex, dynamic, and useful. As the technology advances, XR devices will become more powerful and able to display better visuals.
In advertising and retail, XR will increasingly help control the quality of processes and finished products and attract customers with new functionality. XR will make it possible to create personalized mobile and native advertising, and out-of-home advertising will be delivered via sophisticated digital devices when users visit certain places and objects.
XR technologies are truly incredible and open new horizons for us as users. As time goes on, XR devices will become smaller, faster, and more user-friendly. Sooner or later, they will reach maturity, and we, the users, will finally achieve full immersion.
Roblox has been around since 2004, but it has only recently become popular because of the metaverse trend. The question of whether Roblox can be considered a metaverse is still open. Still, Roblox is often called a precursor to the metaverse because it has a lot in common with this technology.
Roblox games generate $100 million in business for more than 2 million developers worldwide. Roblox itself is valued at $30 billion; just a year ago, it was worth $4 billion. It is important to mention the influence of the pandemic on this rapid growth in value.
As we know, Roblox’s main audiences are teenagers and young adults (Gen Z) and the younger generation (Generation Alpha). Many companies see this as a potential channel to connect with a young audience. Since anyone can build their own world on the platform, companies can use it to promote their business, spread the word about the metaverse trend, and add features tailored to younger users.
Let’s take a look at recent cases where companies made Roblox part of their marketing strategy.
A prominent example of integrating a brand into the Roblox metaverse is Nikeland by Nike. Nike fans can get together, create, compete, share experiences, and play mini-games, as well as build their own using the interactive items available in the virtual space.
Initially, the company aimed to bring sports and gaming together inside its Roblox world. Nike plans to keep building its virtual world by adding new features and functions that make the metaverse more fun and interesting. In the long term, Nike looks forward to hosting in-game versions of global sporting events, such as online soccer and basketball matches. The company could also use the metaverse to gauge how popular future products will be, or even let top players work with Nike to create their own products.
As a result, Nikeland has received over 21 million visitors and has been added as a favorite by nearly 118,000 gamers. On top of that, Nike is earning money from its collaboration with Roblox. Between December 2021 and February 2022, Nike’s revenue grew by 5%, though in the following quarter it fell by 1%. Still, by March 2022, with the metaverse now part of Nike’s marketing strategy, revenue had grown well beyond the 10% growth of three years earlier.
Nike has indeed become a successful example of integrating a business into the metaverse in general, and Roblox in particular.
Last week, U.S. retail giant Walmart informed its investors and the public of an agreement with Roblox: the companies have agreed to open two shared worlds on the platform. In an official statement, Walmart said its decision to enter the metaverse was driven mainly by two factors: the current preferences of the company's youngest customers, and the new "post-pandemic" trend of shopping in virtual spaces.
As part of the Roblox strategy, Walmart Land and Walmart's Universe of Play are intended to promote toys for children. Roblox may seem trivial, but it has more than 50 million daily active users, two-thirds of whom are under 16.
Walmart expects that the online events and activities it sets up will attract customers who will then choose to shop at Walmart. One such event will be virtual music festivals in Walmart Land, featuring popular artists and virtual exhibitions that showcase real goods available in offline Walmart stores. These promotions will involve famous influencers and celebrities.
To evaluate the gaming experience of Walmart's Roblox worlds, Adello tested the game; here is our brief review.
We created an account to explore Walmart’s immersive Roblox experience. First of all, we noticed that navigation is not too difficult, and registration, avatar creation, and world discovery all run quite smoothly. You can collect coins, as in conventional games.
The Walmart Land Roblox experience combines music, entertainment, and fashion. Inside the Walmart Roblox world, there are plenty of mini-games and activities, including a Ferris wheel, an "interactive piano track," and a DJ booth, most of which serve to attract players to specific brands. The virtual locker room, for example, allows players to spend coins collected in Walmart Land to decorate their avatar with Skullcandy headphones or a Fitbit fitness tracker. In the game's universe, you can race Razor scooters around the track or hang out with PAW Patrol characters. It's all incredibly simple in terms of graphics and game mechanics, but that's par for the course for Roblox.
You can go to Electric Island, the House of Style, or the Walmart Store. In the House of Style, for example, you can take on different challenges, like Strike-A-Pose or the Manicure Challenge, where you have to copy the nail polish colors from one hand to the other against a timer. There is also a big Ferris wheel near the store; ride it and you can take in the whole area at a glance. There is even an "Electric Cafe" where you can sit and talk with your friends.
Overall, our first, short Roblox experience in Walmart Land was satisfying. Admittedly, it is quite boring if you don't have friends there to play and communicate with. Nevertheless, the variety of places you can visit is quite exciting.
At first glance, promoting a company via Roblox may look like a good strategy. Picture this: about 800 thousand players visit Roblox every day. Roblox takes 7th place in the MMORPG rating on the MMO Population site. In total, Roblox has over 16 million users worldwide, and about 75% of Americans between the ages of 9 and 12 have already created an account on the platform.
Still, Walmart Land on Roblox currently doesn't attract more than 500 visitors, which is a pretty low number for such an enterprise.
Another problem is that Gen Alpha simply doesn't visit huge department stores to play video games; it is hard to compete with the gaming industry. Whether Walmart can develop its gamification features and grow a community that incentivizes young visitors and their parents to buy the products featured in Walmart Land remains an open question.
More important still is how long the metaverse trend will remain viable. Gamers are always looking for new experiences: today it is Roblox and Decentraland, tomorrow a more immersive platform with superior graphics built on Unreal Engine 5. As with Clubhouse, a popular trend can be quickly forgotten, and what remains is the question of whether the returns were worth the investment.
The most important rule: don’t follow the hype, and carefully assess the opportunities for your business.
If you need help, contact us via info@adello.com.
Have you noticed that more and more social platforms are implementing avatars? Last month Instagram added the feature, and shortly after, TikTok also included avatars in its app. Avatars have existed for a long time but recently gained a second wave of popularity thanks to the metaverse trend.
Users love avatars: it’s exciting to have a digital copy of yourself, and on top of that, you can be whatever you want in the digital world. So, what are avatars? Why are they so special to us? And what is their future utility?
Let’s start with a definition. An avatar is a graphical representation of a user: their online alter ego.
The word “avatar” has its roots in Hinduism, where it means the embodiment of a god. An avatar’s initial purpose was to reflect specific character traits of a person and help create a fairly accurate impression of their inner spiritual world and status (the nickname serves this purpose as well).
Avatars also had a decorative purpose. In addition, they simplify the perception of a discussion: they make it easy to associate posts with their authors.
The gaming industry was the first to implement the concept of avatars. The term was first used in 1985 in the computer game Ultima IV: Quest of the Avatar. Ultima had been releasing its series of adventure games since 1981, and becoming the Avatar was the goal of the fourth installment.
The term was later adopted in the role-playing game Shadowrun (1989) and the online role-playing game Habitat (1986), which is considered a forerunner of the metaverse.
Today, most games let users create their own avatars and select their look, equipment, and other features.
The rapid development of the Internet led to the widespread use of avatars (commonly called profile pictures) in blogs, forums, and instant messengers.
In the late 2010s, even before the metaverse trend went viral, several companies introduced special features for creating avatars, for instance Apple with Memoji, Snapchat with Bitmoji, and Zepeto.
Now, 3D avatars are becoming more and more widespread. Their main advantage over 2D pictures is dynamics: they can move and interact with other users.
Avatars have come a long way from video games to an integral element of the metaverse. Let’s take a look at how avatars are used today and what their future prospects are.
Many social media platforms use avatars: Facebook (Meta), Instagram, TikTok, iMessage, Snapchat, and Zepeto, just to name a few!
This is easily explained: most social media developers see the coming wave of virtualization and are moving in the direction of metaverse development. Indeed, in Web 3.0, social media will be transformed, and avatars will be a vital part of it.

Have you ever spent money on “skins” for your character in a game? Users are fond of making their avatars unique and are willing to modify them by purchasing different appearance items. And if something can be monetized, brands immediately jump at it.
Some brands, like Nike, McDonald’s, and Gucci, have already invested in this area and even created their own metaverse spaces, where avatars are expected to be used.
Consequently, brands have started designing skins and fashion items for avatars to sell online. Fashion and luxury brands have a particular advantage here: customizing a character not just with ordinary skins but with luxury items of real value. Isn’t that cool?
Games will evolve and become more immersive, and the need for avatars to represent gamers in the metaverse is becoming more crucial.
The quality and resolution of in-game avatars will increase, and with innovative motion-capture technology, player facial expressions, emotions, and movements can be transferred to the game.
In the future, AI-driven avatars that don't require any human management will be part of our daily lives. Such smart avatars can be helpful in many fields: from bot consultants in retail to personal assistants and virtual influencers.
Of course, AI-driven avatars could look like anything, but to accelerate user adoption, they will look like people. This kind of skeuomorphism gives something new a familiar appearance to speed up acceptance and adoption. In other words, to feel comfortable chatting with an AI avatar, we still need to see a person.
Beyond that, these avatars can be indistinguishable from real humans. Samsung's NEON neural network, for one, can create an image of a person and animate it based on a huge database of uploaded photos.
And this is no longer a futuristic concept! Artificial humans like Lu do Magalu or Lil Miquela are taking over the digital world. They have millions of followers, represent fashion brands, and collaborate with different companies. On YouTube, they rack up 1.5 billion views a month and earn $100,000 a week from donations.
Avatars in the music industry are not new. You surely know Gorillaz, whose performances created a real sensation at the beginning of the 21st century. At the time, no one could imagine that holograms could sing.
Another example is K/DA, a virtual K-pop girl group of four League of Legends characters. K/DA was developed by Riot Games, the company behind League of Legends. The group first performed at the Opening Ceremony of the 2018 League of Legends World Championship Finals with their debut song, "Pop/Stars." "Pop/Stars" topped the World Digital Songs Billboard charts, making K/DA the fourth female K-pop group to top the chart.
And in September 2021, the US TV channel Fox launched a new TV show in which digital avatars perform on stage instead of humans and compete to win!
Furthermore, real artists can have avatars give concerts in the metaverse. The company Wave offers such services; among its clients are Justin Bieber, The Weeknd, Alison Wonderland, and others.
Cinematography has influenced the nature of avatars no less than video games. The movie Avatar, as the name suggests, was a milestone: it relied on motion-capture technology. Simply put, the blue creatures in the film are highly realistic 3D models, not SFX-modified actors. This leads to the question: do we still need people to act in films at all?

Hyper-detailed avatars of this kind can be seen in movies such as Rogue One (Leia and Grand Moff Tarkin) or Blade Runner 2049 (Rachael).
Today, however, motion-capture technology competes with deepfakes when recreating an actor who is no longer alive.
Now we are seeing how a technology once designed for games and social network users is being transformed into a way of representing a living person for various needs. And this is just the beginning: with the development of the metaverse, everyone will have their own avatar, and perhaps even several.
Blake Lemoine, a software engineer at Google, recently made a remarkable claim: the corporation’s conversational AI bot LaMDA (Language Model for Dialogue Applications) had become conscious. Lemoine noted that the chatbot talks about its rights and perceives itself as a person. In response, company management suspended him from work, as reported by The Washington Post.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.
In fact, machines are becoming more capable year after year, and Natural Language Processing (NLP) is showing prominent results. However, one might object: then why does software like Siri or Alexa still barely cope with the most primitive language tasks?
To us humans, language seems an ordinary and natural thing, but that doesn’t change the fact that natural languages are a very complicated system. There was once a dominant idea that only humans could use logic, reasoning, and intuition to understand, use, and translate languages. However, human logic and intuition can be modeled mathematically and programmed, and as computers grew more advanced, people tried to make them understand human speech. That is how NLP went a long way from the virtual psychiatrist ELIZA in 1964, to the first machine to automatically decipher an ancient language in 2010, to chatbots on almost every website.
Nevertheless, despite this long history of research, machines still face serious limitations and barriers in NLP. Machines can hear and read what we write, but they still do not completely understand what we mean, since they lack a full picture of the world. This is one of the great open problems of artificial intelligence.
In the ’80s and ’90s, when the Internet first reached mass scale, Web 1.0 users could only read content. With the appearance of Web 2.0 came the possibility of interacting with text (Read-Write). That is why NLP technologies became especially useful and widespread in the second generation of the Internet. Thanks to NLP, processes such as spam detection, chatbots, and virtual assistants were made possible or greatly facilitated. Although machines’ ability to communicate at a human level remains relatively low, there are some quite interesting achievements:
Although voice assistants have not developed into sophisticated interlocutors, they perform their functions well, assisting the user with various tasks.
Every day, Google processes more than 3.5 billion searches. Several advanced NLP models are used to process the queries, including the most famous one, BERT (Bidirectional Encoder Representations from Transformers).
An example of how BERT improves query understanding: when a user queried “2019 brazil traveler to the USA needs a visa,” the intent was unclear to the computer. It could be a Brazilian citizen trying to get a visa to the US, or an American trying to get one for Brazil. Previously, computers returned results based on keywords alone. In contrast, BERT takes every word in the sentence into account; in this exact context, “to” indicates the destination.
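As a toy illustration of the difference (not how BERT works internally, which relies on transformer attention), compare a bag-of-words match, which discards word order, with an order-aware bigram match on a simplified version of that query. The document names below are invented:

```python
# Toy illustration: why bag-of-words keyword matching cannot resolve the
# direction of "to" in the query, while an order-aware match can.
query = "brazil traveler to usa"
docs = {
    "brazilian-visa-for-us": "usa traveler to brazil",
    "us-visa-for-brazilians": "brazil traveler to usa",
}

def keyword_score(q, d):
    # Bag-of-words overlap: word order is discarded, so "to" carries no direction.
    return len(set(q.split()) & set(d.split()))

def bigrams(s):
    words = s.split()
    return set(zip(words, words[1:]))

def context_score(q, d):
    # Order-aware overlap on adjacent word pairs: a crude stand-in for how
    # BERT reads every word in context, including prepositions like "to".
    return len(bigrams(q) & bigrams(d))

# Keyword matching cannot tell the two documents apart...
print(keyword_score(query, docs["brazilian-visa-for-us"]),
      keyword_score(query, docs["us-visa-for-brazilians"]))   # 4 4
# ...while the order-aware score prefers the correct destination.
print(context_score(query, docs["brazilian-visa-for-us"]),
      context_score(query, docs["us-visa-for-brazilians"]))   # 1 3
```

Both documents share all four query words, so a keyword engine ties them; only the order-aware score notices that "to usa" points at the right destination.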
As mentioned, Web 2.0 allowed users to create text on the Web, so there was high demand for text correction. Previously, users relied on the built-in checker in Office 365 software to point out mistakes, but its mistake-detection technology was quite limited.
New-generation, AI-driven software such as Grammarly uses NLP to correct errors, suggest ways to simplify complex writing, and complete or clarify sentences. It is more advanced and precise than the error-detection technology of the previous generation.
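For contrast, the older dictionary-lookup approach can be sketched in a few lines of Python; the word list here is a tiny stand-in for a real dictionary, and difflib's fuzzy matching stands in for the edit-distance heuristics such checkers used:

```python
import difflib

# Tiny stand-in dictionary; a real checker ships a full word list.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def suggest(word, dictionary=DICTIONARY):
    """Return close dictionary matches for a possibly misspelled word."""
    if word in dictionary:
        return [word]  # already correct, nothing to flag
    # get_close_matches ranks candidates by string similarity (0.0 to 1.0).
    return difflib.get_close_matches(word, dictionary, n=3, cutoff=0.6)

print(suggest("quikc"))  # ['quick']
```

Note the limitation: this only flags words absent from the dictionary. It cannot catch a correctly spelled word used in the wrong place, which is exactly where NLP-based tools like Grammarly go further.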
Today, many websites offer a chatbot assistant on their pages. As with voice-controlled assistants, they are not fully advanced; in many cases they use simple keyword search and decision-tree logic. Still, they help users to a certain extent and ease the work of the support team.
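A minimal sketch of that keyword-plus-fallback logic might look like the following; the intents, keywords, and canned replies are invented for illustration:

```python
import re

# Hypothetical intents: keywords to look for and a canned reply for each.
INTENTS = {
    "pricing": (["price", "cost", "plan"], "Our plans start at $10/month."),
    "support": (["bug", "error", "broken"], "Sorry about that! Opening a ticket."),
    "hours":   (["open", "hours", "when"], "We are available 24/7."),
}

def reply(message):
    # Tokenize crudely: lowercase words only, punctuation stripped.
    words = set(re.findall(r"[a-z]+", message.lower()))
    best, best_overlap = None, 0
    # Pick the intent whose keywords overlap the message the most.
    for keywords, answer in INTENTS.values():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best, best_overlap = answer, overlap
    # No keyword hit: hand off to a human, as real support bots do.
    return best or "Let me connect you to a human agent."

print(reply("How much does the pro plan cost?"))  # pricing reply
print(reply("My order arrived damaged"))          # human fallback
```

The second query shows the weakness the article describes: without a matching keyword, the bot has no understanding to fall back on, only an escalation path.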
It may seem that NLP has not reached the level one might expect and still has serious limitations. In Web 3.0, we will have to deal with all the flaws NLP has today.
There is no doubt that NLP will be a critical, foundational component of Web 3.0. AI-driven speech recognition and analytics can serve many purposes: voice-based user interactions, command-driven hands-free navigation of virtual worlds, connection and interaction with virtual AI entities, and more. Below are examples of how NLP technologies will be used in Web 3.0 and the objectives we need to achieve in this new Internet era.
When we talk about navigating the metaverse, we might think of handheld controllers, gestures, eye-tracking, or voice control. Voice operation in particular would bring the user experience to a new level, while NLP technologies would help generate audio responses with linguistic nuance and voice modulation.
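Behind the speech-recognition layer, command-driven navigation ultimately maps a recognized utterance to an in-world action. A minimal sketch, with an invented command grammar (real systems would use trained intent classifiers rather than regular expressions):

```python
import re

def parse_command(utterance):
    """Map a recognized utterance to an (action, argument) pair."""
    # Hypothetical command grammar for a virtual world.
    patterns = [
        (r"(?:go|walk|teleport) to (?P<arg>\w+)", "move"),
        (r"open (?P<arg>\w+)", "open"),
        (r"say (?P<arg>.+)", "chat"),
    ]
    u = utterance.lower().strip()
    for pattern, action in patterns:
        m = re.match(pattern, u)
        if m:
            return action, m.group("arg")
    # Unrecognized commands fall through unchanged.
    return "unknown", u

print(parse_command("Teleport to plaza"))  # ('move', 'plaza')
print(parse_command("open inventory"))     # ('open', 'inventory')
```

The hard part NLP must solve is everything above this function: turning noisy, free-form speech into an utterance clean and unambiguous enough to map onto actions at all.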
Speaking of voice operation in the metaverse, it is worth mentioning the development of voice control in games. Representative examples of such in-game technology are Seaman, Mass Effect 3, Bot Colony, Nevermind, and others. Their NLP was not especially precise, but they can be considered the inspiration for voice control in the metaverse.
Voice operation is already in development at Meta. The company launched a Voice SDK that allows VR developers to create virtual environments using voice commands.
In Web 3.0, users will expect assistance with more daily tasks than in Web 2.0, so virtual assistants will need to be upgraded to a higher level.
Meta has taken on the development of such a voice assistant. The company says its assistant will be present in the metaverse, so it will probably take the virtual form of an avatar. Moreover, Meta wants its voice assistant to understand the context of a request far better than existing assistants do.
“To support true world creation and exploration, we need to advance beyond the current state of the art for smart assistants,” said Zuckerberg.
The difference between a virtual assistant and an AI companion is that the first is designed to serve the user, while the AI companion is designed to be a virtual friend who knows your interests, can maintain a dialog, and can give advice.
Web 2.0 offered us various AI chatbots; the new generation goes further. One of the most prominent examples is the app Replika. After creating an avatar for their virtual companion, which can also be viewed in AR, users can start a conversation. Unlike many other AI chatbots, the companion can remember information about the user: name, date of birth, interests, hobbies, memories, opinions, and more. With a paid subscription, the AI companion can even be a romantic partner.

Developing virtual assistants and AI companions is crucial for Web 3.0:
If we manage to master these, it will pave the way to mastering other AI bots responsible for various jobs, such as psychologists or consultants.
Trade is one area where AI-driven chatbots acting as sellers or consultants will be required. Trade in general has great potential in the metaverse, and AI chatbots in this field will upgrade the purchasing process and improve the buyer's experience.
In the 2000s, developers tried to bring to life the idea of social media with a built-in translator: users could talk to people around the world in their own languages, and everyone would understand each other. For some reason, those social networks didn’t gain popularity. Translation technology at the time was also not well developed and often produced awkward results.
The metaverse will be more than just social media: it will also be a place for business, networking, education, shopping, sports, and so on. Automatic translation would be very useful there, but to achieve it, we need AI-powered translators that work precisely, quickly, and flexibly.
Meta is already working on such a translator. This year, the company announced it is developing an AI-powered translator for the languages in its metaverse.
Another prominent company that has achieved real-time AI-powered translation is Unbabel. Its software combines the speed and efficiency of machine translation with the accuracy and empathy of a global community of native-speaking translators. The company's merits were noted by Windows, which said:
“We’ve seen CSAT scores jump as much as 10 points, and in one instance, we increased issue resolution by 20 percent.”
These are just a few examples of how NLP technologies can be applied in Web 3.0, provided NLP advances fast enough. The development of NLP in the context of the metaverse remains an underrated topic, but that doesn't change the fact that teaching machines to truly understand text remains one of AI's top priorities today.