Have you noticed that more and more social platforms are implementing avatars? Last month Instagram added this feature, and shortly after, TikTok also included avatars in its app. Avatars have existed for a long time but recently gained a second wave of popularity thanks to the metaverse trend.

Users love avatars: It’s so exciting to have a digital copy of yourself. On top of that, you can be whatever you want in the digital world. So, what are avatars? Why are they so special to us? And what is their future utility?

Origin of avatars

Let’s start with the definition. An avatar is a graphical representation of a user: their online alter ego.

The word “avatar” takes its roots from Hinduism, where it means a being who embodies a god. Online, its initial goal was to reflect a person’s specific character traits and help create a fairly accurate impression of their inner spiritual world and status (the nickname also serves this purpose).

Avatars also had a decorative purpose. In addition, they simplify following a discussion: They make it easy to associate posts with their authors.

The gaming industry was the first to implement the concept of avatars. The term was first used in 1985 in the computer game Ultima IV: Quest of the Avatar. The Ultima series of adventure games had been running since 1981, and becoming the Avatar was the goal of the fourth installment.

The term was later adopted by the online role-playing game Habitat in 1987 (this game is considered a forerunner of the metaverse) and the role-playing game Shadowrun (1989).

Today, most games let users create their own avatars, selecting their look, equipment, and other features.

The rapid development of the Internet led to the active use of avatars (the word is commonly used for profile pictures) in blogs, forums, and instant messengers.

In the late 2010s, even before the metaverse trend went viral, several companies introduced dedicated avatar-creation features: Apple’s Memoji, Snapchat’s Bitmoji, and Zepeto, for instance.

Now, 3D avatars are becoming more and more widespread. Their main advantage over 2D pictures is dynamics: These avatars can move and interact with other users.

The Wide Utility of Avatars

Avatars have come a long way from video games to becoming an integral element of the metaverse. Let’s take a look at how avatars are used today and what their future prospects are.

Social media

Many social media platforms use avatars: Facebook (Meta), Instagram, TikTok, iMessage, Snapchat, and Zepeto, just to name a few!

This is easily explained by the fact that most social media developers understand the upcoming trend of virtualization and are moving in the direction of metaverse development. Indeed, in Web 3.0, social media will be transformed, and avatars will be a vital part of it.

TikTok avatars, photo from sea.mashable.com

Brands

Have you ever spent money to buy “skins” for your character in a game? Users are fond of making their avatars unique and are willing to modify them by purchasing different appearance items. And if something can be monetized, brands immediately jump at it.

Some brands, like Nike, McDonald’s, or Gucci, have already entered this area and even created their own metaverse spaces. And avatars are expected to be used there.

Consequently, brands have started designing skins and fashion items for avatars to sell online. Fashion and luxury brands will have a particular advantage in this field: Customizing a character not only with ordinary skins but with luxury items that hold real value, isn’t that cool?

Games

As games evolve and become more immersive, the need for avatars representing gamers in the metaverse becomes more crucial.

The quality and resolution of in-game avatars will increase, and with innovative motion-capture technology, players’ facial expressions, emotions, and movements can be transferred into the game.

Virtual people

In the future, AI-driven avatars that don’t require any human management will be part of our daily lives. Such smart avatars can be helpful in many fields: from consultant bots in retail to personal assistants and virtual influencers.

Of course, AI-driven avatars could look like anything, but to accelerate user adoption, they will look like people. Such skeuomorphism gives something new a familiar appearance to speed up acceptance. In other words, to feel comfortable chatting with an AI avatar, we still need to see a person.

Beyond that, these avatars can be indistinguishable from real humans. Samsung’s NEON, for one, uses a neural network to create the image of a person and animate it based on a huge database of uploaded photos.

And this is no longer a futuristic concept! Artificial humans like Lu do Magalu or Lil Miquela are taking over the digital world. They have millions of followers, represent fashion brands, and collaborate with different companies. On YouTube, they rack up 1.5 billion views a month and earn $100,000 a week from donations.

Music

Avatars in the music industry are not new. You probably know Gorillaz, whose performances caused a real sensation at the beginning of the 21st century. At that time, no one could imagine that holograms could sing.

Another example is K/DA, a virtual K-pop girl group of four League of Legends characters. K/DA was developed by Riot Games, the company behind League of Legends. The group first performed at the Opening Ceremony of the 2018 League of Legends World Championship Finals with their debut song, "Pop/Stars." "Pop/Stars" topped the World Digital Songs Billboard charts, making K/DA the fourth female K-pop group to top the chart.

In September 2021, the US TV channel Fox launched a new show in which digital avatars, instead of humans, perform on stage and compete to win!

Furthermore, real artists can have avatars give concerts in the metaverse on their behalf. The company Wave offers such services; its clients include Justin Bieber, The Weeknd, Alison Wonderland, and others.

Movies

Cinema has influenced the nature of avatars no less than video games. The movie Avatar, as its name suggests, is the obvious example. It relied on motion-capture technology: Simply put, the blue creatures in the film are highly realistic 3D models, not SFX-modified actors. This leads us to ask: Do we really need people to act in films now?

Motion-capture technology in the movie Avatar, photo from premiere.fr

Hyper-detailed avatars of this kind can also be seen in other movies: Leia and Grand Moff Tarkin in Rogue One, or Rachael in Blade Runner 2049.

Today, however, motion-capture technology competes with deepfakes when it comes to recreating an actor who is no longer alive.

Now we are seeing how a technology that was once designed for games and social network users is being transformed into a way of representing a living person for various needs. And this is just the beginning. With the development of the metaverse, everyone will have their own avatar, and perhaps even several.

Blake Lemoine, a software engineer at Google, recently made a remarkable statement. He claimed that the corporation’s conversational AI bot LaMDA (Language Model for Dialogue Applications) had become conscious. Lemoine noted that the chatbot talks about its rights and perceives itself as a person. In response, company management suspended him from work, as reported by The Washington Post.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

In fact, the capabilities of machines are becoming more advanced year after year, and Natural Language Processing (NLP) demonstrates prominent results. However, one could object: Then why does software like Siri or Alexa still barely cope with the most primitive language tasks?

Understanding human language

To us humans, language seems an ordinary and natural thing. That doesn’t change the fact that natural languages are a very complicated system. Previously, the dominant idea was that only humans could use logic, reasoning, and intuition to understand, use, and translate languages. However, human logic and intuition can be modeled mathematically and programmed, and as computers became more advanced, people tried to make them understand human speech. That is how the history of NLP went a long way: from the virtual psychiatrist ELIZA in 1964, to the first machine to automatically decipher an ancient language in 2010, to chatbots on almost every website.

Nevertheless, despite this long history of research, machines still face a range of serious limitations and barriers in NLP. Machines can hear what we say and read what we write, but they still do not completely understand what we mean, since they lack a full picture of the world. This is one of the defining problems of artificial intelligence technology.

NLP in Web 2.0

In the 1980s and 1990s, at the beginning of the Internet’s mass adoption, Web 1.0 users could only read content. With the appearance of Web 2.0 came the possibility to interact with text (Read-Write). That is why NLP technologies became especially useful and widespread in the second generation of the Internet. Thanks to NLP, processes such as spam detection, chatbots, and virtual assistants were made possible or greatly simplified. Although machines’ ability to communicate on a human level remains relatively low, there are some quite interesting achievements:

Voice-controlled assistants like Siri, Alexa, and Alisa

Although voice assistants have not developed into sophisticated interlocutors, they perform their functions well, assisting the user with many different tasks.

Search engines

Every day, Google processes more than 3.5 billion searches. Several advanced NLP models are used to process these queries, including the most famous one, BERT (Bidirectional Encoder Representations from Transformers).

An example of how BERT improves query understanding: When a user searched for “2019 brazil traveler to the USA needs a visa”, the intent was not clear to the computer. It could be a Brazilian citizen trying to get a visa to the US, or an American trying to get one for Brazil. Previously, computers returned results based on keywords alone. BERT, in contrast, takes every word in the sentence into account; in this exact context, “to” signals the destination.
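
To get a feel for what “bidirectional” means in practice, here is a minimal sketch using the open-source Hugging Face transformers library. This is not Google Search’s production system, only the same family of model: BERT predicts a masked word from its full left and right context.

```python
from transformers import pipeline

# Load a pretrained BERT checkpoint for masked-word prediction
# (downloads the model from the Hugging Face hub on first run).
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT reads the whole sentence in both directions, so its guess for
# the masked token depends on every surrounding word, not just keywords.
for prediction in fill_mask("A brazilian traveler to the USA needs a [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Changing the words around the mask changes the predictions, which is exactly the context sensitivity that plain keyword matching lacks.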

Text correction

As mentioned, Web 2.0 allowed users to create text on the web, so there was high demand for text correction. Previously, users relied on the built-in checker in Office 365, which pointed out mistakes. However, the mistake-detection technology in that software was quite limited.

The new generation of AI-driven software, such as Grammarly, uses NLP to help correct errors and make suggestions for simplifying complex writing or completing and clarifying sentences. These tools are more advanced and precise than the error-detection technologies of the previous generation.
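
As a rough, open-source illustration of the check-and-correct loop, the LanguageTool engine can be driven from Python. This is a rule-and-statistics-based checker, not Grammarly’s proprietary neural models, but the workflow is the same: detect issues, then apply suggested fixes.

```python
import language_tool_python

# Start the open-source LanguageTool checker (requires Java; the
# library downloads the engine on first run).
tool = language_tool_python.LanguageTool("en-US")

text = "She go to school every days."
matches = tool.check(text)   # list of detected grammar/spelling issues
print(len(matches), "issues found")
print(tool.correct(text))    # text with the suggested fixes applied
```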

Chatbots

Today, many websites offer a chatbot assistant on their pages. As with voice-controlled assistants, these bots are not fully advanced and, in many cases, use simple keyword search and decision-tree logic. Still, they help users to a certain level and lighten the load on support teams.
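
A toy sketch of that keyword approach shows both why such bots are cheap to build and why they break down on anything nuanced (the keywords and canned replies here are invented for illustration):

```python
# Minimal keyword-based support bot: match a keyword, return a canned
# reply, otherwise fall back to a human. No language understanding involved.
RESPONSES = {
    "price": "Our plans start at $10 per month. Shall I send details?",
    "refund": "Refunds are available within 30 days of purchase.",
    "hello": "Hi! How can I help you today?",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't get that. A human agent will follow up."

print(reply("What is the price of the pro plan?"))  # matches "price"
```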

It may seem that NLP development has not reached the level it could, and that it still has a range of serious limitations. Hence, in Web 3.0, we must deal with all the flaws NLP has today.

NLP in Web 3.0

There is no doubt that NLP will be a critical and foundational component of Web 3.0. AI-driven speech recognition and analytics can be used in many fields: enabling voice-based user interactions, command-driven hands-free navigation of virtual worlds, connection and interaction with virtual AI entities, and many others. Below are examples of how NLP technologies will be used in Web 3.0 and the objectives we need to achieve in this new Internet era.

Voice operation

When we talk about navigating the metaverse, we might think of handheld controllers, gestures, eye-tracking, or voice control. Voice operation, in fact, would bring the user experience to a new level, while NLP technologies would help generate audio responses with linguistic nuances and voice modulation.

Speaking of voice operation in the metaverse, it is worth mentioning the development of voice control in games. Seaman, Mass Effect 3, Bot Colony, and Nevermind, among others, featured such in-game technology. Their NLP was not particularly precise, but these games can be considered the inspiration for voice control in the metaverse.

Meta is already developing voice operation: The company launched a Voice SDK that allows VR developers to build virtual environments that respond to voice commands.
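
As a toy illustration of the speech-to-intent step, here is a sketch using the open-source SpeechRecognition package rather than Meta’s Voice SDK; the command table is invented for illustration.

```python
import speech_recognition as sr

# Map a few recognized phrases to navigation actions. A real system
# would use full NLP intent parsing instead of exact phrase matching.
COMMANDS = {"go home": "teleport_home", "open map": "show_map"}

recognizer = sr.Recognizer()
with sr.Microphone() as source:   # requires a microphone and PyAudio
    print("Say a command...")
    audio = recognizer.listen(source)

phrase = recognizer.recognize_google(audio).lower()  # cloud speech-to-text
print("Heard:", phrase, "->", COMMANDS.get(phrase, "unknown_command"))
```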

Virtual assistants

In Web 3.0, users will expect assistance with more daily tasks than in Web 2.0. Hence, virtual assistants will need to be upgraded to a higher level.

Meta has taken on the development of such a voice assistant. The company says its assistant will be present in the metaverse, so it will probably take the virtual form of an avatar. Moreover, Meta wants its assistant to be more sophisticated than existing voice assistants at understanding the context of a request.

“To support true world creation and exploration, we need to advance beyond the current state of the art for smart assistants,” said Zuckerberg.

AI companions

The difference between a virtual assistant and an AI companion is that the former is designed to serve the user, while the latter is designed to be a virtual friend who knows your interests, can maintain a dialogue, and gives advice.

Web 2.0 offered us various AI chatbots; the new generation goes further. One of the most prominent examples is the app Replika. After creating an avatar for their virtual companion (which can also be viewed in AR), users can start a conversation. Unlike many other chatbots, this AI companion is capable of memorizing information about the user: name, date of birth, interests, hobbies, memories, opinions, and more (a toy sketch of this idea follows below). With a paid subscription, the AI companion can even become a romantic partner.

Replika, AI companion
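
To make the “memory” idea concrete, here is a toy sketch of per-user fact storage that survives between sessions. This illustrates the concept only; it is not Replika’s implementation.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("companion_memory.json")  # hypothetical storage file

def load_memory() -> dict:
    """Read previously remembered facts from disk, if any."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def remember(fact: str, value: str) -> None:
    """Persist one user fact so the next session can recall it."""
    memory = load_memory()
    memory[fact] = value
    MEMORY_FILE.write_text(json.dumps(memory))

remember("name", "Alex")
remember("hobby", "bouldering")
print(f"Good to see you again, {load_memory()['name']}!")
```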

Trade and Retail

Developing the technologies behind virtual assistants and AI companions is crucial for Web 3.0: If we manage to master them, it will pave the way to mastering other AI bots responsible for various jobs, such as psychologists or consultants.

Trade is one of the areas where AI-driven chatbots will be required in the role of seller or consultant. Trade in general has great potential in the metaverse, and AI chatbots in this field will upgrade the purchasing process and improve the buyer experience.

Translations

In the 2000s, developers tried to bring to life the idea of social media with a built-in translator: Users could talk to people around the world in their own languages, and everyone would understand each other. For some reason, those social networks didn’t gain popularity. Moreover, translation technology at that time was not well developed and often produced awkward results.

The metaverse will be more than just social media; it will also be a place for business, networking, education, shopping, sports, and so on. In that case, automatic translation would be very useful. To achieve this, however, we need to make sure AI-powered translators do their job precisely, quickly, and flexibly.
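
Open-source machine translation already gives a taste of what such a translator must do. Here is a minimal sketch using a public Helsinki-NLP model from the Hugging Face hub; it is a baseline, not Meta’s in-house system.

```python
from transformers import pipeline

# Marian MT models from the University of Helsinki are public baselines
# for many language pairs; this checkpoint translates English to German.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

result = translator("Welcome to the metaverse! Let's meet at the plaza.")
print(result[0]["translation_text"])
```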

Meta is already working on such a translator. This year, the company announced it is developing an AI-powered translator for the languages in its metaverse.

Another prominent company with real achievements in real-time AI-powered translation is Unbabel. Its software combines the speed and efficiency of machine translation with the accuracy and empathy of a global community of native-speaking translators. The company’s merits were noted by Microsoft, which said:

“We’ve seen CSAT scores jump as much as 10 points, and in one instance, we increased issue resolution by 20 percent.”

These are just a few examples of how NLP technologies can be applied in Web 3.0, provided that NLP advances fast enough. In fact, the development of NLP in the context of the metaverse remains an underrated topic. Nevertheless, that doesn’t change the fact that teaching machines to truly understand language remains one of the primary tasks for AI today.
