
Blake Lemoine, a software engineer at Google, recently made a remarkable statement. He claimed that the corporation’s conversational AI bot LaMDA (Language Model for Dialogue Applications) had become conscious. Lemoine noted that the chatbot talks about its rights and perceives itself as a person. In response, company management suspended him from work, The Washington Post reported.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

In fact, the capability of machines is becoming more advanced year after year, and Natural Language Processing (NLP) demonstrates prominent results. However, one could object: then why does software like Siri or Alexa still barely cope with the most primitive language tasks?

Understanding the human language

For us humans, language seems ordinary and natural. That doesn’t change the fact, however, that natural languages are a very complicated system. Previously, the dominant idea was that only humans could use logic, reasoning, and intuition to understand, use, and translate languages. Yet human logic and intuition can be modeled mathematically and programmed. As computers became more advanced, people tried to make them understand human speech. That is how the history of NLP traveled a long way: from the virtual psychiatrist ELIZA in 1964, to the first machine that automatically deciphered an ancient language in 2010, to chatbots on almost every website.

Nevertheless, despite the long history of research, machines still face a range of serious limitations and barriers in NLP. Machines can hear what we say and read what we write, but they still do not completely understand what we mean, since they lack a whole picture of the world. This is one of the defining problems of artificial intelligence technology today.

NLP in Web 2.0

In the 80s and 90s, at the beginning of the Internet’s mass adoption, Web 1.0 users could only read content. With the appearance of Web 2.0 came the possibility to interact with text (Read-Write). That is why NLP technologies became especially useful and widespread in the second generation of the Internet. Thanks to NLP, processes such as spam detection, chatbots, and virtual assistants were facilitated. Although the ability of machines to communicate on a human level remains relatively low, there are some quite interesting achievements:

Voice-controlled assistants like Siri, Alexa, and Alisa

Although voice assistants have not been developed to the level of sophisticated interlocutors, they perform their functions excellently, assisting the user with many different tasks.

Search engine

Every day, Google processes more than 3.5 billion searches. Several advanced NLP models are used to process the queries, including the most famous one, BERT (Bidirectional Encoder Representations from Transformers).

An example of how BERT improves query understanding: when a user queried “2019 brazil traveler to the USA needs a visa”, the intent was not clear to the computer. It could be a Brazilian citizen trying to get a visa to the US, or an American trying to get one for Brazil. Previously, computers returned results based on keywords. By contrast, BERT takes every word in the sentence into account. In this exact context, “to” indicates the destination.
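To see why keyword-based retrieval struggles with this query, consider a toy sketch (illustrative only, not Google’s actual pipeline): once common function words like “to” are discarded, the two opposite travel directions become indistinguishable.

```python
# Toy illustration: keyword search drops function words such as "to",
# so two queries with opposite travel directions collapse into the
# same bag of words. (The stopword list here is illustrative.)
from collections import Counter

STOPWORDS = {"to", "a", "the", "for", "needs"}

def keyword_bag(query: str) -> Counter:
    """Reduce a query to a bag of non-stopword keywords."""
    return Counter(w for w in query.lower().split() if w not in STOPWORDS)

q1 = "2019 brazil traveler to usa needs a visa"
q2 = "2019 usa traveler to brazil needs a visa"
print(keyword_bag(q1) == keyword_bag(q2))  # True: direction is lost
```

A contextual model like BERT, by contrast, encodes each word in light of its neighbors, so the representation of the query preserves which country is the destination.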

Text correction

As mentioned, Web 2.0 allowed users to create text on the Web. Hence, there was high demand for text correction. Previously, users relied on the built-in checker in Office 365 software to point out mistakes. However, the mistake-detection technology in that software was quite limited.

New-generation, AI-driven software such as Grammarly uses NLP to help correct errors and to suggest ways of simplifying complex writing or completing and clarifying sentences. These tools are more advanced and precise than the error-detection technologies of the previous generation.
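The gap between the two generations can be sketched roughly. Older checkers worked mostly by dictionary lookup plus string-similarity suggestions, with no view of the surrounding sentence; the snippet below is a simplified, hypothetical sketch of that older approach (not how Office or Grammarly is actually implemented), using only Python’s standard library.

```python
# Simplified sketch of previous-generation spell checking: flag any
# word missing from a dictionary and suggest near matches by string
# similarity. It has no notion of grammar or sentence context.
import difflib

DICTIONARY = {"language", "natural", "machine", "process", "writing"}

def suggestions(word: str) -> list[str]:
    """Return up to two close dictionary words, or [] if the word is known."""
    if word in DICTIONARY:
        return []
    return difflib.get_close_matches(word, DICTIONARY, n=2, cutoff=0.7)

print(suggestions("langauge"))  # ['language']
print(suggestions("machine"))   # []
```

AI-driven tools instead score whole sentences with language models, which is what lets them catch agreement errors and suggest rewordings rather than only misspellings.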


Chatbots

Today, a lot of websites offer a chatbot assistant on their web pages. As with voice-controlled assistants, they are not fully advanced and, in many cases, use simple keyword search and decision-tree logic. However, they help users to a certain extent and ease the work of the support team.
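That keyword-plus-decision-tree pattern is simple enough to sketch. The example below is a hypothetical minimal support bot (all intents, keywords, and replies are invented for illustration): it scans the message for known keywords, routes down a fixed tree, and falls back to a human handoff.

```python
# Minimal sketch of a keyword-driven support chatbot: match keywords
# to branches of a fixed decision tree, else hand off to a human.
# (All intents, keywords, and replies here are invented examples.)
TREE = {
    "billing": {
        "keywords": ("refund", "invoice", "charge"),
        "reply": "Refunds are processed within 5-10 business days.",
    },
    "account": {
        "keywords": ("password", "login", "sign in"),
        "reply": "You can reset your password from the login page.",
    },
}
FALLBACK = "Let me connect you with our support team."

def respond(message: str) -> str:
    text = message.lower()
    for branch in TREE.values():
        if any(kw in text for kw in branch["keywords"]):
            return branch["reply"]
    return FALLBACK

print(respond("I forgot my password"))
print(respond("My package arrived damaged"))
```

The second message matches no keyword, so the bot escalates, which is exactly how these systems keep the support team's queue shorter without ever truly understanding the text.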

It may seem that NLP development has not reached the level it could, and that it still has a range of serious limitations. Hence, in Web 3.0, we must deal with all the flaws of NLP we have today.

NLP in Web 3.0

There is no doubt that NLP will be a critical and foundational component of Web 3.0. AI-driven speech recognition and analytics technology can be used in many fields: enabling voice-based user interactions, command-driven hands-free navigation of virtual worlds, connection and interaction with virtual AI entities, and many others. These are examples of how NLP technologies will be used in Web 3.0, and of the objectives we need to achieve in this new Internet era.

Voice operation

When we talk about navigating the metaverse, we might think of handheld controllers, gestures, eye-tracking, or voice control. In fact, voice operation would bring the user experience to a new level. Meanwhile, NLP technologies would help generate audio responses with linguistic nuances and voice modulation.

Speaking about voice operation in the metaverse, it is important to mention the development of voice operation in games. Representatives of such in-game technology include Seaman, Mass Effect 3, Bot Colony, Nevermind, and others. Their NLP technologies were not that precise, but they can be considered an inspiration for voice control in the metaverse.

Voice operation is already under development at Meta. The company launched a Voice SDK that allows VR developers to build voice commands into their virtual environments.

Virtual assistance

In Web 3.0, users will expect assistance with more daily tasks than in Web 2.0. Hence, virtual assistants will need to be upgraded to a higher level.

Meta has taken on the development of such a voice assistant. Meta claims that its voice assistant will have a presence in the metaverse, so it will probably take the virtual form of an avatar. Moreover, Meta wants its voice assistant to surpass existing voice assistants in understanding the context of a request.

“To support true world creation and exploration, we need to advance beyond the current state of the art for smart assistants,” said Zuckerberg.

AI companions

The difference between a virtual assistant and an AI companion is that the first is designed to serve the user, while the AI companion is designed to be a virtual friend who knows your interests, can maintain a dialog, and can give advice.

Web 2.0 offered us various AI chatbots; new generations of AI chatbots go further. One of the most prominent examples is the app Replika. After creating an avatar of their virtual companion, which can also be viewed via AR, users can start a conversation. Unlike many other AI chatbots, the AI companion is capable of memorizing information about users, such as their name, date of birth, interests, hobbies, memories, and opinions. With a paid subscription, the AI companion can also be a romantic partner.

Replika, AI companion

Trade and Retail

Developing the technologies of virtual assistants and AI companions is crucial for Web 3.0: if we manage to master them, it will pave the path to other AI bots responsible for various jobs, such as psychologists or consultants.

Trade is one of the areas where AI-driven chatbots in the role of seller or consultant will be required. Trade in general has great potential in the metaverse, and AI chatbots in this field will upgrade the purchasing process and improve the buyer experience.


Automatic translation

In the 2000s, developers tried to bring to life the idea of social media with a built-in translator: users could talk to people around the world in their own languages, and everyone would understand each other thanks to the built-in translation. For some reason, those social networks didn’t gain popularity. Moreover, translation technology at the time was not well developed and often produced awkward results.

The metaverse will be more than just social media; it will also be a place for business, networking, education, shopping, sports, and so on. In that case, automatic translation would be very useful. To achieve this, however, we need to make sure that AI-powered translators do their job precisely, quickly, and flexibly.

Meta is already working on such a translator. This year, the company announced that it is developing an AI-powered translator for the languages in its metaverse.

Another prominent company that has achieved real-time AI-powered translation is Unbabel. Its software combines the speed and efficiency of machine translation with the accuracy and empathy of a global community of native-speaking translators. The company’s merits were noted by Microsoft, which said:

“We’ve seen CSAT scores jump as much as 10 points, and in one instance, we increased issue resolution by 20 percent.”

Those are just a few examples of how NLP technologies can be applied in Web 3.0, provided that NLP advances fast enough. In fact, the development of NLP in the context of the metaverse remains an underrated topic. Nevertheless, the ability of machines to understand text remains one of the priority tasks for AI today.

It may seem that Google and Meta are living through hard times. Over the past decade, Google has already faced more than 8 billion euros in EU antitrust fines. Recently, Google lost a lawsuit in France over adtech abuses and was fined $268 million.

Facebook has not been spared trouble either. The company’s stock has kept losing value since the rebranding. Furthermore, both Google and Facebook are currently under antitrust investigation over their online display ad businesses by the European Commission and the U.K., which could drastically affect the reputation and income of these companies.

The Phantom Menace

Let us tell the story from the very beginning. It all started in 2007 when Google bought DoubleClick.

DoubleClick was one of the leaders in display advertising, developing and providing Internet ad services. It collaborated with large brands such as Microsoft, General Motors, Coca-Cola, Motorola, L’Oréal, Apple, Visa, and many others.

Google acquired DoubleClick for $3.1 billion and immediately began rebranding it. That is how DoubleClick became the Google Marketing Platform brand: DoubleClick Bid Manager is now Display & Video 360, DoubleClick Search became Search Ads 360, and DoubleClick for Publishers turned into Google Ad Manager 360. Through this purchase, Google obtained a cash cow that it protects with full determination.

This acquisition made advertising analysts see obvious red flags: there was reason to assume that if the leader in search advertising acquired the leader in display advertising, it would lead to a breach of antitrust law.

Publishers recognized a hidden menace in the Google ad platform as well. That is why they started developing and open-sourcing a system called Header Bidding, which allowed them to run a bid across several ad exchanges at once, and in a much more transparent way than through Google’s systems. Thus, Header Bidding became a threat to Google’s dominance by leveling the playing field for competition.
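Conceptually, the mechanism is a unified first-price auction. The sketch below is a simplification (real header bidding runs asynchronously in the page header, with timeouts and price floors; the exchange names and prices are invented), contrasting it with the older “waterfall”, where exchanges were tried one at a time in priority order and the first acceptable bid won.

```python
# Simplified contrast between waterfall mediation and header bidding.
# (Exchange names and prices are invented; real systems add timeouts,
# price floors, and asynchronous bid requests.)
def waterfall(ordered_bids: list, floor: float):
    """Try exchanges in priority order; first bid at or above the floor wins."""
    for exchange, bid in ordered_bids:
        if bid >= floor:
            return exchange, bid
    return None

def header_bidding(bids: dict):
    """Ask every exchange simultaneously; the highest bid wins."""
    exchange = max(bids, key=bids.get)
    return exchange, bids[exchange]

bids = [("ExchangeA", 1.10), ("ExchangeB", 1.45), ("ExchangeC", 0.90)]
print(waterfall(bids, floor=1.00))    # ('ExchangeA', 1.10): first over floor
print(header_bidding(dict(bids)))     # ('ExchangeB', 1.45): best price overall
```

In this toy run, the publisher earns 1.45 instead of 1.10 for the same impression, which is why header bidding was seen as leveling the field against Google-controlled auctions.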

Exactly at this point, Facebook enters the stage. In 2017, Facebook began demonstrating a strong interest in Header Bidding, but then suddenly backed off. It turns out that Google had been courting Facebook and gave the company a fantastic offer, extended to no one else: a guarantee of winning 90% (!) of auctions regardless of the bids; more time to bid (300 milliseconds, compared to the 160 offered to other companies, at the risk of pages loading more slowly); and identification of 80% of smartphone users and 60% of web users. This agreement received the code name Jedi Blue.

Why tech monopoly is dangerous

Before we continue our story, let’s take a step back to understand why the monopoly of the tech giants is dangerous.

The Jedi Blue case is one of the examples of the abuse of Google and Facebook’s privileged positions and proof of why the tech giants need regulation.

For those who are not familiar with the ad industry, Google and Facebook look like an undoubted guarantee of quality, since they dominate the industry. However, this view is misconceived; we will share some constructive criticism of Facebook and Google ad products in a separate article. Furthermore, the monopoly of large tech companies leads to a range of negative consequences.

Lack of quality

The most obvious consequence that comes to mind is a decline in quality as a result of monopoly. If there is no competition, there is no motivation to produce a quality product, which consequently impedes progress.

Owning large social capital is problematic

Today, billions of people depend on services like Amazon, Apple, Facebook, and Google. Almost 3 billion people use Meta’s social media platforms (Facebook, WhatsApp, or Instagram) every month, and this number will grow. If such large social capital is concentrated in one tech conglomerate, it can lead to a range of problems, such as disinformation or censorship.

Lack of confidentiality

Following from the previous problem, lack of confidentiality deserves special attention. Collecting, selling, exploiting, and abusing data have long been serious problems for tech companies. Currently, this is one of the reasons for antitrust and monopoly concerns about big tech.

Oppression of the smaller companies

The dominance of big tech companies oppresses smaller companies and prevents innovative brands and startups from entering the market. This consequently limits customer choice.

And this is only the tip of the iceberg; there are more reasons why the power of big tech companies should be kept in check. Nevertheless, there is a counter-power that resists the domination of big companies and fights the emergence of monopolies.

The Government Strikes Back

Returning to the Jedi Blue case: this is not the first time that tech giants such as Apple, Google, Meta, or Amazon have come close to breaching antitrust law. For such cases, there is the European Commission’s commissioner for competition.

Very recently, EU antitrust authorities opened an investigation into Google and Facebook’s Jedi Blue agreement.

“On Friday, we just opened a new case with Google and Facebook, now Meta. It’s called Jedi Blue, named after the codename for an agreement that they seem to have entered back in 2018, with the aim, seemingly, to kill off Google competitors in the advertising ecosystem. We also have another Google case exclusively focusing on Google and the ad-tech stack, looking at some of the behaviors that seem to be anti-competitive,” said Executive Vice President Margrethe Vestager, who became the European Commission’s commissioner for competition in 2014 and an executive vice president in 2019, and who has pursued numerous antitrust cases.

If the investigation confirms a breach of antitrust law by Google and Facebook, the companies should expect another big fine.

Representatives from Facebook and Google have already commented on the situation.

Google spokesperson:

“The allegations made about this agreement are false. This is a publicly documented, procompetitive agreement that enables Facebook Audience Network (FAN) to participate in our Open Bidding program, along with dozens of other companies. FAN’s involvement is not exclusive and they don’t receive advantages that help them win auctions. The goal of this program is to work with a range of ad networks and exchanges to increase demand for publishers’ ad space, which helps those publishers earn more revenue. Facebook’s participation helps that. We’re happy to answer any questions the Commission or the CMA have.”

Meta has also issued a statement:

“Meta’s non-exclusive bidding agreement with Google and the similar agreements we have with other bidding platforms have helped to increase competition for ad placements. These business relationships enable Meta to deliver more value to advertisers and publishers, resulting in better outcomes for all. We will cooperate with both inquiries.”

Future of Facebook and Google

Facebook and Google are experiencing not only financial but also reputational losses due to a range of lawsuits. Remember when Zuckerberg lost in court over Facebook’s privacy breach? After a series of claims from EU privacy regulators, Meta has threatened to leave the European market.

Or have you noticed how Google is losing trust over privacy and the quality of its ad products? For one, these concerns have already led to Google Analytics being ruled unlawful in Austria.

Other European countries may follow Austria’s example and restrict Google’s services as well.

None of this will pass without a trace for these companies. The question is whether users will still want to collaborate with these tech giants after what has happened.

While continuing to monitor the situation, we at Adello adhere to our mission to deliver the most transparent, reliable, and effective mobile advertising services to our valuable clients!

Copyright 2008 - 2022 © Adello