
The recent announcement that Apple is joining the advertising race and building its own demand-side platform (DSP) came as quite a surprise to many. After all, Apple is publicly known for being rather skeptical and critical of online advertising, especially when it comes to user privacy. So why the change of heart?

Since the beginning of the year, the number of job ads and new hires in Apple’s advertising platforms division has skyrocketed.

In a May report, the company stated that the Apple Services group now has an executive director, Todd Teresi, who handles advertising-related initiatives.

Most notably, Digiday highlighted job listings saying that Apple is building “the most advanced and sophisticated privacy platform possible.” That sounds a lot like a DSP of Apple’s own.

But first: what exactly is a DSP?

To explain what a DSP is, we need to step back and revisit the concept of programmatic advertising.

Programmatic advertising automates the buying and selling of online ads. It makes transactions more efficient by streamlining the process and consolidating digital advertising efforts in a single technology platform. Unlike traditional advertising, with its requests for proposals, quotes, tenders, and negotiations, programmatic buying uses algorithmic software to buy and sell online display space.
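To make this concrete, here is a minimal sketch of the second-price auction that ad exchanges commonly run, in milliseconds, for every impression. The bidder names and CPM prices below are invented for illustration:

```python
# Toy second-price auction: the highest bidder wins the impression
# but pays only the runner-up's bid.
def run_auction(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With a single bidder, the clearing price falls back to their own bid.
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

cpm_bids = {"dsp_a": 2.40, "dsp_b": 1.90, "dsp_c": 1.10}  # hypothetical CPM bids in USD
print(run_auction(cpm_bids))  # -> ('dsp_a', 1.9)
```

Charging the second-highest price encourages bidders to bid what an impression is truly worth to them, which is why this format became the standard in real-time bidding.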

The programmatic ecosystem comprises various technology platforms, advertising deal types, and methods of buying programmatic media, and it spans a wide range of ad formats. All of these components communicate and interact with one another, forming an ecosystem that makes automated media transactions much easier.

As a rule, each programmatic platform is owned and operated by a publisher, an advertiser, or an intermediary.

DSPs allow advertisers to set up automation that scales campaigns based on performance metrics, often using machine learning. As long as the platform is well designed, advertisers are happy and spend more.

DSPs also perform other functions, such as helping advertisers accurately reach the demographics of their intended audience, testing multiple campaign variants with dynamic A/B testing, and ultimately shifting spend toward the most effective campaigns.
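To illustrate that last point, here is a small, hypothetical Python sketch of performance-based budget reallocation. The campaign numbers and the inverse-CPA weighting are invented for illustration; real DSPs use far more elaborate, often ML-driven optimization:

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    spend: float       # budget spent so far, in USD
    conversions: int   # attributed conversions

    @property
    def cpa(self) -> float:
        """Cost per acquisition; campaigns with no conversions rank last."""
        return self.spend / self.conversions if self.conversions else float("inf")

def reallocate(campaigns: list[Campaign], total_budget: float) -> dict[str, float]:
    """Shift the next budget cycle toward campaigns with the lowest CPA."""
    weights = {c.name: (0.0 if c.cpa == float("inf") else 1.0 / c.cpa) for c in campaigns}
    total = sum(weights.values()) or 1.0
    return {name: round(total_budget * w / total, 2) for name, w in weights.items()}

variants = [
    Campaign("creative_a", spend=500.0, conversions=25),  # CPA = 20
    Campaign("creative_b", spend=500.0, conversions=10),  # CPA = 50
]
print(reallocate(variants, total_budget=2000.0))
# -> {'creative_a': 1428.57, 'creative_b': 571.43}
```

The same loop, run continuously over live metrics, is what lets a DSP shift spending toward the most effective campaigns without manual intervention.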


What to expect

This is not the first time Apple has entered the advertising game. In 2010, the company acquired the mobile advertising firm Quattro Wireless. You may also remember iAd, Apple’s failed attempt at programmatic advertising. iAd was supposed to offer inventory programmatically through open exchanges, and in 2014 Apple even announced partnerships with Rubicon, MediaMath, Accordant Media, The Trade Desk, Adelphic, AdRoll, and others. Back then, it looked like the promising start of the next ad giant! Then, in 2016, Apple abruptly wound down iAd, citing data limitations and a lack of demand.

The iAd story and the latest news about Apple’s DSP raise the question: will Apple’s attempt to enter the programmatic market succeed this time, after such a dramatic failure? Looking ahead, yes, and there are many reasons why. Estimates suggest Apple will make about $4B in advertising this year.

Apple has a huge advantage over the competition: it owns the popular iOS operating system. For years, Apple has been building a complex system of interconnected products and services that together create a smooth consumer experience. That is why Apple’s future DSP looks like a well-thought-out, rational next step in this evolution. The company has been building and expanding its advertising business behind the scenes, and now it has finally been revealed.

At the same time, Apple is making very ambitious promises about its future DSP in its job posts:

“Our platform runs and delivers advertising auctions to match supply (customers) with demand (advertisers), focusing on technical components including Campaign Management, Bidding, Incrementality, Dynamic Creative Optimization, Matching, Auctions, and Experimentation, while empowering Customer Privacy throughout.”

Another interesting aspect of all this is user data. Apple promises confidentiality and has repeatedly stated that privacy is a priority.

When Intelligent Tracking Prevention was introduced, it effectively disabled third-party cookies in the Safari web browser. Later, Apple restricted the mobile advertising identifier (MAID, known on iOS as the IDFA). Then came App Tracking Transparency, which lets users control their ad experience by opting in to or out of tracking. Its introduction was a clear win for consumers, but ad networks like Facebook and Snap have struggled to keep their ad platforms efficient as a result; the change is estimated to have cost the industry tens of billions of dollars.

On top of that, Apple defines “tracking” as the transfer of data to third parties, which means its own personalized ads are not subject to App Tracking Transparency, since they involve only a first-party transfer of data. So users will still receive ads, but advertisers will have to pay an “iPrice” for them.

That is how Apple cut off Google and Facebook, which together account for about 65% of all global advertising, by limiting their tracking and attribution systems while using privacy protection as an excuse.

It will be interesting to see whether Apple gets slammed by its users for monetizing their personal data at a premium price, or whether the US Federal Trade Commission launches an antitrust investigation into its monopolizing of advertising services.

Still, seeing how devoted Apple users are to the brand, there’s a good chance that Apple will get away with it.

Privacy rules are a strong tool for Apple to shut down the competition while monetizing its users’ data all on its own.

Blake Lemoine, a software engineer at Google, recently made a remarkable claim: the corporation’s conversational AI bot LaMDA (Language Model for Dialogue Applications) had become conscious. Lemoine noted that the chatbot speaks of its rights and perceives itself as a person. In response, company management suspended him from work, as reported by The Washington Post.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine.

In fact, the capability of machines is becoming more advanced year after year, and Natural Language Processing (NLP) is demonstrating prominent results. However, one might object: then why does software like Siri or Alexa still barely cope with the most primitive language tasks?

Understanding human language

To us humans, language seems an ordinary and natural thing. That doesn’t change the fact that natural languages are a very complicated system. There was once a dominant idea that only humans could use logic, reasoning, and intuition to understand, use, and translate languages. However, human logic and intuition can be modeled mathematically and programmed, and as computers became more advanced, people tried to make them understand human speech. That is how the history of NLP traveled a long way: from the virtual psychiatrist ELIZA in 1964, to the first machine to automatically decipher an ancient language in 2010, to the chatbots now on almost every website.

Nevertheless, despite this long history of research, machines still face a range of serious limitations and barriers in NLP. Machines can hear and read what we write, but they still do not completely understand what we mean, since they lack a full picture of the world. This is one of the defining problems of artificial intelligence in our century.

NLP in Web 2.0

In the 80s and 90s, at the beginning of the mass-scale Internet, Web 1.0 users could only read content. With the appearance of Web 2.0 came the possibility to interact with text (read-write). That is why NLP technologies became especially useful and widespread in the second generation of the Internet. Thanks to NLP, certain processes were made far easier, such as spam detection, chatbots, virtual assistance, and many others. Although the ability of machines to communicate on a human level remains relatively low, there are some quite interesting achievements:

Voice-controlled assistants like Siri, Alexa, and Alisa

Even though voice assistants have not developed into sophisticated interlocutors, they perform their functions excellently, assisting users with a variety of tasks.

Search engines

Every day, Google processes more than 3.5 billion searches. Several advanced NLP models are used to process these queries, including the most famous one, BERT (Bidirectional Encoder Representations from Transformers).

Here is an example of how BERT improves query understanding. When a user queried “2019 brazil traveler to the USA needs a visa,” the intent was unclear to the computer: it could be a Brazilian citizen trying to get a US visa, or an American trying to get a Brazilian one. Previously, computers returned results based on keywords alone. By contrast, BERT takes every word in the sentence into account; in this exact context, “to” signals the destination.
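You can get a feel for this bidirectional behavior with the public Hugging Face transformers library and the open bert-base-uncased checkpoint (not Google’s production search stack). The example sentences below are invented; the point is that changing the words around the mask changes the prediction:

```python
# Masked-word prediction with BERT: the model reads the words on BOTH sides
# of [MASK], so the surrounding context steers its guess.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in [
    "I deposited the money at the [MASK].",
    "We had a picnic on the [MASK] of the river.",
]:
    best = fill(sentence)[0]  # highest-scoring candidate for the masked slot
    print(f"{sentence} -> {best['token_str']} ({best['score']:.2f})")
```

An older bag-of-keywords system treats each word in isolation and has no mechanism for resolving the masked word from its context this way.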

Text correction

As mentioned, Web 2.0 allowed users to create text on the Web, so there was high demand for text correction. Previously, users relied on the built-in checker in software like Office 365, which pointed out mistakes. However, the mistake-detection technology in that software was quite limited.

The new generation of AI-driven software, such as Grammarly, uses NLP to help correct errors and to suggest ways of simplifying complex writing or completing and clarifying sentences. These tools are more advanced and precise than the error-detection technologies of the previous generation.
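For contrast, here is roughly the older technique: look each word up in a dictionary and, on a miss, suggest the nearest entry by string similarity. This sketch uses Python’s standard difflib module, and the tiny word list is invented for illustration:

```python
# Dictionary lookup plus closest-match suggestion: the classic pre-AI
# approach to spellchecking. No grammar, style, or context awareness.
from difflib import get_close_matches

DICTIONARY = ["grammar", "suggestion", "sentence", "language", "correct"]

def correct(word: str) -> str:
    """Return the closest dictionary word, or the word itself if none is close."""
    matches = get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.7)
    return matches[0] if matches else word

print(correct("gramar"))    # -> grammar
print(correct("sentense"))  # -> sentence
```

Because it only compares character strings, this approach cannot tell “their” from “there” or flag an awkward sentence, which is exactly where NLP-based tools earn their keep.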

Chatbots

Today, many websites offer a chatbot assistant on their pages. As with voice-controlled assistants, they are not fully advanced and in many cases rely on simple keyword search and decision-tree logic. Still, they help users to a certain extent and ease the workload of support teams.
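A toy version of that keyword-plus-decision-tree pattern might look like the following; the intents and replies are invented for illustration:

```python
# Minimal support chatbot: match keywords, fall back to a human agent.
RULES = [
    ({"refund", "money back"}, "I can help with refunds. Do you have your order number?"),
    ({"shipping", "delivery"}, "Orders usually ship within 2-3 business days."),
    ({"password", "login"}, "You can reset your password from the sign-in page."),
]

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in RULES:
        if any(keyword in text for keyword in keywords):  # plain substring match, no real NLP
            return answer
    return "Let me connect you with a human agent."  # decision-tree fallback

print(reply("How do I get my money back?"))
# -> I can help with refunds. Do you have your order number?
```

Every unanticipated phrasing falls through to the human fallback, which is why these bots feel helpful for routine questions and useless for anything else.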

It may seem that NLP has not developed as far as it could have, and it still has a range of serious limitations. Hence, in Web 3.0, we will have to deal with all the flaws NLP has today.

NLP in Web 3.0

There is no doubt that NLP will be a critical and foundational component of Web 3.0. AI-driven speech recognition and analytics technology can be used in many fields: enabling voice-based user interactions, command-driven hands-free navigation of virtual worlds, connection and interaction with virtual AI entities, and many others. Below are examples of how NLP technologies will be used in Web 3.0 and the objectives we need to achieve in this new Internet era.

Voice operation

When we talk about navigating the metaverse, we might think of handheld controllers, gestures, eye-tracking, or voice control. Voice operation in particular would bring the user experience to a new level. Meanwhile, NLP technologies would help generate audio responses with linguistic nuance and voice modulation.

Speaking of voice operation in the metaverse, it is worth mentioning the development of voice control in games. Representatives of this in-game technology include Seaman, Mass Effect 3, Bot Colony, Nevermind, and others. Their NLP was not that precise, but they can be considered the inspiration for voice control in the metaverse.

Voice operation is already in development at Meta. The company launched a Voice SDK that allows VR developers to build voice-controlled experiences in their virtual environments.

Virtual assistance

In Web 3.0, users will expect assistance with far more daily tasks than in Web 2.0. Hence, virtual assistants will need to be upgraded to a higher level.

Meta has taken on the development of such a voice assistant. The company says its assistant will be present in the metaverse, so it will probably take the form of an avatar. Moreover, Meta wants its assistant to be more sophisticated than existing voice assistants in terms of understanding the context of a request.

“To support true world creation and exploration, we need to advance beyond the current state of the art for smart assistants,” said Zuckerberg.

AI companions

The difference between a virtual assistant and an AI companion is that the former is designed to serve the user, while the AI companion is designed to be a virtual friend who knows your interests, can maintain a dialog, and can give advice.

Web 2.0 offered us various AI chatbots; the new generation goes further. One of the most prominent examples is the app Replika. After creating an avatar of their virtual companion, which can also be viewed in AR, users can start a conversation. Unlike many other AI chatbots, this AI companion is capable of memorizing information about its user: name, date of birth, interests, hobbies, memories, opinions, and more. And with a paid subscription, the AI companion can even be a romantic partner.

Replika, AI companion

Trade and Retail

Developing the technologies behind virtual assistants and AI companions is crucial for Web 3.0: if we manage to master them, it will pave the way to other AI bots that take on various jobs, such as psychologists or consultants.

Trade is one of the areas where AI-driven chatbots will be needed in the role of seller or consultant. Trade in general has great potential in the metaverse, and AI chatbots in this field will streamline the purchasing process and improve the buyer experience.

Translations

In the 2000s, developers tried to bring to life the idea of social media with a built-in translator: users could talk to people around the world in their own languages, and everyone would understand each other. For whatever reason, those social networks didn’t gain popularity. Moreover, translation technology at the time was not well developed and often produced awkward results.

The metaverse will be more than just social media; it will also be a place for business, networking, education, shopping, sports, and so on. There, automatic translation would be very useful. To achieve this, however, we need AI-powered translators that do their job precisely, fast, and flexibly.
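For a taste of where general-purpose machine translation stands today, here is a minimal sketch using the public Hugging Face transformers library with the small open t5-small model (unrelated to Meta’s or Unbabel’s systems); production services use far larger models:

```python
# English-to-German translation with a small public model. Quality is well
# below commercial systems, which is exactly the gap Web 3.0 has to close.
from transformers import pipeline

translate = pipeline("translation_en_to_de", model="t5-small")
result = translate("The meeting starts in ten minutes.")
print(result[0]["translation_text"])
```

Scaling this from one-off sentences to low-latency, always-on translation of live conversation is the hard part of the metaverse use case.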

Meta is already working on such a translator. This year, the company announced that it is developing an AI-powered translator for the languages of its metaverse.

Another prominent company that has achieved notable results in real-time AI-powered translation is Unbabel. Its software combines the speed and efficiency of machine translation with the accuracy and empathy of a global community of native-speaking translators. The company’s merits have been noted by Microsoft, which said:

“We’ve seen CSAT scores jump as much as 10 points, and in one instance, we increased issue resolution by 20 percent.”

These are just a few examples of how NLP technologies could be applied in Web 3.0, provided that NLP advances fast enough. In fact, the development of NLP in the context of the metaverse remains an underrated topic. Nevertheless, that doesn’t change the fact that machines’ ability to truly understand text remains one of the top priorities for AI today.

In a nutshell: this WWDC appeared to be more of a preparation for something bigger. Apple cleaned up and enhanced the software (the OS versions). Of course, faster MacBooks are great, but that’s not typically what one means by innovation from Apple.

Many had expected Apple to finally jump on the VR bandwagon, but no VR/XR hardware was presented this time. Does this mean the metaverse is off the table for Apple?

Anyone who has been observing the IT industry for a while will notice that despite all the development of new technologies, such as VR headsets or gesture recognition via cameras, our user interfaces are still based on the classic desktop-and-mouse concept with horizontal or vertical menus (even on mobile).

Apple was the first company to successfully establish mouse-based user guidance (stolen from Xerox...), replacing cryptic keyboard combinations with simple clicks. The same happened over and over again: with the iMac, the iPod and its touch wheel, the iPhone, and the iPad. First and foremost, Apple stands for making technology intuitively accessible to all.

In the battle for on-person sensors, Apple beat the weird concept of Google Glass with the Apple Watch. Google Glass will be remembered through the “Glasshole” memes, while Apple’s sensor continues to sell better than any other. Apple simply packaged it elegantly in a familiar watch form; it’s not really a watch, after all (that’s still misunderstood today, as I described back in 2015).

Apple creates ecosystems, and it successfully manages, maintains, and expands them. The launch of the iPhone showed this: first, a mobile operating system was established; then apps and music were sold through a curated marketplace.

WWDC 2022

I think Apple is once again preparing for the next big leap. They have just announced several things at WWDC 2022.


In short, Apple is ready for the metaverse. However, Apple is smartly waiting until users become accustomed to the technologies and successful solutions emerge. The race for the next version of the Internet has begun. Apple does not seem to be in the running, but they are likely already in the lead.
