Artificial Intelligence (AI) is all the rage, but four recent developments – privacy violations involving major technology companies (known as "big tech"), the rise of anti-monopolism, the EU's implementation of the General Data Protection Regulation (GDPR), and repeated calls for users to be paid for their personal data – portend regulation of big tech, which may even affect AI development.

 

A lack of privacy protection mechanisms for social media

Cambridge Analytica, which cooperated with Facebook, misused personal information from about 87 million Facebook users and their friends, allegedly influencing the 2016 U.S. presidential election. In his Congressional hearing, Facebook CEO Mark Zuckerberg repeatedly claimed that AI could be used to monitor content. With 2.2 billion monthly users and a rich variety of content in over 100 languages, however, experts contend that such efforts are doomed to failure. Any such guarantee is also a black box: "technology and society cannot be divided and sorted." "The problem is Facebook itself (or its business model)!"

 

First, social media mixes personal information together with public resources. One can no longer hope for the individual privacy protections of the past, short of being completely cut off from the world, because friends, relatives, or even unrelated third parties may still reveal one's information. Cambridge Analytica leaked the personal information of 87 million people, yet only about 270,000 Facebook users participated in its research; simulations have shown that if just 1% of Londoners' phones are hacked, half of the city's residents can be targeted. Experts such as Paul Francis at the Max Planck Institute therefore argue that "group privacy" may have room for development.

 

Some small, technology-minded countries, such as Estonia, simply rely on the state for centralized protection of personal information. This system has its advantages: Estonia's annual tax returns take only five minutes, outperforming even the tax information center of Taiwan's Ministry of Finance. Estonia, however, may be an exception.

 

Some analysts have concluded that if the private sector were to become the custodian of personal information, Google and Facebook would be the most capable players: they have the most willingness, resources, scale, and experience – a development not foreseen by the GDPR's drafters.

 

The limits of GDPR’s control over large companies

Because of this abuse, Facebook is the one being targeted, but Google is in fact bigger. It holds more searchable personal information, models user behavior through algorithms, and sells big data to advertisers, with estimated proceeds of US$20 billion – approximately 20% of its 2017 revenue.

 

The GDPR will reshape the data brokerage industry and its division of profits. Google has collected data on a massive user base (about 1 billion users) for the advertising and publishing industries, adding value by processing this third-party data. Under the GDPR, Google is a "controller" – the role with the greatest authority over, and responsibility for, personal data.

 

Google has been adapting for almost a year. 80% of its advertising revenue comes from search, which involves only simple keywords or products – low-sensitivity data – so it believes the GDPR's impact will be limited. It is instead the smaller internet services, such as ad tech companies, that are more affected. The new law hobbles small players but is good for Google.

 

Privacy protection is only one supervisory measure. Monopoly regulation is another. Christine Lagarde, Managing Director of the International Monetary Fund, suggested controls on big tech to prevent them from abusing their market position, stifling innovation, and reducing productivity.

 

In May, the Australian Competition and Consumer Commission fired its first major shot at big tech: Google and Facebook were named as a duopoly, and investigations began into whether they had "taken advantage of their market position to infringe upon consumers and counterparties such as content creators and advertisers." To date, over 60 publishers, brand owners, and related associations, led by News Corp, have submitted written opinions.

 

Academia deepens its doubts over big tech’s monopoly

Academic consensus is also shifting: the (American) "Chicago school" has always emphasized free markets and discounted government interference (including antitrust enforcement), yet in recent years it has expressed growing concern about big tech's monopoly power. The annual competition-law conference organized by the University of Chicago's business school in April focused on platforms, and many participants favored strengthening supervision. A 40-page analysis presented there was striking: simply put, it asserted that the market position of digital platforms in this century resembles that of the early railroad industry. Apple and Google hold a combined 99% share of the global smartphone operating system market. Google and Facebook together hold 73% of American digital advertising and account for 83% of its growth. 45% of online shopping starts from Amazon, which last year staked out a 40% share of American online retail sales. In a large survey of 900,000 internet users last year, two-thirds of web page visits came through Google and 30% through Facebook. Over the last five years, these companies have acquired 329 of their peers, eroding the room for new independent ventures to grow.

 

Scholars call this phenomenon "E-distortion." Beyond competition-law concerns, big tech also creates economic, social, cultural, and governance challenges, such as infringement of personal information, control of data, the "attention as currency" effect, lack of transparency, and trading of political influence.

 

Financial regulators have also begun studying big tech closely, focusing mainly on outsourcing (such as cloud services) and data exchange, fearing that overreliance on such services will leave them without operational flexibility – though their plans for future regulation remain vague. As for the Facebook CEO, who seemed to have earned a passing grade at his Congressional hearing, an American financial industry insider who wished to remain unnamed said that "[he doesn't] understand that the storm is still coming."

 

Should platforms pay users for their information?

After Facebook's privacy incident and the implementation of the GDPR, calls for platforms to pay consumers have grown louder. Big tech, after all, is weaned on personal data from all of us.

 

Jaron Lanier, a virtual reality pioneer and author of the 2013 bestseller Who Owns the Future?, has suggested that the industry should pay consumers. This year, a new American book – Radical Markets: Uprooting Capitalism and Democracy for a Just Society – drew on property rights theory to contend that personal information should be treated like the fruits of one's labor, subject to ownership and remuneration, in order to promote information quality, healthy competition, and consumer choice. Both authors are big names: Eric Posner is a professor at the University of Chicago Law School specializing in economic analysis of law, while E. Glen Weyl is a Principal Researcher at Microsoft.

 

They argue that paying users is not only a matter of property rights, but also of fairness. We worry about AI replacing labor, but if big tech grows rich quickly on free data, will it be possible to redistribute the wealth? Posner and Weyl note that in the tech sector, employees capture only 5%-15% of a company's economic value, while at companies like Walmart that share may reach 80%.

 

If this disproportion persists, labor's share of corporate turnover – currently around 70% – could fall to 20%-30%, further worsening the distribution of wealth. Based on the existing division of profits, the authors estimate that if big tech makes up 10% of the economy and pays out 2/3 of its profits to users, the annual payout to an ordinary family of four could reach US$20,000 – not a small amount!
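The back-of-the-envelope arithmetic behind that estimate can be sketched as follows. The 10% share and 2/3 payout ratio come from the passage; the economy size and population below are illustrative assumptions of mine (roughly a US-sized economy), not figures from the book, so the result lands only in the same ballpark as the US$20,000 cited:

```python
# Hypothetical inputs (illustrative assumptions, not figures from the book):
gdp = 20e12            # size of a US-scale economy, in US$
population = 325e6     # its population
big_tech_share = 0.10  # big tech as 10% of the economy (from the passage)
payout_ratio = 2 / 3   # share of big tech profits paid to users (from the passage)

per_person = gdp * big_tech_share * payout_ratio / population
family_of_four = 4 * per_person
print(round(family_of_four))  # in the tens of thousands of US$, per family, per year
```

Note that the passage treats big tech's slice of the economy as if it were all distributable profit; with more conservative profit figures the payout would shrink accordingly.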

 

Is free use of a big tech platform in exchange for one's data a deal with the devil? Some entrepreneurs, such as Australia's Unlockd, believe there's no such thing as a free lunch and have implemented a simple but revolutionary measure: each time consumers click on an advertisement through the app, they are remunerated for their data. The app has attracted 330,000 users so far. Recently, the company filed a complaint in the U.K. alleging that Google had threatened to drive it out of its ecosystem (Google says Unlockd breached a contract). Because this case involves privacy, payment for data, competition law, and even the big tech business model as a whole – along with "small tech" countermeasures – the markets are watching closely.

 

Standing at the frontier of the internet, Taiwan is placing its hopes on homegrown big tech or unicorns. That is easier said than done. But Taiwan has always been good at competing against much larger players. By grasping the trends described above and pulling off strategic upsets like Unlockd's, it can carve into the AI service industry rather than habitually laboring over OEM work in the "red ocean" of the Internet of Things.

 

Big Tech, FAANG, and BAT

Big Tech refers broadly to world-class, large-cap technology businesses. They typically use platforms as their business model and operate in several markets simultaneously: on the user side, accounts are usually free; on the advertising side, they collect and algorithmically analyze big data in order to sell targeted advertising to customers.

 

FAANG: Refers to the dominant American platforms Facebook, Apple, Amazon, Netflix, and Google.

BAT: Refers to the Chinese giants Baidu, Alibaba, and Tencent.

 

What is the GDPR?

The personal data protection laws in both Taiwan and the EU are modeled after Germany's 1977 law. Protection of privacy is based on explicit written consent (opt-in), rather than American-style non-objection (opt-out).

 

At that time, the internet was just beginning to burgeon, and the focus was on protecting "personal" privacy. The GDPR, implemented at the end of this May, is data protection legislation for the platform era: it is even stronger, but it also faces larger challenges. Companies violating the GDPR may be fined up to 4% of their global sales or €20 million, whichever is higher.

 

The GDPR presumes a high degree of rationality: given sufficient information and incentives, users will know how to allow or block others from using their data. This may be too idealistic. Designed to protect EU citizens, the GDPR has extraterritorial force, and because it is stronger than American law, it is likely to become a model for similar laws around the world.

 

The Facebook CEO promised during his Congressional hearing to strengthen user protections based on the GDPR, pleasing the representatives, even though the GDPR is not an American concept at all!

 
