Open Banking? Open Everything!

The way to deal with the power of BigTech is not to break them up, but to open them up.

Dateline Woking, 11th September 2021.

In their fascinating paper on “The Data Economy: Market Size and Global Trade” for the Economic Statistics Centre of Excellence (part of the UK's National Institute of Economic and Social Research), Diane Coyle and Wendy Li talk about the growing data gap between global Big Tech and potential competitors, disruptors and innovators.

(Diane co-directs the Bennett Institute for Public Policy at the University of Cambridge. She has held a number of public service roles including Vice Chair of the BBC Trust, member of the Competition Commission, the Migration Advisory Committee and the Natural Capital Committee. She was awarded a CBE in the 2018 New Year Honours List. This was, surprisingly, for her contribution to the public understanding of economics and not, as most people would imagine, for persuading me to write “Identity is the New Money”.)

Diane and Wendy argue (convincingly) that this data gap is a barrier to entry that affects not only businesses but also aggregate innovation, investment and trade:

Large data holdings, rich in volume and variety, thus give large online platforms a significant competitive advantage, powered by network effects and the virtuous cycle between data and the AI algorithms improving the services and increasing revenues.

This advantage means that platforms obtain insights about adjacent sectors and can then enter them more easily. Potential new competitors without access to that data and the advantages it confers therefore struggle to enter new markets. This has been obvious for some time. Indeed, the EU's “A Europe fit for the Digital Age” initiative launched in 2020 made a central observation that the market power provided by the data advantage allows a handful of large players to unfairly leverage into new markets.

That all sounds qualitatively correct and I’m sure that none of you would disagree with the thrust of the argument. But how big is the problem? It is hard to obtain quantitative evidence because most data flows out of the view of statisticians. Statistics Canada tried to estimate the value of the country's data in 2019 and came up with somewhere between C$160-220 billion. For comparison, that would make the value of all the data in America something in the region of $1.4-2 trillion (which would be nearly 5% of America's stock of private physical capital).

That's a pretty significant sum. But, as The Economist notes, while the data economy is clearly large, a robust measurement has yet to be developed. Diane and Wendy propose a new and consistent impact-based approach to estimate the size of the overall data market in a sector by comparing the value of data with and without the entry of an online platform. They use the impact of AirBnB on the hospitality sector as an example, and calculate the market size for data in the global hospitality sector as $43 billion in 2018, growing on average at the rate of one-third per annum. But as their calculations show, the benefits of this growing market are not distributed evenly.
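It is worth pausing on what one-third per annum actually implies. As a quick sanity check (this is my illustrative arithmetic, not figures from the paper), compounding the 2018 hospitality figure forward shows the data market roughly doubling every two and a half years:

```python
def project(value_bn: float, years: int, rate: float = 1 / 3) -> float:
    """Compound a market-size estimate forward at a fixed annual growth rate."""
    return value_bn * (1 + rate) ** years

# The paper's hospitality figure: $43bn in 2018, growing ~one-third per year.
print(round(project(43, 3), 1))   # 2021 estimate in USD billions → 101.9
```

At that pace the data market in a single sector more than doubles in three years, which is rather the point about incumbents being left behind.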

That rate of growth and its distribution would seem to confirm that the online platforms' disruption of incumbents' firm-specific knowledge is fast and significant: they grow the data market and they own it. This may be why a number of online platforms are adopting the super-app approach pioneered in China, because the ability to combine data from multiple sources is even more advantageous than we might think, meaning that the platforms' impact may be “accelerated or multiplicative”.

Super-apps are yet another confirmation that data isn't the new oil. It doesn't get used up as it is refined. Instead, the use of data produces even more data, and the more this reservoir of data grows, the more it feeds innovations and generates positive externalities.


Bits and Borders

This methodological analysis gets even more interesting when applied at the aggregate level, as it shows that there is one developed country that is both a net importer of data and a net exporter of digital goods: yes, as you would guess, it is the United States. America's online platforms collect data from around the world to feed centralised digital production in the US. (China is also a net data importer and digital goods exporter although more of its digital consumers are domestic.)

(The UK, in comparison, has no dominant local platforms. We are net data exporters with a sound digital infrastructure but a disadvantage in data trade. Given that data is what we need to produce added-value digital goods, it’s a pretty serious disadvantage.)

Many countries are revising their data policies and localisation rules in a push for “digital sovereignty”. This is a useful catch-all description of the many ways that governments try to assert more control over their digital infrastructure. It has long been a concern in supply chains, affecting the kinds of hardware and software available in a given market. Now, of course, this form of resurgent nationalism is fragmenting the cloud and damaging data trade. Governments around the world are passing measures that require companies to host infrastructure and store certain kinds of data in local servers.

(Some countries also require companies that operate within their borders to provide the government with access to data and code stored in the cloud.)

These policies are not currently based on economic calculation, which is why the measurement of data markets should help policymakers understand the dynamics and contribute to sane localisation policies. But a barrier to understanding the economic impact is that data imports and exports are not directly related to the creation and distribution of the value of data: data flows do not reflect the value derived from data.

(In the paper Diane and Wendy illustrate this point using the example of Taiwan. Google has two data centres in Taiwan to support its operation in Asia. This means that there are large cross-border data flows between Taiwan and other countries, but Taiwan is unlikely to receive most of the benefits.)

Free Trade

Oliver Dowden, the UK’s Minister for Culture and such like, got a lot of criticism this week when the British government set out its plans for data in the post-Brexit world. The government wants to stimulate trade and innovation by “reducing unnecessary barriers and burdens on international data transfers” but, as many commentators noted, the plans to “fix GDPR” (which Wired called “extremely high-level and vague”) could put the UK on a collision course with the European Union.

The popular press variously reported the Minister’s comments as being about ending annoying cookie popups and/or handing Britons’ personal data over to sinister multi-national technology fiends. However, as Diane Coyle and Wendy Li note in their paper, the core point holds: GDPR restricts firms from repurposing data beyond its original intended use without re-obtaining consent from individuals (to safeguard privacy), which limits data sharing among firms.

No pictures, please respect my privacy.

I am as sensitive as many others (eg, my cat, as shown here) about individual privacy, while at the same time agreeing with Elizabeth Denham, the current Information Commissioner, who supported the government’s view, saying that "data driven innovation stands to bring enormous benefits to the UK economy”.

(It’s not all about regulation, of course. Many companies are reluctant to share data with others, either because of market position or perhaps because they do not understand the value of the data.)

Blocking data flows is a sort of metaverse mercantilism that benefits no-one. The European Council on Foreign Relations (ECFR, a prominent think tank) published a call for action on “Defending Europe's Economic Sovereignty” last year in which it called for the EU (and the UK) not to put up the shutters but to agree data free-flow with the US while prohibiting forced sensitive data transfers and introducing a dispute settlement mechanism.

A Third Way

This is where there is a new approach that fintechs can build on. We (society) do not want data locked up in hoards, and nor do we want personal data flowing through every nook and cranny of the Net. What we need to do is develop new data holding and sharing structures that can provide the facts about data that are needed to make decisions without disclosing the data itself, and so protect privacy. This is the difference between providing a date of birth and providing proof that you are over 18 or 21 or whatever to get into a bar, or providing proof that a company’s assets exceed its liabilities without disclosing what any of those assets or liabilities actually are.

Fortunately, we already know how to add up encrypted “2” plus encrypted “2” to make encrypted “4”: it’s called homomorphic encryption. And it’s only one of an array of (tried and tested) cryptographic techniques that can deliver the degree of security and privacy that we need to move forward. With this, and cryptographic blinding and zero-knowledge proofs and so on, we should be able to square the circle of more data sharing and more privacy.

Data sharing will enable a greater degree of entry and competition, driving more innovations. Diane and Wendy conclude that an open data-sharing ecosystem will increase productivity and therefore economic wellbeing. From my inexpert perspective, I could not agree more. We have entered an era of data hoarding where companies are now storing any and all of the data that they can get their hands on just in case it will be worth something in the future. But as it sits in these hoards, that data is not working to the greater good.

This, to my mind, is yet more support for the idea of taking on the data misers and forcing them to share data to the benefit of competition and the economy as a whole. We already know what to do! Open banking was a good place to begin the attack on data hoarding by incumbents and we are learning a lot as the paradigm spreads. Open finance is an obvious next step. But regulators should set themselves the firm goal of open data in all sectors (except possibly national security) and start working towards it. The fintech sector's demands should be maximal: open everything!

Big Regulation Coming For BigTech 

This approach (ie, open everything) provides a practical manifesto for dealing with Big Tech. In America, the House Judiciary Committee's antitrust panel carried out a 16-month investigation into Amazon, Apple, Google and Facebook, resulting in a finding that Big Tech has "monopoly power" in key business segments and has “abused" its dominance in the marketplace.

So, what is to be done? I had the honour of chairing Professor Scott Galloway, author of “The Four”, an excellent book about the power of internet giants (specifically Google, Apple, Facebook and Amazon - hence the title), at a conference in Washington a while back. He set out a convincing case for regulatory intervention to manage the power of these platform businesses. Just as the US government had to step in with anti-trust legislation in the late 19th century and deal with AT&T in the late 20th century, so Professor Galloway argues that it will have to step in again, and for the same reason: to save capitalism.

Professor Galloway argues that the way to do this is to break up the internet giants. Should Congress go down this route? Well, one of the panel’s own members, Ken Buck (Republican), while agreeing with the diagnosis, said that the Democratic-led panel’s proposal to force platform companies to separate their lines of business (ie, break them up) is not the right way forward. I agree. Forcing Amazon to spin out Amazon Web Services (to use an obvious and much-discussed example) won’t make any difference to Amazon’s role in the online commerce world.

Google is not U.S. Steel. While I do not think that data is the new West Texas Intermediate, and Facebook is not the new Standard Oil, the idea of focusing regulation on the refining and distribution of an economy's crucial resource has logic to it. We need this to protect competition in the always-on world of today and, as Angela Chen explained in MIT Technology Review, there are plenty of alternatives to breaking up technology companies. Perhaps the most fruitful way forward is an approach based on a future capitalist framework along the lines of what Viktor Mayer-Schönberger and Thomas Ramge called in Foreign Affairs a “progressive data sharing mandate”.

There are many informed observers who say that America should look at what is going on in Europe in order to formulate this kind of approach. Writing in the “Washington Bytes” column in Forbes last year, Robert Seamans highlighted data portability as a potentially valuable approach and pointed to the UK’s open banking regulation as a source of ideas. I think this makes a lot of sense, and that a good way to explore what some form of data-centric remedy might look like is indeed to take a look at Europe's open banking regime. More specifically, start with what it got wrong, because in that mistake are the seeds of a solution.


The Open Way

Back in 2016, when I wrote about the regulators demanding that banks open up their APIs to give access to customer data, I asked: “if this argument applies to banks, that they are required to open up their APIs because they have a special responsibility to society, then why shouldn’t this principle also apply to Facebook?”. My point was, I thought, rather obvious. If regulators think that banks' hoarding of customers’ data gives them an unfair advantage in the marketplace and undermines competition, then why isn’t that true for Big Tech?

When I went on to say that the regulators were giving Big Tech a boost in “Wired World in 2018”, no-one paid any attention because I’m just some tech guy. But when Ana Botin (Executive Chairman of Santander) began talking about the lack of any reciprocal requirement for those giants to open up their customer data to the banks, regulators, law makers and policy wonks began to sit up and take notice. She suggested that organisations holding the accounts of more than (for example) 50,000 people ought to be subject to some regulation requiring them to give API access to consumer data. Not only banks, but everyone else, should provide open APIs for access to customer data with the customer’s permission.

This is along the lines of what is being implemented in Australia, where open banking is part of a wider approach to consumer data rights and there will indeed be a form of symmetry, imposed by rules that prevent organisations from taking banking data without sharing their own data. The Australian Competition and Consumer Commission (ACCC) has already had enquiries from international technology companies wanting to participate in open banking. The banks and many others want this method of opening up to be extended beyond what are known as the “designated” sectors, currently banking and utilities. Then, if a social media company (for example) wants access to Australians’ banking data, it must become an “accredited data recipient”, which means in turn that it must make its own data available (in a format determined by a Consumer Data Standards Body). This approach would not stop Facebook and Google and the others from storing my data, but it would stop them from hoarding it to the exclusion of competitors.

As Jeni Tennison set out for the UK’s Open Data Institute, such a framework would allow “data portability to encourage and facilitate competition at a layer above these data stewards, amongst the applications that provide direct value to people”, just as the regulators hope customer-focused fintechs will do using the resource of data from the banks. The Chinese government thinks the same way. Ant, for example, is already being prodded by the authorities to open up its hoard of personal financial data to both state-owned companies and smaller rivals. While no specific rules for fintech have been issued at this point, observers expect them soon.

At last year’s SIBOS, the CEO of ING, Steven Van Rijswijk, reiterated the need for reciprocity, saying that he wanted the regulators to come up with an equivalent for banks so that "the data flow can go two ways”. Well, this may be on the horizon. As the Financial Times observed, an early draft of the EU’s new Digital Services Act shows it wants to force Big Tech companies to share their “huge troves” of customer data with competitors. The EU says that Amazon, Google, Facebook and others “shall not use data collected on the platform . . . for their own commercial activities . . . unless they make it accessible to business users active in the same commercial activities”.

It seems to me that regulators might adopt the open banking, API-based approach across developed and developing economies to kill two birds with one stone: requiring both Big Banking and Big Tech to provide API access to customers’ data would both open up their data hoards and stimulate competition. Why shouldn’t my bank be able to use my LinkedIn graph as input to a credit decision? Why shouldn’t my Novi wallet be able to access my bank account? Why shouldn’t my IMDB app be able to access my Netflix, Prime and Apple TV services (it would be great to have a single app to view all of my streaming services together)?
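To make the symmetry concrete, here is a minimal sketch of the kind of consent-scoped access that such a regime implies. Everything here (the class names, scopes and example records) is hypothetical and of my own invention; real open banking APIs are OAuth-based HTTP services with formal consent objects, but the core idea is the same: data leaves the hoard only under an explicit, scoped customer consent, and the same interface fronts a platform as fronts a bank.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    customer: str
    grantee: str          # who may read the data, e.g. a bank or a wallet
    scopes: set[str]      # which data the customer has agreed to share

@dataclass
class DataHolder:
    """Any large data holder -- bank or platform -- behind the same API shape."""
    name: str
    records: dict[str, dict[str, object]] = field(default_factory=dict)
    consents: list[Consent] = field(default_factory=list)

    def grant(self, consent: Consent) -> None:
        self.consents.append(consent)

    def read(self, customer: str, grantee: str, scope: str):
        # Data is released only against an explicit, scoped customer consent.
        if any(c.customer == customer and c.grantee == grantee and scope in c.scopes
               for c in self.consents):
            return self.records.get(customer, {}).get(scope)
        raise PermissionError(f"{grantee} has no '{scope}' consent from {customer}")

# Symmetry: the same interface could front a social platform and a bank.
linkedin = DataHolder("LinkedIn", {"dave": {"connections": 500}})
linkedin.grant(Consent("dave", "MyBank", {"connections"}))
print(linkedin.read("dave", "MyBank", "connections"))   # → 500
```

The point of the sketch is the shape, not the plumbing: once every designated data holder exposes the same consent-gated read interface, the asymmetry between Big Banking and Big Tech disappears.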

This symmetric data exchange can lead to a creative rebalancing of the relationship between the sectors and make it easier for new competitors to emerge. Instead of turning back to the 19th and 20th century anti-trust remedies against monopolies in railroads and steel and telecoms, perhaps open banking adumbrates a model for the 21st century anti-trust remedy against oligopolies in data, relationships and reputation.
