Dateline: Amsterdam, 6th June 2022.
I went to see the new Tom Cruise “Top Gun” movie. It was fun, I enjoyed it and I didn’t fall asleep once. The action sequences were good, but not good enough to make me want to go and see it again or watch it on cable (as compared to, for example, Aliens, which I have watched about 100 times). But it was fun.
The problem with it is that I just do not buy the underlying premise. Given the basic idea of an elderly although still athletic gentleman jousting across the sky with dastardly opponents of an unspecified background, there are three reasons why it didn’t really reel me in: dogfights, dotage and drones.
First of all, there is pretty much no dogfighting at all in modern warfare. The idea of these knights in the sky duelling to the death in honourable but deadly combat is as anachronistic as seeing their mediaeval counterparts charge tank formations. In modern warfare, there are still tanks and there are still planes, but neither are fighting each other.
Secondly, the conflict in Ukraine has already shown us what happens when retired top guns are pressed back into service in their third age: they die. The formerly retired Kanamat Botashev (63) was flying an Su-25 Frogfoot ground attack jet when he was brought down by a missile and killed. Nikolai Markov (also 63), another retired air force colonel, had earlier died when he was shot down over Luhansk. As evidenced by the pictures of their blazing wreckage gleefully displayed by their opponents, the Moscow Mavericks are frankly not all that when they are up against inexhaustible batteries of comparatively inexpensive missiles.
(Don’t comment here about the movie. No spoilers.)
Finally, and most obviously, putting people in fighter planes at all seems like a complete waste of time and money in the age of machines (a point actually made in the movie by Ed Harris, who is playing an elderly air force person of some kind). In DARPA’s AlphaDogfight F-16 trials in 2020, the winning Artificial Intelligence (AI) pilot beat the human USAF pilot in five dogfights out of five. The future of air warfare isn’t a gen-Z Maverick dogfighting with North Korea’s top fighter ace but $100m Tempest fighters (which, as Sébastien Roblin already pointed out in Forbes, might make more sense as unmanned vehicles) trying to evade AI-controlled intelligent drones and machine-learning (ML) swarms of supersonic explosives that can accelerate and turn ten times quicker than any manned aircraft.
(Never mind macho swagger, this will be determined by budget as much as by military tactics. Inexpensive Turkish drones have been observed in Ukraine and elsewhere destroying enemy armour with relish.)
Where DARPA leads, DeFi will surely follow.
Similarly, the future of financial services isn’t Robin Hood Cavaliers versus BlackRock Roundheads on a future but familiar battleground, but robot brains trading instruments so complex that people will simply be unable to comprehend the trading strategies. A few years back, John Cryan (then CEO of Deutsche Bank) said that the bank was going to shift from employing people to act like robots to employing robots to act like people. At the time, the bank announced that it would spend €13bn on investments in infrastructure that were "already making some humans at Deutsche unnecessary".
It is not surprising to see this change happening so quickly, because there are many jobs in banks that are far simpler to automate than flying fast ground attack jets to establish air superiority over a contested battlefield. I’m slightly surprised that there are still human traders at all, given their ability to make stupid mistakes: AIs don’t have fat fingers.
There’s a way to go in practice though. The Financial Brand reported on research from MIT Sloan Management Review and the Boston Consulting Group showing that only one in ten companies that deploy AI actually obtain a significant return on their investment. This is, as I understand it, because while bots are good at learning from people, people are not yet good at learning from bots. A robot bank clerk is like a robot fighter pilot: an artificial intelligence dropped into an environment designed for humans. When organisations are redesigned around the bots, then the ROI will accelerate.
The robots will take over, in banking just as in manufacturing. So will you be served by a machine when you go to the bank five years from now? Of course not. That would be ridiculous. For one thing, you won’t be going to a bank five years from now under any circumstances, and that’s true whether in the meatverse or metaverse. You’ll be explaining “going to” a bank to your baffled offspring just as you were explaining “dialling” a phone to them a few years ago.
(As I pointed out in Wired a couple of years ago, the big change in financial services will come not when banks are using AI, but when customers are.)
Decision Support
I cannot wait for AI to take over my financial life. Under current regulations, my bank is required to ask me to make decisions about investments while I am the least qualified entity in the loop. The bank knows more than I do, my financial advisor knows more than I do, the pension fund knows more than I do, the tax authorities know more than I do. Asking me to make a decision in these circumstances seems crazy. Much better for me to choose an approved and regulated bot to take care of this kind of thing. And if you are concerned that there may be legal issues around delegating these kinds of decisions to a bot, take a look at Ryan Abbott's argument in MIT Technology Review that there should be a principle of AI legal neutrality asserting that the law should tend not to discriminate between AI and human behaviour. Sooner or later we will come to regard letting people make decisions about their financial health as being as dumb as letting them drive themselves around once bots are much safer drivers.
I'm actually doing a little experiment of my own in this space at the moment, using an artificial intelligence machine learning (and probably quantum-resistant cloud-based big data-enabled) trading bot to manipulate my meagre holdings. So far so good: with this robot brain taking care of business, my crypto-fortune has soared by (*checks notes*) I mean collapsed by about half at the time of writing.
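Real trading bots (and, for that matter, the fake ones we will meet in a moment) rarely publish their logic, but the basic shape of a rule-based strategy is simple enough to sketch. Here is a minimal, purely illustrative moving-average crossover bot; the function names, windows and thresholds are my own invention, not those of any actual product, and anything this simple would of course be eaten alive in a real market:

```python
# A minimal sketch of a rule-based trading bot: the classic
# moving-average crossover. Illustrative only, not investment advice.

def moving_average(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short_window=5, long_window=20):
    """Return 'buy', 'sell' or 'hold' given a price history.

    Buy when the short-term average rises above the long-term one
    (momentum turning up); sell on the opposite crossing.
    """
    if len(prices) < long_window:
        return "hold"  # not enough history to decide yet
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "buy"
    if short_ma < long_ma:
        return "sell"
    return "hold"

# A steadily rising price series produces a buy signal.
history = [100 + i for i in range(30)]
print(signal(history))  # buy
```

The point of the sketch is how little is actually required to call something a "trading bot", which is precisely why the scams below found it so easy to claim they had one.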
(Obviously before I committed my coins to this cyber trader I took the precaution of getting to know the people behind the enterprise, firstly by knowing one of them for a decade and secondly going to stay with one of the others for the weekend.)
Not everyone is able to take these simple precautions though. Take a look at what's been going on in South Africa where the now-infamous Mirror Trading International (MTI) persuaded tens of thousands of investors that they had a sophisticated trading bot ready to go to work on their behalf. Ultimately, the company collapsed when the CEO (didn’t see this one coming) suddenly vanished along with the cash. Africrypt, another South African cryptocurrency trading outfit, made similar claims and similarly collapsed when its directors vanished. These scams are not confined to the developing world by the way, since the U.S. Securities and Exchange Commission (SEC) charged BitConnect with defrauding retail investors out of $2 billion in 2017 and 2018 through a scam involving a crypto trading bot said to offer (and here’s a surprise) “a guaranteed return on investment”.
When the ecosystem has evolved and the regulations are in place, the battle for future customers will take place in landscapes across which their bots will roam to negotiate with their counterparts - i.e., other bots at regulated financial institutions - to obtain the best possible product for their “owners”. In this battle, the key question for customers will become a question of which bot they want to work with, not which bank. Consumers will choose bots whose moral and ethical frameworks are congruent with theirs. I might choose the AARP Automaton, you might choose the Buffett Bot or the Megatron Musk. Once customers have chosen their bots, then why would they risk making suboptimal choices around their financial health by interfering in the artificial brain’s decisions?
Imagining the world of the future as super-intelligent robo-employees serving mass-customised credit cards and bank accounts to human customers is missing the point (just as imagining the world of the future as F-16s with robot pilots duelling MiG-29s with robot pilots is) because in the future the customers will be super-intelligent robo-agents too, and they will be buying products that simply don’t exist right now.