Greg Lindsay is Urban Tech Fellow at Cornell Tech and leads ‘The Metaverse Metropolis’, a new initiative exploring the implications of augmented reality at urban scale.
“Technology is the answer…but what was the question?” asked the British architect Cedric Price in 1966, back when IBM mainframes were the state of the art in computation. Fast forward to 2024, when artificial intelligence (AI) — particularly large language models (LLMs) headlined by OpenAI’s Generative Pre-trained Transformer 4 (GPT-4) model — has taken corporate boardrooms by storm, offering answers to the future of nearly everything while eliding questions as to its true potential. Is the current wave of hype a mere flash in the pan, a step-change disruption, or a transformational shift in computing?
The questions AI poses for global mobility professionals and policymakers are similar. Are there painful bottlenecks in relocation and migration that generative AI is uniquely positioned to solve, or is it a solution in search of problems to justify its adoption? If the former, will newfound gains in simplicity and efficiency persist, or will institutions respond in kind with technologically aided obstacles and complexity? AI, after all, doesn’t exist in a political vacuum. While mobility professionals may rejoice at AI’s ability to reduce friction for desirable workers and economic migrants, it might well be deployed to harden borders and suppress the need for migrants altogether. The questions AI ultimately answers may not necessarily be the ones we hoped to ask.
Before speculating on generative AI’s potential uses (and abuses), it’s important to understand how it works — and hence its strengths and weaknesses. Simply put, LLMs such as ChatGPT, Google’s Bard, and Anthropic’s Claude combine vast amounts of data with transformers (the ‘T’ in ChatGPT) trained to weigh inputs and generate output ranging from code to cats and everything in between. The probabilistic nature of transformers is also responsible for LLMs’ inherent ‘hallucination’ problems and has led skeptics to dismiss generative AI as autocomplete on steroids, which is both true and beside the point.
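To make the ‘autocomplete on steroids’ intuition concrete, here is a toy sketch of probabilistic next-token generation. The words and probabilities are invented for illustration; a real transformer learns billions of such weights and conditions on the entire preceding context rather than a single word:

```python
import random

# Toy next-token table: each word maps to possible successors and
# their probabilities. These values are made up for illustration; a
# real LLM learns such weights across tens of thousands of tokens.
NEXT_TOKEN = {
    "visa": [("application", 0.6), ("denied", 0.3), ("lottery", 0.1)],
    "application": [("approved", 0.5), ("pending", 0.4), ("lost", 0.1)],
}

def generate(start, max_steps, seed=0):
    """Sample each next token in proportion to its probability.
    Because sampling is probabilistic, the same prompt can yield
    different, fluent-sounding, and occasionally wrong continuations
    -- the root of the 'hallucination' problem."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_steps):
        choices = NEXT_TOKEN.get(tokens[-1])
        if not choices:  # no learned continuation: stop generating
            break
        words, weights = zip(*choices)
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("visa", 3))
```

Swap the hand-written table for learned weights over a real vocabulary and full contexts, and you have, in caricature, the loop a transformer runs at every step.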
Contrast the doubters’ dismissiveness with Bill Gates, who has described ChatGPT as one of “two demonstrations of technology that struck me as revolutionary,” the other being the graphical user interface (GUI) underpinning Windows. The comparison is instructive — if the GUI replaced the command line with drop-down menus, AI promises to do away with the desktop or search bar altogether. Rather than users tapping, swiping, or pointing and clicking, AI-powered agents will do the hard work of finding, analyzing, and delivering information to them. One example is Paidleave.ai, a free tool that helps New York residents discover, apply for, and receive the state benefits they are eligible for.
In a similar vein, I recently led a team of researchers at Cornell Tech’s Jacobs Urban Tech Hub tasked with creating an AI “nutrition label” that evaluates leading tools according to the needs of the architecture, engineering, and construction industries. Using GPT-4 embedded within Microsoft Bing, we asked the AI to grade itself and its competitors using our criteria. Not only did it award dozens of grades without human intervention, it also supplied 170,000 words’ worth of reasoning, links, and citations justifying its decisions. It would be easy to dwell on the dozens — if not hundreds — of hours we would have otherwise spent compiling such a dossier, but more important was how it empowered us to focus on what questions we wanted to ask, rather than how we would answer them.
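A workflow like ours can be sketched as a simple loop over tools and criteria. Everything here is illustrative: `ask_model` is a stub standing in for whatever LLM API one uses, and the tool names and criteria are placeholders, not our actual rubric:

```python
# Sketch of an AI "grading" loop: one prompt per (tool, criterion)
# pair. ask_model is a stub for a real LLM API call; the tools and
# criteria below are placeholders, not the actual Cornell Tech rubric.

TOOLS = ["Tool A", "Tool B"]
CRITERIA = ["data provenance", "hallucination risk", "licensing clarity"]

def build_prompt(tool, criterion):
    return (
        f"Grade {tool} on '{criterion}' using letter grades A-F. "
        "Justify the grade with reasoning, links, and citations."
    )

def ask_model(prompt):
    # Placeholder: swap in a call to your LLM provider of choice.
    return {"grade": "B", "reasoning": "…"}

def grade_all(tools, criteria):
    """Collect one graded, justified answer per (tool, criterion)."""
    report = {}
    for tool in tools:
        for criterion in criteria:
            report[(tool, criterion)] = ask_model(build_prompt(tool, criterion))
    return report

report = grade_all(TOOLS, CRITERIA)
print(len(report), "grades awarded")
```

The human work shifts to the two lists at the top — deciding which tools and which criteria matter — while the model handles the combinatorial drudgery of answering.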
Seen this way, generative AI’s potential for global mobility professionals such as investment migration experts is profound. Instead of performing repetitive tasks such as filling out forms or battling through screens running on antiquated systems, increasingly sophisticated agents could handle the drudgery, freeing professionals to think more creatively about their roles and what they can offer clients.
All of this comes with at least two big caveats, however. The first: upon closer inspection of GPT-4’s scores for our nutrition label, we sometimes found its judgment to be faulty, albeit logically consistent, underscoring the pitfalls of relying on all-purpose LLMs trained by scraping large swaths of the open web. Our exercise highlighted the need for mobility professionals to carefully curate, secure, and harness industry- and firm-specific data for fine-tuning AI models that make fewer errors and hallucinate less.
Second, just because AI can be employed to streamline and simplify bureaucratic complexity doesn’t mean it can’t (or won’t) be used to manufacture it too. One of the domains where generative AI is already being put productively to work, for instance, is answering requests for proposals. IBM, Twilio, and other companies are building commercial tools that employ methods similar to our little hack at Cornell Tech. One such start-up recently received a request for proposal that its CEO suspected had been generated by ChatGPT. “And guess what?” he told Wired. “We responded with our own AI.” Governments accustomed to relying on bureaucratic moats may well choose to fight AI fire with fire.
Beyond these immediate applications, there are second-order effects of AI and automation poised to transform labor mobility for the worse, particularly for less-skilled workers comprising the majority of migrants. Across the Global North, the governments of Britain, Germany, and Italy — along with a possible second Trump administration — are caught in the bind of being implacably opposed to migrants while simultaneously dependent on them. In this context, if AI is the technological answer, the question may be, “How do we eliminate the need for migrants altogether?”
In 2019, the economist Richard Baldwin — inspired by a previous wave of advances in machine learning — predicted a “globotics upheaval” driven by a combination of growing automation and what he labeled “remote intelligence”, a new phase of outsourcing and offshoring enabled by advances in software and bandwidth. Five years and a pandemic later, evidence for this upheaval is already apparent. Autonomous delivery start-ups such as Serve, Coco, and Kiwibot typically use remote tele-operators to guide their robots through difficult situations. In Kiwibot’s case, deliverybots making their rounds on American university campuses are being driven by workers stationed in Bogotá, Colombia.
Another example is OryLab’s Avatar Robot Café DAWN ver.ß in Tokyo, where the robotic staff is operated remotely by residents of Japan who live with disabilities or illnesses that make it difficult for them to participate in society in person. While pitched as a public good, it’s not difficult to imagine such a system as an opportunity to displace in-person workers. In fact, a recent survey by Baldwin of nearly 10,000 Japanese workers suggests remote intelligence and AI are complements that are fueling each other’s adoption.
The dystopian end point of this trend may be best depicted by the 2008 sci-fi film Sleep Dealer, which imagines a future of permanently closed borders and would-be migrants outfitted with virtual reality and cybernetics to remotely operate factories and everything else. Technology was indeed the answer; it’s just that the question itself is bone-chilling.
The film highlights how AI is just as likely to reinforce sovereignty as it is to disrupt it. The White House and the European Union have wasted relatively little time in attempting to regulate AI, the former through a comprehensive executive order and the latter through the Artificial Intelligence Act, which restricts “unacceptable” and “high-risk” applications.
These regulations have led to calls from some quarters for an internationally recognized “right to compute” as a subset of free speech. A mysterious start-up named Del Complex has already promised to build AI compute clusters in international waters for clients vehemently opposed to regulation. Meanwhile, the UAE has made its open-source sovereign LLM, dubbed “Falcon”, a top national priority, and is positioning itself as an enclave of open-mindedness compared to the US, the European Union, and China. “Watch as people migrate to where their digital rights are respected,” warns Stephen Cobb, an AI ethicist at the Dubai-based start-up Haltia.AI. It is only a matter of time before the ‘right to compute’ (or at least the privilege) is folded into nations’ migration policies or even their investment migration programs.
In that case, technology is once again the answer, only this time the question is how the AI arms race will factor into nations’ plans to corner the global market on talent and investment.