Headlines this week - Nov 9, 2025
A look at how capital is being deployed across future opportunities
This week in the future:
1 - Companies are (rapidly) starting to invest in AI, but returns will take time (and a deeper transformation)
Economists are more pessimistic than technologists about the speed of AI adoption. Economists argue that, like electricity, AI’s productivity gains will be slow. Real returns will require time-consuming and complex “complementary investments” in new business processes, which technologists often overlook.
This uncertainty is making investors skeptical of corporate AI spending. This disconnect between hype and reality is making investors nervous. For example, Rightmove’s (the leading UK real-estate consumer app) shares tumbled 10% after it announced new AI spending, signaling investor fears that these projects are costly and have an uncertain payoff.
AI labs are now hiring “domain experts” to help companies adopt the tech. AI labs like OpenAI and Anthropic are implicitly recognizing these adoption barriers. They are aggressively hiring “forward-deployed engineers“—specialists who can code and talk to customers—to embed within companies and customize models for real-world use.
The technology’s immaturity is a key reason for these low initial impacts. A recent Microsoft experiment highlights how far AI agents are from replacing people. When tested in a fake e-commerce marketplace, the AI agents failed at simple tasks, suggesting they are not yet ready for unsupervised, real-world work.
2 - AI is increasingly seen as a complement to workers, rather than a direct substitute
Tim O’Reilly argues AI should be seen as a “tool,” not a “worker”. He challenges the narrative that AI is a “worker,” calling this framing “potentially dangerous”. He argues that treating AI as a tool empowers humans to solve new problems, while treating it as a worker simply aims to replace them.
This aligns with a more human-centric approach to the future of work. Many analysts argue the “future of work is still human powered”. They posit that technology is “value neutral” and can be used for the best and for the worst. So the debate is actually if humans matter in deciding how technology should be used or, alternatively, they are just means to an end, and could actually be replaced by technology
Knowing how to use AI tools is now a mandatory career skill. The imperative to adopt these new tools is growing, with the message from management becoming: “Use AI or You’re Fired“. This signals that failing to integrate AI into one’s workflow is becoming a significant professional risk.
“AI power users” are already gaining a significant advantage. “AI literacy” is creating a clear divide: employees who master these tools are “impressing bosses and leaving co-workers in the dust”. This demonstrates a tangible career advantage for those who become leading adopters.
Analysts remain skeptical that recent layoffs are truly caused by AI substitution. As we already discussed last week, despite recent mass layoffs, many experts believe this is “probably not a sign of the A.I. apocalypse“. Instead, companies may be cutting staff to protect profit margins and reassure investors they are spending “responsibly” on massive AI infrastructure .
3 - Few people question that AI has turned into a financial bubble. But many expect it to be a “good one”, that could foster innovation in energy and computing hardware
Investors are starting to wonder if the massive AI spending has gone too far. A new report from The WSJ highlights growing investor concern about the sheer scale of AI spending. After a year of cheering every capex increase, many are now questioning the path to profitability.
A historical debate is underway on whether bubbles are “good, actually”. A Financial Times analysis of the 19th-century railway mania shows that economic historians are “downbeat” about bubbles. They argue that such manias are rarely a good thing, leading to massive misallocations of capital and significant social costs.
But some investors argue the true value lies in complementary industries. Another view is that the real, long-term beneficiaries of the AI infrastructure boom will be complementary sectors like the energy producers. This “picks and shovels” argument suggests investors should look beyond the data centers to the utilities powering them, and other companies making them possible.
The AI boom could be a “good bubble” that transforms energy and computing hardware. Ben Thompson makes this case directly, comparing the AI boom to the dot-com era. He argues that while the dot-com bubble burst, it left behind the crucial fiber-optic infrastructure that enabled the modern internet; the AI bubble will similarly leave behind massive innovation in computing hardware and in energy production.
4 - Energy is increasingly perceived as a bottleneck for AI progress. Innovation (and industrial policies) are coming to the rescue
Microsoft’s CEO says they don’t have enough energy to deploy all the GPUs in their inventory. Microsoft’s CEO S Nadella stated that power, even more than chips, is the new bottleneck, admitting the company has “a bunch of chips sitting in inventory that I can’t plug in”.
The energy demand explosion is creating opportunities for companies selling alternative power equipment. The AI-driven power rush is creating a boom for smaller, specialized equipment makers that provide alternatives to the traditional grid. Companies selling fuel cells and natural-gas generators are seeing increased demand from data centers needing reliable power.
China is using energy subsidies to catalyze progress in AI. China is offering its tech giants, including ByteDance and Tencent, significant electricity subsidies to boost its domestic AI chip industry. The cheap power is designed to offset the higher running costs of less-efficient homegrown chips.
These subsidies and China’s energy capacity are seen as a key advantage in the AI race. Nvidia’s CEO Jensen Huang has warned that China “will win” the AI race due to its industrial policies and massive capacity for producing energy. He argues these advantages allow China to build AI infrastructure at an enormous scale.
5 - A bottleneck emerging in microchips?
A “compute bottleneck” looms as chip manufacturing hits physical limits. At the WSJ this week, George Gilder, the famous guru from the dot-com era, warns that AI progress may hit a wall due to the combination of two physical limitations. First, the “node scaling plateau,” as it becomes prohibitively expensive to shrink transistors beyond the 2nm target. Second, the “reticle limit,” which caps the maximum size of a single monolithic chip. Startups like Cerebras are already testing new approaches to bypass these issues.
6 - Quantum Computing announcements continue. But a consensus emerges among investors that building commercial machines will take time
Quantinuum (a startup) has unveiled a “fault-tolerant” prototype with 48 logical qubits. A new quantum computer from Quantinuum features 48 logical (error-corrected) qubits derived from 98 physical ones, a 2:1 ratio. This machine will be used to identify and explore the types of problems that future, larger quantum computers will be able to solve. But per se it is not even close to a commercial product
For investors, the field remains highly uncertain and risky. The quantum landscape remains “stomach-churning” for investors, as commercial viability could still be a decade or more away, according to the Heard on the Street column of the WSJ. While the potential payoff is enormous, the key challenge is the timing, making any investment in the field intrinsically risky.
7 - Are our economies increasingly vulnerable to a failure of OpenAI (or Nvidia)?
Concerns are growing that OpenAI is becoming “too big to fail”. The company is becoming deeply embedded in the economy, and a potential failure could now trigger a cascade of disruptions across thousands of businesses.
This week, OpenAI signed a $38bn cloud deal with Amazon. This deal adds Amazon to a growing list of tech giants, including Microsoft and Oracle, whose own success is now increasingly linked to OpenAI’s massive infrastructure deployments.
The company seems unworried, calling for more investor “exuberance”. Far from calling for moderation, OpenAI’s CFO stated that bubble-wary investors are actually underestimating the AI revolution and that the market needs “more AI exuberance“ to fund the necessary transformation.
Meanwhile, Nvidia’s $5trn valuation represents a growing risk. Nvidia’s value now exceeds the entire German stock market, giving it a massive weight in global indexes. Its interconnectedness with other tech giants further concentrates market risk.
Some analysts believe the stock could climb to $8.5trn. Despite its size, some analysts see much more potential. Loop Capital predicts a “golden wave” of AI adoption could push Nvidia’s valuation to $8.5trn.
This AI reliance is also raising “bubble” fears in Asian markets. This exposure is global, with Asian markets’ reliance on the AI boom raising “bubble” fears. As an example, key suppliers like TSMC of SK Hynix depend heavily on continued spending by Nvidia and its partners.
Some investors are starting to hedge this concentrated risk. Deutsche Bank is reportedly exploring ways to hedge its growing exposure to the AI lending boom. The bank is considering using derivatives or shorting Big Tech stocks to protect against a potential downturn.
8 - Can AIs be conscious? The answer to this question has the potential to change how we approach AI safety regulations
AI pioneers, including Geoffrey Hinton, claim human-level general intelligence is already here. Several “godfathers of AI” have stated that current large language models are already demonstrating signs of human-level general intelligence. This contradicts the long-held belief that AGI would still be years (or even decades) away.
Growing signs suggest machines can “think,” implying thought is a mechanical process. A long article this week at the New Yorker argues that while AI learns differently from humans, it is showing clear signs of “thinking”. This supports the counter-intuitive and “discouraging” implication that cognition may be a mechanical process, not something uniquely human.
Microsoft’s AI chief, however, believes only biological beings can be conscious. Mustafa Suleyman, Microsoft’s AI chief, has stated that he does not believe silicon-based machines can ever achieve consciousness. He argues that consciousness is an emergent property exclusive to complex biological life.
But what if the essence of life itself computational? Challenging this view, a book by Google scientist Blaise Agüera y Arcas argues that the distinction between biology and machinery is tiny. The book posits that life is fundamentally computational, suggesting the gap between AI and living beings is not as large as we think.
9 - After electric vehicles, could China also win in autonomous cars?
Reports indicate that Chinese robotaxis are now highly competitive with Western leaders. The ride quality in Chinese robotaxis is reported as “smooth” and “seamless,” posing a direct challenge to Western firms like Waymo. This “boringly normal” performance suggests the technology gap is closing rapidly.
However, investors are concerned about the profitability of these new Chinese brands. The shares of newly listed Chinese robotaxi start-ups, such as Pony.ai and WeRide, have “tanked” in Hong Kong. This reflects deep investor concern over high R&D costs and “uncertain prospects for profitability”.
Meanwhile, Western firms have not given up and are showing a “renewed drive” for autonomy. As the EV market slows, Western tech giants and traditional carmakers are now focusing on the autonomous sector. This signals a significant strategic shift from pure electrification to software-defined, autonomous vehicles.
10 - The emerging cybersecurity threat from AI
Big Tech is racing to solve AI’s “big security flaw”: indirect prompt injection. Tech giants like Google and Microsoft are “stepping up efforts“ to fix this fundamental vulnerability. The flaw allows malicious instructions to be hidden in web pages or documents, which could trick future AI agents into performing harmful actions.