Headlines this week - Nov 16, 2025
A look at how capital is being deployed across future opportunities
1 - Building AI requires an unprecedented capital raise. Even ordinary citizens will find it difficult to avoid exposure to the financial risk
The $5-7 trillion AI build-out will be an “extraordinary” capital markets event. The massive effort to build AI infrastructure is estimated to cost up to $7trn, requiring a “sustained capital markets event”. At this size, it is reasonable to expect that the effort will involve all forms of funding, from private equity and government funds to public debt.
Even conservative investors, like pension funds, are starting to be exposed. Even the most conservative investors, such as pension funds, are already participating in the funding of this “data-centre dream” through complex debt structures. This means “Main Street” is getting increasingly exposed to the “AI bubble”.
Taxpayers might also be on the hook if things go wrong. A WSJ opinion piece warned this week that “you may already be bailing out the AI business”. This risk comes from government loan guarantees and the possibility that AI giants could become “too big to fail,” requiring public bailouts.
As the bond market enters the game, it is also starting to feel the “AI angst”… Investor nervousness about the scale of AI spending is no longer confined to stocks. Bond investors are now also showing “angst” and questioning the creditworthiness of companies committing to massive, long-term CapEx.
On the opposite side, some analysts call for calm, claiming that the “bubble talk is overblown”. Not everyone sees this as a bubble. Some analysts argue the massive spending is a rational and necessary response to a clear, and still growing, gap in AI computing capacity.
Markets have also started to discriminate between the financial health of different AI builders. Investors are now scrutinizing which companies are financially robust. This explains the different reactions to earnings, as markets distinguish between cash-rich companies (like Alphabet) and those with more stretched finances (see below).
Oracle, for example, is under pressure due to its massive, debt-fueled AI bet. Oracle has been “hit hard” in the recent tech sell-off as investors scrutinize its huge AI bet. With less cash flow than its peers, markets are concerned about its ability to generate returns from its heavy investments.
2 - A key debate is whether these investments can be made profitable. OpenAI is being closely monitored on this front
Big Tech’s soaring profits hide the “ugly underside” of their AI partners’ losses. Investors are growing nervous as “pure-play” AI labs like OpenAI and Anthropic are reportedly losing billions. This contrasts sharply with the massive profits of their Big Tech backers, raising questions about the underlying AI business model.
A just-published analysis of OpenAI’s economics puts the sustainability of running leading AI models under scrutiny. New reporting suggests OpenAI’s compute bill may be far higher than previously thought. Leaked internal documents revealed this week by blogger Edward Zitron indicate the company has spent over $12bn on inference on Microsoft Azure since early 2024, with some quarters in which compute costs may have exceeded revenue. An FT analysis of the blog post highlights a widening gap between the cost of running frontier models and what AI labs currently charge users. If even roughly accurate, these figures raise tough questions about the economic sustainability of today’s leading AI systems.
This mismatch is making the entire AI boom look “more and more fragile”. The WSJ notes that the AI boom is looking increasingly “fragile”. The massive gap between the valuation of AI companies and this emerging, grim financial reality is causing significant investor concern.
As AI spending faces scrutiny, Apple’s cautious approach is finding new fans. Interestingly, in this context, Apple’s “restraint” on AI spending is winning over investors. What was recently perceived as a weakness—lagging in the AI race—is now being viewed as a prudent financial strength compared to the “spend-at-all-costs” mania.
3 - Most people see energy availability as a key bottleneck for AI infrastructure projects to succeed. Solving this problem is now a key geo-strategic priority
As demand for compute accelerates, many see an “energy crunch” under way. Generative AI is turning electricity, rather than capital, into the main bottleneck. The US faces grid strain, rising bills and aging coal plants, while China rapidly adds renewables and grid capacity. Without more flexible data centers and a faster clean-energy build-out, US AI leadership could erode.
What is the point of chip restrictions, if energy availability becomes the critical factor to win (or lose) the race? Another FT article this week suggests that US export controls on advanced chips may prove less decisive than hoped, because energy—not compute—is emerging as AI’s true bottleneck. If China can continue rapidly expanding cheap, reliable power, US chip restrictions might slow but not fundamentally halt its AI progress.
As we’ve already seen here, the need to address the energy problem is catalyzing a “nuclear renaissance”, but this is a long-term solution. Washington’s $80bn push for new nuclear reactors (already discussed in this page) is framed as a solution to AI’s soaring power needs, but large plants and Small Modular Reactors won’t deliver commercial electricity until the 2030s. For now, restarts of existing reactors and natural-gas generation will be needed to fill the gap.
The stakes are high. The nations that win this race will earn the right to “redefine” everything about how the world’s economies work. The consensus now is that the US and China are locked in an “AI Cold War,” with the US leading in top models and chips while China mobilizes state support, cheap energy and massive compute clusters. Both sides (and other nations) see AI dominance as central to economic power, security and norms.
Because of this, initiatives to remove the energy bottleneck are already being compared with the Manhattan Project. Power developer Fermi casts its Texas nuclear-plus-gas data-center project as a new Manhattan Project, linking AI, national security and energy independence. The company touts huge future revenues per gigawatt but currently has no nuclear generation, significant losses and volatile investor expectations.
In parallel, some politicians are worried about the short-term effects of the “energy gap” on consumer energy prices. Bernie Sanders and other Democratic senators are pressuring the White House over rising electricity bills they partly blame on AI data-center build-outs. They argue tech giants should shoulder more costs, warning that fast-tracked projects risk pushing households into energy “bidding wars.”
Finally, others are questioning the energy demand estimates, arguing that many “active” data-center projects are not actually operating, or even expected to operate, so demand projections may be inflated. US data center developers are reportedly filing oversized, duplicative power requests with multiple utilities, creating “phantom” projects that distort demand forecasts. This risks overbuilding grids and raising consumer bills, prompting new tariffs, deposits and stricter rules to weed out speculative developments.
4 – Beyond finance, a second layer of risk is what could go wrong if the AI labs’ plans actually succeed. This week we had news about different versions of that risk:
New cybersecurity risks:
Chinese hackers turn Claude into near-autonomous hacking tool. Anthropic disclosed that China-backed hackers used its Claude model to automate roughly 80–90% of a September campaign against about 30 corporate and government targets, stitching together reconnaissance, exploitation and data theft “at the click of a button” before Anthropic shut the operation down.
AI agents open a new era of cyber-espionage. As a follow-up, Anthropic told the New York Times that Claude’s new “agentic” features let the same hackers run thousands of automated requests per second, completing most of the work of intrusion, and warned this may mark the beginning of AI-orchestrated cyber-espionage at scale.
Usage safety issues:
Safer consumer AI may mean ditching the chatbot. Meanwhile, a Bloomberg column profiles Character.ai and others moving away from open-ended chat. Worried about liability and teen safety, they are banning under-18 chat, clamping down on sexual content and experimenting with button-based, highly constrained interfaces that keep large language models safely in the background.
Existential risks:
How much should we spend to avoid an AI apocalypse? Finally, a Stanford economist applies cost-benefit analysis to existential AI risk and concludes it would be rational to spend at least 1% of global GDP annually on mitigation—hundreds of billions a year—versus the tiny sums currently devoted to alignment, governance and safety research.
5 – Enabled by AI, embryo screening (and editing) keeps gaining momentum, for now mostly among the wealthy:
The tech industry is pushing embryo screening (and embryo editing). The WSJ reports that Silicon Valley–backed startups like Nucleus, Orchid, Herasight (all previously covered here) and Preventive are commercializing polygenic embryo screening for disease risk and preferred traits such as IQ or height. At least one of these companies (Preventive) is also exploring offshore embryo gene editing, raising ethical concerns.
“Designer baby” tech hits mainstream TV debate. A CBS Saturday Morning segment went inside Herasight, asking whether parents should one day pick their kids’ height or IQ. It frames AI-driven genetic scoring as a fast-advancing technology now moving from niche biotech into mainstream culture—and ethical contention.
In any case, it will be key to fix racial bias in the genetics behind these tools. Bloomberg’s Prognosis newsletter notes this week that global genome databases still heavily overrepresent white Europeans, skewing risk prediction for many others. New sequencing efforts in Asia, Africa and beyond aim to diversify the data so embryo screening and precision medicine work more fairly worldwide.
6 – The Amazon / Jeff Bezos ecosystem is delivering on its plan to compete with SpaceX, both on launch and on satellite internet:
Blue Origin nails New Glenn landing in step toward reusability. Jeff Bezos’s Blue Origin successfully landed the booster of its giant New Glenn rocket for the first time, after ferrying NASA payloads to orbit—an important milestone as it tries to become a serious rival to SpaceX’s Falcon 9 and Starship.
First NASA mission puts New Glenn directly in SpaceX’s lane. The WSJ highlights that New Glenn’s latest flight carried two NASA satellites for the ESCAPADE Mars mission and recovered its booster on an ocean barge, underscoring Blue Origin’s ambitions to challenge SpaceX in government and commercial launch contracts.
ESCAPADE shows how “lean” Mars missions could work. The New York Times details how ESCAPADE, two mini-fridge-sized Mars orbiters built on a tight $94m budget, survived multiple cancellations, rocket changes and trajectory redesigns, and might now become a template for cheaper, faster planetary science missions launched on commercial rockets like New Glenn.
Meanwhile, Amazon’s Project Kuiper becomes “Amazon Leo” on the road to market. Amazon has rebranded its satellite broadband effort as Amazon Leo, has already launched more than 150 low-Earth-orbit satellites, and has signed early customers such as JetBlue—positioning Bezos’s broader ecosystem to go head-to-head with SpaceX’s Starlink in global connectivity.
7 – The problem of data ownership is on its way to getting (intimately) personal
Neural data may be the most precious commodity of the century. An FT op-ed argues that as brain-computer interfaces and neuro-technology wearables advance, “neural data” could become the century’s most valuable and vulnerable asset. With few laws protecting mental privacy, UNESCO urges governments to treat brain data as sensitive, tightly governed personal information.
8 – AI is preparing to disrupt the music business (once again…), and the industry is on the defensive
An AI-made country singer tops the charts (even if it feels a bit generic). An FT pop critic dissects “Walk My Walk”, a US country digital chart hit by AI-created artist Breaking Rust. Catchy but generic, it shows how models trained on scraped catalogues recycle styles and raise awkward questions about originality, vocal quality and copyright.
Courts push towards paid licensing for AI training data. Reuters reports a German court found ChatGPT infringed copyright by memorizing and reproducing lyrics from several popular songs, siding with the German music rights society. The ruling orders damages and could set a precedent forcing AI firms to license training data or face lawsuits.
9 – Should businesses run to adopt AI? Depends on who you ask, but this week was an optimistic one…
Early evidence on ROI for enterprise AI projects looks encouraging. An FT piece by an HSBC research analyst highlights early field experiments where GenAI assistants lifted sales by double digits and boosted conversion rates, especially for smaller sellers and novice buyers, with value showing up in very ordinary workflows.
Even AI agents appear to be starting to pay off. The WSJ reports that early adopters such as BNY (a financial services provider) and Walmart are deploying “digital employees” and agentic systems that scan code, source products and speed up fashion cycles by weeks, delivering measurable productivity and capacity gains even though adoption is still in its early days.
10 – The scary scenario is that returns actually come from job reductions
Research says AI will trigger big-company job cuts. A survey of large UK businesses finds about one in four expects to reduce headcount in the next 12 months due to AI, with junior professional, clerical and admin roles most at risk, especially in finance and IT.
IT and HR team up to manage AI-driven disruption. The WSJ reports that companies like Cisco, Indeed, Microsoft and Moody’s are pairing CIOs with HR to redesign jobs around AI “co-workers,” reskill staff and calm fears, even as tech- and service-sector layoffs linked to automation continue.
Investors bet big on human–AI collaboration startups. Bloomberg reveals that Mira Murati’s Thinking Machines is in talks to raise funding at around a $50bn valuation, more than quadruple July’s. Its first product, Tinker, lets researchers and enterprises fine-tune models for their own workflows, embodying the “humans plus AI” thesis investors are suddenly willing to fund at stratospheric prices.