“Algorithms are quietly using more electricity than some small countries—just to learn how to write, draw, and talk to us. You’re scrolling a feed, asking a chatbot, or starting your car, and somewhere, a distant data center lights up. The real question is: who’s steering whom?”
NVIDIA says AI compute demand has been growing about 10× every 18 months, faster than Moore’s Law ever did for chips. That explosive curve is quietly rewriting what “an algorithm” even means. We’re moving from fixed rules someone coded once to systems that reshape themselves as they run, so your car, your phone, even your fridge can all be running tiny versions of GPT‑like brains. Those “tinyML” models on microcontrollers already ship in the billions, making everyday objects less like tools and more like collaborators. At the same time, generative systems that co‑create text, images, and code with you are becoming standard features, not exotic demos. Gartner predicts that by 2025, most new apps will plug into some form of generative AI. In this episode, we’ll explore where that trajectory leads, and how it might change work, regulation, and your own negotiating power with machines.
Think of what’s coming less as “smarter apps” and more as a shift in who gets to decide how technology behaves. Foundation models won’t just live in the cloud; cut‑down versions will sit in cars, appliances, even toys, quietly learning from patterns around you. At the same time, regulators are drafting rules that treat powerful models more like nuclear plants than phone apps—licensed, audited, and watched. And as models start writing code, negotiating prices, or drafting laws, we’ll need to ask: when a system optimises, whose goals is it really serving?
Call it “phase change” computing. We’re crossing from a world where most software was written once and updated occasionally, into one where behaviour is continuously revised in response to streams of interaction. That shift shows up in three places at once: where models run, how they’re trained, and who gets to shape them.
On your devices and in physical spaces, models are getting smaller, faster, and more specialised. A hearing aid that adapts to your favourite café’s noise profile; a car that refines how it recognises *your* driveway over time; a factory sensor that spots a tiny vibration pattern and prevents a breakdown—these are narrow, local learners. They won’t rival giant cloud systems in raw capability, but they’ll be tightly tuned to context, and they’ll act with fewer round‑trips to distant servers. That makes them harder to monitor centrally and easier to miss in policy debates.
At the other end of the spectrum, gigantic systems are becoming “general‑purpose infrastructure.” Instead of training a separate model for every task, governments, startups, and criminal groups alike can fine‑tune the same base model for customer support, propaganda, contract analysis, or malware design. Control shifts from writing rules to choosing training data, guardrails, and incentives. Whoever curates that pipeline effectively sets the norms for millions of downstream uses.
The next frontier is algorithms that negotiate and coordinate with each other. Think of software agents bidding for warehouse space, setting ad prices, or scheduling charging times for electric cars. Their goals might be aligned with yours—or only with the companies that deploy them. The risk isn’t just rogue super‑intelligence; it’s swarms of narrow systems, each “doing its job,” collectively creating congestion, volatility, or exclusion that no one explicitly chose.
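To make that “swarm of narrow agents” idea concrete, here’s a deliberately toy sketch in Python. Every number, agent name, and slot price is invented for illustration; no real fleet or market works exactly this way. The point is only that each agent’s rule is locally sensible, yet together they produce a spike nobody chose.

```python
# Toy sketch: independent agents each pick the cheapest charging slot.
# All prices, times, and agent names are invented for illustration only.

slots = {"07:00": 0.18, "12:00": 0.22, "23:00": 0.11}  # hypothetical price per kWh

def choose_slot(agent_name, prices):
    """Each agent 'does its job': minimise its own charging cost."""
    best = min(prices, key=prices.get)
    print(f"{agent_name} books the {best} slot at {prices[best]:.2f}/kWh")
    return best

bookings = [choose_slot(f"truck-{i}", slots) for i in range(5)]

# Emergent effect: every agent independently picks the same cheap slot,
# so the grid sees a demand spike that no single agent (or planner) intended.
crowded = max(set(bookings), key=bookings.count)
print(f"Congestion: {bookings.count(crowded)} of {len(bookings)} trucks chose {crowded}")
```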
As these systems spread, explainability and audit trails stop being academic issues and become operational necessities. Logs, “nutrition labels” for models, and third‑party audits are emerging as the minimum needed to answer a basic question: when automated decisions shape markets, work, and public services, how do we trace responsibility back to specific choices, data, and incentives?
Your phone quietly doing on‑device transcription is just the start. Picture a grocery chain where shelf sensors “talk” to pricing agents every few minutes: one agent lowers the price on strawberries before they spoil, another nudges up the cost of a trending snack, all without a human typing in numbers. Or a logistics fleet where routing agents for thousands of trucks bargain for road space and charging slots, creating traffic waves no city planner explicitly designed.
A useful way to see this is like an investment portfolio that rebalances itself in real time: instead of a manager occasionally shuffling assets, thousands of micro‑decisions keep shifting weight across stocks, bonds, and cash as news breaks and signals change. No single trade is dramatic, but the aggregate movement can reshape markets.
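If you want to see that self-rebalancing logic in miniature, here’s a rough sketch in Python with invented target weights, prices, and a 5% drift threshold; it’s an illustration of the idea, not a real trading strategy. Most checks do nothing, but run continuously, the small adjustments quietly keep shifting money as signals change.

```python
# Toy threshold rebalancer: weights and values are invented, for illustration only.

targets = {"stocks": 0.60, "bonds": 0.30, "cash": 0.10}          # desired weights
holdings = {"stocks": 60_000, "bonds": 30_000, "cash": 10_000}   # current value per bucket
DRIFT_LIMIT = 0.05  # act only when a weight drifts more than 5 points from target

def rebalance(holdings, targets, drift_limit=DRIFT_LIMIT):
    total = sum(holdings.values())
    weights = {k: v / total for k, v in holdings.items()}
    # Most of the time nothing happens -- the drift stays under the threshold.
    if max(abs(weights[k] - targets[k]) for k in targets) < drift_limit:
        return holdings
    # Micro-decision: nudge every bucket back toward its target share of the new total.
    return {k: total * targets[k] for k in targets}

# Simulate a market move: stocks rally, weights drift, the agent quietly adjusts.
holdings["stocks"] *= 1.25
holdings = rebalance(holdings, targets)
print(holdings)  # back to roughly 60/30/10 of the new, larger total
```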
Now translate that to workplaces. Contract‑review agents haggle over clauses before a lawyer ever opens the file, hiring agents filter candidates across platforms, and “personal” shopping agents lock in deals while you sleep. Your role drifts from doing the task to setting constraints and veto points.
In creative fields, co‑writing tools already suggest plot twists, ad variants, and interface layouts. Next, persistent agents may track audience reactions across channels and propose pivots in style or strategy, subtly steering what “fits” your brand. The frontier question becomes: when these systems constantly predict and pre‑shape demand, how much of culture is discovered—and how much is manufactured?
A small exercise to carry with you: when you interact with any “smart” system (recommendations, auto‑drafted emails, route suggestions), pause once and ask, “If an invisible agent negotiated this outcome on my behalf, what goal might it be optimising *instead* of mine?” Note whether it favours convenience, profit, engagement, or risk‑avoidance. By the end of the week, see where your interests and the system’s apparent goals diverge most sharply.
As these systems spread, their decisions start to stack—like compound interest on small choices. A tiny tweak in route suggestions reshapes foot traffic; subtle shifts in pricing alter which shops survive; quiet prioritisation in search or feeds nudges which skills stay valuable. Over time, the map of opportunity itself bends. The open question is who gets to redraw that map: a few platform owners, public institutions, or networks of communities setting shared constraints and red lines.
Soon, opting out won’t mean logging off; it will mean choosing which “auto-pilot” you accept in finance, health, or work. Like gardeners picking what to prune and what to let grow wild, we’ll need shared norms about which decisions stay human, which we delegate, and which we forbid outsourcing entirely. The future of algorithms is also the future of our boundaries.
Here’s your challenge this week: Pick one everyday decision you currently make manually (like news reading, route planning, or content recommendations) and rebuild it using an algorithmic approach you control. In a simple spreadsheet or notebook, define your “inputs” (e.g., time of day, mood, topic, source), your “scoring rules” (e.g., +2 for diverse sources, –3 for clickbait indicators), and run this system for three days instead of relying on platform algorithms. At the end of the three days, compare how your choices changed versus your usual app-driven choices, and write a 3-sentence reflection on what you’d want from future algorithms based on that experience.
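If you’d rather script it than keep a spreadsheet, here’s one way the challenge could look as a minimal Python sketch. The articles, fields, and weights are placeholders you’d swap for your own inputs; only the two scoring rules from the challenge (+2 for source diversity, –3 for clickbait) are taken from the text above.

```python
# A minimal "algorithm you control" for choosing what to read.
# Articles, sources, and fields below are invented examples; replace with your own.

articles = [
    {"title": "You won't BELIEVE this chip", "source": "blogX", "clickbait": True,  "topic": "hardware"},
    {"title": "EU AI rules: what changed",   "source": "wireA", "clickbait": False, "topic": "policy"},
    {"title": "On-device models, explained", "source": "blogY", "clickbait": False, "topic": "hardware"},
]

def score(article, sources_read_today):
    s = 0
    s += 2 if article["source"] not in sources_read_today else 0  # +2 for diverse sources
    s -= 3 if article["clickbait"] else 0                          # -3 for clickbait indicators
    return s

sources_read_today = {"wireA"}  # whatever you've already read today
ranked = sorted(articles, key=lambda a: score(a, sources_read_today), reverse=True)
for a in ranked:
    print(score(a, sources_read_today), a["title"])
```

Run it for three days, tweak the rules as you notice what they reward, and then write the same 3-sentence reflection: where did your hand-rolled scorer and the platform’s recommendations pull you in different directions?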

