Governments once treated social media like a quirky new toy. Now they treat it more like a powerful, risky drug. In one recent case, U.S. regulators forced a major game maker to refund players who had been steered into purchases they never intended to make.
Regulators are shifting from chasing single “bad actors” to scrutinizing the systems that make manipulation profitable. Instead of only punishing one misleading pop-up or one aggressive app, they’re asking: what is it about how platforms are built and paid for that keeps users hooked and data flowing? That shift is crucial in an environment where digital advertising swallowed roughly 70% of all ad spending in 2023, shaping which voices get amplified and which quietly disappear.
We’re entering a phase where laws don’t just say “don’t lie to users,” but start to specify how feeds are ranked, how personalized ads are justified, and when design crosses the line into coercion. Some rules focus on privacy, some on competition, others on protecting children, but together they’re probing the core machinery of the attention economy rather than just smoothing its roughest edges.
Policymakers are also rethinking who should carry the burden of staying safe online. Instead of assuming every person can read endless terms and tweak dozens of settings, they’re starting to ask: why not require safer defaults in the first place? That’s why new rules increasingly target high-risk features like autoplay, hyper-targeted ads for kids, and opaque recommendation systems. Some laws demand outside audits, some force clearer choices at sign-up, and others require basic “nutrition labels” for data use, so users see costs and tradeoffs upfront instead of decoding fine print.
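To picture what such a label could look like in practice, here is a minimal sketch of a machine-readable data “nutrition label” rendered at sign-up. No regulation currently mandates a single schema, so every field name here is a hypothetical stand-in:

```python
# A hypothetical machine-readable "data nutrition label" a service could
# render at sign-up. The schema is illustrative, not drawn from any regulation.
from dataclasses import dataclass

@dataclass
class DataNutritionLabel:
    service: str
    data_collected: list[str]        # categories gathered by default
    retention_days: int              # how long raw data is kept
    shared_with_third_parties: bool
    used_for_ad_targeting: bool

    def render(self) -> str:
        """Produce a plain-text label a user can scan in seconds."""
        return "\n".join([
            f"Data label for {self.service}",
            f"  Collects: {', '.join(self.data_collected)}",
            f"  Kept for: {self.retention_days} days",
            f"  Shared with third parties: {'yes' if self.shared_with_third_parties else 'no'}",
            f"  Used for ad targeting: {'yes' if self.used_for_ad_targeting else 'no'}",
        ])

label = DataNutritionLabel(
    service="ExampleApp",
    data_collected=["email", "approximate location", "viewing history"],
    retention_days=90,
    shared_with_third_parties=True,
    used_for_ad_targeting=True,
)
print(label.render())
```

The point isn’t this particular format; it’s that a label forces the costs and tradeoffs into one scannable place instead of forty pages of terms.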
Laws now zoom in on three pressure points of the attention economy: data, design, and dominance.
On data, privacy regimes like the GDPR and California’s CCPA don’t just say “ask for consent.” They constrain what can be collected, how long it can be kept, and when it can be combined. That matters because the more precisely a platform can profile you, the more it can fine‑tune what holds your gaze—and the more valuable your attention becomes to advertisers. Huge fines, like the EUR 1.2 billion penalty against Meta for unlawful cross‑border transfers, signal that “data first, ask later” is no longer an acceptable business default in major markets.
On design, newer rules go after the specific interface choices that keep people, especially kids, scrolling. The UK Children’s Code and California’s Age‑Appropriate Design Code push services toward high‑privacy, low‑pressure defaults for minors: turning off precise geolocation, limiting nudge notifications, and in some cases restricting autoplay or streak‑based rewards that encourage “extended use.” The target isn’t fun or engagement itself, but features that quietly convert hesitation into habit or purchases. When the FTC forced Epic Games to return USD 245 million over Fortnite’s confusing button layouts and pre‑checked options, it was a signal that “dark patterns” are no longer just bad ethics—they’re a liability.
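As a rough illustration of what “safer defaults” means in code, here is a sketch of how a service might derive settings from an age band, in the spirit of the UK Children’s Code. The setting names and the under-18 cutoff are assumptions for illustration, not requirements quoted from any statute:

```python
# Illustrative only: deriving high-privacy, low-pressure defaults from an age
# band, in the spirit of the UK Children's Code. The setting names and the
# under-18 cutoff are assumptions, not rules quoted from any law.

def default_settings(age: int) -> dict[str, bool]:
    is_minor = age < 18
    return {
        "precise_geolocation": not is_minor,  # off by default for minors
        "autoplay": not is_minor,             # no auto-advancing video for kids
        "nudge_notifications": not is_minor,  # no "come back!" pings for kids
        "streak_rewards": not is_minor,       # no streak mechanics for minors
        "personalized_ads": not is_minor,     # contextual-only ads for minors
    }

print(default_settings(13))  # every risky feature starts disabled
print(default_settings(34))  # adults begin from the standard defaults
```

Users can still switch things on; the law just flips who has to do the work of opting out.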
Then there’s dominance. Competition and platform‑accountability laws, from antitrust cases in the U.S. to the EU’s Digital Services Act, treat large platforms as systemic actors with added duties. Very Large Online Platforms must assess how their recommendation systems affect mental health, civic discourse, and vulnerable groups, then open their systems to external scrutiny. For services built on frictionless engagement, being required to document and mitigate risks can change which metrics product teams optimize for.
This is where a subtle but important shift appears: instead of assuming individuals can simply “use willpower” against feeds and notification systems, policymakers are trying to move some responsibility upstream—into code, contracts, and corporate strategy. Your choices still matter, but the menu you choose from is starting to be regulated.
Lawmakers are testing how far they can go without dictating product design line by line. One visible experiment: transparency dashboards. Under the EU’s rules, some big platforms now publish ad libraries where you can search who paid to target which messages at which groups. It’s a small window into a machinery that used to be sealed shut, and it gives journalists and researchers raw material to spot patterns—like whether certain age brackets see more gambling or weight‑loss ads.
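For a sense of what that raw material looks like, here is a hedged sketch of querying such an ad repository. The endpoint and parameter names are placeholders invented for illustration; real ad libraries, such as Meta’s, have their own documented APIs and access rules:

```python
# Hypothetical sketch of querying a DSA-style ad repository. The endpoint and
# parameter names are invented for illustration; real ad libraries (e.g.
# Meta's) have their own documented APIs and access requirements.
import requests

AD_LIBRARY_URL = "https://ads.example-platform.com/api/v1/ads"  # placeholder

def fetch_ads(keyword: str, country: str, limit: int = 50) -> list[dict]:
    """Fetch ads matching a keyword, with targeting metadata where disclosed."""
    resp = requests.get(
        AD_LIBRARY_URL,
        params={"q": keyword, "country": country, "limit": limit},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ads"]

# Example question a researcher might ask: how often do gambling ads
# disclose targeting aimed at the youngest adult age bracket?
ads = fetch_ads("casino", country="IE")
targeted_young = [a for a in ads if "18-24" in a.get("age_targeting", [])]
print(f"{len(targeted_young)} of {len(ads)} sampled ads target 18-24 year olds")
```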
Other measures work more like circuit breakers. Risk‑assessment duties under the DSA or child‑safety codes can force companies to pause or reconfigure new features if early data show spikes in self‑harm content, compulsive use, or fraud. In practice, that might mean delaying a global launch or quietly rolling back a “growth hack” that crosses a line. Requiring platforms to add friction and transparency is like mandating seat belts in cars—engineers can still chase speed, but certain safety constraints reshape what counts as acceptable design.
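The “circuit breaker” idea can be sketched as a feature flag that trips when a monitored harm metric spikes above its pre-launch baseline. The metric, threshold, and class below are invented for illustration, not drawn from any platform’s real tooling:

```python
# A toy feature-flag "circuit breaker": if a monitored harm metric spikes
# beyond its pre-launch baseline during a rollout, the feature is paused
# automatically. Metric names and thresholds are invented for illustration.

class FeatureCircuitBreaker:
    def __init__(self, feature: str, baseline: float, max_increase: float = 0.10):
        self.feature = feature
        self.baseline = baseline          # pre-launch rate of the harm metric
        self.max_increase = max_increase  # tolerated relative increase (10%)
        self.enabled = True

    def record(self, observed_rate: float) -> None:
        """Compare the live harm rate to baseline; trip the breaker on a spike."""
        if observed_rate > self.baseline * (1 + self.max_increase):
            self.enabled = False
            print(f"[{self.feature}] paused: {observed_rate:.4f} vs "
                  f"baseline {self.baseline:.4f}")

breaker = FeatureCircuitBreaker("infinite_scroll_v2", baseline=0.002)
for rate in [0.0019, 0.0021, 0.0031]:  # simulated daily harm-report rates
    breaker.record(rate)
print("feature enabled:", breaker.enabled)
```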
Regulation won’t freeze the attention market; it may redirect it. As guardrails harden, some firms will chase a new prize: proving they waste less of your time. “Time‑well‑spent” scores could sit beside battery stats, nudging users toward calmer apps the way nutrition labels reshaped grocery trips. Elections will be stress tests: if rules blunt disinformation and outrage incentives, lawmakers elsewhere may copy them; if not, expect louder calls to treat attention like a protected public resource.
Policy shifts won’t rescue our attention on their own, but they can reset the scoreboard. As design rules tighten, space opens for tools that help you treat focus like a limited budget—deciding where to “spend” it each day. Your challenge this week: notice one moment when a setting, label, or prompt actually protected your time, and keep using it.

