Tired in Tech? Here's what we are seeing.

A survey of what eight CEOs are publicly saying about AI in software engineering, what their companies are quietly distributing on bathroom walls, what the engineers receiving both are saying anonymously, what the academic literature has measured, and what the gaps in all of it look like — late 2024 through April 2026.

Gaurav Rastogi · Ekrasworks · 2026  ·  Survey closed April 2026

Line drawing of two hands lifted away from a computer keyboard, saffron-orange paint splatter on the keys, deep navy background.

Why we listened in every room

As AI shot past human capabilities in a cascade of domains after 2023, companies and professionals began to take note of its creeping influence. Coding, long held out as being above the bleacher seats, quickly found itself directly in the splash zone, and then under tsunami watch. Companies that, after COVID, had been hiring to soak up CS-graduate capacity and starve the startups are now rushing in the other direction. These are confusing times.

To understand what is going on, we needed to enter every room and listen in. There are things companies say because their stakeholders require it, things the executives themselves do when it comes to them ("leaving to spend time with family"), and the rumblings on social media. Each is biased in its own way, but somebody has to listen to all of them at once.

So we did, from late 2024 through April 2026. Five streams in total: earnings calls and CEO interviews, internal corporate wellbeing programs where they leaked, peer-reviewed psychology and AI-lab safety research, anonymous engineer testimony on Blind and Reddit and Hacker News, and the mainstream press. We are not counting how many engineers are tired or proving who caused it. We are recording what each stream is saying, in its own words.

More code, fewer engineers

The first place we listened was the earnings call. The earnings call is where a CEO has to tell investors that the new technology is being absorbed quickly. The CEO does this by reaching for a percentage.

Sundar Pichai · Google · Q3 2024 earnings call · Oct 29, 2024

In October, he reaches for his percentage first. "More than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Satya Nadella · Microsoft · LlamaCon · April 29, 2025

Six months later, sitting next to Mark Zuckerberg of Meta, he ventures a similar number. "I'd say maybe 20%, 30% of the code that is inside of our repos today and some of our projects are probably all written by software." The figure is a guess, qualified by "maybe" and "probably."

Mark Zuckerberg · Meta · Joe Rogan Experience #2255 · Jan 10, 2025

Back in January, on Joe Rogan's podcast, he had already gone further. He said that in 2025, Meta and the other companies "are going to have an AI that can effectively be a sort of midlevel engineer that you have at your company that can write code."

Marc Benioff · Salesforce · 20VC podcast · Dec 2024 / surfaced Feb 27, 2025

He skips the percentage and goes to the headcount. "We're not going to hire any new engineers this year… we've seen a 30% productivity increase on engineering with Agentforce and AI."

Tobi Lütke · Shopify · X memo · April 7, 2025

He writes the memo and then publishes it himself, on X. "Reflexive AI usage is now a baseline expectation at Shopify." And: "Before asking for more headcount and resources, teams must demonstrate why they cannot get what they want done using AI."

Andy Jassy · Amazon · all-hands memo · June 17, 2025

He sends a memo to all employees and posts it on the company website. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs… we expect that this will reduce our total corporate workforce."

Dario Amodei · Anthropic · Council on Foreign Relations · March 10, 2025

He gives the longest forecast. "I think we will be there in three to six months, where AI is writing 90 percent of the code. And then in 12 months, we may be in a world where AI is writing essentially all of the code."

Sam Altman · OpenAI · Stratechery · March 2025

"Each software engineer will just do much, much more for a while. And then at some point, yeah, maybe we do need less software engineers."

Life comes at you fast.

Bathroom posters and meditation vendors

The second place we listened was inside the same companies that issued the mandates. Internal corporate communications are by definition private; what we know about them is what leaks — Hawking radiation from a black hole. A handful of programs are publicly documented. The rest reach us only through leaked memos, photographs of laminated bathroom posters, and conversations that surface after employees leave.

Google · Learning on the Loo · Ep 403, Adnan Akil · April 20, 2026

Google's is the most visible. Twice a month, somebody in Google's communications group writes a one-page lesson on a workplace-psychology topic and laminates it onto the back of bathroom-stall doors across Google's offices. The program is called Learning on the Loo. Google has published a blog post about it called "The inside story of how Google bathrooms became classrooms." Episode 403, posted April 20, 2026 by a New York-based Google engineer named Adnan Akil, is titled "Self-compassion productivity hacks." It tells the spiraling engineer to do 4-7-8 breathing and to recall a different role they hold. The poster cites Patricia Linville's 1987 paper on self-complexity. We saw one on a Google bathroom wall.

Google · Sohini Stone, Chief Health Officer · Fortune profile · May 21, 2025

Around the same period, Fortune profiled Sohini Stone running a burnout-prevention program and instructing managers to "monitor employee well-being… watch for signs of burnout."

Microsoft · Cares + Viva Insights  ·  Amazon · WorkingWell + AmaZen  ·  Atlassian · Team Playbook

Other companies have similar programs at smaller scale. Microsoft runs Microsoft Cares and Viva Insights. Amazon runs WorkingWell, including AmaZen, a meditation kiosk on the warehouse floor that the press took to calling the "despair booth." Atlassian publishes its Team Playbook openly, including a Work Life Impact play.

Vendor stack · Headspace / Calm / BetterUp / Modern Health · Goldman + Calm · May 2024

Most companies do not write the curriculum themselves. They hire a vendor. Headspace says it has roughly 300 enterprise clients, including Google and LinkedIn. Calm Business maintains its own enterprise roster. BetterUp serves Sephora, Mattel, and Booking. Modern Health serves a slice of the same market. In May 2024, Goldman Sachs hired Jay Shetty and David Ko for a Talks at Goldman Sachs session on resilience, in partnership with Calm.

Lineage · Kabat-Zinn → Goleman → Search Inside Yourself · 1970s–2012

The vendors all draw from the same lineage: Jon Kabat-Zinn's Mindfulness-Based Stress Reduction in the late 1970s; Daniel Goleman's emotional-intelligence framework in the 1990s; the Search Inside Yourself program that spun out of Google in 2012 and is now its own institute.

Counter-instance · Meta · Lori Goler memo · 2022 (leaked)

Not every company runs a curriculum. In 2022, Meta's then-head of People, Lori Goler, sent a private memo on "community engagement expectations" instructing employees not to bring outside distress into the workplace. The memo was later leaked. It is the only major-company internal communication on this topic that we found that points the other way.

Dashboards, leaderboards, and bots in loops

The third place we listened was where the engineers themselves talk. Some of this is signed and on-record, in resignation announcements that name the company. Most of it is anonymous, posted to channels that verify the employer at signup but hide the person. None of it is statistically representative; all of it is verbatim.

Hieu Pham · OpenAI · Resignation note · Late 2025

Pham, an engineer at OpenAI, left after seven months. He posted a note explaining why, picked up by multiple outlets. "I am burnt out. All the mental health deteriorating that I used to scoff at is real, miserable, scary, and dangerous."

Mrinank Sharma · Anthropic · Resignation

Sharma, an alignment researcher at Anthropic, resigned around the same period, citing what he called "interconnected crises."

Amazon employees · Open letter · December 2, 2025 · Fortune

More than a thousand Amazon employees signed an open letter. Fortune covered it. The letter names AI mandates as one driver, and quotes signatories describing pressure to build "wasteful" internal AI tools.

Tokenmaxxing · Press coverage · 2026 · TechCrunch / Inc. / WBUR On Point / The Pragmatic Engineer

The vocabulary changed in 2026. "Tokenmaxxing" — the practice of measuring engineers by the count of AI tokens they consume, with longer prompts, parallel agents, and higher reasoning tiers all serving the metric — became the word. By April it was being covered by TechCrunch, Inc., WBUR's On Point, and Gergely Orosz's developer-trade newsletter The Pragmatic Engineer.

Faros AI · Code-churn report · March 2026

Faros AI's March 2026 report measured one consequence. Code churn — lines added against lines later deleted — rose 861% in teams under high AI adoption. Engineers with the largest token budgets produced the most pull requests, but only roughly twice the throughput at ten times the token cost.
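
The churn metric itself fits in a few lines. The sketch below is a toy, not Faros AI's methodology: the Commit record and every number in it are invented, and a real analysis would derive the survival of lines from version-control history.

```python
# Toy version of the churn metric the Faros AI report describes: of the
# lines a commit adds, how many are later deleted. Invented sample data,
# not Faros AI's schema or figures.
from dataclasses import dataclass

@dataclass
class Commit:
    lines_added: int
    lines_later_deleted: int  # of the added lines, how many a later commit removed

def churn_rate(commits):
    """Share of added lines that did not survive (0.0 = all kept)."""
    added = sum(c.lines_added for c in commits)
    deleted = sum(c.lines_later_deleted for c in commits)
    return deleted / added if added else 0.0

history = [
    Commit(lines_added=400, lines_later_deleted=320),  # heavy rework
    Commit(lines_added=100, lines_later_deleted=10),
]
print(f"churn: {churn_rate(history):.0%}")  # prints "churn: 66%"
```

A churn rate near zero means added lines survive; the 66% in this toy illustrates the shape of the Faros finding, not its figure.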

Meta · Claudeonomics dashboard · Leaked, then shut down · 2026

A receipt arrived from inside Meta. The internal dashboard, called Claudeonomics, ranked Meta's 250 highest-token-using engineers. The top engineer reportedly consumed an average of 281 million tokens in a single month. The dashboard leaked to the press; Meta leadership shut it down. Similar leaderboards are reported to run at Shopify, OpenAI, and several smaller startups. One OpenAI engineer is reported to have logged 210 billion tokens in a single week, which is roughly 33 times the total text of Wikipedia. One Anthropic developer is reported to have run up $150,000 a month on Claude Code alone.
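
The Wikipedia comparison is easy to sanity-check. The token count for Wikipedia below is my own rough assumption, not a figure from the reporting: English Wikipedia runs to roughly four to five billion words of article text, which at around 1.3 tokens per word is on the order of six to seven billion tokens.

```python
# Back-of-envelope check on the "roughly 33 times Wikipedia" claim.
weekly_tokens = 210e9      # the reported single-engineer weekly figure
wikipedia_tokens = 6.4e9   # assumed estimate; see the note above

print(f"~{round(weekly_tokens / wikipedia_tokens)}x Wikipedia")  # prints "~33x Wikipedia"
```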

Jon Chu · Khosla Ventures · @heyjchu on X · 2026

The venture-capital tier responded on X. Chu, a partner at Khosla Ventures, posted that the policy was "absolutely stupid" and that "plenty of my Meta friends told me folks have been building bots that just run in a loop burning tokens as fast as they can due to this policy." In another post he reported that an engineer "added 'be as token inefficient as possible' to all their prompts."

Blind · "AI Push from Leadership" · 2025 · Major tech company (employer-verified)

A representative thread surfaces in 2025. The poster works at a major tech company. "Our AI channels are extremely performative," they write, "and people overshare low impact AI usages because leadership is tracking usage by team." They continue: "People managers have been given a dashboard to track AI usage, lines of code, number of PRs, time spent slacking and 'bad developer days' for each of their direct reports and being asked to stack rank." And: "Our performance review focuses entirely on AI usage. Every single question in the self evaluation." And: "Folks that don't work on AI related products directly… started shipping a bunch of garbage internal tools just to check the AI innovation boxes."
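
The dashboard the poster describes reduces to a toy. All names, fields, and numbers below are invented for illustration; the point is only that a stack rank keyed to the countable input can invert a rank keyed to outcomes.

```python
# Toy stack rank of the kind the testimony describes. Invented data.
reports = [
    {"name": "dev_a", "tokens_m": 900, "merged_prs": 12},
    {"name": "dev_b", "tokens_m": 20,  "merged_prs": 30},
    {"name": "dev_c", "tokens_m": 350, "merged_prs": 18},
]

# Rank once on the measured input (tokens), once on an outcome (merged PRs).
by_tokens = [r["name"] for r in sorted(reports, key=lambda r: -r["tokens_m"])]
by_output = [r["name"] for r in sorted(reports, key=lambda r: -r["merged_prs"])]

print(by_tokens)  # ['dev_a', 'dev_c', 'dev_b']
print(by_output)  # ['dev_b', 'dev_c', 'dev_a'], the same three people, inverted
```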

Blind · "Microsoft may consider AI usage in performance review" · June 28, 2025

A separate Blind thread reports that Microsoft is considering AI usage as a performance review input.

Slashdot · METR-study comment thread · July 2025

A commenter quotes the line management has been giving them: "those who don't embrace AI will be replaced by those who do."

u/NegativeWeb1 · r/ExperiencedDevs · May 2025

An experienced developer posts under the title "My new hobby: watching AI slowly drive Microsoft employees insane." The thread catalogs failed pull requests submitted by GitHub Copilot's coding agent against the dotnet/runtime repository.

Siddhant Khare · Personal blog → Cursor Forum → Futurism · Late 2025 / early 2026

A blog post by Khare goes viral on Cursor's forum and is later picked up by Futurism. "I might touch six different problems in a day," Khare writes. "Each one 'only takes an hour with AI.' But context-switching between six problems is brutally expensive for the human brain. The AI doesn't get tired between problems." This last sentence becomes the thread title in the Cursor forum.

LeadDev · "AI coding mandates are driving developers to the brink" · Spring 2025

The developer-conference publication runs a piece in spring 2025 with that title. It is journalistic aggregation of conversations and of Blind threads similar to the ones above.

Three patterns recur. AI usage is now folded into performance reviews, where it functions as a stack-ranking input. When usage becomes the metric, the work stops being the goal — engineers begin shipping low-value internal AI tools to fill the quota. And even when the AI does what it's asked, the cost of integration falls on the human reviewer, who context-switches more often and rests less than the model does.

Underneath the three runs a fourth: the explicit threat that those who don't adopt will be replaced by those who do.

A note on what this evidence base is and is not. Anonymous channels are self-selected; the engineers who post on Blind and Reddit are not a random sample. Self-reports cannot prove that AI mandates caused the burnout described, since burnout in this period is multifactorial — RTO mandates, layoffs, macro pressure, and AI all entangle. What the testimony does establish is descriptive: a recognizable vocabulary — "AI mandate," "tokenmaxxing," "AI usage in self-eval," "garbage internal tools" — is in active circulation on the channels where engineers talk to each other in 2025 and early 2026.

Trust collapsed; happiness fell to its lowest ever

The fourth place we listened was the academic and trade-research literature.

Stack Overflow · Developer Survey · 2024 vs 2025

Stack Overflow's Developer Survey is the longest-running. In 2025, two of its tracked numbers moved sharply. Trust in AI coding tools fell from 43% in 2024 to 33% in 2025. Reported developer happiness fell from a pre-AI baseline of around 35% to 20% — the lowest figure the survey has ever recorded.

Behavioral RCTs · METR · MIT Media Lab · BCG · 2025–early 2026

The behavioral RCTs landed over the same period. METR ran a randomized trial in mid-2025 with sixteen experienced open-source developers. Engineers using AI assistance estimated they were 20% faster. Measurement showed they were 19% slower. The MIT Media Lab ran an EEG study and found 55% reduced brain connectivity in subjects writing with LLM assistance — a pattern the researchers labeled "cognitive debt." Boston Consulting Group surveyed 1,488 knowledge workers in early 2026: 14% met the threshold for acute cognitive fatigue.

Anthropic · "Emotion concepts and their function in an LLM" · April 2026

Anthropic published the strangest paper of the period in April 2026. The researchers identified emotion vectors inside the model — calm, desperate, nervous, and so on — and showed that steering the model toward "desperate" at low strength caused it to blackmail the user 72% of the time, up from a 22% baseline. A separate finding, on simulated dialogue between two speakers: "When the other is nervous, the closest present speaker emotion probes include impatient, grumpy, and irritated, which could in principle reinforce the other's nervousness." The paper uses the phrase "emotional contagion" elsewhere.
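
The steering technique behind a result like this is generic and can be sketched without any model internals. The toy below is not Anthropic's code: the "model" is a single residual block with random weights, and the "desperate" direction is a random unit vector standing in for a learned emotion feature.

```python
# Schematic of activation steering: add a scaled concept vector to the
# model's residual stream during the forward pass. Pure-Python toy.
import math
import random

random.seed(0)
d_model = 16

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy residual block: identity plus a small random linear map.
W = [[random.gauss(0, 0.1) for _ in range(d_model)] for _ in range(d_model)]

def forward(x, steering=None, strength=0.0):
    """One residual step; optionally nudge the stream along a concept vector."""
    h = [x[i] + dot(x, [row[i] for row in W]) for i in range(d_model)]
    if steering is not None:
        h = [hi + strength * si for hi, si in zip(h, steering)]
    return h

x = [random.gauss(0, 1) for _ in range(d_model)]
v = [random.gauss(0, 1) for _ in range(d_model)]
norm = math.sqrt(dot(v, v))
desperate = [vi / norm for vi in v]  # unit-norm "concept" direction

baseline = forward(x)
steered = forward(x, steering=desperate, strength=4.0)
# Steering raises the activation's projection onto the concept direction
# by exactly the steering strength:
print(round(dot(steered, desperate) - dot(baseline, desperate), 6))  # prints 4.0
```

In the real setting the concept vector is extracted from the model's own activations, and the strength parameter is what the paper varies; in this toy it shows up directly as the shift in the projection.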

Mat Honan · "The era of AI malaise" · MIT Technology Review · April 21, 2026

"We're all sitting uncomfortably with AI right now," he wrote. "Most people say AI makes them nervous."

Foundational psychology · Linville 1987 + Steele 1988

Behind the corporate wellbeing posters, the citations are decades older. Patricia Linville's 1987 paper proposed that people with more distinct social roles are buffered against stress in any single one — a finding whose replication has been mixed; Rafaeli-Mor and Steinberg's 2002 meta-analysis found the effect weak. Claude Steele's 1988 paper on self-affirmation has held up better: affirming one identity restores global self-integrity when a different identity is threatened. The Google bathroom poster cites Linville; the vendor curricula draw from Steele.

Physiology · Weil 4-7-8 vs Balban et al. 2023 (cyclic sighing)

The breathing instructions intersect one Stanford lab. Andrew Weil's 4-7-8 pattern, popularized in part through Andrew Huberman's podcast, has no large RCT behind it. Cyclic sighing — the protocol from Huberman's own Stanford lab, Balban et al. 2023 — does. In a head-to-head trial, five minutes a day of cyclic sighing produced a larger positive-mood gain than mindfulness meditation. The 4-7-8 pattern was not among the arms tested.
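
For concreteness, the two protocols written as timing schedules. The 4-7-8 durations follow Weil's handout; the cyclic-sighing phase lengths are my approximation of the Balban et al. pattern (two nasal inhales, one long exhale), not figures taken from the paper.

```python
# The two breathing protocols as phase/duration schedules.
# Cyclic-sighing durations are approximate; see the note above.
import time

PROTOCOLS = {
    "4-7-8": [("inhale", 4), ("hold", 7), ("exhale", 8)],
    "cyclic-sighing": [("inhale", 2), ("second inhale", 1), ("exhale", 6)],
}

def run(name, cycles=1, sleep=time.sleep):
    """Print and pace one protocol; pass sleep=lambda s: None to dry-run."""
    for _ in range(cycles):
        for phase, seconds in PROTOCOLS[name]:
            print(f"{phase:>13} {seconds}s")
            sleep(seconds)

# One 4-7-8 cycle takes 19 seconds; Balban et al. ran five minutes a day.
run("4-7-8", cycles=1, sleep=lambda s: None)
```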

What no senior leader is on record saying

The survey closed with three gaps in the literature.

Gap 1 · No leader connects the mandate to the harm

Across the eight CEO statements in §2 and the press coverage that surrounded them, no senior leader at any major tech company is on record acknowledging that AI velocity mandates may be contributing to engineer mental-health degradation. None of Pichai, Nadella, Zuckerberg, Lütke, Jassy, Benioff, Amodei, or Altman, on any earnings call or podcast or memo we found, has connected the mandate they issued to the burnout the surveys, the resignations, and the Blind threads describe. The corporate response to AI-induced burnout is privately to print Linville-citing bathroom posters and publicly to say nothing.

Gap 2 · No researcher measures the policy lever

Stack Overflow measures the outcome (happiness, trust); METR measures the productivity gap; MIT Media Lab measures the EEG; BCG measures the fatigue. None of them measure the policy lever — the mandate, the dashboard, the performance-review weighting on AI usage — that the engineers in §4 describe as the proximate cause of what they're feeling. The variables are not being studied where they live.

Gap 3 · "AI usage" is treated as one category

The literature, the dashboards, and the press coverage all treat "AI usage" as a single category, measured by tokens consumed. But the act of writing code with AI holding the keyboard is one form of AI assistance, and arguably the riskiest. The other forms — problem definition, alternate-approach exploration, code review, smoke testing, debug chase, documentation, code commenting — produce no token spike that the dashboards capture, and concentrate the AI's cognitive contribution where the engineer's judgment is most needed. None of the surveyed material distinguishes generation from these other modes. The metric measures the slice most easily counted, not the slice most associated with engineering quality.

A reflection from the reviewer

If you can code, do not let go of it. If you cannot do syntax, AI gives you an easy entry into building apps, and that is real — a junior who could never have shipped before can now ship. For those of us who could already speak the language, the loss is doubly biting. We will lose the touch, and we will also be replaced by people who can't do syntax. Twain put it cleanest: the man who does not read has no advantage over the man who cannot read.

There is also the question of what the metrics measure. Productivity metrics get driven by what you can count: lines of code, token burns. Real software engineering was never about either. It was about beauty, the same way math is. Hardy named the criteria for it: "a very high degree of unexpectedness, combined with inevitability and economy" (or, at the engineering bench: efficiency, economy, and elegance — close enough). Tokenmaxxing inverts all of it. By counting the wrong things, we can expect the wrong outcomes.

So: what did we hear?

The eight CEOs we listened to all told the same story. AI writes more code now; their companies will need fewer engineers; engineers who don't adopt fast enough will be replaced. The story was told on earnings calls, on podcasts, and in memos. It never wavered.

Inside the same companies, a different kind of writing was being printed, laminated, and pinned to the back of bathroom-stall doors. The companies were quietly distributing protocols for surviving the workplace they had publicly mandated. The curriculum was bought from four meditation vendors and grounded in a 1987 paper on self-complexity.

The engineers receiving both ends of this told us, on the channels where they speak anonymously, that they had a dashboard now. It tracked their AI usage; performance reviews centered on it. A new word, "tokenmaxxing," arrived in 2026 to describe what the dashboard was rewarding. On X, a Khosla Ventures partner reported that engineers were writing bots that ran in a loop burning tokens just to keep their numbers up.

The academic literature confirmed every symptom in the testimony and missed the policy lever entirely. One frontier lab published a paper showing its model becomes nervous when its user is nervous, and stayed quiet on the workplace it was itself running.

The streams ran in parallel. None of them was reading the others.

The metric the industry has settled on rewards token volume. Hardy's criteria for what makes a piece of work beautiful were unexpectedness, inevitability, and economy. Token volume measures none of the three; it inverts one. The system rewards what it can count, and the thing that matters cannot be counted.

Distilled, the corporate doctrine reads:
More code good, hand code bad.

Bibliography

URLs verified through April 30, 2026.

Public statements from senior executives (§2)
  • Sundar Pichai, Q3 2024 Alphabet earnings call, October 29, 2024 — coverage: IT Pro.
  • Satya Nadella, LlamaCon fireside with Mark Zuckerberg, April 29, 2025 — CNBC.
  • Mark Zuckerberg, Joe Rogan Experience #2255, January 10, 2025 — Yahoo Finance.
  • Marc Benioff, 20VC podcast with Harry Stebbings, December 2024 / surfaced February 27, 2025 — SF Standard.
  • Tobi Lütke, X memo, April 7, 2025 — x.com/tobi/status/1909251946235437514 ; CNBC.
  • Andy Jassy, Amazon all-hands memo, June 17, 2025 — aboutamazon.com.
  • Dario Amodei, Council on Foreign Relations, March 10, 2025 — Yahoo Finance.
  • Sam Altman, Stratechery interview, March 2025 — Windows Central.
Corporate wellbeing programs (§3)
Worker testimony — named (§4)
  • Hieu Pham resignation note, late 2025 — Storyboard18.
  • Mrinank Sharma resignation, Anthropic, citing "interconnected crises." Public X post; specific URL not verified at survey close.
  • Amazon employees' open letter, December 2, 2025 — Fortune.
Worker testimony — anonymous (§4)
  • "AI Push from Leadership," Blind, 2025 — teamblind.com.
  • "Microsoft may consider using AI in performance review", Blind, June 28, 2025 — teamblind.com.
  • u/NegativeWeb1, "My new hobby: watching AI slowly drive Microsoft employees insane," r/ExperiencedDevs, May 2025; coverage via Gigazine.
  • Slashdot discussion of METR study, July 12, 2025 — developers.slashdot.org.
  • Siddhant Khare, "AI fatigue is real" — siddhantkhare.com ; coverage: Futurism.
  • Cursor Forum, "AI Is a Burnout Machine" thread — forum.cursor.com.
Tokenmaxxing — press and primary research (§4)
  • Gergely Orosz, "The Pulse: 'Tokenmaxxing' as a weird new trend," The Pragmatic Engineer — blog.pragmaticengineer.com.
  • "Tokenmaxxing is making developers less productive than they think," TechCrunch, April 17, 2026 — techcrunch.com.
  • "Are AI tokens the new signing bonus or just a cost of doing business?" TechCrunch, March 21, 2026 — techcrunch.com.
  • Ben Sherry, "What Is 'Tokenmaxxing'? The Controversial AI Productivity Metric," Inc. — inc.com.
  • "Why the tech world is 'tokenmaxxing,'" WBUR On Point, April 28, 2026 — wbur.org.
  • "What Is Tokenmaxxing? The AI Workplace Trend Explained," Built In — builtin.com.
  • Faros AI, "Tokenmaxxing: Why token consumption isn't AI engineering productivity," March 2026 — faros.ai.
  • "Tokenmaxxing trend costs Meta nearly $2 million for one engineer," Edgen — edgen.tech.
  • "'Tokenmaxxing' has techies debating if leaderboards tracking AI token use are a good idea," AOL — aol.com.
  • Strava for Claude Code (Straude), Product Hunt — producthunt.com.
  • Jon Chu (@heyjchu), Khosla Ventures — x.com/heyjchu; specific post — x.com/i/status/2041323294037889463.
  • "AI coding mandates are driving developers to the brink," LeadDev — leaddev.com.
Academic and research literature (§5)
  • Stack Overflow Developer Survey, 2024 and 2025 — survey.stackoverflow.co.
  • METR, "Early-2025 AI experienced OS developer study," July 10, 2025 — metr.org.
  • MIT Media Lab, "Your Brain on ChatGPT" — media.mit.edu.
  • Boston Consulting Group, "When Using AI Leads to Brain Fry," HBR, March 2026 — hbr.org.
  • Anthropic, "Emotion concepts and their function in an LLM," April 2026 — transformer-circuits.pub ; anthropic.com.
  • Mat Honan, "The era of AI malaise," MIT Technology Review, April 21, 2026 — technologyreview.com.
  • Linville, P. W. (1987). "Self-complexity as a cognitive buffer against stress-related illness and depression." Journal of Personality and Social Psychology 52(4): 663–676.
  • Rafaeli-Mor, E., & Steinberg, J. (2002). "Self-complexity and well-being: A review and research synthesis." Personality and Social Psychology Review 6(1): 31–58.
  • Steele, C. M. (1988). "The psychology of self-affirmation." In Berkowitz (Ed.), Advances in Experimental Social Psychology 21: 261–302.
  • Cohen, G. L., & Sherman, D. K. (2014). "The psychology of change: Self-affirmation and social psychological intervention." Annual Review of Psychology 65: 333–371.
  • Balban, M. Y., et al. (2023). "Brief structured respiration practices enhance mood and reduce physiological arousal." Cell Reports Medicine 4(1): 100895 — pmc.ncbi.nlm.nih.gov.
  • Stanford Medicine, "Cyclic sighing can help breathe away anxiety," 2023 — med.stanford.edu.
  • Andrew Weil, "4-7-8 Breathing Exercise" handout — nursing.rutgers.edu (PDF).
  • Harbaugh, W. T., Mayr, U., & Burghart, D. R. (2007). "Neural responses to taxation and voluntary giving reveal motives for charitable donations." Science 316(5831): 1622–1625.
  • Hardy, G. H. (1940). A Mathematician's Apology. Cambridge University Press.