Proceedings of NeurIPS 2025
Journal of Improbable Careers
Thirty-Ninth Conference on Human Achievement Processing Systems
Paper #4827 · Submitted: ~1985 · Revised: Continuously · Decision: Accept (Oral Presentation)

On the Optimal Trajectory from Tehran to Toronto: A Multi-Objective Study in Citations, Diffusion Models, and Converting a Google Brain Salary into Startup Equity

Mohammad Norouzi1,2,3,4,∗
1Sharif University of Technology  ·  2University of Toronto  ·  3Google Brain (ret.)  ·  4Ideogram AI (current)
Correspondence: mo@ideogram.ai  |  Twitter: @mo_norouzi
Total Citations: 75,000+  ·  Funding Raised: $96.5M  ·  Google Brain Years: 7  ·  Reviewer 2 Score: 3/10
Abstract

We present a longitudinal case study of a single agent's optimization trajectory across multiple objective functions, including academic impact, technical innovation, and the poorly-understood loss landscape of startup founding. The subject, M. Norouzi, demonstrates that it is possible to co-author papers with Geoffrey Hinton, accumulate citations at a rate typically reserved for foundational theorems, and then voluntarily leave one of the most prestigious research positions in the world to bet everything on making AI draw pictures with correct spelling. Our results suggest this was either visionary or deeply irrational. The $96.5M in venture funding suggests the former. We release no code, as the subject's life is not reproducible.

Keywords: diffusion models · career optimization · citation farming (organic) · startup founder arc · Sharif → UofT pipeline · text rendering in images · voluntary pay cut · Google Brain exodus · a16z speed-dial
1 · Introduction

Background & Motivation

The literature on optimal career trajectories is vast and largely useless. Most studies focus on incremental improvements — a promotion here, a lateral move there. This paper documents a far rarer phenomenon: the compound interest career, in which each position creates exponentially more optionality than the last.

The subject begins in Tehran, Iran, where he completes his undergraduate studies at Sharif University of Technology — an institution whose CS department has a disturbing habit of producing people who go on to reshape entire fields. He then relocates to the University of Toronto for a PhD under David Fleet, where he is awarded a Google PhD Fellowship in Machine Learning — a signal that Google was already investing in this particular human before the graduation ceremony.

His PhD thesis on scalable similarity search quietly laid the groundwork for everything that followed: if you can find similar things quickly, you can build systems that generate entirely new things. We suspect he knew this at the time. The thesis committee did not.
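For readers who want a feel for that line of work, here is a minimal sketch of binary-code similarity search: embed items as short binary codes, then rank by Hamming distance. This is an illustrative stand-in (random-hyperplane encoding plus a brute-force scan), not the thesis's actual contributions, which centered on learned binary codes and fast exact search in Hamming space; all names and sizes below are invented for the example.

import numpy as np

def encode(X, W):
    """Map real vectors to binary codes via random hyperplanes (LSH-style)."""
    return (X @ W > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, k=5):
    """Indices of the k database codes nearest to the query in Hamming distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)  # per-row bit disagreements
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64))                        # 128-d vectors -> 64-bit codes
database = rng.standard_normal((10_000, 128))
db_codes = encode(database, W)

query = database[42] + 0.05 * rng.standard_normal(128)    # a noisy copy of item 42
print(hamming_search(encode(query[None, :], W)[0], db_codes))  # 42 should rank first

The punchline this family of methods exploits is that the inner loop is XOR-and-popcount, which hardware executes absurdly fast; production systems replace the linear scan with indexing to go sublinear.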

2 · Methods

Experimental Setup: The Google Brain Era (2016–2023)

Upon completing his PhD in December 2015, the subject was absorbed into Google Brain in Mountain View in January 2016. He transferred to the Toronto office in 2018, presumably because he missed winter and/or poutine. He rose to the rank of Senior Staff Research Scientist — a title that, within Google's leveling system, roughly translates to "we would prefer you never leave."

The experimental procedure during this period can be summarized as:

import random

brilliant_ideas = ["NMT", "SimCLR", "WaveGrad", "Palette", "Imagen"]  # abridged
citations = 0
while True:
    idea = random.choice(brilliant_ideas)       # sample(brilliant_ideas)
    paper = f"{idea}, with Hinton et al."       # write(idea, coauthors=[...])
    citations += 5_000                          # "+= thousands"; see Table 1
    if citations > 75_000:                      # the operational definition of bored
        break

The key projects — our "methods" — were as follows:

2016 · Google NMT — Neural machine translation system that bridged the gap between human and machine translation. Made Google Translate actually work. (10,500+ citations)
2016 · Neural Combinatorial Optimization — Applied RL to combinatorial optimization. A paper that said "what if we just let the neural network figure out the traveling salesman problem," and it kind of worked. (2,600+ citations)
2017 · Pixel Recursive Super Resolution — Made blurry images sharp with autoregressive pixel models. Co-authored with Ryan Dahl, who then went off and created Deno. Everyone Mo touches goes on to do something absurd.
2020 · SimCLR — A simple framework for contrastive learning of visual representations. "Simple" in the title, groundbreaking in practice. Co-authored with Geoffrey Hinton. This paper alone has more citations than most researchers' entire careers; a sketch of its loss function follows the timeline. (32,800+ citations)
2020 · Dream to Control / Mastering Atari — RL agents learning via latent imagination. The AI literally dreams about playing video games and gets better at them. (3,500+ citations)
2021 · WaveGrad — Gradient-based waveform generation. Made AI produce audio using diffusion models. A quiet preview of bigger things to come.
2021 · Palette — Image-to-image diffusion. Colorization, inpainting, uncropping: one model to rule them all. (2,200+ citations)
2022 · Imagen — Photorealistic text-to-image generation with deep language understanding. The paper that made the world realize diffusion models were the future. Google chose not to release it publicly. We'll come back to this. (8,800+ citations)
2022 · Imagen Video — High-definition text-to-video generation. Showed the world what video generation could look like. Google also didn't release this one. A pattern was forming.
2022 · 3DiM — Single-image 3D generation. Because 2D was getting boring.
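Because SimCLR accounts for nearly half of the total in Figure 1, its core mechanism deserves a closer look: each image is augmented twice, the two views are pulled together, and every other image in the batch is pushed away under the NT-Xent (normalized temperature-scaled cross-entropy) loss. The sketch below is a minimal NumPy rendering of that loss written for this paper, not the official implementation; the batch size, dimensions, and temperature are illustrative.

import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss for a batch of paired embeddings, z1[i] <-> z2[i]."""
    z = np.concatenate([z1, z2], axis=0)              # stack both views: 2N x d
    z /= np.linalg.norm(z, axis=1, keepdims=True)     # unit-normalize -> cosine sims
    sim = (z @ z.T) / temperature                     # 2N x 2N similarity logits
    np.fill_diagonal(sim, -np.inf)                    # a view never matches itself
    n = len(z1)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])  # index of each view's twin
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over all candidates
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))

# Toy check: second views are noisy copies of the first, so the loss should be low.
rng = np.random.default_rng(0)
z1 = rng.standard_normal((4, 8))
z2 = z1 + 0.05 * rng.standard_normal((4, 8))
print(nt_xent_loss(z1, z2))

The temperature controls how sharply hard negatives are weighted; the paper's tuned values and large-batch machinery are not reproduced here.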
3 · Results

Quantitative Analysis

Table 1 presents a comparative analysis of the subject's output against common benchmarks for "a productive career."

Table 1 · Career Output Metrics vs. Baseline

Metric                          | Typical Researcher | Very Good Researcher | M. Norouzi
Total citations                 | 500                | 5,000                | 75,000+
Papers with 1K+ cites           | 0                  | 1–2                  | 15+
Papers with Hinton              | 0                  | 0                    | 5+
VC funding raised               | $0                 | $0                   | $96.5M
Startups founded                | 0                  | 0–1                  | 1 (so far)
Text that AI can actually spell | N/A                | N/A                  | Yes
Bold values indicate statistically significant superiority (p < "come on, obviously").
Figure 1 · Citation Impact by Project
[Bar chart omitted: citation counts for SimCLR, Google NMT, Imagen, Dream/Atari, Semi-Supervised, Imagen Video, Palette, and Neural Combo Opt, in descending order.]
Citation counts as of early 2026. The y-axis of this chart cried and asked to be logarithmic. We refused.
4 · Retraction Notice

Departure from Google Brain

⚠ Retraction of Employment — Effective 2023

After seven years at Google Brain and the title of Senior Staff Research Scientist, M. Norouzi has retracted his employment. The decision appears to be driven by an irreconcilable difference between "building the most impressive research that Google won't ship" and "actually letting people use the thing."

Google's Imagen — the text-to-image model the subject helped create — was never publicly released. Nor was Imagen Video. Nor 3DiM. The subject appears to have grown tired of writing papers about systems the world would never touch.

The retraction is considered final. Google has been notified but has not issued a competing offer large enough to override the founder's conviction function.

This retraction follows a well-documented pattern in the literature, commonly referred to as the "Google Brain to Startup Pipeline." Notable prior work includes the departures of the entire team that went on to found [redacted for competitive reasons]. The common causal factor appears to be: showing researchers what's possible, then not letting them ship it.

5 · New Submission

Ideogram AI — The Startup Paper

In 2023, the subject co-founded Ideogram AI in Toronto with a team of ex-Google Brain researchers. The thesis was deceptively simple: what if people could generate images where the text actually says what they typed?

This may sound trivial. It was not. At the time, every major image generation model — Midjourney, DALL-E, Stable Diffusion — produced beautiful images with text that looked like it had been written by a drunk doctor. "Happy Birthday" would emerge as "Hpqy Birhty." "Open" would render as "Oepn." The entire field had collectively given up on text rendering.

Ideogram launched and immediately nailed it.

Table 2 · Ideogram Funding Trajectory

Round    | Amount | Lead Investors       | Year
Seed     | $16.5M | a16z, Index Ventures | 2023
Series A | $80M   | a16z (again)         | 2024
Total    | $96.5M | And counting.        |

a16z invested twice. When Andreessen Horowitz doubles down, it is generally considered a positive signal.

The product launched as a free tool in August 2023. Within the first month, it had 500,000+ users. By 2025, Ideogram had shipped versions 1.0, 2.0, 2a, and 3.0 — each setting new benchmarks for text rendering, photorealism, and style coherence. Bloomberg, Globe and Mail, VentureBeat, and every AI newsletter you subscribe to covered it.

The company is based in Toronto, which the subject has now established as a legitimate AI hub rather than just "the city where Hinton lives."

6 · Discussion

The Hinton Coefficient & Other Observations

A common critique in career studies is the "Hinton Coefficient" — the suspicion that anyone who co-authors with Geoffrey Hinton automatically receives a citation boost via academic gravity. We address this directly.

Yes, the subject co-authored multiple papers with Hinton. But he also co-authored papers without Hinton that received thousands of citations (Imagen, Palette, WaveGrad, Neural Combinatorial Optimization). More importantly, Hinton himself chose to collaborate with Norouzi repeatedly — and Hinton does not co-author out of charity.

We further note that the subject has published with Samy Bengio, Quoc Le, Jimmy Ba, David Fleet, Honglak Lee, Dale Schuurmans, Nando de Freitas, and approximately half of everyone who has ever won a NeurIPS Best Paper Award. This is either evidence of exceptional collaborative ability or proof that he sits in a very specific chair at a very specific cafeteria.

The subject's research also spans an unusually wide range: computer vision, NLP, reinforcement learning, audio generation, 3D reconstruction, medical imaging, hashing, and program synthesis. Most researchers pick a lane. Norouzi picked all of them.

7 · Peer Review

Reviewer Comments

Reviewer 1 (Career Studies Quarterly) 9/10
Strengths:
  • Exceptionally strong experimental results across all career metrics.
  • The Sharif → UofT → Google Brain → Startup pipeline is well-motivated and thoroughly validated.
  • The transition from pure research to product is handled more gracefully than any comparable work in the literature.
  • SimCLR alone would justify publication. Everything else is supplementary material that happens to be world-class.
Weaknesses:
  • The paper does not adequately explain how one person can contribute to this many subfields without violating conservation of energy.

Recommendation: Strong accept. Oral presentation. Please give this person a podium.

Reviewer 2 (Anonymous, Probably Bitter) 3/10
Weaknesses:
  • The subject claims 75,000+ citations but provides no ablation study on which citations were earned organically vs. inherited via Hinton proximity.
  • Leaving a Senior Staff position at Google Brain is presented as "visionary" but could equally be modeled as a high-variance decision with insufficient risk analysis.
  • "Text that AI can actually spell" is a feature, not a scientific contribution. I remain unconvinced this merits a company.
  • The funding section reads like a press release. Where is the critical analysis? Where are the failed experiments? Did no VC say no?
  • Iran → Toronto is a well-traveled path. The novelty claim is overstated.
Questions for Authors:
  • What is the subject's h-index without Hinton? (We suspect: still unreasonably high, but we want to see him sweat.)
  • Has the subject ever had a paper rejected? If so, please provide details for balance.
  • How does one "co-found" a company with ex-Google Brain researchers? Is there a Slack channel?

Recommendation: Reject. The career is clearly overfitting to the "success" metric and will not generalize to other researchers.

Author Rebuttal to Reviewer 2

We thank Reviewer 2 for their thorough feedback, which we suspect was written between the hours of 2–4 AM.

Regarding the Hinton coefficient: we direct the reviewer to Imagen (8,800 citations, no Hinton), Palette (2,200 citations, no Hinton), and Google NMT (10,500 citations, no Hinton). The subject's citation count without Hinton co-authorships still exceeds 40,000.

Regarding the "overfitting" concern: we note that the subject's career has generalized across vision, language, audio, RL, 3D, and medical imaging. If this is overfitting, we recommend Reviewer 2 try it.

Regarding whether any VC said no: we decline to answer on the grounds that it would make this paper less fun.

Reviewer 3 (Area Chair, Confused) 7/10
Strengths:
  • The career trajectory is undeniably impressive and well-documented.
  • Ideogram's product-market fit is strong. The text rendering differentiation is real.
Weaknesses:
  • I'm not sure this is a paper. It might be a LinkedIn profile that became sentient.
  • The abstract promises the life "is not reproducible." I tried. Can confirm.
Questions for Authors:
  • Is the subject aware that most people have one career highlight, not fifteen?
  • When does the subject sleep?

Recommendation: Weak accept. The paper is strong but the subject is making the rest of us look bad.

8 · Editorial Decision

Final Verdict

Decision: Accept — Oral Presentation
"The career of Mohammad Norouzi represents a rare instance of compounding excellence — where each chapter creates the foundation for something more ambitious than the last. From foundational research on similarity search to co-authoring some of the most cited papers in modern AI, to building a company that solved a problem the entire field had given up on. Despite Reviewer 2's objections, we find the results speak for themselves."
— Senior Area Chair, Journal of Improbable Careers
9 · Acknowledgments

Acknowledgments

The authors thank David Fleet for supervising the PhD that started it all, Geoffrey Hinton for co-authoring the papers that made the citation counter break, and Google Brain Toronto for providing the research environment and free snacks that fueled seven years of groundbreaking work.

We thank a16z and Index Ventures for writing checks large enough to make leaving Google feel like a reasonable life choice.

We thank Sharif University of Technology for continuing to be an unreasonable source of talent in the global AI ecosystem, and Iran for exporting one of its finest minds to Canada, which Canada will definitely not give back.

We thank the Ideogram team for joining a startup founded on the radical premise that AI-generated text should be legible.

Finally, we thank Reviewer 2, whose tireless negativity keeps the entire peer review system honest, and whose own citation count we decline to look up.

This paper was not generated by Ideogram, though we suspect the figures could have been.
No Google Brain researchers were harmed in the making of this startup. Several were recruited.

References

Selected Bibliography

[1] Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. "A Simple Framework for Contrastive Learning of Visual Representations." ICML, 2020. [32,800+ citations]

[2] Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., et al. "Google's Neural Machine Translation System." Technical Report, 2016. [10,500+ citations]

[3] Saharia, C., Chan, W., Saxena, S., ... Norouzi, M., et al. "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding." NeurIPS, 2022. [8,800+ citations]

[4] Hafner, D., Lillicrap, T., Norouzi, M., & Ba, J. "Mastering Atari with Discrete World Models." ICLR, 2021. [1,400+ citations]

[5] Norouzi, M. "Scalable Similarity Search." PhD Thesis, University of Toronto, 2015. [Where it all began.]

[6] Ideogram AI. "About Us." ideogram.ai, 2023–present. [Cited by: 500,000+ users.]

[7] Reviewer 2. "I Still Think This Is Overfitting." Unpublished manuscript, every review cycle, forever.