On the Optimal Trajectory from Tehran to Toronto: A Multi-Objective Study in Citations, Diffusion Models, and Converting a Google Brain Salary into Startup Equity
Correspondence: mo@ideogram.ai | Twitter: @mo_norouzi
We present a longitudinal case study of a single agent's optimization trajectory across multiple objective functions, including academic impact, technical innovation, and the poorly understood loss landscape of startup founding. The subject, M. Norouzi, demonstrates that it is possible to co-author papers with Geoffrey Hinton, accumulate citations at a rate typically reserved for foundational theorems, and then voluntarily leave one of the most prestigious research positions in the world to bet everything on making AI draw pictures with correct spelling. Our results suggest this was either visionary or deeply irrational. The $96.5M in venture funding suggests the former. We release no code, as the subject's life is not reproducible.
Background & Motivation
The literature on optimal career trajectories is vast and largely useless. Most studies focus on incremental improvements — a promotion here, a lateral move there. This paper documents a far rarer phenomenon: the compound interest career, in which each position creates exponentially more optionality than the last.
The subject begins in Tehran, Iran, where he completes his undergraduate studies at Sharif University of Technology — an institution whose CS department has a disturbing habit of producing people who go on to reshape entire fields. He then relocates to the University of Toronto for a PhD under David Fleet, where he is awarded a Google PhD Fellowship in Machine Learning — a signal that Google was already investing in this particular human before the graduation ceremony.
His PhD thesis on scalable similarity search quietly laid the groundwork for everything that followed: if you can find similar things quickly, you can build systems that generate entirely new things. We suspect he knew this at the time. The thesis committee did not.
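The thesis's core trick can be illustrated with a toy sketch. This is hypothetical code, not the thesis's actual method (real scalable search uses learned binary codes and sub-linear lookup structures): map items to short binary codes, and treat small Hamming distance as similarity.

```python
# Toy illustration of similarity search over binary hash codes.
# Hypothetical example for exposition; not the thesis algorithm.

def hamming(a: int, b: int) -> int:
    """Count the bits on which two binary codes differ."""
    return bin(a ^ b).count("1")

def nearest(query: int, database: list[int]) -> int:
    """Linear scan for the code closest to the query in Hamming distance."""
    return min(database, key=lambda code: hamming(query, code))

codes = [0b00000000, 0b11110000, 0b00001111]
print(bin(nearest(0b11100000, codes)))  # 0b11110000, one bit away
```

The linear scan here is the naive baseline; the whole point of scalable similarity search is replacing it with something faster, which is exactly the groundwork the thesis laid.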
Experimental Setup: The Google Brain Era (2016–2023)
Upon completing his PhD in December 2015, the subject was absorbed into Google Brain, Mountain View in January 2016. He transferred to the Toronto office in 2018, presumably because he missed winter and/or poutine. He rose to the rank of Senior Staff Research Scientist — a title that, within Google's leveling system, roughly translates to "we would prefer you never leave."
The experimental procedure during this period can be summarized as:
```python
while True:
    idea = sample(brilliant_ideas)
    paper = write(idea, coauthors=[Hinton, *et_al])
    citations += thousands
    if bored:
        break
```
The key projects — our "methods" — were as follows:
- SimCLR: a simple framework for contrastive learning of visual representations, now a cornerstone of self-supervised vision.
- Imagen (plus Imagen Video and 3DiM): photorealistic text-to-image diffusion with deep language understanding.
- Google's Neural Machine Translation system: the model that quietly rewrote Google Translate.
- Palette and WaveGrad: diffusion models for image-to-image translation and waveform generation, respectively.
- DreamerV2: mastering Atari with discrete world models.
- Neural Combinatorial Optimization: reinforcement learning pointed at, of all things, the traveling salesman problem.
Quantitative Analysis
Table 1 presents a comparative analysis of the subject's output against common benchmarks for "a productive career."
| Metric | Typical Researcher | Very Good Researcher | M. Norouzi |
|---|---|---|---|
| Total citations | 500 | 5,000 | 75,000+ |
| Papers with 1K+ cites | 0 | 1–2 | 15+ |
| Papers with Hinton | 0 | 0 | 5+ |
| VC funding raised | $0 | $0 | $96.5M |
| Startups founded | 0 | 0–1 | 1 (so far) |
| Text that AI can actually spell | N/A | N/A | Yes |
Departure from Google Brain
In 2023, after seven years at Google Brain and the rank of Senior Staff Research Scientist, M. Norouzi retracted his employment. The decision appears to have been driven by an irreconcilable difference between "building the most impressive research that Google won't ship" and "actually letting people use the thing."
Google's Imagen — the text-to-image model the subject helped create — was never publicly released. Nor was Imagen Video. Nor 3DiM. The subject appears to have grown tired of writing papers about systems the world would never touch.
The retraction is considered final. Google has been notified but has not issued a competing offer large enough to override the founder's conviction function.
This retraction follows a well-documented pattern in the literature, commonly referred to as the "Google Brain to Startup Pipeline." Notable prior work includes the departures of the entire team that went on to found [redacted for competitive reasons]. The common causal factor appears to be: showing researchers what's possible, then not letting them ship it.
Ideogram AI — The Startup Paper
In 2023, the subject co-founded Ideogram AI in Toronto with a team of ex-Google Brain researchers. The thesis was deceptively simple: what if people could generate images where the text actually says what they typed?
This may sound trivial. It was not. At the time, every major image generation model — Midjourney, DALL-E, Stable Diffusion — produced beautiful images with text that looked like it had been written by a drunk doctor. "Happy Birthday" would emerge as "Hpqy Birhty." "Open" would render as "Oepn." The entire field had collectively given up on text rendering.
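One crude way to quantify the "drunk doctor" failure mode is to compare the text a user requested against the text that actually appears in the generated image (say, via OCR), using edit distance. The sketch below is purely illustrative and is not Ideogram's evaluation pipeline; the function names are invented for this example.

```python
# Hypothetical spelling-fidelity score for rendered text: 1.0 means the
# image says exactly what was typed; lower means "Hpqy Birhty" territory.
# Illustrative only -- not Ideogram's actual metric.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def spelling_score(requested: str, rendered: str) -> float:
    """Normalized similarity between requested and rendered text."""
    if not requested and not rendered:
        return 1.0
    return 1.0 - edit_distance(requested, rendered) / max(len(requested), len(rendered))

print(spelling_score("Open", "Open"))  # 1.0
print(spelling_score("Open", "Oepn"))  # 0.5: two substitutions over four letters
```

Pre-2023, the field's implicit target for this score was "whatever"; Ideogram's was 1.0.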
Ideogram launched and immediately nailed it.
| Round | Amount | Lead Investors | Year |
|---|---|---|---|
| Seed | $16.5M | a16z, Index Ventures | 2023 |
| Series A | $80M | a16z (again) | 2024 |
| Total | $96.5M | And counting. | |
The product launched as a free tool in August 2023. Within the first month, it had 500,000+ users. By 2025, Ideogram had shipped versions 1.0, 2.0, 2a, and 3.0 — each setting new benchmarks for text rendering, photorealism, and style coherence. Bloomberg, Globe and Mail, VentureBeat, and every AI newsletter you subscribe to covered it.
The company is based in Toronto, which the subject has now established as a legitimate AI hub rather than just "the city where Hinton lives."
The Hinton Coefficient & Other Observations
A common critique in career studies is the "Hinton Coefficient" — the suspicion that anyone who co-authors with Geoffrey Hinton automatically receives a citation boost via academic gravity. We address this directly.
Yes, the subject co-authored multiple papers with Hinton. But he also co-authored papers without Hinton that received thousands of citations (Imagen, Palette, WaveGrad, Neural Combinatorial Optimization). More importantly, Hinton himself chose to collaborate with Norouzi repeatedly — and Hinton does not co-author out of charity.
We further note that the subject has published with Samy Bengio, Quoc Le, Jimmy Ba, David Fleet, Honglak Lee, Dale Schuurmans, Nando de Freitas, and approximately half of everyone who has ever won a NeurIPS Best Paper Award. This is either evidence of exceptional collaborative ability or proof that he sits in a very specific chair at a very specific cafeteria.
The subject's research also spans an unusually wide range: computer vision, NLP, reinforcement learning, audio generation, 3D reconstruction, medical imaging, hashing, and program synthesis. Most researchers pick a lane. Norouzi picked all of them.
Reviewer Comments
Reviewer 1
- Exceptionally strong experimental results across all career metrics.
- The Sharif → UofT → Google Brain → Startup pipeline is well-motivated and thoroughly validated.
- The transition from pure research to product is handled more gracefully than any comparable work in the literature.
- SimCLR alone would justify publication. Everything else is supplementary material that happens to be world-class.
- The paper does not adequately explain how one person can contribute to this many subfields without violating conservation of energy.
Recommendation: Strong accept. Oral presentation. Please give this person a podium.
Reviewer 2
- The subject claims 75,000+ citations but provides no ablation study on which citations were earned organically vs. inherited via Hinton proximity.
- Leaving a Senior Staff position at Google Brain is presented as "visionary" but could equally be modeled as a high-variance decision with insufficient risk analysis.
- "Text that AI can actually spell" is a feature, not a scientific contribution. I remain unconvinced this merits a company.
- The funding section reads like a press release. Where is the critical analysis? Where are the failed experiments? Did no VC say no?
- Iran → Toronto is a well-traveled path. The novelty claim is overstated.
- What is the subject's h-index without Hinton? (We suspect: still unreasonably high, but we want to see him sweat.)
- Has the subject ever had a paper rejected? If so, please provide details for balance.
- How does one "co-found" a company with ex-Google Brain researchers? Is there a Slack channel?
Recommendation: Reject. The career is clearly overfitting to the "success" metric and will not generalize to other researchers.
Author Response
We thank Reviewer 2 for their thorough feedback, which we suspect was written between the hours of 2–4 AM.
Regarding the Hinton coefficient: we direct the reviewer to Imagen (8,800 citations, no Hinton), Palette (2,200 citations, no Hinton), and Google NMT (10,500 citations, no Hinton). The subject's citation count without Hinton co-authorships still exceeds 40,000.
Regarding the "overfitting" concern: we note that the subject's career has generalized across vision, language, audio, RL, 3D, and medical imaging. If this is overfitting, we recommend Reviewer 2 try it.
Regarding whether any VC said no: we decline to answer on the grounds that it would make this paper less fun.
Reviewer 3
- The career trajectory is undeniably impressive and well-documented.
- Ideogram's product-market fit is strong. The text rendering differentiation is real.
- I'm not sure this is a paper. It might be a LinkedIn profile that became sentient.
- The abstract promises the life "is not reproducible." I tried. Can confirm.
- Is the subject aware that most people have one career highlight, not fifteen?
- When does the subject sleep?
Recommendation: Weak accept. The paper is strong but the subject is making the rest of us look bad.
Final Verdict
Accept, 2-1. Reviewer 2's objection is noted for the record and overruled; the area chair cited $96.5M of supporting evidence.
Acknowledgments
The authors thank David Fleet for supervising the PhD that started it all, Geoffrey Hinton for co-authoring the papers that made the citation counter break, and Google Brain Toronto for providing the research environment and free snacks that fueled seven years of groundbreaking work.
We thank a16z and Index Ventures for writing checks large enough to make leaving Google feel like a reasonable life choice.
We thank Sharif University of Technology for continuing to be an unreasonable source of talent in the global AI ecosystem, and Iran for exporting one of its finest minds to Canada, which Canada will definitely not give back.
We thank the Ideogram team for joining a startup founded on the radical premise that AI-generated text should be legible.
Finally, we thank Reviewer 2, whose tireless negativity keeps the entire peer review system honest, and whose own citation count we decline to look up.
† This paper was not generated by Ideogram, though we suspect the figures could have been.
‡ No Google Brain researchers were harmed in the making of this startup. Several were recruited.
Selected Bibliography
[1] Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. "A Simple Framework for Contrastive Learning of Visual Representations." ICML, 2020. (32,800+ citations)
[2] Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., et al. "Google's Neural Machine Translation System." Technical Report, 2016. (10,500+ citations)
[3] Saharia, C., Chan, W., Saxena, S., ... Norouzi, M., et al. "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding." NeurIPS, 2022. (8,800+ citations)
[4] Hafner, D., Lillicrap, T., Norouzi, M., & Ba, J. "Mastering Atari with Discrete World Models." ICLR, 2021. (1,400+ citations)
[5] Norouzi, M. "Scalable Similarity Search." PhD Thesis, University of Toronto, 2015. [Where it all began.]
[6] Ideogram AI. "About Us." ideogram.ai, 2023–present. [Cited by: 500,000+ users.]
[7] Reviewer 2. "I Still Think This Is Overfitting." Unpublished manuscript, every review cycle, forever.