Eli Tyre

is creating Existential Risk Reduction (and some additional rationality)

1 patron

$46 per month
I work on projects aimed at preventing existential catastrophe over the next 100 years, including...
  • Building cognitive tools for AI alignment researchers (primarily those at MIRI)
  • Teaching workshops for recruiting technical talent into AI safety with CFAR
  • Facilitating strategy-intuition transfer conversations
  • Running small research or strategy events to grapple with crucial considerations
  • Debugging/coaching with a few leaders of EA orgs.
  • Other stuff

I hustle.

I also spend time trying to generally model the world and writing about it.

I am comfortably well funded. That is, I am paid enough (by various EA orgs) to live frugally and to put some money away in index funds every month.

Donations here will be used to fund my personal training: there are various workshops and trainings that would help me acquire skills or expertise relevant to the work I'm doing. Currently, I decide whether or not to do a given training based on both the constraints on my time and the monetary cost. With an extra ~$10,000 a year in training budget, the monetary cost would be eclipsed by the time cost. At that point, my decision about whether or not to do a given training would be based solely on my estimate of how good it would be for the world if I did it, vs. how good it would be for the world if I spent that time working on projects instead.

Marginal dollars would marginally move me towards that world.

After funding training, additional donations are either saved (in index funds), so that I have runway to pursue unfunded projects, or donated to MIRI.


Goals
$46 of $833 per month
I will have a yearly training budget large enough that all of my choices about which trainings to do are made solely on the basis of expected benefit to the world, rather than being influenced by financial factors.

