Sebastian Hagen was born in northeastern Germany and grew up in Rostock. He started
playing with computers at a relatively young age, which ultimately led
to an interest in programming, a technical mindset in general, and
some other geeky inclinations besides.
His first exposure to the ideas of transhumanism and existential risk dates back to 2003, or slightly before. He read a number of articles on these subjects and joined the SL4 mailing list in early 2004.
In retrospect, these exposures were formative. Sebastian had been looking for something to do with his life (being dissatisfied with what most people do, due to its insufficient impact), and here it was: Humanity will soon have the technology to fix all that’s wrong with our civilization, but there’s a good chance we’d destroy ourselves in the process — whether through unfriendly AI (UFAI), through nanoweaponry, or through various social-structure breakdowns caused by these threats or by attempts to deal with them.
Mainstream society doesn’t take existential risks sufficiently seriously. This kind of work should be funded at a high level and taught in schools, but it isn’t. Humanity is doing it wrong. Allocating more resources to these projects therefore offers a much better expected payoff than most things Sebastian could do with his time.
Sebastian started donating to MIRI around this time, though, being in secondary school, he could only give rather limited amounts. Through SL4 he learned about Overcoming Bias when it first started, and joined that community as well; later he did the same for Less Wrong when it split off from OB. He learned a lot from these blogs: all the things that are obviously broken in standard human thinking — and documented in the scientific literature, at that — as well as many techniques for doing better.
He moved to Dublin and started working full-time as a Site Reliability Engineer in 2012 (having done an internship there in 2011), and kept donating part of his income to MIRI. He hadn’t finished his degree at the time, and — having proven to himself that he didn’t need one to find good employment — ended up dropping out.
Which is where Sebastian is now. He’s still a transhumanist and an aspiring rationalist, and he remains deeply interested in the future of humanity and in how to deal with existential risks — UFAI and molecular nanotechnology (MNT) in particular. As far as he can tell, Friendly AI (FAI) is the only reasonable way to fix these risks in generality while still allowing us to reap the benefits of these ultratechnologies.