Welcome!

I’m currently at Anthropic working on evaluating misalignment risk. I previously worked at Conjecture, and at Google on a team collaborating with DeepMind.

I often write to think things through, but I rarely go back to rigorously critique or polish those pieces, so they tend to express ideas I hold with low confidence. I still like to link them here: