I am excited to announce that we are now offering fellowships at our new ASI safety org, AIXI Labs!

Apply here: https://docs.google.com/forms/d/e/1FAIpQLSdpKSZKcMUZCkMaBncmbDSSdVcHKodDmdALUTOOheGF_NMHuQ/viewform?usp=publish-editor

Focus. If you have been closely engaged with this community (or with AIT, AIXI, and/or ML), you should consider applying! There is a good chance we can find a research match, since we are interested in supporting a variety of work in this area with significant (academic-style) freedom. That said, AIXI Labs is ultimately focused on pursuing our ASI safety agenda based on UAI, and we will prefer candidates who are excited about this type of work (and/or interested in decreasing X-risk from ASI). Refer to our website (also linked above) for more details about our approach, and this talk for a (very high-level) overview of some previous work in the area.

Qualifications. The median applicant we have in mind is a computer science PhD student, but anyone at roughly this level with either a background in AIT, or an interest in AIT combined with strong ML skills, should consider applying.

Structure. We plan to run this program quarterly (every 3 months) with rolling applications. The first round will start as soon as possible. Successful fellows may (in rare cases) return for later iterations. Previous applicants will also be considered for later rounds, so we encourage you to apply now even if you can't start right away!

Benefits. We will offer a stipend (varying, roughly £5,000-10,000) and optional assistance with relocating to London. For ML-focused projects, we can also offer a moderate compute budget. Fellows will work closely with Cole Wyeth, Aram Ebtekar, and Marcus Hutter on AI safety research.

Acknowledgements. These fellowships are supported as part of the UK AISI’s Alignment Project (thanks to the AI Safety Tactical Opportunities Fund).
