Our Team

  • James Hindmarch

    PROGRAMME LEAD AND CURRICULUM DEVELOPER

James is currently working as the programme lead and curriculum developer for ARENA. He is organising ARENA's future iterations, as well as developing the programme's new LLM evals week.

James has just finished his BA and MMath (Master of Mathematics, University of Cambridge) with a principal focus on logic and the foundations of mathematics, and a secondary focus on algebra. He also participated in the second iteration of SERI MATS, supervised by John Wentworth, where he conducted research into agent foundations.

  • Chloe Li

    STRATEGY AND CURRICULUM DEVELOPER

Chloe is working as the strategy and curriculum developer for ARENA. She develops new technical materials for the programme and advises on strategy to ensure ARENA pursues its mission effectively.

For the past year, she was the Director of the Cambridge AI Safety Hub. She ran and TA'd the third and fourth iterations of CaMLAB after attending ARENA-v3. Previously, she graduated in neuroscience from Cambridge and worked on representation engineering/steering with CAIS.

    She enjoys drawing, cycling, and conversations about the brain!

  • Callum McDougall

    FOUNDER AND CURRICULUM DESIGNER

Callum is a research scientist on DeepMind's interpretability team. He ran the first three iterations of ARENA and has also done open-source work in the mech interp space. He will be taking an advisory role for future iterations of ARENA.

    He is a semi-regular climber, sci-fi movie fan, and the current world record holder for the most clothes pegs removed from a line and held in one hand (yes, really).

    Two years ago, he decided to pursue AI safety research full-time (even though he probably peaked with the clothes pegs thing).

    You can read more of his writing here.

  • James Fox

    DIRECTOR

James is currently LISA's Research Director. He is responsible for ARENA's funding and for ensuring that ARENA's overall strategic direction and impact align with its mission.

James has just completed his DPhil (Computer Science, University of Oxford) on technical AI safety, supervised by Tom Everitt (Google DeepMind) and Michael Wooldridge & Alessandro Abate (Oxford). His research has mainly focused on game theory, causality, reinforcement learning, and agent foundations. James also has an MSci in Natural Sciences (Physics) from the University of Cambridge, has worked on various AI governance projects with the Centre for the Study of Existential Risk, and has consulted for several AI start-ups.

  • David Quarel

    HEAD TA & REINFORCEMENT LEARNING

David Quarel is a PhD student at the Australian National University (ANU), supervised by Marcus Hutter, focusing on AI safety, universal artificial intelligence, and mechanistic interpretability. He holds a BSc in physics and mathematics (2013-2017) and an MComp specialising in AI and ML (2017-2019). Prior to starting his PhD, he spent two years teaching full-time at the ANU in mathematics, theoretical computer science, and digital hardware design. He has delivered guest lectures, has years of experience developing and delivering course content, and co-authored a textbook. David has also taught at previous iterations of ARENA as well as CaMLAB, and worked as a research assistant with KASL, an AI safety lab based at Cambridge University.

David enjoys road cycling, rock climbing, teaching, and rationalism, and is trying to get into the habit of writing more.

  • Joly Scriven

    OPERATIONS LEAD

Joly is currently working as the Operations Lead at ARENA. He supports the programme through strategy and communications work, and makes sure that everything runs smoothly.

He studied Philosophy and French at the University of Oxford, and has also completed an MSc in Philosophy and Public Policy at the LSE, where he graduated with the highest score in his year. There, his studies focused on moral philosophy and its applications in practical ethics and government policy – in particular, he wrote his Master's dissertation on strong longtermism. He has previously worked in operations for Longview Philanthropy, and is an alumnus of BlueDot Impact's AI Safety Fundamentals course.

    He enjoys all manner of sports, travelling, learning languages, and a good cup of coffee.

Our TAs

  • Robert Cooper

    TA: ALL-ROUND

Robert is an independent AI safety researcher who trains new neural network language architectures designed with interpretability in mind. He spent the previous decade doing machine learning and systems programming for startups and Facebook. His hobbies include partner acrobatics and talking strangers into trying flying trapeze.

  • Sunishchal Dev

    TA: EVALS

Dev is an AI safety researcher and data scientist. He is a MATS 6.0 scholar focusing on improving the reliability of AI safety benchmarks, and has implemented several agent evals as part of METR's bounty program. Previously, he spent 8 years building AI solutions for enterprises, on projects ranging from supply chain optimisation for consumer goods and biofuels to scheduling optimisation for concert tours and sporting events.

    In his free time, Dev loves listening to electronic music, attending festivals, hiking, brewing loose-leaf tea, and playing co-op video games.

  • Nicky Pochinkov

    TA: MECH INTERP & REINFORCEMENT LEARNING

Nicky is an independent AI safety researcher, focusing mostly on higher-level mechanistic interpretability of language models. In particular, he is developing frameworks for long-term behaviour modelling in language models, and exploring modularity and capability separability (including machine unlearning).

    Before getting started in AI Safety and taking part in SERI MATS in 2022 under John Wentworth, Nicky competed internationally in both the Mathematical and Chemistry Olympiads, studied Theoretical Physics, interned in Software Engineering, and took part in a startup accelerator program (Patch).

He enjoys cooking and eating vegan cuisine, rock climbing, travelling, listening to audiobooks, watching anime (at 3x speed), playing video and board games, and self-hosting various services.

  • Daniel Tan

    TA: ALL-ROUND

    Daniel is a mechanistic interpretability researcher based at the LISA office in London. He is interested in developing better frameworks for understanding the computations performed by large language models, leveraging insights from sparse autoencoders and representational geometry.

He is currently doing his PhD at University College London, and has previously worked on steering vectors and circuit analysis using transcoders. Before that, he graduated from Stanford University.

    He enjoys chatting and playing Magic: The Gathering.

  • Dennis Akar

    TA: REINFORCEMENT LEARNING

Dennis is an independent AI safety researcher focusing on mechanistic interpretability of language models. He is interested in behaviour modelling, capability separability, and using mech interp methods in training and fine-tuning to improve safety.

Previously, he took part in MATS under Lee Sharkey and John Wentworth, and in the research sprint phase under Neel Nanda, investigating superposition in LLMs. He also interned at multiple companies using ML/DL for cancer research, and graduated from the University of Cambridge with an MPhil in Advanced Computer Science, where he investigated inductive biases in graph neural networks.

He enjoys running, rock climbing, playing the piano and board games, and helping you figure out any problems you might be having.