Description
You want to build large-scale ML systems from the ground up. You care about making safe, steerable, trustworthy systems. As a Software Engineer, you'll touch all parts of our code and infrastructure, whether that's making the cluster more reliable for our big jobs, improving throughput and efficiency, running and designing scientific experiments, or improving our dev tooling. You're excited to write code when you understand the research context and, more broadly, why it's important.
Note: This is an "evergreen" role that we keep open on an ongoing basis. We receive many applications for this position, and you may not hear back from us directly if we do not currently have an open role on any of our teams that matches your skills and experience. We encourage you to apply anyway, as we are continually looking for top talent to join our team. You are also welcome to reapply as you gain more experience, but we suggest reapplying no more than once per year.
You may be a good fit if you:
- Have significant software engineering experience
- Are results-oriented, with a bias towards flexibility and impact
- Pick up slack, even if it goes outside your job description
- Enjoy pair programming (we love to pair!)
- Want to learn more about machine learning research
- Care about the societal impacts of your work
Strong candidates may also have experience with:
- High-performance, large-scale ML systems
- GPUs, Kubernetes, PyTorch, or OS internals
- Language modeling with transformers
- Reinforcement learning
- Large-scale ETL
- Security and privacy best practices
- Machine learning accelerators like GPUs, TPUs, or Trainium, as well as supporting communication libraries like NCCL
- Low-level systems work, for example Linux kernel tuning and eBPF
- Quickly understanding systems design tradeoffs and keeping track of rapidly evolving software systems
Representative projects:
- Optimizing the throughput of a new attention mechanism
- Comparing the compute efficiency of two Transformer variants
- Making a Wikipedia dataset in a format models can easily consume
- Scaling a distributed training job to thousands of GPUs
- Writing a design doc for fault tolerance strategies
- Creating an interactive visualization of attention between tokens in a language model
Deadline to apply: None. Applications will be reviewed on a rolling basis.