OpenQuant
2026-05-16

Hudson River Trading
AI Researcher, LLMs
New York, US
Job Description

Hudson River Trading (HRT) is seeking an LLM-focused AI Researcher to join the HAIL team. HAIL (HRT AI Labs) is the team at HRT responsible for developing and maintaining our most powerful AI models, which are used by our trading teams to drive a significant fraction of our trading. The LLM team within HAIL builds large language models from scratch — pretraining, post-training, and serving — that are deployed across the firm to accelerate research, engineering, and trading workflows. Our goal is to produce LLMs that are genuinely the best in the world for the use cases we care about.

In this role, you'll work on every stage of the model development loop: tokenizer training, dataset curation and mixing, pretraining, post-training, evaluation design, inference, and live trading. HAIL researchers have considerable independence to pursue the directions they believe will be most impactful, as part of a small, focused team with minimal bureaucracy. They are enabled by state-of-the-art research clusters with very high GPU-to-researcher ratios, and supported by excellent engineering, hardware, and systems teams who help realize their vision.

The work is challenging and visibly impactful, and the bar is high: we are not looking for people who will adapt existing recipes. We want people who can drive frontier capability gains and ship the code that gets us there.

Qualifications

  • Two or more years of hands-on experience training, post-training, or evaluating LLMs at scale
  • Direct experience with at least one of: large-scale pretraining, post-training (SFT, RL), LLM evaluation design and statistical analysis, or distributed training/inference systems
  • Research taste: ability to identify what's worth trying next, design ablations that change decisions, and update beliefs from evidence
  • Strong engineering skills are valued: GPU kernel development, low-level PyTorch internals, or published work on a frontier-style LLM result

The estimated base salary range for this position is 200,000 to 300,000 USD per year (or local equivalent). The base pay offered may vary depending on multiple individualized factors, including location, job-related knowledge, skills, and experience. This role will also be eligible for discretionary performance-based bonuses and a competitive benefits package.
