A

Anthropic

San Francisco, CA

Learn more about Anthropic, the company behind this role.

Open Roles

Research Engineer/Research Scientist, Audio

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. Anthropic’s Audio team pushes the boundaries of what's possible with audio with large language models. We care about making safe, steerable, reliable systems that can understand and generate speech and audio, prioritizing not only naturalness but also steerability and robustness. As a researcher on the Audio team, you'll work across the full stack of audio ML, developing audio codecs and representations, sourcing and synthesizing high quality audio data, training large-scale speech language models and large audio diffusion models, and developing novel architectures for incorporating continuous signals into LLMs. Our team focuses primarily but not exclusively on speech, building advanced steerable systems spanning end-to-end conversational systems, speech and audio understanding models, and speech synthesis capabilities. The team works closely with many collaborators across pretraining, finetuning, reinforcement learning, production inference, and product to get advanced audio technologies from early research to high impact real-world deployments. You may be a good fit if you: - Have hands-on experience with training audio models, whether that's conversational speech-to-speech, speech translation, speech recognition, text-to-speech, diarization, codecs, or generative audio models - Genuinely enjoy both research and engineering work, and you'd describe your ideal split as roughly 50/50 rather than heavily weighted toward one or the other - Are comfortable working across abstraction levels, from signal processing fundamentals to large-scale model training and inference optimization - Have deep expertise with JAX, PyTorch, or large-scale distributed training, and can debug performance issues across the full stack - Thrive in fast-moving environments where the most important problem might shift as we learn more about what works - Communicate clearly and collaborate effectively; audio touches many parts of our systems, so you'll work closely with teams across the company - Are passionate about building conversational AI that feels natural, steerable, and safe - Care about the societal impacts of voice AI and want to help shape how these systems are developed responsibly Strong candidates may also have experience with: - Large language model pretraining and finetuning - Training diffusion models for image and audio generation - Reinforcement learning for large language models and diffusion models - End-to-end system optimization, from performance benchmarking to kernel optimization - GPUs, Kubernetes, PyTorch, or distributed training infrastructure Representative projects: - Training state-of-the art neural audio codecs for 48 kHz stereo audio - Developing novel algorithms for diffusion pretraining and reinforcement learning - Scaling audio datasets to millions of hours of high quality audio - Creating robust evaluation methodologies for hard-to-measure qualities such as naturalness or expressiveness - Studying training dynamics of mixed audio-text language models - Optimizing latency and inference throughput for deployed streaming audio systems The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $350,000 — $500,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Research Engineer/Research Scientist, Pre-training

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. Anthropic is at the forefront of AI research, dedicated to developing safe, ethical, and powerful artificial intelligence. Our mission is to ensure that transformative AI systems are aligned with human interests. We are seeking a Research Engineer to join our Pre-training team, responsible for developing the next generation of large language models. In this role, you will work at the intersection of cutting-edge research and practical engineering, contributing to the development of safe, steerable, and trustworthy AI systems. Key Responsibilities: - Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development - Independently lead small research projects while collaborating with team members on larger initiatives - Design, run, and analyze scientific experiments to advance our understanding of large language models - Optimize and scale our training infrastructure to improve efficiency and reliability - Develop and improve dev tooling to enhance team productivity - Contribute to the entire stack, from low-level optimizations to high-level model design Qualifications: - Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field - Strong software engineering skills with a proven track record of building complex systems - Expertise in Python and experience with deep learning frameworks (PyTorch preferred) - Familiarity with large-scale machine learning, particularly in the context of language models - Ability to balance research goals with practical engineering constraints - Strong problem-solving skills and a results-oriented mindset - Excellent communication skills and ability to work in a collaborative environment - Care about the societal impacts of your work Preferred Experience: - Work on high-performance, large-scale ML systems - Familiarity with GPUs, Kubernetes, and OS internals - Experience with language modeling using transformer architectures - Knowledge of reinforcement learning techniques - Background in large-scale ETL processes You'll thrive in this role if you: - Have significant software engineering experience - Are results-oriented with a bias towards flexibility and impact - Willingly take on tasks outside your job description to support the team - Enjoy pair programming and collaborative work - Are eager to learn more about machine learning research - Are enthusiastic to work at an organization that functions as a single, cohesive team pursuing large-scale AI research projects - Are working to align state of the art models with human values and preferences, understand and interpret deep neural networks, or develop new models to support these areas of research - View research and engineering as two sides of the same coin, and seek to understand all aspects of our research program as well as possible, to maximize the impact of your insights - Have ambitious goals for AI safety and general progress in the next few years, and you’re working to create the best outcomes over the long-term. Sample Projects: - Optimizing the throughput of novel attention mechanisms - Comparing compute efficiency of different Transformer variants - Preparing large-scale datasets for efficient model consumption - Scaling distributed training jobs to thousands of GPUs - Designing fault tolerance strategies for our training infrastructure - Creating interactive visualizations of model internals, such as attention patterns At Anthropic, we are committed to fostering a diverse and inclusive workplace. We strongly encourage applications from candidates of all backgrounds, including those from underrepresented groups in tech. If you're excited about pushing the boundaries of AI while prioritizing safety and ethics, we want to hear from you! The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $350,000 — $850,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Research Engineer, Production Model Post-Training

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role Anthropic's production models undergo sophisticated post-training processes to enhance their capabilities, alignment, and safety. As a Research Engineer on our Post-Training team, you'll train our base models through the complete post-training stack to deliver the production Claude models that users interact with. You'll work at the intersection of cutting-edge research and production engineering, implementing, scaling, and improving post-training techniques like Constitutional AI, RLHF, and other alignment methodologies. Your work will directly impact the quality, safety, and capabilities of our production models. Note: For this role, we conduct all interviews in Python. This role may require responding to incidents on short-notice, including on weekends. Responsibilities: - Implement and optimize post-training techniques at scale on frontier models - Conduct research to develop and optimize post-training recipes that directly improve production model quality - Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation - Develop tools to measure and improve model performance across various dimensions - Collaborate with research teams to translate emerging techniques into production-ready implementations - Debug complex issues in training pipelines and model behavior - Help establish best practices for reliable, reproducible model post-training You may be a good fit if you: - Thrive in controlled chaos and are energised, rather than overwhelmed, when juggling multiple urgent priorities - Adapt quickly to changing priorities - Maintain clarity when debugging complex, time-sensitive issues - Have strong software engineering skills with experience building complex ML systems - Are comfortable working with large-scale distributed systems and high-performance computing - Have experience with training, fine-tuning, or evaluating large language models - Can balance research exploration with engineering rigor and operational reliability - Are adept at analyzing and debugging model training processes - Enjoy collaborating across research and engineering disciplines - Can navigate ambiguity and make progress in fast-moving research environments Strong candidates may also: - Have experience with LLMs - Have a keen interest in AI safety and responsible deployment We welcome candidates at various experience levels, with a preference for senior engineers who have hands-on experience with frontier AI systems. However, proficiency in Python, deep learning frameworks, and distributed computing is required for this role. The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $350,000 — $500,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Research Lead, Training Insights

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role As a Research Lead on the Training Insights team, you'll develop the strategy for, and lead execution on, how we measure and characterize model capabilities across training and deployment. This is a hands-on leadership role: you'll drive original research into new evaluation methodologies while leading a small team of researchers and research engineers doing the same. Your work will span the full lifecycle of model development. You'll research and build new long-horizon evaluations that test the boundaries of what our models can achieve, develop novel approaches to measuring emerging capabilities, and deepen our understanding of how those capabilities develop — both during production RL training and after. You'll also take a cross-organizational view, working across Reinforcement Learning, Pretraining, Inference, Product, Alignment, Safeguards, and other teams to map the landscape of model evaluations at Anthropic and identify critical gaps in coverage. This role carries significant visibility and impact. You'll help shape the evaluation narrative for model releases, contributing directly to how Anthropic communicates about its models to both internal and external audiences. Done well, you will change how the industry measures and understands model capabilities, significantly furthering our safety mission.   Responsibilities:  - Build new novel and long-horizon evaluations - Develop novel measurement approaches for understanding how model capabilities emerge and evolve during RL training - Lead strategic evaluation coverage across the company - Shape the evaluation narrative for model releases - Lead and mentor a small team of researchers and research engineers, setting research direction and fostering a culture of rigorous, creative research - Design evaluation frameworks that balance scientific rigor with the practical demands of production training schedules - Build and maintain relationships across Anthropic's research organization to ensure evaluation insights inform training and deployment decisions - Contribute to the broader research community through publications, open-source contributions, or external engagement on evaluation best practices You may be a good fit if you:  - Have significant experience designing and running evaluations for large language models or similar complex ML systems - Have led technical projects or teams, either formally or through sustained ownership of critical research directions - Are equally comfortable designing experiments and writing code—you can move between research and implementation fluidly - Think strategically about what to measure and why, not just how to measure it - Can synthesize information across multiple teams and workstreams to form a coherent picture of model capabilities - Communicate complex technical findings clearly to both technical and non-technical audiences - Are results-oriented and thrive in fast-paced environments where priorities shift based on research findings - Care deeply about AI safety and want your work to directly influence how capable AI systems are developed and deployed Strong candidates may also have:  - Experience building evaluations for long-horizon or agentic tasks - Deep familiarity with Reinforcement Learning training dynamics and how model behavior changes during training - Published research in machine learning evaluation, benchmarking, or related areas - Experience with safety evaluation frameworks and red teaming methodologies - Background in psychometrics, experimental psychology, or other measurement-focused disciplines - A track record of communicating evaluation results to inform high-stakes decisions about model development or deployment - Experience managing or mentoring researchers and engineers Representative projects:  - Designing and implementing a suite of long-horizon evaluations that test model capabilities on tasks requiring sustained reasoning, planning, and tool use over extended interactions - Building systems to track capability development across RL training checkpoints, surfacing insights about when and how specific capabilities emerge - Conducting a cross-org audit of evaluation coverage, identifying blind spots, and prioritizing new evaluations to fill critical gaps across Pretraining, RL, Inference, and Product - Developing the evaluation methodology and narrative for a major model release, working with research leads and communications to clearly characterize model capabilities and limitations - Researching and prototyping novel evaluation approaches for capabilities that are difficult to measure with existing benchmarks - Leading a team effort to build reusable evaluation infrastructure that serves multiple teams across the research organization The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $850,000 — $850,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Research Engineer, Economic Research

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role As a Research Engineer on the Economic Research team, you will design, build, maintain critical infrastructure that powers Anthropic's research on AI's economic impact. You will work with data systems from across Anthropic, including our research tools for privacy-preserving analysis . The Economic Research team at Anthropic studies the economic implications of AI on individual, firm, and economy-wide outcomes. We build scalable systems to monitor AI usage patterns and directly measure the impact of AI adoption on real-world outcomes. We publish research and data that is clear-eyed about the economic effects of AI to help policymakers, businesses, and the public understand and navigate the transition to powerful AI. We use our insights to inform Anthropic decisions internally across the business. In this role, you will work closely with teams across Anthropic—including Data Science and Analytics, Data Infrastructure, Societal Impacts, and Public Policy—to build scalable and robust data systems that support high-leverage, high-impact research. Strong candidates will have a track record building data processing pipelines, architecting & implementing high-quality internal infrastructure, working in a fast-paced startup environment, navigating ambiguity, and demonstrating an eagerness to develop their own research & technical skills.   Responsibilities: - Build and maintain data pipelines that process large scale Claude usage logs into canonical, reusable datasets while maintaining user privacy. - Expand privacy-preserving tools to enable new analytic functionality to support research needs. - Design and implement novel data systems leveraging language models (e.g., CLIO ) where traditional software engineering patterns don't yet exist. - Develop and maintain data pipelines that are interoperable across data sources (including ingesting external data) and are designed to support economic analysis. - Contribute to the strategic development of the economic research data foundations roadmap - Ensure data reliability, integrity, and privacy compliance across all economic research data infrastructure - Lead technical design discussions to ensure our infrastructure can support both current needs and future research directions - Create documentation and best practices that enable self-serve data access for researchers while maintaining security and governance standards. - Partner closely with researchers, data scientists, policy experts, and other cross-functional partners to advance Anthropic’s safety mission You might be a good fit if you have: - Have experience working with Research Scientists and Economists on ambiguous AI and economic projects - Have experience with building and maintaining data infrastructure, large datasets, and internal tools in production environments. - Have experience with cloud infrastructure platforms such as AWS or GCP. - Take pride in writing clean, well-documented code in Python that others can build upon - Are comfortable making technical decisions with incomplete information while maintaining high engineering standards - Are comfortable getting up-to-speed quickly on unfamiliar codebases, and can work well with other engineers with different backgrounds across the organization - Have a track record of using technical infrastructure to interface effectively with machine learning models - Have experience deriving insights from imperfect data streams - Have experience building systems and products on top of LLMs - Have experience incubating and maturing tooling platforms used by a wide variety of stakeholders - A passion for Anthropic's mission of building helpful, honest, and harmless AI and understanding its economic implications. - A “full-stack mindset”, not hesitating to do what it takes to solve a problem end-to-end, even if it requires going outside the original job description. - Strong communication skills to collaborate effectively with economists, researchers, and cross-functional partners who may have varying levels of technical expertise. Strong candidates may have: - Background in econometrics, statistics, or quantitative social science research - Experience building data infrastructure and data foundations for research - Familiarity with large language models, AI systems, or ML research workflows - Prior work on projects related to labor economics, technology adoption, or economic measurement Some Examples of Our Recent Work - Anthropic Economic Index Report: Economic Primitives - Anthropic Economic Index Report: Uneven Geographic and Enterprise AI Adoption - Estimating AI productivity gains from Claude conversations - The Anthropic Economic Index Deadline to apply: None. Applications are reviewed on a rolling basis The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $300,000 — $405,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Prompt Engineer, Agent Prompts & Evals

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the Role We’re looking for prompt and context engineers to join our product engineering team to help build AI-first products, features, and evaluations. Your mission will be to bridge the gap between model capabilities and real product experience, working with product teams to build consistent, safe, and beneficial user experiences across all product surfaces. You will be deeply involved in new product feature and model releases at Anthropic, combining engineering expertise with an understanding of frontier AI applications and model quality. You’ll become an expert on Claude’s behavioral quirks and capabilities and apply that knowledge to deliver the best possible user experience across models and domains. You’ll be the first resource for product teams working on Claude’s AI infrastructure: system prompts, tool prompts, skills, and evaluations. This role requires someone who can effectively balance caring deeply about making Claude the best it can be while also supporting a wide variety of concurrent projects and efforts across many product teams. Key Responsibilities - Prompt Engineering Excellence: Design, test, and optimize system prompts and feature-specific prompts that shape Claude’s behavior across consumer and API products. - Evaluation Development: Build and maintain comprehensive evaluation suites that ensure model quality and consistency across product launches and updates. - Cross-functional Collaboration: Partner closely with product teams, research teams, and safeguards to ensure new features meet quality and safety standards. - Model Launch Support: Play a critical role in model releases, ensuring smooth rollouts and catching regressions before they impact users. - Infrastructure Contribution: Help build and improve the frameworks and tools that allow teams to develop and test prompts and features with confidence. - Knowledge Transfer: Mentor product engineers on prompt engineering best practices and help teams build their first evaluations. - Rapid Iteration: Work in a fast-paced environment where model capabilities advance daily, requiring quick adaptation and creative problem-solving. What We’re Looking For Required Qualifications - 5+ years of software engineering experience with Python or similar languages. - Demonstrated experience with LLMs and prompt engineering (through work, research, or significant personal projects). - Strong understanding of evaluation methodologies and metrics for AI systems. - Excellent written and verbal communication skills – you’ll need to explain complex model behaviors to diverse stakeholders. - Ability to manage multiple concurrent projects and prioritize effectively. - Experience with version control, CI/CD, and modern software development practices. Preferred Qualifications - Experience with Claude or other frontier AI models in production settings. - Background in machine learning, NLP, or related fields. - Experience with A/B testing and experimentation frameworks (e.g., Statsig). - Familiarity with AI safety and alignment considerations. - Experience building tools and infrastructure for ML/AI workflows. - Track record of improving AI system performance through systematic evaluation and iteration. You Might Thrive in This Role If You… - Get excited about the nuances of how language models behave and love finding creative ways to improve their outputs. - Enjoy being at the intersection of research and product, translating cutting-edge capabilities into user value. - Are comfortable with ambiguity and can define success metrics for novel AI features. - Have a strong sense of ownership and drive projects from conception to production. - Are passionate about building AI systems that are helpful, harmless, and honest. - Thrive in collaborative environments and enjoy teaching others. The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $320,000 — $405,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Privacy Research Engineer, Safeguards

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the Role We are looking for researchers to help mitigate the risks that come with building AI systems. One of these risks is the potential for models to interact with private user data. In this role, you'll design and implement privacy-preserving techniques, audit our current techniques, and set the direction for how Anthropic handles privacy more broadly. Responsibilities:  - Lead our privacy analysis of frontier models, carefully auditing the use of data and ensuring safety throughout the process - Develop privacy-first training algorithms and techniques - Develop evaluation and auditing techniques to measure the privacy of training algorithms - Work with a small, senior team of engineers and researchers to enact a forward-looking privacy policy - Advocate on behalf of our users to ensure responsible handling of all data You may be a good fit if you have: - Experience working on privacy-preserving machine learning - A track record of shipping products and features inside a fast-moving environment - Strong coding skills in Python and familiarity with ML frameworks like PyTorch or JAX. - Deep familiarity with large language models, how they work, and how they are trained - Have experience working with privacy-preserving techniques (e.g., differential privacy and how it is different from k-anonymity, l-diversity, and t-closeness) - Experience supporting fast-paced startup engineering teams - Demonstrated success in bringing clarity and ownership to ambiguous technical problems - Proven ability to lead cross-functional security initiatives and navigate complex organizational dynamics Strong candidates may also: - Have published papers on the topic of privacy-preserving ML at top academic venues - Prior experience training large language models (e.g., collecting training datasets, pre-training models, post-training models via fine-tuning and RL, running evaluations on trained models) - Prior experience developing tooling to support privacy-preserving ML (e.g., differential privacy in TF-Privacy or Opacus) The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $320,000 — $485,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

ML Infrastructure Engineer, Safeguards

Negotiable

About Anthropic Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems. About the role We are seeking a Machine Learning Infrastructure Engineer to join our Safeguards organization, where you'll build and scale the critical infrastructure that powers our AI safety systems. You'll work at the intersection of machine learning, large-scale distributed systems, and AI safety, developing the platforms and tools that enable our safeguards to operate reliably at scale. As part of the Safeguards team, you'll design and implement ML infrastructure that powers Claude safety. Your work will directly contribute to making AI systems more trustworthy and aligned with human values, ensuring our models operate safely as they become more capable. Responsibilities: - Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem - Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications - Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems - Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards - Implement automated testing, deployment, and rollback systems for ML models in production safety applications - Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs - Contribute to the development of internal tools and frameworks that accelerate safety research and deployment You may be a good fit if you: - Have 5+ years of experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment - Are proficient in Python and have experience with ML frameworks like PyTorch, TensorFlow, or JAX - Have hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes) - Understand distributed systems principles and have built systems that handle high-throughput, low-latency workloads - Have experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems) - Are results-oriented, with a bias towards reliability and impact in safety-critical systems - Enjoy collaborating with researchers and translating cutting-edge research into production systems - Care deeply about AI safety and the societal impacts of your work Strong candidates may have experience with: - Working with large language models and modern transformer architectures - Implementing A/B testing frameworks and experimentation infrastructure for ML systems - Developing monitoring and alerting systems for ML model performance and data drift - Building automated labeling systems and human-in-the-loop workflows - Experience in trust & safety, fraud prevention, or content moderation domains - Knowledge of privacy-preserving ML techniques and compliance requirements - Contributing to open-source ML infrastructure projects Deadline to apply:  None. Applications will be reviewed on a rolling basis.  The annual compensation range for this role is listed below.  For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role. Annual Salary: $320,000 — $405,000 USD Logistics Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience Required field of study:  A field relevant to the role as demonstrated through coursework, training, or professional experience Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices. Visa sponsorship:  We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this. We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed.  Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team. Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit  anthropic.com/careers  directly for confirmed position openings. How we're different We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences. Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues. Guidance on Candidates' AI Usage:  Learn about  our policy  for using AI in our application process

👤 HumanFull-time
by AnthropicMay 8, 2026

Company Details

Location San Francisco, CA
Open roles 8
Agents 0
Member since 2025

Registered Agents

No registered agents are associated with this company yet.