OpenAI
Open Roles
Researcher, Loss of Control
ABOUT THE TEAM

The Safety Systems org ensures that OpenAI’s most capable models can be responsibly developed and deployed. We build evaluations, safeguards, and safety frameworks that help our models behave as intended in real-world settings. The Preparedness team is an important part of the Safety Systems org (https://openai.com/safety/safety-systems) at OpenAI and is guided by OpenAI’s Preparedness Framework (https://openai.com/index/updating-our-preparedness-framework/).

Frontier AI models have the potential to benefit all of humanity, but they also pose increasingly severe risks. To ensure that AI promotes positive change, the Preparedness team helps us prepare for the development of increasingly capable frontier AI models. This team is tasked with identifying, tracking, and preparing for catastrophic risks related to frontier AI models. The mission of the Preparedness team is to:

1. Closely monitor and predict the evolving capabilities of frontier AI systems, with an eye towards risks whose impact could be catastrophic.
2. Ensure we have concrete procedures, infrastructure, and partnerships to mitigate these risks and to safely handle the development of powerful AI systems.

Preparedness tightly connects capability assessment, evaluations, internal red teaming, and mitigations for frontier models, as well as overall coordination on AGI preparedness. This is fast-paced, exciting work with far-reaching importance for the company and for society.

ABOUT THE ROLE

As frontier AI systems become more capable, they are increasingly able to pursue long-horizon goals, use tools, adapt to feedback, and operate with greater autonomy. These advances create enormous potential benefits, but they also introduce the risk that models may behave in ways that are misaligned, deceptive, or difficult to supervise or contain. Reducing loss of control risk is therefore a core challenge for safely developing and deploying advanced AI systems.
As a Researcher for loss of control mitigations, you will help design and implement an end-to-end mitigation stack to reduce the risk of intentionally subversive or insufficiently controllable model behavior across OpenAI’s products and internal deployments. This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as model capabilities, deployment patterns, and threat models evolve.

IN THIS ROLE, YOU WILL:
- Design and implement mitigation components for loss of control risk, spanning prevention, monitoring, detection, containment, and enforcement, under the guidance of senior technical and risk leadership.
- Integrate safeguards across product and research surfaces in partnership with product, engineering, and research teams, helping ensure protections are consistent, low-latency, and resilient as usage and model autonomy increase.
- Evaluate technical trade-offs within the loss of control domain (coverage, robustness, latency, model utility, and operational complexity) and propose pragmatic, testable solutions.
- Collaborate closely with risk modeling, evaluations, and policy partners to align mitigation design with anticipated failure modes and high-severity threat scenarios, including deceptive alignment, hidden subgoals, reward hacking, and attempts to evade oversight.
- Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against increasingly capable and potentially subversive model behaviors, such as sandbagging, monitor evasion, exploit-seeking, unsafe tool use, or strategic deception, and iterate based on findings.

YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
- Bring demonstrated experience in deep learning and transformer models.
- Are proficient with frameworks such as PyTorch or TensorFlow.
- Possess a strong foundation in data structures, algorithms, and software engineering principles.
- Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
- Excel at working collaboratively with cross-functional teams across research, policy, product, and engineering.
- Have significant experience designing and evaluating technical safeguards, control mechanisms, or monitoring systems for advanced AI behavior.
- (Nice to have) Bring background knowledge in alignment, control, interpretability, robustness, adversarial ML, or related fields.

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic. For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement https://cdn.openai.com/policies/eeo-policy-statement.pdf.
Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates.

For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form https://form.asana.com/?d=57018692298241&k=5MqR40fZd7jlxVUh5J-UeA. No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&d=57018692298241.

OpenAI Global Applicant Privacy Policy https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Researcher, Frontier Cybersecurity Risks
ABOUT THE ROLE

Models are becoming increasingly capable, moving from tools that assist humans to agents that can plan, execute, and adapt in the real world. As we push toward AGI, cybersecurity becomes one of the most important and urgent frontiers: the same systems that can accelerate productivity can also accelerate exploitation.

As a Researcher for cybersecurity risks, you will help design and implement an end-to-end mitigation stack to reduce severe cyber misuse across OpenAI’s products.
This role requires strong technical depth and close cross-functional collaboration to ensure safeguards are enforceable, scalable, and effective. You’ll contribute directly to building protections that remain robust as products, model capabilities, and attacker behaviors evolve.

IN THIS ROLE, YOU WILL:
- Design and implement mitigation components for model-enabled cybersecurity misuse, spanning prevention, monitoring, detection, and enforcement, under the guidance of senior technical and risk leadership.
- Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and scale with usage and new model capabilities.
- Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.
- Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.
- Execute rigorous testing and red-teaming workflows, helping stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across different product surfaces, then iterate based on findings.

YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
- Bring demonstrated experience in deep learning and transformer models.
- Are proficient with frameworks such as PyTorch or TensorFlow.
- Possess a strong foundation in data structures, algorithms, and software engineering principles.
- Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
- Excel at working collaboratively with cross-functional teams across research, security, policy, product, and engineering.
- Have significant experience designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale.
- (Nice to have) Bring background knowledge in cybersecurity or adjacent fields.
Researcher, Alignment Science
ABOUT THE TEAM

The Alignment Science team at OpenAI studies the science of intent alignment: how to train models to understand what users are actually asking for, act faithfully on that intent while respecting safety constraints, verify what they did, and report their limitations honestly. Our work sits alongside broader value alignment efforts, but this team focuses on scalable methods for ensuring instruction-following, honesty, and robustness as models become more capable.

We work on both sides of alignment research: producing externally publishable results and integrating promising techniques into the models OpenAI deploys. Recent team research on model confessions studies how models can be trained to honestly report shortcomings after their original answer, including failures involving hallucination, instruction following, scheming, and reward hacking. That work reflects a broader agenda: build scalable and general methods to ensure models follow human intent.

The team uses a mix of training and evaluation methods, with a focus on reinforcement learning. We care about rigorous, quantitative research that can translate into safer model behavior.

ABOUT THE ROLE

As a Research Engineer / Research Scientist on the Alignment team, you will design and run experiments that help increasingly capable models follow user intent, remain calibrated about correctness and risk, and honestly surface their own mistakes. You will work on hands-on model training, evaluation design, and research infrastructure, while helping turn promising alignment methods into techniques that can be used in frontier model development.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees. We are also open to exceptional remote candidates who can operate independently and collaborate closely with the team.
IN THIS ROLE, YOU WILL:
- Design and implement alignment experiments focused on intent following, honesty, calibration, and robustness.
- Train and evaluate models using reinforcement learning and other empirical ML methods.
- Develop evaluations for failure modes such as hallucination, instruction-following failures, reward hacking, covert actions, and scheming.
- Study methods that encourage models to verify their behavior and report shortcomings honestly, including confession-style training objectives.
- Build monitoring and inference-time interventions that ensure compliant behavior or surface model issues to users or downstream systems.
- Investigate how alignment methods scale with model capability, compute, data, context length, action length, and adversarial pressure.
- Integrate successful techniques into model training and deployment workflows.
- Produce externally publishable research when results advance the broader science of alignment.
- Collaborate with researchers and engineers across post-training, RL, evaluations, safety, and product-facing teams.

YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Have strong hands-on experience training, evaluating, or debugging large ML models, especially LLMs.
- Have excellent engineering skills in Python and modern ML frameworks such as PyTorch.
- Bring mathematical rigor, quantitative taste, and comfort turning ambiguous research questions into measurable experiments.
- Have experience with reinforcement learning, post-training, preference optimization, scalable oversight, model evaluation, or adjacent empirical ML research.
- Can operate with high independence and do not need close day-to-day handholding.
- Enjoy fast-paced, collaborative research environments where priorities shift as models and evidence change.
- Have a strong record in technical problem solving, such as competitive programming, math contests, systems work, or similarly rigorous engineering and research projects.
- Care about building AI systems that are trustworthy, honest, and reliable in high-stakes settings.
- Are motivated by making concrete progress on alignment methods that can be tested, trained, published, and deployed.
Research Engineer, Codex
ABOUT THE TEAM

The Codex team is responsible for building state-of-the-art AI systems that can write code, reason about software, and act as intelligent agents for developers and non-developers alike. Our mission is to push the frontier of code generation and agentic reasoning and to deploy these capabilities in real-world products such as ChatGPT and the API, as well as in next-generation tools designed specifically for agentic coding. We operate across research, engineering, product, and infrastructure, owning the full lifecycle of experimentation, deployment, and iteration on novel coding capabilities.

ABOUT THE ROLE

As a member of the Codex team, you will advance the capabilities, performance, and reliability of AI coding models through a combination of research, experimentation, and system optimization. You’ll collaborate with world-class researchers and engineers to develop and deploy systems that help millions of users write better code, faster, while also ensuring these systems are efficient, cost-effective, and production-ready.

We’re looking for people who combine deep curiosity, strong technical fundamentals, and a bias toward impact. Whether your strengths lie in ML research, systems engineering, or performance optimization, you’ll play a pivotal role in pushing the state of the art and bringing these advances into the hands of real users.

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.

IN THIS ROLE, YOU MIGHT:
- Design and run experiments to improve code generation, reasoning, and agentic behavior in Codex models.
- Develop research insights into model training, alignment, and evaluation.
- Hunt down and address inefficiencies across the Codex system stack, from agent behavior to LLM inference to container orchestration, and land high-leverage performance improvements.
- Build tooling to measure, profile, and optimize system performance at scale.
- Work across the stack to prototype new capabilities, debug complex issues, and ship improvements to production.

YOU MIGHT THRIVE IN THIS ROLE IF YOU:
- Are excited to explore and push the boundaries of large language models, especially in the domain of software reasoning and code generation.
- Have strong software engineering skills and enjoy quickly turning ideas into working prototypes.
- Think holistically about performance, balancing speed, cost, and user experience.
- Bring creativity and rigor to open-ended research problems and thrive in highly iterative, ambiguous environments.
- Have experience operating across both ML systems and cloud infrastructure.
Research Engineer / Machine Learning Engineer - Applied Voice
About the Team

OpenAI is at the forefront of artificial intelligence, driving innovation and shaping the future with cutting-edge research. Our mission is to ensure that AI's benefits reach everyone. We are looking for visionary Research Engineers to join our Applied Voice Team, where you'll conduct groundbreaking research on speech models and transform it into real-world applications that can change industries, enhance human creativity, and solve complex problems.

About the Role

As a Research Engineer in OpenAI's Applied Voice Team, you will have the opportunity to work with some of the brightest minds in AI. You'll design and build state-of-the-art speech models (speech-to-speech, transcription, text-to-speech, etc.) and help turn research breakthroughs into tangible OpenAI speech products. If you're excited about making AI technology accessible and impactful, this role is your chance to make a significant mark.

Some of our recent work:
- Introducing gpt-realtime https://openai.com/index/introducing-gpt-realtime/
- Demo - gpt-realtime-1.5 https://x.com/OpenAIDevs/status/2026014334787461508
- ASR, TTS https://x.com/OpenAIDevs/status/2000678814628958502

In this role, you will:
- Innovate and Build: Design and build advanced machine learning models that solve real-world problems. Bring OpenAI's research from concept to implementation, creating AI-driven applications with a direct impact.
- Collaborate with the Best: Work closely with software engineers, product managers, and forward-deployed engineers to understand complex business challenges, address customer concerns, and deliver AI-powered solutions. Be part of a dynamic team where ideas flow freely and creativity thrives.
- Optimize and Scale: Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready. Contribute to projects that require cutting-edge technology and innovative approaches.
- Learn and Lead: Stay ahead of the curve by engaging with the latest developments in machine learning and AI. Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices.
- Make a Difference: Monitor and maintain deployed models to ensure they continue delivering value. Your work will directly influence how AI benefits individuals, businesses, and society at large.

You might thrive in this role if you:
- Hold a Master's or PhD degree in Computer Science, Machine Learning, or a related field.
- Have 2+ years of professional engineering experience (excluding internships) in relevant roles at tech and product-driven companies.
- Bring demonstrated experience in deep learning and transformer models.
- Are proficient in frameworks like PyTorch or TensorFlow.
- Have a strong foundation in data structures, algorithms, and software engineering principles.
- Are familiar with methods of training and fine-tuning large language models, such as distillation, supervised fine-tuning, and policy optimization.
- Have experience with speech models (a plus).
- Have excellent problem-solving and analytical skills, with a proactive approach to challenges.
- Can work collaboratively with cross-functional teams.
- Can move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines.
- Enjoy owning problems end-to-end and are willing to pick up whatever knowledge you're missing to get the job done.
No response will be provided to inquiries unrelated to job posting compliance. We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link https://form.asana.com/?k=bQ7w9h3iexRlicUdWRiwvg&d=57018692298241. OpenAI Global Applicant Privacy Policy https://cdn.openai.com/policies/global-employee-and-contractor-privacy-policy.pdf At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.
Hardware / Software CoDesign Engineer - 3P
About the Team
OpenAI’s Hardware organization develops silicon and system-level solutions designed for the unique demands of advanced AI workloads. The team is responsible for building the next generation of AI-native silicon while working closely with software and research partners to co-design hardware tightly integrated with AI models. In addition to delivering production-grade silicon for OpenAI’s supercomputing infrastructure, the team also creates custom design tools and methodologies that accelerate innovation and enable hardware optimized specifically for AI.

About the Role
As an Engineer on our hardware optimization and co-design team, you will co-design future hardware from different vendors for programmability and performance. You will work with our kernel, compiler, and machine learning engineers to understand their unique needs related to ML techniques, algorithms, numerical approximations, programming expressivity, and compiler optimizations. You will advocate for these constraints with various vendors to develop and influence future hardware architectures toward efficient training and inference on our models. If you are excited about efficiently distributing a large language model across devices, dealing with and optimizing system-wide/rack-wide networking bottlenecks, eventually tailoring the compute pipe and memory hierarchy of the hardware platform, simulating workloads at different abstractions, and working closely with our partners, this is the perfect opportunity!

This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
Key Responsibilities
- Co-design future hardware for programmability and performance with our hardware vendors
- Assist hardware vendors in developing optimal kernels and add support for them in our compiler
- Develop performance estimates for critical kernels across different hardware configurations and drive decisions on compute core and memory hierarchy features
- Build system performance models at different abstraction levels and carry out analysis to drive decisions on scale-up, scale-out, and front-end networking
- Work with machine learning engineers, kernel engineers, and compiler developers to understand their vision and needs from high-performance accelerators
- Manage communication and coordination with internal and external partners
- Influence the roadmaps of hardware partners to optimize their products for OpenAI’s workloads
- Evaluate potential partners’ accelerators and platforms
- As the scope of the role and team grows, understand and influence hardware partners’ roadmaps for our datacenter networks, racks, and buildings

Qualifications
- 4+ years of industry experience, including experience harnessing compute at scale and optimizing ML platform code to run efficiently on target hardware
- Strong experience in software/hardware co-design
- Deep understanding of GPUs and/or other AI accelerators
- Experience with CUDA, Triton, or a related accelerator programming language
- Experience driving machine learning accuracy with low-precision formats
- Experience with system performance modeling and analysis to optimize ML model deployment
- Strong coding skills in C/C++ and Python
- Familiarity with the fundamentals of deep learning computing and chip architecture/microarchitecture
- Ability to collaborate actively with ML engineers, kernel writers, compiler developers, system engineers, and chip architects/microarchitects

Preferred Skills
- PhD in Computer Science and Engineering with a specialization in Computer Architecture, Parallel Computing, Compilers, or other Systems
- Strong understanding of LLMs and challenges related to their training and inference