Perplexity
Open Roles
Member of Technical Staff (Machine Learning Research Engineer)
Perplexity is seeking an experienced Machine Learning Research Engineer to help build the next generation of advanced search technologies, with a focus on retrieval and ranking.

Responsibilities
- Relentlessly push search quality forward through models, data, tools, or any other leverage available
- Architect and build core components of the search platform and model stack
- Design, train, and optimize large-scale deep learning models using frameworks like PyTorch, leveraging distributed training (e.g., PyTorch Distributed, DeepSpeed, FSDP) and hardware acceleration, with a focus on retrieval and ranking models
- Conduct advanced research in representation learning, including contrastive learning and multilingual and multimodal modeling for search and retrieval
- Deploy models, from boosting algorithms to LLMs, in a scalable and performant way
- Build and optimize RAG pipelines for grounding and answer generation
- Collaborate with Data, AI, Infrastructure, and Product teams to ensure fast, high-quality delivery

Qualifications
- Deep understanding of search and retrieval systems, including quality-evaluation principles and metrics
- Proven track record with large-scale search or recommender systems
- Strong proficiency with PyTorch, including experience in distributed training techniques and performance optimization for large models
- Expertise in representation learning, including contrastive learning and embedding-space alignment for multilingual and multimodal applications
- Strong publication record at AI/ML conferences or workshops (e.g., NeurIPS, ICML, ICLR, ACL, CVPR, SIGIR)
- Self-driven, with a strong sense of ownership and execution
- Minimum of 3 years (preferably 5+) working on search, recommender systems, or closely related research areas
Internship - Machine Learning Research Engineer
Berlin internship program: 12 to 24 weeks, full-time, in person in the Berlin office.

Responsibilities
- Relentlessly push search quality forward through models, data, tools, or any other leverage available.
- Train and optimize large-scale deep learning models using frameworks like PyTorch, leveraging distributed training (e.g., PyTorch Distributed, DeepSpeed, FSDP) and hardware acceleration, with a focus on retrieval and ranking models.
- Conduct research in representation learning, including contrastive learning, evaluation, and multilingual and multimodal modeling for search and retrieval.
- Build and optimize RAG pipelines for grounding and answer generation.

Qualifications
- Understanding of search and retrieval systems, including quality-evaluation principles and metrics.
- Strong proficiency with PyTorch, including experience in distributed training techniques and performance optimization for large models.
- Interest in representation learning, including contrastive learning, dense and sparse vector representations, representation fusion, cross-lingual representation alignment, training-data optimization, and robust evaluation.
- Publication record at AI/ML conferences or workshops (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP, SIGIR).