Senior Machine Learning Engineer, Perception in Santa Clara
Energy Jobline is the largest and fastest-growing global Energy Job Board and Energy Hub. We reach an audience of over 7 million energy professionals, advertise more than 400,000 global energy and engineering jobs each month, and work with the leading energy companies worldwide.
We focus on the Oil & Gas, Renewables, Engineering, Power, and Nuclear markets, as well as emerging technologies in EV, Battery, and Fusion. We are committed to offering our jobseekers the most exciting career opportunities from around the world.
Job Description
We are seeking a highly skilled Machine Learning Engineer with deep expertise in developing Bird’s Eye View (BEV) fusion models using multimodal sensor inputs, particularly LiDAR. You will play a central role in designing scalable perception algorithms that integrate data from camera, LiDAR, and radar sensors to support autonomous driving and 3D scene understanding.
Responsibilities:
- Design, implement, and optimize BEV-based perception models that fuse camera, LiDAR, and radar inputs.
- Benchmark perception models using large-scale datasets and well-defined quantitative metrics.
- Collaborate cross-functionally with research, data, and deployment engineers to refine models and support real-world applications.
- Maintain a strong focus on performance, robustness, and scalability for deployment in production systems.
- Ensure that your work, and your team’s, is performed in accordance with the company’s Quality Management System (QMS) requirements; monitor quality and drive continuous process improvements.
Required Skills:
- Ph.D. or Master’s degree in AI, Computer Science, Electrical Engineering, Robotics, or a related field.
- New Ph.D. graduates are welcome; Master’s candidates should have 3+ years of industry experience.
- Proficiency in Python and experience building deep learning pipelines.
- Strong expertise in PyTorch, TensorFlow, or JAX.
- Proven experience with LiDAR-based 3D perception and BEV representation models.
- Deep understanding of multimodal sensor fusion architectures and techniques.
- Familiarity with camera, LiDAR, and radar modalities and their synchronization, calibration, and integration in perception pipelines.
- Solid foundation in computer vision, deep learning, and 3D geometry.
Preferred Skills:
- Industry or academic experience in autonomous vehicle perception, robotics, or related areas.
- Hands-on experience developing deep learning models in real-world or production environments.
- Experience with distributed training, high-performance computing, or GPU acceleration.
Our compensation (cash and equity) is determined based on the position, your location, qualifications, and experience.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.
If you are interested in applying for this job, please click the Apply button and follow the application process. Energy Jobline wishes you the very best of luck in your next career move.