
Staff ML Engineer - Infrastructure

Job Description

About Us

Chips are at the center of today's tech-driven world. But how we design them has not changed in decades, while their complexity and specialization have skyrocketed due to increasing performance demands from applications like AI. We want to change that.

Our team is small, technical, and fast-moving. We’ve built and shipped products at the intersection of AI, EDA, and systems software, with deep roots at companies like Qualcomm, Nvidia, Google, Meta, and the Allen Institute for AI. We’re backed by top investors including Khosla Ventures, Cerberus, and Clear Ventures, and already deployed with 10+ innovative customers, from Fortune 100s to cutting-edge AI silicon startups.

About This Role

This role offers a unique opportunity to be part of the founding team at ChipStack, where we are reinventing how modern silicon chips are designed. You will work alongside highly experienced chip designers who have built complex chips, ML scientists who have trained LLMs at scale, and top-notch infrastructure and software engineers. You will get to leverage your experience building ML and data infrastructure and apply it to some of the hardest problems in chip design.

About You

  • You want to be at a startup because you love being at the center of all the dynamism a startup offers.
  • You are willing to put in the hours and go the extra mile to ensure every customer has an exceptional experience.
  • You are self-motivated, operate with a sense of urgency, and can work independently without much guidance.
  • You are not afraid of difficult problems and enjoy venturing into areas you have not explored before.

This Role

We’re looking for a strong, experienced ML Infrastructure Engineer to join our founding team, someone who has designed and scaled ML infrastructure and training systems. You’ll be responsible for building the core infrastructure that enables training, fine-tuning, evaluation, and deployment of LLMs across cloud and on-premise environments. Your work will directly impact product capabilities and speed of iteration.

What's Needed

  • 5+ years of experience in ML infrastructure or adjacent roles

  • Deep expertise in Python and experience with training frameworks like PyTorch or TensorFlow

  • Strong systems engineering skills and experience with distributed training, data pipelines, and performance optimization

  • Experience deploying ML models to production (REST APIs, batch jobs, streaming pipelines)

  • Proficiency with cloud platforms (e.g., GCP, AWS) and containerized systems (Docker, Kubernetes)

  • Experience managing GPU/TPU workloads efficiently

  • Good communication skills and the ability to work directly with engineers and customers

What's Good to Have

  • Prior experience training or fine-tuning LLMs

  • Experience setting up observability, monitoring, and evaluation pipelines for ML models

  • Exposure to chip design fundamentals (via coursework or elsewhere)

  • Experience at an early-stage startup

Our Culture

  • Challenge the status quo: We are innovators who challenge the status quo and push forward our vision of the world.
  • Strong opinions, loosely held: We are low on ego but high on collaboration. We are okay with being wrong and are always open to learning.
  • Ship fast, ship quality: We ruthlessly prioritize what matters. We build a few things, but at lightning speed and with high quality.
  • Proud of our craft: Attention to detail is in our DNA. We take pride in what we build and make sure it exceeds the high standards of the semiconductor industry.

Staff ML Engineer - Infrastructure

San Jose, CA
Full time

Published on 07/05/2025
