Senior Data Engineer – 23158-1
Apply now: we are hiring a Senior Data Engineer for a hybrid role based in Burbank, CA. This is a 4-month contract position (with potential extension) starting September 30, 2025.
Job Title: Senior Data Engineer
Location-Type: Hybrid (3 days onsite – Burbank, CA)
Start Date: September 30, 2025 (or 2 weeks from offer)
Duration: 4 months (Contract, potential extension)
Compensation Range: $80.00 – $85.00/hr W2
Job Description:
We are seeking a Senior Data Engineer to join a lean-agile product delivery team focused on building scalable, governed, and AI/ML-ready data solutions. As part of a cross-functional pod, you will design and implement high-performance data pipelines, support analytics and machine learning workflows, and embed governance into all aspects of data delivery. This role requires strong AWS expertise, hands-on engineering, and the ability to collaborate across engineering, product, and architecture teams.
Day-to-Day Responsibilities:
- Design & Build Scalable Data Pipelines: Develop batch and streaming pipelines using AWS tools (Glue, Lambda, Step Functions, Kinesis) and orchestration frameworks such as Airflow.
- Optimize & Monitor: Ensure pipelines are resilient, cost-efficient, and scalable.
- Enable Analytics & AI/ML: Deliver structured, high-quality data to BI tools and ML workflows; partner with data scientists to support feature engineering and model deployment.
- Ensure Governance & Quality: Embed validation, lineage, tagging, and metadata standards into pipelines; contribute to the enterprise data catalog.
- Collaborate & Mentor: Participate in Agile ceremonies, architecture syncs, and backlog refinement; mentor junior engineers and advocate for reusable services across pods.
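To give a concrete flavor of the governance work described above, here is a minimal sketch of record-level validation embedded in a pipeline step. It uses only the Python standard library; the field schema and function names (`REQUIRED_FIELDS`, `validate_batch`) are illustrative assumptions, not part of any specific platform used by this team.

```python
# Hypothetical sketch: splitting an incoming batch into valid records and
# quarantined records with reasons, so downstream consumers only see clean data.
REQUIRED_FIELDS = {"event_id": str, "ts": str, "amount": float}  # assumed schema

def validate_batch(records):
    """Return (valid_records, quarantined) for a batch of dicts."""
    valid, quarantined = [], []
    for rec in records:
        # Collect one error per missing or wrongly typed required field.
        errors = [
            f"missing or wrong-typed field: {name}"
            for name, typ in REQUIRED_FIELDS.items()
            if not isinstance(rec.get(name), typ)
        ]
        if errors:
            quarantined.append({"record": rec, "errors": errors})
        else:
            valid.append(rec)
    return valid, quarantined
```

In a real pipeline the quarantined records would typically land in a dead-letter location (for example an S3 prefix) with their error metadata, which is what makes lineage and data-quality reporting possible later.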
Requirements:
Must-Haves:
- 7 years of experience in data engineering, with hands-on expertise in AWS services (Glue, Kinesis, Lambda, RDS, DynamoDB, S3).
- Proficiency with SQL, Python, and PySpark for data transformations.
- Experience with orchestration tools such as Airflow or Step Functions.
- Proven ability to optimize pipelines for both batch and streaming use cases.
- Understanding of data governance practices, including lineage, validation, and cataloging.
- Experience with modern data platforms such as Snowflake, Databricks, Redshift, or Informatica.
Nice-to-Haves:
- Experience influencing platform-first approaches across pods.
- Strong collaboration and mentoring skills.
- Knowledge of advanced governance practices and large-scale data platform operations.
Soft Skills:
- Excellent communication skills for cross-functional collaboration.
- Ability to mentor and guide junior engineers.
- Proactive problem solver with strong organizational and teamwork skills.