Senior Data Engineer - CDO / Data & AI Engineering in Atlanta
Energy Jobline is the largest and fastest growing global Energy Job Board and Energy Hub. We have an audience reach of over 7 million energy professionals, 400,000+ monthly advertised global energy and engineering jobs, and work with the leading energy companies worldwide.
We focus on the Oil & Gas, Renewables, Engineering, Power, and Nuclear markets as well as emerging technologies in EV, Battery, and Fusion. We are committed to ensuring that we offer the most exciting career opportunities from around the world for our jobseekers.
Job Description: Senior Data Engineer – CDO / Data & AI Engineering
Job Title: Senior Data Engineer
Location: Onsite/Hybrid - Middletown, Atlanta, or Plano
Role Type: Full-Time
Experience Level: Developer (5+ years)
Education: B.S. in Computer Science or related field
Overview
We are seeking an experienced Senior Data Engineer to design, build, and maintain
enterprise-scale data platforms using modern cloud technologies. The ideal
candidate will have deep expertise in Palantir Foundry, Databricks, and/or Snowflake,
with a strong background in building scalable ETL/ELT pipelines and data warehousing
solutions.
Daily Responsibilities:
- Design and implement end-to-end data pipelines using Palantir Foundry, Databricks, and Snowflake
- Develop scalable ETL/ELT workflows using Python, PySpark, and SQL for processing large-scale datasets
- Build and maintain data lake and data warehouse architectures on cloud platforms (Azure Data Factory, Synapse, ADLS; Databricks; Snowflake)
- Implement data governance, quality checks, and metadata management frameworks
- Create and optimize Databricks notebooks, Delta Lake tables, and Unity Catalog implementations
- Design Palantir Foundry Ontologies and transformation pipelines
- Collaborate with data scientists, analysts, and business stakeholders to deliver data products
- Optimize query performance and storage strategies in Snowflake (clustering, partitioning, materialized views)
- Implement real-time streaming pipelines using Spark Streaming or similar technologies
- Develop CI/CD pipelines for data workflows using Git, Jenkins, or similar tools
Required Qualifications:
- 5+ years of hands-on data engineering experience
- Expert-level proficiency in Python and PySpark for data processing
- Strong SQL skills with experience in complex query optimization
- Hands-on experience with at least two of: Palantir Foundry, Databricks, Snowflake
- Experience with cloud platforms (Azure) and their data services
- Knowledge of data modeling (dimensional modeling, star schema, snowflake schema)
- Experience with orchestration tools (Airflow, dbt, Luigi, or similar)
- Strong understanding of data governance, security, and compliance requirements
Desired Qualifications:
- Experience with IBM watsonx.data or similar AI data platforms
- Knowledge of Delta Lake, Iceberg, or similar lakehouse formats
- Experience with GenAI, RAG architectures, or LangChain
- Familiarity with MCP (Model Context Protocol) for AI agent integrations
If you are interested in applying for this job please press the Apply Button and follow the application process. Energy Jobline wishes you the very best of luck in your next career move.