Senior Data Warehouse Engineer

US-VA-McLean

Req #: 6191
Type: Full-Time

Steampunk

Overview:

We are seeking a Senior Data Warehouse Engineer to join our team and work closely with clients to develop, optimize, and scale enterprise-grade data platforms, warehouses, and pipelines. We are looking for more than just a "Senior Data Warehouse Engineer": we want a highly skilled data technologist with strong communication and problem-solving abilities who thrives in cloud-based environments.

Responsibilities:

* Lead and architect the migration of data environments with a focus on performance, reliability, and scalability.
* Design, implement, and optimize data warehouse solutions, ensuring best practices in data modeling, storage, and retrieval.
* Examine, trace, and understand complex data pipelines.
* Assess, document, and analyze data sources, ETL workflows, and business intelligence tools.
* Address technical inquiries related to data warehouse customization, integration, and enterprise architecture.
* Develop and maintain robust data pipelines and ETL processes to support business intelligence and analytics.
* Design and enforce data governance strategies, ensuring high data quality and compliance.
* Work in an Agile development environment, collaborating with data scientists, engineers, and business stakeholders.
* Contribute to the growth of our Data Exploitation Practice by implementing cutting-edge data warehousing solutions.

Qualifications:

Required Qualifications:

* Ability to hold a position of public trust with the US government.
* 5-7+ years of experience in data engineering, data warehousing, or data architecture.
* Proficiency in Python or other object-oriented languages such as Scala, Java, or C++.
* Experience designing and implementing data governance strategies.

Preferred Qualifications:

* Hands-on experience with big data tools such as Hadoop, Spark, and Kafka.
* Expertise in cloud-based data warehouse solutions, preferably AWS (Redshift, S3, RDS, Glue, Lambda).
* Experience with Databricks, PySpark, and Spark SQL.
* Experience with ETL tools (SSIS or ADF preferred), data pipeline orchestration, and workflow management tools such as Apache Airflow, Luigi, or AWS Step Functions.
* Experience designing data models, schemas, and entity-relationship diagrams.
* Strong knowledge of SQL and database optimization techniques (preferably SQL Server; alternatively PostgreSQL, MySQL, or Oracle).
* Bachelor's degree in Computer Science, Information Systems, Engineering, or a related technical discipline, or equivalent professional experience.
* Demonstrated ability to lead large-scale data migration efforts.
* Ability to work in a DevSecOps environment, applying CI/CD best practices.