ETL Engineer

US-VA-McLean

External

Req #: 6418
Type: Full-Time

Steampunk

Overview:

In today's rapidly evolving technology landscape, an organization's data has never been more important to achieving its mission and business goals. Our data exploitation experts work with our clients to support their mission and business goals by creating and executing a comprehensive data strategy, using the technology and techniques best suited to the challenge.

At Steampunk, our goal is to build and execute a data strategy for our clients that coordinates data collection and generation, aligns the organization and its data assets in support of the mission, and ultimately realizes mission goals as effectively as possible.

For our clients, data is a strategic asset. They are looking to become fact-based, data-driven, customer-focused organizations. To help realize this goal, they are leveraging visual analytics platforms to analyze, visualize, and share information. At Steampunk you will design and develop solutions to high-impact, complex data problems, working with some of the best data practitioners around. Our data exploitation approach is tightly integrated with Human-Centered Design and DevSecOps.

Responsibilities:

We are looking for a seasoned ETL Engineer to work with our team and our clients to develop enterprise-grade data pipelines. We want more than just an "ETL Engineer": we want a technologist with excellent communication and customer service skills and a passion for data and problem solving.

* Assess and understand ETL jobs and workflows
* Create reusable data pipelines from source to target systems (see the sketch after this list)
* Test, validate, and deploy ETL pipelines
* Support reporting, business intelligence, and data science end users through ETL and ELT operations
* Work with data architects to create data models and design schemas for RDBMS, warehouse, and data lake systems
* Key must-have skills: Python and SQL
* Work within an Agile software development lifecycle
* Contribute to the growth of our Data Exploitation Practice!
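
To give a flavor of the work, here is a minimal sketch of a reusable source-to-target pipeline in Python and SQL, the two must-have skills above. The table names, columns, and cleaning rule are hypothetical, and it uses only the standard library's sqlite3 module so the example stays self-contained; real pipelines would target the RDBMS, warehouse, or data lake systems described below.

    import sqlite3

    def extract(conn):
        # Pull raw rows from the (hypothetical) source table.
        return conn.execute("SELECT id, email, amount FROM raw_orders").fetchall()

    def transform(rows):
        # Example cleaning rule: normalize emails, drop non-positive amounts.
        return [
            (row_id, email.strip().lower(), amount)
            for row_id, email, amount in rows
            if amount > 0
        ]

    def load(conn, rows):
        # Idempotent load into the (hypothetical) target table.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS clean_orders "
            "(id INTEGER PRIMARY KEY, email TEXT, amount REAL)"
        )
        conn.executemany("INSERT OR REPLACE INTO clean_orders VALUES (?, ?, ?)", rows)
        conn.commit()

    def run_pipeline(source_db, target_db):
        with sqlite3.connect(source_db) as src, sqlite3.connect(target_db) as tgt:
            load(tgt, transform(extract(src)))

The extract/transform/load separation is what makes a pipeline reusable: each stage can be pointed at a new source or target without rewriting the others.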

Qualifications:

* Ability to hold a position of public trust with the US government.
* 2-4 years of industry experience coding commercial software and a passion for solving complex problems.
* 2-4 years of direct data engineering experience with tools such as:
  * ETL tools: Python, Informatica, Pentaho, Talend
  * Big data tools: Hadoop, Spark, Kafka, etc.
  * Relational SQL and NoSQL databases, including Postgres and Cassandra
  * Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (a minimal Airflow sketch follows this list)
  * AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
  * Data streaming systems: Storm, Spark Streaming, etc.
  * Search tools: Solr, Lucene, Elasticsearch
  * Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.

* Advanced working knowledge of SQL, including query authoring and optimization, and working familiarity with a variety of relational databases.
* Experience with message queuing, stream processing, and highly scalable 'big data' data stores.
* Experience manipulating structured and unstructured data for analysis
* Experience constructing complex queries to analyze results, whether directly against a database or in a data processing development environment
* Experience working in an Agile environment
* Experience supporting project teams of developers and data scientists who build web-based interfaces, dashboards, reports, and analytics/machine learning models
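
As referenced in the tools list above, here is a minimal sketch of what pipeline orchestration looks like in Airflow, assuming Airflow 2.x; the DAG id, schedule, and task callables are hypothetical placeholders.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Hypothetical stage callables; in a real DAG each would do actual work.
    def extract():
        print("pull rows from the source system")

    def transform():
        print("clean and reshape the extracted rows")

    def load():
        print("write the results to the target system")

    with DAG(
        dag_id="orders_etl",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        # Declare one task per ETL stage and chain them in order.
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task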