Data Engineer


Req #: 29722

SoftwareOne

Overview:

SoftwareOne and Crayon have come together to form a global, AI-powered software and cloud solutions provider with a bold vision for the future. With a footprint in over 70 countries and a diverse team of 13,000+ professionals, we offer unparalleled opportunities for talent to grow, make an impact, and shape the future of technology. At the heart of our business is our people. We empower our teams to work across borders, innovate fearlessly, and continuously develop their skills through world-class learning and development programs. Whether you're passionate about cloud, software, data, AI, or building meaningful client relationships, you'll find a place to thrive here. Join us and be part of a purpose-driven culture where your ideas matter, your growth is supported, and your career can go global. 

Responsibilities:

We are looking for a Data Engineer to build and maintain scalable data pipelines and ingestion frameworks using a mix of open-source and custom-built tooling. 

You will design and operate reliable data systems that ingest, validate, and process data from APIs, databases, and event streams, ensuring high data quality, performance, and availability.

Key Responsibilities 

Build and maintain scalable data pipelines and ingestion frameworks 

Design robust data fetching systems for APIs, databases, and streaming sources 

Ensure data quality, validation, and consistency across all pipelines 

Optimize systems for performance, scalability, and cost efficiency 

Implement observability, monitoring, and fault-tolerant architectures 

Manage batch and real-time data processing workflows 

Collaborate with platform, analytics, and product teams to deliver trusted datasets 

Contribute to data architecture, standards, and best practices 

Core Skills & Experience 

Strong experience in data engineering and distributed systems 

Proficiency in Databricks, Azure Data Factory, Python, SQL, Terraform, and REST APIs

Experience with data pipeline tools  

Hands-on with streaming technologies  

Experience with cloud platforms  

Knowledge of data storage systems 

Understanding of data modeling, partitioning, and indexing strategies 

Experience with API integrations and event-driven architectures 

Familiarity with CI/CD and Infrastructure as Code 

Qualifications:

Strong understanding of data consistency, reliability, and fault tolerance 

Experience building production-grade, scalable data systems 

Ability to work across teams and translate business needs into data solutions 

Focus on data quality, observability, and performance optimization 

Why Join Our Team 

Our job is to take cloud and operational data and turn it into platforms that generate revenue and save customers money. 

What does that look like day to day? 

CCC (Cloud Cost Control) covers Azure, AWS, GCP, and M365 for a large and growing enterprise customer base. You're not building a report for one stakeholder; you're building infrastructure that runs FinOps across a multi-tenant production environment at real scale.

The data problems are genuinely interesting. We ingest billing data from multiple cloud providers, normalize it, run anomaly detection, and pipe it into a web application. Getting that right consistently, at scale, across providers with different APIs and schemas is harder than it sounds. 
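
Purely as an illustrative sketch of that flow (not the team's code; the provider field names, unified schema, and threshold below are assumptions), normalizing billing rows from different providers into one shape and flagging outliers might look like this in Python:

from dataclasses import dataclass
from datetime import date
from statistics import mean, stdev

# Hypothetical unified record; real Azure/AWS/GCP billing export schemas differ.
@dataclass
class CostRecord:
    provider: str
    account_id: str
    service: str
    usage_date: date
    cost_usd: float

def normalize_azure(row: dict) -> CostRecord:
    # Field names are illustrative, not the actual Azure export schema.
    return CostRecord(
        provider="azure",
        account_id=row["subscriptionId"],
        service=row["meterCategory"],
        usage_date=date.fromisoformat(row["date"]),
        cost_usd=float(row["costInUsd"]),
    )

def normalize_aws(row: dict) -> CostRecord:
    # Likewise illustrative for an AWS cost-and-usage-style row.
    return CostRecord(
        provider="aws",
        account_id=row["usage_account_id"],
        service=row["product_code"],
        usage_date=date.fromisoformat(row["usage_start_date"][:10]),
        cost_usd=float(row["unblended_cost"]),
    )

def flag_anomalies(daily_costs: list[float], threshold: float = 3.0) -> list[int]:
    # Simple z-score check: return indexes of days whose cost is far from the mean.
    if len(daily_costs) < 2:
        return []
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(daily_costs) if abs(c - mu) / sigma > threshold]

In practice the normalized records feed downstream anomaly detection and the web application, but the shape of the problem is the same: many source schemas, one trusted model.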

The stack is modern and the ownership is real: Databricks, Azure Data Factory, Python, SQL, Terraform, REST APIs. You won't just be closing tickets; you'll have a say in how things are built.
			