Staff AI/ML Platform Engineer

US-CO-Littleton


Req #: 96791
Type: Full-time Regular

EchoStar

Overview:

Our Technology teams challenge the status quo and reimagine capabilities across industries. Whether through research and development, technology innovation or solution engineering, our team members play a vital role in connecting consumers with the products and platforms of tomorrow.

Responsibilities:

Candidates must be willing to participate in at least one in-person interview, which may include a live whiteboarding or technical assessment session.

We are seeking an experienced Staff AI/ML Platform Engineer to build and operate production AI/ML infrastructure that enables data scientists and engineers to deploy AI-driven solutions that optimize network performance, customer experience, and operations for Boost Wireless. You will own the implementation and management of scalable ML platforms, MLOps pipelines, and real-time inference systems across our enterprise and cloud-native environments. As a key technical contributor, you'll partner with architects, data scientists, and network engineers to translate AI architecture designs into reliable, production-grade platforms that deliver measurable business impact.

Key Responsibilities:
* Architect, build, and maintain enterprise AI/ML platforms and infrastructure supporting network optimization, operational analytics, and customer intelligence initiatives

* Design and implement scalable MLOps pipelines for model training, deployment, monitoring, and automated retraining across multi-cloud and edge environments

* Develop and operate real-time data processing and inference frameworks to power high-volume streaming analytics and low-latency decisioning

* Integrate AI/ML services into OSS/BSS ecosystems, ensuring seamless interoperability, performance reliability, and automated, data-driven decision capabilities

* Deploy and optimize AI-driven solutions for enterprise network slicing using cloud-native and edge computing technologies to enhance scalability and performance

* Establish robust model lifecycle management, governance, and monitoring frameworks while mentoring junior engineers and partnering with senior architects to drive operational excellence

Qualifications:
Education and Experience:
* Bachelor's or Master's in Computer Science, Engineering, or a related technical discipline
* 8+ years of platform engineering or DevOps experience, including at least 3 years building AI/ML infrastructure in telecom or enterprise environments

Skills and Qualifications:
* Deep expertise in cloud-native architectures, Kubernetes orchestration, containerization (Docker), and edge deployment patterns within distributed environments

* Hands-on experience with enterprise AI/ML platforms (IBM Watson, AWS Bedrock, Databricks, Google Vertex AI) and MLOps tooling such as MLflow, Kubeflow, and SageMaker

* Strong proficiency in big data ecosystems including Kafka, Spark, Hadoop, and distributed computing frameworks supporting real-time analytics

* Solid understanding of ML frameworks (TensorFlow, PyTorch, Scikit-learn) and modern model serving technologies for scalable inference

* Advanced software engineering capabilities, including CI/CD pipelines, infrastructure-as-code (Terraform, CloudFormation), observability tools, and integration with OSS/BSS systems and network protocols

* Demonstrated problem-solving, collaboration, and stakeholder communication skills; preferred experience includes AI-driven network optimization, network slicing, telemetry systems, model monitoring, A/B testing, feature stores, and telecom data privacy and compliance frameworks