Staff Data Scientist - AI Products

US-CO-Englewood

Req #: 97536
Type: Full-time Regular

EchoStar

Overview:

Our Technology teams challenge the status quo and reimagine capabilities across industries. Whether through research and development, technology innovation or solution engineering, our team members play a vital role in connecting consumers with the products and platforms of tomorrow.

Responsibilities:

Candidates must be willing to participate in at least one in-person interview, which may include a live whiteboarding or technical assessment session.

As a Staff Data Scientist in our AI Office, you will bridge the gap between complex data science and tangible business outcomes. You are responsible for solving the challenge of identifying high-impact AI opportunities and determining their feasibility within our vast data ecosystem. By transforming large-scale structured and unstructured data into production-grade GenAI and machine learning solutions, you will ensure our AI products drive measurable value for our customers and the mission.
What Success Looks Like (Objectives)
* Partner with stakeholders to identify high-impact, feasible opportunities for the AI Office and demonstrate how data science can drive business outcomes

* Deliver production-ready machine learning models for customer segmentation, churn prediction, and lifetime value estimation to directly impact department OKRs

* Architect and deploy GenAI and LLM solutions that automate complex workflows, such as transcript analysis and compliance validation, using cutting-edge AI applications

* Lead the transformation of large-scale structured and unstructured data into actionable insights that inform product strategy and user modeling

* Collaborate with cross-functional teams to build scalable, cloud-based solutions while providing technical mentorship to elevate the team's collective expertise

* Master the art of storytelling by communicating complex models and findings to both technical and non-technical audiences via intuitive dashboards and presentations

Qualifications:
Core Skills and Competencies (What you'll bring)
* You don't just design models; you champion them by balancing immediate feature requests against long-term technical health and the elimination of technical debt in production environments

* Expert-level mastery of statistical modeling and machine learning, including regression, clustering, and neural networks, with the ability to design and analyze rigorous A/B tests and controlled experiments

* Advanced proficiency in building and deploying LLM-based solutions, demonstrating a deep understanding of prompt engineering and the management of unstructured data (text and images)

* Skilled in Python and SQL for building robust data pipelines and preprocessing large-scale datasets using big data tools like Spark, BigQuery, or Redshift

* A proven ability to analyze ambiguous information, make rational judgments regarding model architecture, and take responsibility for the results in a fast-paced environment

* Strong ability to build working relationships across teams to achieve shared goals, while utilizing enterprise platforms like AWS, Databricks, or Dataiku to maintain ML systems

* Excellence in writing clearly and succinctly for diverse audiences, ensuring that technical findings are translated into an inspiring vision for stakeholders

Additional Qualifications
* Experience with enterprise-level AI governance and compliance

* Familiarity with emerging trends in multimodal AI and agentic workflows

Minimum Requirements
* Bachelor's degree in Statistics, Machine Learning, Computer Science, Engineering, Mathematics, Physics, or a related quantitative field

* 6+ years of experience in data science and applied machine learning with a proven track record of delivering business impact

* Must have at least 6 years of experience with:

* Python and SQL for data science and pipeline development

* AWS or similar cloud-based enterprise platforms (Databricks, Dataiku)

* Big Data tools such as Spark, BigQuery, or Redshift