Big Data Solutions Architect

US-CO-Littleton

Req #: 89918
Type: Full-Time Regular

DISH

Overview:

Our Technology teams challenge the status quo and reimagine capabilities across industries. Whether through research and development, technology innovation or solution engineering, our people play vital roles in connecting consumers with the products and platforms of tomorrow.

Responsibilities:

Key Responsibilities:

* Deploy enterprise-ready, secure, and compliant data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
* Optimize data engineering and machine learning pipelines
* Review architectural designs to ensure consistency and alignment with the defined target architecture and adherence to established architecture standards
* Support data and cloud transformation initiatives
* Contribute to our cloud strategy based on prior experience
* Understand the latest technologies in a rapidly evolving marketplace
* Work independently with stakeholders across the organization to deliver both point and strategic solutions
* Assist solution providers with the definition and implementation of technical and business strategies

Qualifications:

Education and Experience:

* Bachelor's Degree in a technical field
* 12+ years of experience working as a Data Warehouse/Big Data Architect
* Experience in AWS cloud transformation projects is required
* Telecommunication domain experience is preferred

Skills and Qualifications:

* AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, CloudWatch, and Data Pipeline
* Advanced experience with the Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting
* Logical and physical table design in Big Data environments to suit processing frameworks
* Functional programming and Spark SQL programming for processing terabytes of data
* Specific experience building Big Data engineering pipelines for large-scale data integration in AWS; prior experience writing Machine Learning data pipelines in Spark is an added advantage
* Advanced SQL experience including SQL performance tuning is a must
* Big Data frameworks such as MapReduce, HDFS, Hive/Impala, and AWS Athena
* Knowledge of setting up, using, and tuning resource management frameworks such as YARN, Mesos, or standalone Spark
* Knowledge of a variety of data platforms such as Redshift, S3, Teradata, HBase, MySQL/Postgres, and MongoDB
