Research Senior Software Data/AWS Engineer

IN-Bangalore

International Careers

Req #: 22776

Waters Corporation

Overview:

Come help us research and develop self-diagnosing, self-healing instruments!

The Waters Global Research department is exploring state-of-the-art capabilities that will stretch your creative talents. Our aim is to enhance our customers' user experience by building more intelligent systems. Our analytical chemistry instruments have a direct impact on laboratory testing, drug discovery and development, and food safety, and we strive to enhance our software offerings, making them more intuitive and easier to use.

The current work focuses on training machine learning and other statistical models that perform root error diagnosis using raw time-series signal data coming from our instruments. Other projects include automating steps that users currently perform manually, such as interpreting raw data results, adjusting anomalous data to clean it up, and optimizing the procedures that the instruments run.

This Data/AWS Engineer will develop data pipelines for specialty instrument data and Gen AI processes, support the development of classification and prediction models, create and maintain dashboards to monitor data health, and set up and maintain services in AWS to deploy models and collect results. These pipelines will be part of the company's foundational emerging data infrastructure. We seek someone with a growth mindset who is self-motivated, a problem solver, and energized by working at the nexus of leading-edge software and hardware development.

This position follows a hybrid work model (3 days a week, Tuesday, Wednesday, and Thursday, working from the GCC office, RMZ Ecoworld, Bellandur, Bangalore).

Responsibilities:

*  Build data pipelines in AWS using S3, Lambda, IoT Core, EC2, and other services.

*  Create and maintain dashboards to monitor data health.

*  Containerize models and deploy them to AWS.

*  Build Python data pipelines that ingest, transform, and store data frames and matrices using Pythonic code practices.

*  Maintain the codebase: apply OOP and/or FP best practices, write unit tests, etc.

*  Work with Machine Learning engineers to evaluate data and models, and present results to stakeholders in a manner understandable to non-data scientists.

*  Mentor other members of the team and review their code.
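To give candidates a feel for the ingest-transform-store pattern these responsibilities describe, here is a minimal, hypothetical sketch in plain Python. The CSV payload, field names, and fixed anomaly bounds are illustrative assumptions, not Waters code; in practice the ingest and store steps would read from and write to services such as S3 rather than in-memory strings.

```python
import csv
import io

def ingest(raw_csv: str) -> list[dict]:
    """Parse a raw instrument CSV payload into records (stand-in for an S3 read)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict], field: str = "signal",
              lo: float = 0.5, hi: float = 2.0) -> list[dict]:
    """Cast readings to float and flag values outside the expected [lo, hi] range.
    The bounds here are placeholders for a real data-health check."""
    out = []
    for rec in records:
        value = float(rec[field])
        out.append({**rec, field: value, "anomalous": not (lo <= value <= hi)})
    return out

def store(records: list[dict]) -> str:
    """Serialize cleaned records back to CSV (stand-in for an S3 write)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# Hypothetical sample payload: one reading (9.9) falls outside the expected range.
raw = "timestamp,signal\n0,1.0\n1,1.1\n2,0.9\n3,9.9\n"
cleaned_csv = store(transform(ingest(raw)))
```

In a deployed pipeline, each stage would typically be a separate Lambda function triggered by S3 or IoT Core events, with the anomaly flags feeding the data-health dashboards mentioned above.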

Qualifications:

Required Qualifications:

*  Bachelor's degree in Computer Science or a related field with 5-8 years of relevant work experience, or equivalent.

*  Solid experience with AWS services such as S3, EC2, Lambda, and IAM.

*  Experience containerizing and deploying code in AWS.

*  Proficient in writing OOP and/or functional Python code (e.g., NumPy, pandas, SciPy, scikit-learn).

*  Comfortable with Git version control, as well as Bash or the command prompt.

*  Comfortable discovering and driving new capabilities, solutions, and data best practices from blogs, white papers, and other technical documentation.

*  Able to communicate results using meaningful metrics and visualizations to managers and stakeholders and receive feedback.

Desired Qualifications (considered a plus):

*  Experience with C#, C++, and .NET.

What We Offer:

*  Hybrid role with competitive compensation, great benefits, and continuous professional development.

*  An inclusive environment where everyone can contribute their best work and develop to their full potential.

*  Reasonable adjustments to the interview process according to your needs.