The client builds AI-powered solutions, combining the power of deep learning with the accuracy
of ontologies to drive natural-language understanding.
The Technical Operations Engineer
will be responsible for supporting our data platform in production, including operating data flows,
ETL processes, monitoring, and data integration. You will enjoy working with a highly talented and
diverse team of engineers across our Data Platform and Applications teams, implementation
analysts, customer success specialists, and other technical operations professionals.
The ideal candidate will have a strong background in data and applications operations, both
using a diverse set of ETL tools and building their own in Python, along with experience working
with large data sets and deploying data-driven solutions. They will also have strong experience
building robust monitoring and alerting systems to track application and data pipeline operations.
You are focused on results, a self-starter, able to put the team first, and have demonstrated success
handling complex data pipelines across analytics, machine learning, search, and the integration
of varied data types and data sets, as well as the applications that integrate with them.
The technical operations engineering team works closely with engineering peers and
leaders across the technology organization.
Role and Responsibilities:
- Build out data support, metrics, quality processes, and services at scale
- Work with the Tech Lead, Product, and peer engineers to develop and monitor standards
for data processing across many data sources of varying size and complexity
- Build data quality monitoring tools to ensure that the Data Platform provides
high-quality, fit-for-purpose data to our customers
- Collaborate with our customer implementation team to process customer data securely,
within our standards, all the way through our data platform and customer applications
- Manage data source connectors with third-party tool integrations
- Ensure engineering designs are guided by high performance, scalability, security,
and low cost, in compliance with strict healthcare policies
- Stay agile through experimentation, prototyping, and solid execution
Required Qualifications:
- Solid understanding of data pipelines, data structures, databases, operations support,
and monitoring
- Ability to write robust code in Python, including unit tests and documentation
- 3+ years of experience writing data pipelines and tooling
- Experience with a diverse set of database technologies such as PostgreSQL, BigQuery,
Redshift, Snowflake, Elasticsearch, etc.
- Ability to support the EST time zone
Preferred Experience:
- An excellent track record (5+ years) of delivering operational support and monitoring solutions, including log monitoring with tools such as Grafana or the ELK stack
- Experience working with diverse data sets, from relational databases to unstructured data in Hadoop
- Experience with Airflow
- Experience with cloud platforms, such as Google Cloud Platform, Amazon Web Services, etc.
- Experience in the Healthcare and Life Sciences domain
- A great collaborator who can work across operating styles, bring together multiple perspectives, and handle conflict with the best interests of the company and customers in mind
- Able to translate business and technical requirements into clean, logical designs