Data Engineer
Location:
New York, United States
Type:
Contract
Published:
February 2, 2026
Ref:
19874
Required Skills:
Data Engineering, Snowflake, SQL Server

Job title: Data Engineer
Job type: Contract
Contract Length: 12 months
Rate: $60-80 per hour
Role Location: United States

The Company:

We are a data-driven healthcare organization modernizing our analytics platform to power value-based care. Our team is transitioning from SQL Server to Snowflake as our cloud data warehouse and implementing dbt, Python, and OpenFlow to build reliable, automated data pipelines.

You’ll join a collaborative, highly skilled data team that values maintainability, clarity, and thoughtful design - working alongside engineers who specialize in automation, SQL Server administration, and data architecture.

Role and Responsibilities: 

We’re seeking an experienced Data Engineer (3+ years) with strong SQL, dbt, and Python skills to design, build, and maintain our data transformation and analytics pipelines in Snowflake.

The ideal candidate has hands-on experience with modern ELT tools, data modeling, and cloud-based platforms - and brings a mindset of automation, testing, and documentation to every project.

You’ll collaborate closely with our data architects and integration engineers to ensure data from OpenFlow pipelines is transformed into trusted, analytics-ready models for reporting and advanced analytics.

  • Advanced SQL Development: Write and optimize complex SQL queries and dbt models for data transformation and analysis within Snowflake.
  • dbt Model Development: Build, test, and maintain dbt models that convert raw data into actionable insights.
  • ETL/ELT Pipeline Management: Design and manage efficient pipelines using dbt, OpenFlow, and Python to process and deliver data across systems.
  • SQL Performance Tuning: Optimize query performance, clustering, and cost efficiency in Snowflake.
  • Data Quality Assurance: Ensure that transformed data meets accuracy and consistency standards through dbt testing and validation frameworks.
  • Collaboration: Work closely with data engineers, analysts, and architecture leads to translate data requirements into scalable transformations.
  • Data Documentation: Maintain clear documentation for dbt models, data flows, and dependencies for ongoing visibility and reuse.
  • Version Control: Manage dbt and Python projects in Git, following clean, modular, and testable development practices.
  • Automation Support: Partner with automation engineers to enhance data ingestion and transformation workflows through OpenFlow.

Job Requirements:

  • Proven experience as a dbt Developer or in a similar Data Engineer role.
  • Expert-level SQL skills — capable of writing, tuning, and debugging complex queries across large datasets.
  • Strong experience with Snowflake or comparable data warehouse technologies (BigQuery, Redshift, etc.).
  • Proficiency in Python for scripting, automation, or data manipulation.
  • Solid understanding of data warehousing concepts, modeling, and ELT workflows.
  • Familiarity with Git or other version control systems.
  • Experience working with cloud-based platforms such as AWS, GCP, or Azure.

Accessibility Statement:
Read and apply for this role in the way that works for you by using our Recite Me assistive technology tool. Click the circle at the bottom right side of the screen and select your preferences.
We make an active choice to be inclusive towards everyone every day. Please let us know if you require any accessibility adjustments through the application or interview process.

Our Commitment to Diversity, Equity, and Inclusion:
Signify’s mission is to empower every person, regardless of their background or circumstances, with an equitable chance to achieve the careers they deserve. Building a diverse future, one placement at a time. Check out our DE&I page here
