Job Title: Data Engineer
Location: Remote
Experience Level: 5+ Years
Job Type: Full-time
Key Responsibilities:
- Data Pipeline Development: Design, implement, and maintain scalable data pipelines in Python to extract, transform, and load (ETL) data from various sources into Snowflake.
- Data Warehouse Management: Optimize and manage our Snowflake data warehouse, ensuring efficient data storage, retrieval, and processing.
- Data Analysis & Visualization: Use Jupyter notebooks for data analysis and for creating ad-hoc queries, reports, and visualizations to support business intelligence and data science needs.
- Data Quality & Integrity: Ensure data accuracy, consistency, and reliability by implementing data validation and monitoring processes.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Performance Tuning: Optimize SQL queries and database schemas to improve data processing speed and efficiency.
- Documentation: Maintain clear, comprehensive documentation of data workflows, schemas, and processes.
Requirements:
- 5+ years of experience in data engineering or a related field.
- Proficiency in Python for data processing and automation.
- Strong experience with the Snowflake data warehouse, including architecture, performance optimization, and security.
- Hands-on experience with Jupyter notebooks for data analysis and reporting.
- Proficiency in writing complex SQL queries for data extraction and transformation.
- Experience with data integration tools and frameworks (e.g., Airflow, dbt).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and their data services.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.