Role: ETL Developer
Location: Remote, Canada

Required Experience:
- Data warehousing experience with dimensional and data vault data models
- Proficiency in SQL, PL/SQL, and Python
- Hands-on experience creating ETL data pipeline jobs using AWS Glue with PySpark (a minimal sketch appears after this posting)
- Creating and testing pipeline jobs locally using AWS Glue interactive sessions
- Efficient use of the PySpark DataFrame API and Spark SQL
- Performance tuning of PySpark jobs
- Using AWS Athena to perform data analysis on lake data populated into the AWS Glue Data Catalog through AWS Glue crawlers
- Knowledge of AWS services such as DMS, S3, RDS, Redshift, and Step Functions
- ETL development experience with tools such as SAP BODS and Informatica
- Efficiency in writing complex SQL queries to perform data analysis
- A good understanding of version control tools such as Git, GitHub, and TortoiseHg

Description of duties:
- Work with scrum team(s) to deliver product stories according to priorities
Skills: PySpark, AWS Glue, data modelling, data warehousing
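For candidates unfamiliar with the stack, here is a minimal sketch of the kind of AWS Glue PySpark job the experience bullets describe: it reads a table registered in the Glue Data Catalog, runs a Spark SQL aggregation through the DataFrame API, and writes Parquet back to S3. The database, table, and bucket names (sales_db, orders, s3://example-bucket/...) are hypothetical placeholders, not details from this posting.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name passed by the Glue runner
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read a table from the Glue Data Catalog (hypothetical database/table names)
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Switch to the DataFrame API and aggregate with Spark SQL
df = dyf.toDF()
df.createOrReplaceTempView("orders")
daily = spark.sql(
    """
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
    """
)

# Write the result back to S3 as Parquet (hypothetical bucket/path)
daily.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/daily_order_totals/"
)

job.commit()
```

The same script can be run locally in a Glue interactive session before deployment, which is the local testing workflow the requirements mention.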