Minimum Qualifications

- 1-3 years of relevant industry experience in big data systems, data processing, and SQL databases
- 1+ years of coding experience with Spark DataFrames, Spark SQL, and PySpark
- 2+ years of hands-on programming experience; able to write modular, maintainable code, preferably in Python and SQL
- Good understanding of SQL, dimensional modeling, and analytical big data warehouses such as Hive and Snowflake
- Familiarity with ETL workflow management tools such as Airflow

Preferred Qualifications

- Experience with version control and CI/CD tools such as Git and Jenkins CI
- Experience working with and analyzing data in notebook solutions such as Jupyter, EMR Notebooks, and Apache Zeppelin
- Problem solver with excellent written and interpersonal skills; ability to make sound, complex decisions in a fast-paced, technical environment
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent training, fellowship, or work experience