Responsibilities
- Design, develop, and maintain large-scale data processing pipelines using Apache Flink and Scala (a minimal sketch of this stack follows this list).
- Collaborate with data engineers to architect and implement scalable, reliable, and maintainable data processing solutions.
- Work with data sources such as Apache Kafka, Apache Hadoop, and Amazon S3.
- Optimize the performance, scalability, and fault tolerance of data processing pipelines.
- Troubleshoot data processing pipelines and resolve errors in a timely manner.
- Participate in code reviews and ensure that code is well documented and follows best practices.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
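As a rough illustration of the Flink-and-Scala stack this role works with, here is a minimal sketch of a streaming job that reads from Kafka, assuming a Flink release that still ships the Scala DataStream API. The broker address, topic, consumer group, and job name are hypothetical placeholders, not details from this posting.

```scala
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
import org.apache.flink.streaming.api.scala._

object ExamplePipeline {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Kafka source; bootstrap servers and topic are placeholder values
    val source = KafkaSource.builder[String]()
      .setBootstrapServers("localhost:9092")
      .setTopics("events")
      .setGroupId("example-group")
      .setStartingOffsets(OffsetsInitializer.earliest())
      .setValueOnlyDeserializer(new SimpleStringSchema())
      .build()

    val events: DataStream[String] =
      env.fromSource(source, WatermarkStrategy.noWatermarks[String](), "kafka-events")

    // Toy transformation: count occurrences of each raw record
    events
      .map(line => (line, 1))
      .keyBy(_._1)
      .sum(1)
      .print()

    env.execute("example-pipeline")
  }
}
```

The same pattern extends to the other sources named above, since Flink's filesystem connectors can read from HDFS and Amazon S3.

Required Qualifications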
- 6+ years of development and delivery experience, with solid hands-on experience in Apache Flink or similar distributed stream processing frameworks.
- Experience integrating data from sources such as Kafka or databases.
- Experience with cloud platforms and technologies (AWS).
- Proficiency in writing Scala as commonly used with Flink.
- In-depth understanding of streaming data concepts and processing paradigms.
- Familiarity with configuration-based systems and a strong ability to design user-friendly interfaces.
- Knowledge of data serialization formats such as JSON (see the parsing sketch after this list).
- Proficiency in troubleshooting legacy applications and in refining requirements with analysts to build a better software product.
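Since the posting calls out JSON serialization alongside Scala, here is a small sketch of the kind of defensive JSON parsing such pipelines typically do. The choice of the circe library and the ClickEvent field names are assumptions made for illustration, not requirements from this posting.

```scala
import io.circe.generic.auto._
import io.circe.parser.decode

// Hypothetical event shape; field names are illustrative placeholders
case class ClickEvent(userId: String, url: String, ts: Long)

object JsonParsingSketch {
  def main(args: Array[String]): Unit = {
    val raw = """{"userId":"u1","url":"/home","ts":1700000000}"""

    // decode returns an Either, so malformed records can be routed
    // to a dead-letter path instead of failing the whole pipeline
    decode[ClickEvent](raw) match {
      case Right(event) => println(s"parsed: $event")
      case Left(err)    => println(s"bad record, send to dead-letter: $err")
    }
  }
}
```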