Candidates are welcome to join a leading data management company serving reputed clients in the country and abroad. Long-term contract || Work from home for the entire tenure || 5-day work week || Attractive salary ||
• Data engineer with strong programming experience in Python.
• Extensive experience with processing frameworks such as Spark, Spark Streaming, Airflow, Hive, Sqoop, and Kafka.
• Experience with big data processing in the cloud (AWS S3).
• Develop and deploy batch and streaming data pipelines in the cloud ecosystem.
• Automate manual processes and tune the performance of existing pipelines.
• Load and process data from multiple source locations into the data lake and data marts.
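The batch-pipeline responsibility above can be sketched as a minimal extract-transform-load step. This is a toy illustration in plain Python (a real pipeline would typically use Spark or Airflow, and read from S3 rather than an in-memory CSV); all names here are hypothetical.

```python
import csv
import io

def transform(row):
    # Toy cleaning step: strip whitespace from the id and normalize the
    # amount to integer cents. Illustrative only; real pipelines would run
    # logic like this per-partition in Spark.
    return {
        "id": row["id"].strip(),
        "amount_cents": int(round(float(row["amount"]) * 100)),
    }

def run_batch(source_csv):
    # Extract rows from a source (an in-memory CSV standing in for S3),
    # apply the transform, and return records ready to load downstream.
    reader = csv.DictReader(io.StringIO(source_csv))
    return [transform(row) for row in reader]

raw = "id,amount\n a1 ,10.50\nb2,3.25\n"
print(run_batch(raw))
```

Streaming variants follow the same extract-transform-load shape, but consume records continuously (e.g. from a Kafka topic) instead of reading a bounded batch.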
• Experience developing and administering large data systems.
• Solid knowledge of CS fundamentals in algorithms and data structures.
• Experience with Hadoop, Spark, and Kafka.
• Experience with relational SQL and NoSQL databases, including SQL Server and Cosmos DB.
Overall experience of 8+ years, adept at running Kafka with production and complex workloads, and at analyzing logs, thread dumps, and profiler results from Kafka and ZooKeeper. Strong experience with Apache Kafka.
innovate and solve complex problems. This role is responsible for all aspects of software development and testing, and for ensuring compatibility with the enterprise and solutions architecture by harnessing modern development technologies.