• Data engineer with strong programming experience in Python. • Extensive experience with processing frameworks such as Spark, Spark Streaming, Airflow, Hive, Sqoop, and Kafka. • Experience with big data processing in the AWS cloud (S3).
• Experience developing and administering large data systems. • Solid knowledge of CS fundamentals in algorithms and data structures. • Experience with Hadoop, Spark, and Kafka. • Experience with relational SQL and NoSQL databases, including SQL Server and CosmosDB.
Should have a bachelor’s degree in Statistics, Business Intelligence, or an equivalent field. Should have relevant experience working with Hadoop and Spark environments and other related tools such as Hive, Presto, and Ranger.
Exp: 3 yrs. Candidate should have strong development experience in Hadoop, Spark, Scala, and Hive. Secondary: Java, Unix shell scripting. Testing knowledge is also preferred. Location: Bangalore. Short-notice candidates preferred.