• Experience developing and administering large data systems.
• Solid knowledge of CS fundamentals in algorithms and data structures.
• Experience with Hadoop, Spark, and Kafka.
• Experience with relational SQL and NoSQL databases, including SQL Server and Cosmos DB.
• Should have a bachelor’s degree in Statistics, Business Intelligence, or an equivalent field.
• Should have relevant experience working with Hadoop and Spark environments and related tools such as Hive, Presto, and Ranger.
• Experience: 3 years.
• Candidate should have strong development experience in Hadoop, Spark, Scala, and Hive.
• Secondary skills: Java and Unix shell scripting. Testing knowledge is preferred.
• Location: Bangalore. Candidates with short notice periods are preferred.
• Design and implement applications using Big Data technologies.
• Design and implement distributed applications using cloud-specific services.
• Implement hybrid cloud approaches to handle data synchronization between on-premises and cloud environments.