Should have a bachelor’s degree in Statistics, Business Intelligence, or an equivalent field. Should have relevant experience working in Hadoop and Spark environments and with related tools such as Hive, Presto, and Ranger.
• Experience developing and administering large data systems.
• Solid knowledge of CS fundamentals in algorithms and data structures.
• Experience with Hadoop, Spark, and Kafka.
• Experience with relational SQL and NoSQL databases, including SQL Server and Cosmos DB.
JD for Hadoop: Minimum 6+ years of relevant experience in Hadoop administration. • The most essential requirements: the candidate should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, and monitor critical parts of the cluster.
We are looking to hire a talented Big Data Engineer to develop and manage our company’s Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks and to build ELT processes.
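The ELT responsibility mentioned above (load raw data into the store first, then transform it inside the store with SQL) can be sketched in miniature with Python's built-in sqlite3 module; the table names, columns, and sample rows here are hypothetical, and a real pipeline would target a warehouse engine rather than SQLite:

```python
import sqlite3

# Hypothetical raw event rows as they might arrive from an extract step.
raw_events = [
    ("2024-01-01", "click", 3),
    ("2024-01-01", "view", 10),
    ("2024-01-02", "click", 5),
]

conn = sqlite3.connect(":memory:")

# Load: land the raw data as-is, before any transformation (the "EL" in ELT).
conn.execute(
    "CREATE TABLE raw_events (event_date TEXT, event_type TEXT, cnt INTEGER)"
)
conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", raw_events)

# Transform: derive an aggregate table inside the data store itself (the "T"),
# using plain SQL rather than transforming in flight.
conn.execute("""
    CREATE TABLE daily_clicks AS
    SELECT event_date, SUM(cnt) AS clicks
    FROM raw_events
    WHERE event_type = 'click'
    GROUP BY event_date
""")

print(conn.execute("SELECT * FROM daily_clicks ORDER BY event_date").fetchall())
# [('2024-01-01', 3), ('2024-01-02', 5)]
```

The design point the sketch illustrates is that ELT (unlike ETL) keeps the transformation step in the database's SQL engine, which is how warehouse-centric Big Data stacks typically scale aggregation work.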