We are looking to hire a talented Big Data Engineer to develop and manage our company’s Big Data solutions. In this role, you will design and implement Big Data tools and frameworks and implement ELT processes.
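To make the ELT requirement concrete, here is a minimal sketch of the pattern with entirely hypothetical table and column names, using SQLite as a stand-in warehouse: raw data is loaded first and only then transformed inside the warehouse with SQL, which is what distinguishes ELT from ETL.

```python
import sqlite3

def extract():
    # Stand-in for reading from a source system (files, an API, Kafka, ...).
    return [("2024-01-01", "widget", 3), ("2024-01-01", "gadget", 5)]

def load(conn, rows):
    # Load the raw rows as-is into a staging table, with no transformation.
    conn.execute("CREATE TABLE IF NOT EXISTS raw_sales (day TEXT, item TEXT, qty INTEGER)")
    conn.executemany("INSERT INTO raw_sales VALUES (?, ?, ?)", rows)

def transform(conn):
    # Transform in-warehouse: aggregate the staged rows into a reporting table.
    conn.execute("""CREATE TABLE daily_totals AS
                    SELECT day, SUM(qty) AS total_qty FROM raw_sales GROUP BY day""")

conn = sqlite3.connect(":memory:")
load(conn, extract())
transform(conn)
print(conn.execute("SELECT * FROM daily_totals").fetchall())  # [('2024-01-01', 8)]
```

In a production Big Data stack the same shape would typically use Spark or Hive for the transform step; the ordering (load before transform) is the point of the sketch.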
• 10+ years of experience in IT, with 7+ years in the Big Data ecosystem
• Experience developing on Hadoop-ecosystem technologies such as Python, PySpark, HDFS, Hive, Pig, Flume, Sqoop, ZooKeeper, Spark, MapReduce2, YARN, HBase, Kafka, and Storm.
Design and implement applications using Big Data technologies. Design and implement distributed applications using cloud-specific services. Implement hybrid-cloud approaches to handle data synchronization between on-premises and cloud environments.
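One common hybrid-cloud synchronization approach the line above alludes to is content-hash comparison: hash the on-premises files, compare against the cloud copies, and move only the differences. The sketch below is a hypothetical illustration in which cloud state is faked as a dict of object name to MD5, the way object stores often expose content hashes.

```python
import hashlib

def md5(data: bytes) -> str:
    # Content hash used to detect changed files.
    return hashlib.md5(data).hexdigest()

def plan_sync(on_prem: dict, cloud: dict) -> dict:
    """Return which objects to upload (new or changed) and delete (removed on-prem)."""
    upload = [k for k, data in on_prem.items() if cloud.get(k) != md5(data)]
    delete = [k for k in cloud if k not in on_prem]
    return {"upload": sorted(upload), "delete": sorted(delete)}

# a.csv is unchanged, b.csv is new on-prem, c.csv no longer exists on-prem.
on_prem = {"a.csv": b"1,2,3", "b.csv": b"4,5,6"}
cloud = {"a.csv": md5(b"1,2,3"), "c.csv": md5(b"old")}
print(plan_sync(on_prem, cloud))  # {'upload': ['b.csv'], 'delete': ['c.csv']}
```

A real implementation would replace the dicts with calls to the object store's listing API and handle pagination, retries, and conflict resolution, but the diff-then-transfer shape is the same.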
JD for Hadoop: Minimum 6 years of relevant experience in Hadoop administration
• Most essential requirements: ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, and monitor critical parts of the cluster.
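As an illustration of the "remove nodes" duty, one standard way to decommission a DataNode gracefully is via an exclude file referenced from `hdfs-site.xml`; the file path below is an assumption for the sketch. After adding the node's hostname to the exclude file, the administrator runs `hdfs dfsadmin -refreshNodes` so HDFS re-replicates its blocks before the node is taken offline.

```xml
<!-- hdfs-site.xml: point HDFS at an exclude file listing DataNodes
     to decommission (path shown here is illustrative). -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```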
We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge data sets. The primary focus is choosing optimal solutions, then implementing, maintaining, and monitoring them.
Hadoop administration of the data platform built using Hortonworks on AWS for one of the USA’s best-known and fastest-growing pharma companies in the NY/NJ region. Support a global team of developers using this platform.