Experience: 1-10 years
Technical Skills: Hadoop Ecosystem, Java, Scala, Azure, Python, and MongoDB
Role and Responsibilities:
Design, develop, and maintain high-volume Scala-based batch data processing jobs using industry-standard tools and frameworks from the Hadoop ecosystem (Spark, Kafka, Scalding, Cascading, Hive, Impala, Avro, Flume, Oozie, and Sqoop).
Design and maintain schemas in the Hadoop/Vertica analytics database and write efficient SQL for loading and querying analytics data.
Integrate data processing jobs and services with applications such as Coremetrics and Twitter, using technologies such as Flume, Kafka, RabbitMQ, Spring, MongoDB, Elasticsearch, Coherence, and MySQL.
Write appropriate unit, integration, and load tests using industry-standard frameworks such as Specs2, ScalaTest, ScalaCheck, JMeter, JUnit, Cucumber, and Grinder.
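As a rough illustration of the kind of Scala batch transformation the role involves, here is a minimal sketch using only the Scala standard library (no Spark dependency); the `Event` record, its fields, and the aggregation are hypothetical examples, not part of this listing:

```scala
// Hypothetical record type for illustration only.
case class Event(userId: String, amount: Double)

object BatchSketch {
  // Aggregate the total amount per user -- the collections-level
  // analogue of a groupBy/reduce step in a Spark batch job.
  def totalsByUser(events: Seq[Event]): Map[String, Double] =
    events.groupBy(_.userId).map { case (user, evs) =>
      user -> evs.map(_.amount).sum
    }

  def main(args: Array[String]): Unit = {
    val events = Seq(Event("a", 1.0), Event("a", 2.5), Event("b", 4.0))
    // Print per-user totals in sorted key order for stable output.
    totalsByUser(events).toSeq.sortBy(_._1).foreach(println)
  }
}
```

In a production Spark job, the same shape of logic would run over a distributed `Dataset[Event]` rather than an in-memory `Seq`, and would typically be covered by ScalaTest or Specs2 unit tests asserting on the aggregated output.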
Desired Candidate Profile