HADOOP Administrator Job Description: Must possess skills that make them a subject matter expert in various Hadoop distributions. This role entails infrastructure setup, configuration, and maintenance of clusters.
Primary Job Duties: Hadoop Administrator - 80%
Deploying a Hadoop cluster, maintaining a Hadoop cluster, adding and removing nodes using cluster monitoring tools like Ganglia, Nagios, or Cloudera Manager, configuring NameNode high availability, and keeping track of all running Hadoop jobs
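As one concrete illustration of the node-removal duty above, decommissioning a DataNode via the HDFS exclude file typically looks like the sketch below (the exclude-file path and hostname are placeholder assumptions; distributions managed by Cloudera Manager handle this through the UI instead):

```
# Add the node being retired to the exclude file referenced by
# dfs.hosts.exclude in hdfs-site.xml (path and hostname are placeholders):
echo "worker05.example.com" >> /etc/hadoop/conf/dfs.exclude

# Ask the NameNode to re-read its include/exclude lists:
hdfs dfsadmin -refreshNodes

# Check progress until the node shows as "Decommissioned":
hdfs dfsadmin -report
```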
Implementing, managing and administering the overall Hadoop infrastructure.
Take care of the day-to-day running of Hadoop clusters
Work closely with the database, network, BI, and application teams to ensure that all big data applications are highly available and performing as expected
Manually set up all the configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml
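For illustration, minimal hand-edited core-site.xml and hdfs-site.xml files might contain entries like the following (the hostname, port, and values are placeholders, not a recommended production setup):

```xml
<!-- core-site.xml: points clients at the NameNode (host/port are placeholders) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>

<!-- hdfs-site.xml: block replication factor (3 is the usual default) -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```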
Responsible for capacity planning and estimating the requirements for scaling the capacity of the Hadoop cluster up or down.
The Hadoop admin is also responsible for deciding the size of the Hadoop cluster based on the volume of data to be stored in it.
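As a rough illustration of the sizing arithmetic behind this duty, the sketch below estimates raw cluster storage from expected data volume, the HDFS replication factor, and a headroom allowance for intermediate data (the formula and numbers are illustrative assumptions, not a sizing standard):

```python
def estimate_raw_storage_tb(data_tb, replication=3, overhead=0.25):
    """Estimate raw HDFS capacity needed for a given logical data volume.

    data_tb     -- expected logical data in terabytes
    replication -- HDFS block replication factor (default 3)
    overhead    -- fraction of capacity reserved for temp/intermediate data
    """
    replicated = data_tb * replication  # every block stored `replication` times
    return replicated / (1 - overhead)  # leave headroom for intermediate output

# 100 TB of data with 3x replication and 25% headroom:
print(round(estimate_raw_storage_tb(100), 1))  # 400.0 (TB raw)
```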
Ensure that the Hadoop cluster is up and running at all times.
Monitoring the cluster connectivity and performance.
Manage and review Hadoop log files.
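Reviewing daemon logs often reduces to scanning for ERROR/FATAL entries; a minimal sketch of such a filter follows (the Log4j-style line layout is an assumption — actual log formats vary by distribution):

```python
def find_problem_lines(log_text, levels=("ERROR", "FATAL")):
    """Return log lines whose level field matches one of `levels`.

    Assumes a Log4j-style layout: '<date> <time> LEVEL component: message'.
    """
    hits = []
    for line in log_text.splitlines():
        parts = line.split()
        if len(parts) >= 3 and parts[2] in levels:
            hits.append(line)
    return hits

sample = (
    "2024-05-01 10:00:01 INFO  namenode.NameNode: STARTUP_MSG\n"
    "2024-05-01 10:05:42 ERROR datanode.DataNode: Disk failure on /data/3\n"
    "2024-05-01 10:06:00 WARN  hdfs.StateChange: under-replicated blocks"
)
print(find_problem_lines(sample))
```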
Backup and recovery tasks
Resource and security management
Troubleshooting application errors and ensuring that they do not recur
Team Mentor - 20%
Provide assistance to Big Data/Hadoop team members with issues requiring technical expertise or complex systems and/or programming knowledge. Provide on-the-job training for new or less experienced team members
Provide technical training to internal/external team members to foster stronger cross-departmental relations.
Required Skill Set:
Minimum of 10 years' experience in Hadoop administration
General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
Must know Linux administration and networking concepts
Hadoop ecosystem skills such as HBase, Hive, Pig, Spark, Kafka, Storm, etc.
The most essential requirements are: he/she should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor the critical parts of the cluster, configure NameNode high availability, and schedule, configure, and take backups.
Familiarity with open source configuration management and deployment tools such as Puppet or Chef and Linux scripting.
Knowledge of troubleshooting core Java applications is a plus
Must have experience with cloud and on-premise installation and configuration
Must have pre-sales experience in Big Data
Must have good fluency in English, both written and verbal
Job Description:
To ensure customer service and support all operations. To create customer delight at every interaction.
Interacting with external and internal customers and addressing their queries, requests, and complaints.
Ensure that committed TATs (turnaround times) are met consistently
Complaints management: addressing customer complaints at the branch, updating systems, and coordinating with Sales/HUB/other functions for resolution.
Refunds processing and dispatch
Undelivered policy documents tracking and management.
Maintenance of all files and registers.
New Business Processing:
Handling end-to-end new business processing, starting from creation of the client ID, case start-up, new business login, follow-up for policy issuance, and quality check
Follow up with HUB for policy issuance of pending cases