
Job Summary
Company Name
TATA Consultancy Services Ltd.
Location
Pune
Experience
10 - 15 years
Key Skills
Big Data Designer, BigData, Hadoop, MR, Hive, Oozie, NoSQL DBs, Sqoop
Function
IT
Role
Software Engineer/Programmer; System Analyst/Tech Architect
Industry
IT/ Computers - Software
Posted On
27th Jun 2018
 
About Company

Just as an organization needs the right talent to drive its business objectives, people need the right environment to grow and achieve their career goals. The moment you step into TCS, you are greeted with that unmistakable feeling of being in the right place. Along with that, working with TCS affords you the certainty of a successful career, driven by boundless growth opportunities and exposure to cutting-edge technologies and learning possibilities.

The work environment at TCS is built around the belief of growth beyond boundaries. Some of the critical factors that define our work culture are global exposure, cross-domain experience, and work-life balance. Each of these elements goes much deeper than what it ostensibly conveys.
  1. A part of the Tata group, India's largest industrial conglomerate
  2. Over 198,000 of the world's best-trained IT consultants across 42 countries
  3. The 1st Company in the world to be assessed at Level 5 for integrated enterprise-wide CMMI and PCMM
  4. Serving over 900 clients in 55 countries with repeat business from more than 98.3% of the clients annually
  5. 49 of the Top 100 Fortune 500 U.S. Companies are TCS clients


Job Title:

Hadoop Solution Architect

Job Description


Role: Big Data Designer

Desired Experience Range: 10+ Years overall (4+ in Big Data)

Desired Competencies (Technical/Behavioral Competency)

Must-Have:


1. Should have experience with multiple Big Data tools and technologies such as Hadoop (MapReduce), Hive, Oozie, NoSQL databases and Sqoop, and should have in-depth knowledge of the commonly used Big Data stack.

2. Should be able to design scalable, flexible individual Big Data components based upon the overall skeleton specified by the Architect: designing a Sqoop job for parallelism depending upon the data, designing an Oozie job (fork, etc.) based on the application flow, and specifying file encryption/compression techniques after requirement analysis (see the Sqoop sketch after this list).

3. Should be able to specify configuration needs for a Big Data cluster and design the data model within Hive/HBase/NoSQL databases (see the Hive sketch after this list).

4. In addition to Big Data solutions, needs an understanding of the end-to-end solution stack: major programming/scripting languages such as Java, Linux shell, PHP, Ruby, Python and/or R; experience working with ETL tools such as Informatica, Talend and/or Pentaho; BI tools; data warehouses; parallel architectures; and high-scale or distributed RDBMS and/or knowledge of NoSQL platforms.
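As a rough illustration of the Sqoop parallelism and compression work described in item 2, below is a minimal sketch; the connection string, credentials, table, split column and target directory are all hypothetical and would come out of the requirement analysis:

    # Import an RDBMS table in parallel, splitting the work on a numeric key,
    # and compress the output files with Snappy (all names are placeholders)
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table orders \
      --split-by order_id \
      --num-mappers 8 \
      --compress \
      --compression-codec org.apache.hadoop.io.compress.SnappyCodec \
      --target-dir /data/raw/orders

The degree of parallelism (--num-mappers) and the split column are exactly the design decisions the role calls out: they depend on data volume, the distribution of the key, and the load the source database can tolerate.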
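Along the same lines, here is a minimal sketch of the Hive data-model design mentioned in item 3, assuming hypothetical order data already landed in HDFS; the database, columns, partitioning key and ORC/Snappy choices are illustrative, not prescriptive:

    # Create a date-partitioned, ORC-backed table over the landed data
    hive -e "
    CREATE DATABASE IF NOT EXISTS sales;
    CREATE EXTERNAL TABLE IF NOT EXISTS sales.orders (
      order_id    BIGINT,
      customer_id BIGINT,
      amount      DECIMAL(10,2)
    )
    PARTITIONED BY (order_date STRING)
    STORED AS ORC
    LOCATION '/data/warehouse/orders'
    TBLPROPERTIES ('orc.compress'='SNAPPY');
    "

Partitioning on a date column and choosing a columnar, compressed storage format are the kind of modelling decisions that determine how scalable and query-friendly the component ends up being.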

Good-to-Have:

1. Have experience with one of the large cloud-computing infrastructure solutions such as Amazon Web Services or Elastic MapReduce.

2. Have experience with data and related services: governance, MDM, security, privacy, etc.

3. Be able to help with documenting use cases, solutions and recommendations.

4. Should have at least a basic understanding of a variety of hardware platforms, including mainframes, distributed platforms, desktops and mobile devices, as well as a deep understanding of databases, data in storage and data in motion.

Responsibility of / Expectations from the Role

Be able to describe the design of a Big Data application module and how it can be delivered using Big Data technologies.

Be able to design, and make recommendations on the design (how to make a component scalable, reusable and performance-friendly), based upon the vision of the overall Big Data roadmap and extrapolated metrics for the client.

Please apply with your updated CV.


 