JD for Hadoop: Minimum of 6 years of relevant experience in Hadoop administration • The most essential requirements: the candidate should be able to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, and monitor critical parts of the cluster
• Experience developing and administering large data systems. • Solid knowledge of CS fundamentals in algorithms and data structures. • Experience with Hadoop, Spark, and Kafka. • Experience with relational SQL and NoSQL databases, including SQL Server and CosmosDB
As a Linux/Cloud Engineer, you will work with leading-edge software and cloud technologies. You will also assist customers during service-related events, including creating support tickets and answering basic technical questions
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus is choosing optimal solutions to use, then implementing, maintaining, and monitoring them.
Job Description: 3scale developer Minimum of 5 years of experience working on microservices architecture on OpenShift, Docker, Kubernetes, Spring Boot, Swagger, Apache Kafka (basic familiarity), API gateway, and security. Knowledge of Service-Oriented Architecture