Develop and deploy batch and streaming data pipelines in a cloud ecosystem. Automate manual processes and tune the performance of existing pipelines. Load and process data from multiple source locations into the data lake and data marts.
You are a capable, self-motivated data engineer, well-versed in software development methodologies including Agile/Scrum. You will be a member of the data engineering team, working on tasks ranging from the design and development to the operations of the data warehouse.
• 1–2 years' experience with functional programming
• Experience with functional programming using Scala and the Spark framework
• Strong understanding of object-oriented programming, data structures, and algorithms
• Good experience in any of the cloud
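The functional-programming style asked for above can be sketched in plain Scala, with no Spark dependency; the `Reading` type and `averageBySensor` function below are illustrative names, but the same map/filter/group idioms carry over directly to Spark's RDD and Dataset APIs.

```scala
// Minimal sketch of functional-style data processing in plain Scala.
// Everything is immutable; the computation is a pure function of its input.

case class Reading(sensor: String, value: Double)

// Pure function: no mutation, no side effects.
def averageBySensor(readings: List[Reading]): Map[String, Double] =
  readings
    .groupBy(_.sensor)                              // Map[String, List[Reading]]
    .view
    .mapValues(rs => rs.map(_.value).sum / rs.size) // average per sensor
    .toMap

val sample = List(Reading("a", 1.0), Reading("a", 3.0), Reading("b", 4.0))
```

In Spark, `readings` would be a `Dataset[Reading]` and the `groupBy`/aggregate would run distributed, but the shape of the code is the same.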
Experience in big data development using Databricks, Spark, and Scala. Experience in ETL development using Apache Spark and Scala on Databricks. Knowledge of the Java and Scala programming languages. Knowledge of relational databases, preferably MySQL/MSSQL.
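The extract-transform-load pattern mentioned above can be shown as a minimal, dependency-free Scala sketch. In a real Databricks job the three stages would be Spark DataFrame reads, transformations, and writes; here plain collections stand in so the structure is visible, and all names (`OrderRow`, `rawCsv`, the 10% uplift rule) are illustrative assumptions.

```scala
// ETL sketch: extract raw records, transform them, load the result.

case class OrderRow(id: Int, amount: Double)

// Extract: parse raw CSV-like lines (header row skipped).
def extract(lines: List[String]): List[OrderRow] =
  lines.drop(1).map { line =>
    val parts = line.split(",")
    OrderRow(parts(0).trim.toInt, parts(1).trim.toDouble)
  }

// Transform: drop invalid rows, then apply a business rule (10% uplift).
def transform(rows: List[OrderRow]): List[OrderRow] =
  rows.filter(_.amount > 0).map(r => r.copy(amount = r.amount * 1.1))

// Load: materialise to a Map keyed by id (a stand-in for a table write).
def load(rows: List[OrderRow]): Map[Int, Double] =
  rows.map(r => r.id -> r.amount).toMap

val rawCsv = List("id,amount", "1, 100.0", "2, -5.0", "3, 50.0")
```

Keeping each stage a pure function makes the pipeline easy to unit-test and to port onto Spark, where `extract` becomes a `spark.read`, `transform` a chain of DataFrame operations, and `load` a `write` to a table.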
About the role: As a Data Engineer, you will build a variety of big data analytics solutions, including big data lakes. More specifically, you will: • Design and build scalable data ingestion pipelines to handle real-time streams, CDC events, and batch
We are looking for candidates with at least 2 years of relevant experience in Hadoop development (Spark, Scala & Hive) and exposure to SQL, Unix shell scripting & Java. This is a long-term contract (C2H). Job location: Bangalore / Chennai.
As a Software and DevOps Engineer, you will work with our extraordinary team developing and deploying new technologies on a cutting-edge network. You will design, develop, and deploy new and innovative technology.