1) Create and maintain optimal data pipeline architecture.
2) Assemble large, complex data sets that meet functional / non-functional business requirements.
3) Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
4) Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (a minimal illustrative sketch follows this list).
5) Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
6) Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
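For illustration only, not part of the role description: a minimal sketch of the extract-transform-load pattern referenced above, using Python's standard library and a local SQLite database. The raw_orders and customer_revenue tables and their columns are assumptions made for this example; a production pipeline would target the AWS ‘big data’ services named in this posting instead.

```python
import sqlite3

def run_etl(db_path: str = "example.db") -> None:
    """Minimal ETL sketch: extract raw orders, aggregate revenue per customer,
    and load the result into a reporting table. All names are illustrative."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.cursor()

        # Seed a hypothetical source table so the sketch runs end to end.
        cur.execute(
            "CREATE TABLE IF NOT EXISTS raw_orders (customer_id TEXT, amount REAL)"
        )
        cur.executemany(
            "INSERT INTO raw_orders VALUES (?, ?)",
            [("c1", 10.0), ("c1", 15.5), ("c2", 7.25)],
        )

        # Extract: pull raw rows from the source table.
        cur.execute("SELECT customer_id, amount FROM raw_orders")
        rows = cur.fetchall()

        # Transform: aggregate order amounts per customer.
        totals: dict[str, float] = {}
        for customer_id, amount in rows:
            totals[customer_id] = totals.get(customer_id, 0.0) + float(amount)

        # Load: write the aggregated metrics into a reporting table.
        cur.execute(
            "CREATE TABLE IF NOT EXISTS customer_revenue "
            "(customer_id TEXT PRIMARY KEY, total_amount REAL)"
        )
        cur.executemany(
            "INSERT OR REPLACE INTO customer_revenue VALUES (?, ?)",
            totals.items(),
        )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    run_etl()
```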
Qualifications:
1) Advanced working knowledge of SQL, including query authoring and optimization, experience with relational databases, and working familiarity with a variety of database systems.
2) Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
3) Strong analytic skills related to working with unstructured datasets.
4) A successful history of manipulating, processing and extracting value from large disconnected datasets.
5) Experience with relational SQL and NoSQL databases.
6) Experience with object-oriented/functional scripting languages: Python, C#.