Data Engineer

Job Summary:

We are looking for a savvy Data Engineer to join our growing team. You will use a variety of methods to transform raw data into reliable, usable data systems. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. To succeed in this role, you should have strong analytical skills and the ability to combine data from different sources.

Responsibilities
  • Analyze and organize raw data
  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Evaluate business needs and objectives
  • Interpret trends and patterns
  • Conduct complex data analysis and report on results
  • Prepare data for prescriptive and predictive modeling
  • Build algorithms and prototypes
  • Combine raw information from different sources
  • Explore ways to enhance data quality and reliability
  • Identify opportunities for data acquisition
  • Develop analytical tools and programs
  • Collaborate with architects, other teams, advisers, and subject-matter experts across multiple projects
Qualifications
  • Advanced working knowledge of SQL and experience with relational databases, including query authoring and working familiarity with a variety of database systems
  • Previous experience as a data engineer or in a similar role
  • Technical expertise with data models, data mining and segmentation techniques
  • Strong analytical skills for working with unstructured datasets
  • Knowledge of programming languages (e.g. Python and R)
  • Hands-on experience with SQL database design
  • Great numerical and analytical skills
  • Degree in Computer Science, IT, or similar field; a Master’s is a plus
  • Data engineering certification is a plus
  • Experience working with digital marketing data is a plus
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • Experience with big data tools: Hadoop, Spark, Kafka, Snowflake, etc.
  • Well-versed in Python 2.x/3.x (pandas, NumPy, scikit-learn, matplotlib, Jupyter/VS Code) and SQL
  • Ability to work with structured relational data (e.g. table dumps), non-relational data (e.g. JSON), and unstructured data such as text blobs; knowledge of both SQL and NoSQL databases is required