• Location: London, England
  • Published: 20 May 2021
  • Reference: 05202021

Job Description

A company that provides expert data and information management advisory, project, support and learning consulting services using both Azure and AWS. It helps its customers reduce costs and increase efficiency through effective data management.

Role & Responsibilities

• Produce prototypes by translating requirements into demonstrable data pipelines and iterate solutions in short cycles

  • Interrogate platform data sources using SQL and produce views of useful information to meet various business requirements

  • Develop rich data visualisation dashboards for data analysis, reconciliations and reporting

  • Produce design artefacts to describe key components and interactions for development by self and team

• Build Python tools and libraries to automate repetitive tasks

• Maintain and develop DataOps test frameworks, utilities and related assets

Skills & Qualifications

• Experience with Spark and Apache big data technologies (PySpark and Databricks are essential)

• Essential Azure skills: experience with Azure IoT, streaming services, and Azure data platform tooling (ADF, Logic Apps, Azure Functions, Azure ML, ADLS, Synapse, etc.)

• Knowledge of C# and ASP.NET

  • SQL skills for pipeline development and analysis

  • Python for pipeline development and analysis

• Experience developing dashboards using data visualisation tools such as Power BI and Tableau

  • Experience of support and diagnosis in a data pipeline environment

• Data migration delivery experience