• Location: Pittsburgh, Pennsylvania
  • Date Posted: 10th Oct, 2019
  • Reference: CR10102019NFI


  • Bachelor's degree in Computer Science, Machine Learning, or a related field

  • Master's degree preferred


  • Previous experience using Hadoop-ecosystem ML tools (SparkML, Mahout, etc.)

  • Experience validating software through industry-accepted testing strategies

  • Experience working in an Agile development environment

  • Proven experience delivering distributed systems and services in a production setting

  • A portfolio of relevant publications or open-source projects to share with us


  • Graduate-level expertise (or equivalent industry experience) in ML or NLP

  • Expert knowledge in implementing ML systems at scale in Python, Java, Scala, SparkML, or C/C++ (not just R or MATLAB)

  • Be responsible for the architecture, design, development, and operations of large-scale systems designed for machine learning (these may include, but are not limited to, data management systems, data engineering workflow systems, distributed compute systems, and their web portal and web service components)

  • A strong mathematical background in statistics and machine learning

  • Experience with some or all of the following:

    • REST APIs

    • SQL

    • Amazon Web Services/Azure

    • Windows and Linux