Our products range from globally deployed data pipelines that publish millions of events per second into our data ecosystem all the way down to single-page web apps that guide internal processes. Our data drives our ability to draw insights about our players, people, teams, and organization. As a Software Engineer III, you will have the chance to build solutions that help us manage our data assets. You'll shape the technical vision, lead development efforts, and improve the developer experience. You'll bring your deep expertise in globally distributed systems and large-scale data to help us build efficient solutions, promote best practices, and mentor other engineers.

Responsibilities
– Lead the design and implementation of new components and feature sets
– Provide and document reliable, cost-effective solutions across multiple problem spaces, including applications, deployment, and monitoring
– Be a bar-raiser for other engineers through teaching and mentoring
– Conduct code reviews for members of the team
– Work with different teams to incorporate customer feedback and provide elegant solutions

Required Qualifications
– Bachelor's degree in Computer Science or a comparable field
– 5+ years of experience with Python and SQL
– 5+ years of experience with Java or a similar object-oriented language
– Experience with data analysis, processing, and validation
– 3+ years of experience with big data technologies such as Spark, Hadoop, or Databricks
– Professional experience with open source ETL frameworks such as Airflow, Luigi, or similar
– Knowledge of a diverse set of public cloud technologies, such as AWS RDS, S3, EC2, Lambda, Google Cloud BigQuery, Google Cloud Bigtable, etc.

Desired Qualifications
– Experience building data pipelines