Data Engineer

IT Group Description: The Equity Investment Management Technology (‘EIMT’) team creates software to support the research, portfolio management, and trading activities for our institutional and private client equity products.

This role is within the EIMT Research team, which is focused on delivering solutions for our buy-side equity quantitative and research analysts.

The solutions built include tools for quantitative analysis, business intelligence, and data ingestion.

Job Description: We are seeking a Nashville-based Data Engineer to join our Equity Investment Management Technology Research team.

Describe the role: This is a Data Engineering position focused on enhancing the equity data architecture to provide rapid data onboarding, quality control, and accessibility.

This role will focus on building out infrastructure and frameworks using cloud-based distributed compute and storage technologies, continuous integration and deployment tools, data pipeline orchestration, and both NoSQL and traditional data warehousing (RDBMS) technologies.

The key job responsibilities include, but are not limited to:

Automation of data ingestion supporting various sources and formats, both external and internal.

Implementing a quality control framework for ensuring data consistency.

Cataloging new data sets to facilitate data discovery, lineage, and self-service.

Building business intelligence dashboards to provide data insights.

Assisting with ad-hoc data and research requests from the investment team.

Providing support for overnight jobs.

What makes this role unique or interesting? This role provides the opportunity to experience the full lifecycle of building investment decisions, from onboarding data to evaluating factor performance, while applying modern processing concepts using cloud-based solutions.

What is the professional development value of this role?

Learning the equity investment business and engaging directly with end users.

Automating complex data loads and pipelines.

Onboarding alternative datasets, including learning how to web scrape.

Learning best practices for managing large data sets.

Building technical skills including SQL, Python, and PowerBI.

Applying cloud-based technologies including data lakes and data pipelines.

Qualifications, Experience, Education:

BS in Computer Science/Engineering, Finance, Mathematics/Statistics, or a related major.

5 years programming in SQL, with experience in relational schema design and optimizing query performance.

2 years using Python or another object-oriented language (C#, Java).

Experience building both data and ETL pipelines.

Experience with cloud data warehouses such as Snowflake, Azure Synapse, Amazon Redshift, or Google BigQuery.

Building visualizations using PowerBI, Tableau, or Qlik is a strong plus.

Skills:

Solid analytical and technical skills.

Candidate must be willing to take ownership of projects and show strong client commitment.

Must demonstrate good communication skills and be comfortable working closely with business users.

Self-starter as well as a good team player.

A strong desire to document and share work done to aid in long-term support.

Special Knowledge (nice to have, but not required):

Experience working in the financial industry or knowledge of basic financial statement concepts.

Experience using Airflow.

People of color, women, and those who identify as LGBTQ are encouraged to apply.

AB does not discriminate against any employee or applicant for employment on the basis of race, color, religion, creed, ancestry, national origin, sex, age, disability, marital status, citizenship status, sexual orientation, gender identity, military or veteran status or any other basis that is prohibited by applicable law.

AB’s policies, as well as practices, seek to ensure that employment opportunities are available to all employees and applicants, based solely on job-related criteria.

Nashville, Tennessee
