Role: Sr. Data Engineer
Location: Los Angeles, CA
Employment Type: Full Time / Direct Hire

The Data Engineer will support data initiatives for software developers, database architects, data analysts, and data scientists, and will ensure that the best data delivery architecture is used throughout ongoing projects. They must be self-sufficient and capable of servicing the data requirements of many teams, systems, and products. The role assists the DMBI team in organizing, leveraging, expanding, optimizing, and distributing the client's select data from and to various sources and destinations for internal and external vendors, using existing and/or new technologies while maintaining a high level of awareness of industry best practices.

Essential Functions
- Assist the Business Intelligence team in delivering accurate and timely data.
- Follow and apply Agile data warehousing best practices.
- Provide daily monitoring, management, troubleshooting, and issue resolution for existing and new data solutions, as well as for the interfaces of the systems they affect.
- Deliver data to internal customers and external vendors in a variety of formats (CSV, Excel, XML, JSON) and delivery mechanisms (e.g., email, SFTP, S3), as well as via REST API.
- Integrate external data into data warehouses using REST APIs and web services, among other methods.
- Create high-quality, dependable, and fault-tolerant data solutions based on the needs of internal and external customers/users, applying best practices and emerging trends/technologies throughout the solution lifecycle.
Pentaho Data Integration (PDI) is the preferred ETL tool.

Requirements and Qualifications
- Self-taught, curious, and self-motivated; continually learning new ETL/BI tools and technologies.
- Experience with NoSQL or other non-relational databases is a plus.
- Experience leading a team of Data Engineers/ETL Developers, as well as working as a team member with minimal supervision, is required.
- Proficiency in job and schedule management, as well as in ETL programming tools (ideally Pentaho PDI).
- Data warehouse design and an understanding of data modeling methodologies (Kimball, OLAP, EDW).
- Working knowledge of the AWS ecosystem (Lake Formation, Glue, Data Pipeline, EC2, Redshift, S3, Glacier, DynamoDB, Lambda, etc.).
- SQL DML (Data Manipulation Language) proficiency, with a focus on query tuning and performance.
- Experience in analysis, programming, technical documentation, unit testing, and training as an expert SQL Programmer / Analyst / Data Warehouse practitioner.
- A bachelor's degree in computer science, a college certificate, a technical certification, or similar applicable academic qualifications, plus a minimum of three years of professional experience.
- Experience with project management, SDLC and CI/CD methodologies, and reporting, visualization, and dashboarding solutions using Tableau or similar.
- Experience with version control systems; git is preferred.
- Familiarity with JSON, XML, CSV, and other data formats.
- Python is a plus.