As a Sr. Software Engineer on our Data Platform Engineering team, you will join skilled Scala engineers and core database developers responsible for hosted, Apache Spark-based cloud analytics infrastructure, distributed SQL processing frameworks, proprietary data science platforms, and core database optimization. The team builds the automated, intelligent, and highly performant query planner and execution engines, the RPC layer between data warehouse clusters, and shared secondary cold storage. The work includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that is highly parallel, efficient, and fault-tolerant. This is a vital role reporting to executive and senior engineering leadership.
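For a concrete flavor of the query-planner work described above, here is a minimal sketch of how a custom optimization rule plugs into Spark's Catalyst optimizer. The rule and class names are illustrative placeholders, and the rule itself is a no-op; real rules pattern-match on the logical plan and rewrite it (for example, pushing a filter below a join):

```scala
import org.apache.spark.sql.SparkSessionExtensions
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Illustrative placeholder: a production rule would inspect the plan
// tree and return a rewritten, cheaper-to-execute plan.
object PassThroughRule extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan
}

// Hypothetical extensions class, registered at session startup via
// Spark's spark.sql.extensions configuration.
class PlatformExtensions extends (SparkSessionExtensions => Unit) {
  override def apply(extensions: SparkSessionExtensions): Unit =
    extensions.injectOptimizerRule(_ => PassThroughRule)
}
```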
Responsibilities:
- Writing Scala code with tools like Apache Spark and Apache Arrow to build a hosted, multi-cluster data warehouse for Web3 (a minimal sketch follows this list)
- Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques
- Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure
- Codifying best practices for future reuse as accessible, reusable patterns, templates, and code bases that facilitate metadata capture and management
- Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, and a wealth of other open-source data tools)
- Interacting with the executive team and senior engineering leadership to define and prioritize work and to ensure smooth deployments alongside other operational components
- Staying highly engaged with industry trends in the analytics domain from a data acquisition, processing, engineering, and management perspective
- Understanding data and analytics use cases across Web3 / blockchains
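To make the first responsibility concrete, here is a minimal, self-contained sketch of the shape of this work: a local Spark session running an analytical SQL query over a hypothetical Web3 transfers dataset and printing the plans the optimizer produces. The dataset path and schema are invented for illustration, and the Arrow flag shown is Spark's standard columnar-interchange setting rather than any internal API:

```scala
import org.apache.spark.sql.SparkSession

object WarehouseQuerySketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; production jobs target hosted clusters.
    val spark = SparkSession.builder()
      .appName("warehouse-query-sketch")
      .master("local[*]")
      // Arrow-backed columnar exchange for pandas/PySpark interop.
      .config("spark.sql.execution.arrow.pyspark.enabled", "true")
      .getOrCreate()

    // Hypothetical dataset of token transfers (path and columns are made up).
    val transfers = spark.read.parquet("s3://example-bucket/web3/transfers/")
    transfers.createOrReplaceTempView("transfers")

    // The kind of analytical query the planner and optimizer work serves.
    val topReceivers = spark.sql(
      """SELECT to_address, SUM(value) AS total_received
        |FROM transfers
        |GROUP BY to_address
        |ORDER BY total_received DESC
        |LIMIT 10""".stripMargin)

    // Print the parsed, analyzed, optimized, and physical plans:
    // the layers this role extends.
    topReceivers.explain(true)
    topReceivers.show()

    spark.stop()
  }
}
```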
Skills & Qualifications
- Bachelor’s degree in computer science or a related technical field; a Master’s or PhD is a plus
- 6+ years of experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of the open-source Apache stack (especially Apache Spark and Apache Arrow)
- 3+ years of experience with Scala and Apache Spark
- A track record of recruiting and leading technical teams in a demanding talent market
- Rock-solid engineering fundamentals; experience with query planning, query optimization, and distributed data warehouse systems is preferred but not required
- Nice to have: knowledge of blockchain indexing, Web3 compute paradigms, and proof and consensus mechanisms
- Experience with rapid development cycles in a web-based environment
- Strong scripting and test automation knowledge
- Nice to have: passion for Web3, blockchain, and decentralization, and a basic understanding of how data and analytics play into them