Location: REMOTE
Description: Experience with big data, including cloud-based big data, is a must.
Experience building a data warehouse or data lake on AWS is required.
Experience with Linux shell scripting and AWS CLI commands; well versed in core AWS services.
Experience writing Lambda functions and their unit tests in Python; Python is a must.
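As an illustration of the Lambda-plus-unit-test skill set described above, here is a minimal sketch: a hypothetical handler (the event shape follows the standard S3 put-notification format, but the function and test names are invented for this example) together with a test that runs locally without AWS.

```python
import json
import unittest

# Hypothetical handler: extracts the S3 object keys from an
# S3 event-notification payload and echoes them back.
def lambda_handler(event, context):
    keys = [r["s3"]["object"]["key"] for r in event.get("Records", [])]
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}

# Matching unit test, runnable locally (e.g. via `python -m unittest`).
class TestLambdaHandler(unittest.TestCase):
    def test_extracts_keys(self):
        event = {"Records": [{"s3": {"object": {"key": "raw/2024/01/data.csv"}}}]}
        result = lambda_handler(event, context=None)
        self.assertEqual(result["statusCode"], 200)
        self.assertEqual(json.loads(result["body"])["keys"],
                         ["raw/2024/01/data.csv"])
```

Keeping the handler free of AWS SDK calls where possible, as here, is what makes this kind of unit testing cheap.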
Experience writing PySpark code for Glue jobs; Spark experience is required.
Work mainly with the AWS services S3, Lambda, Athena, and Glue. Experience with Glue or EMR is expected (Spark experience transfers well, as the two services are similar), as is Athena table creation (Hive/HQL experience transfers well here).
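To illustrate the Athena table-creation skill mentioned above: Athena DDL is Hive-flavored, which is why HQL experience carries over. Below is a sketch of a hypothetical helper that builds a `CREATE EXTERNAL TABLE` statement over an S3 location; the database, table, columns, and bucket path are invented placeholders.

```python
# Hypothetical helper: builds Hive-style DDL of the kind Athena accepts
# for external tables backed by S3. All names here are illustrative.
def build_create_table_ddl(database, table, columns, s3_location):
    cols = ",\n  ".join(f"{name} {dtype}" for name, dtype in columns)
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {database}.{table} (\n"
        f"  {cols}\n"
        ")\n"
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
        f"LOCATION '{s3_location}';"
    )

ddl = build_create_table_ddl(
    "analytics", "orders",
    [("order_id", "string"), ("amount", "double")],
    "s3://example-bucket/orders/",
)
```

In practice such a statement would be submitted through the Athena console or an API call; the sketch only shows the DDL shape.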
Basic understanding of IAM roles, which are used in Glue jobs, Lambda functions, etc.
Work with the CloudOps team to get IAM roles created and automated; a good understanding of IAM is needed to provide them with requirements.
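As a sketch of the kind of requirement handed to CloudOps: a minimal IAM policy fragment granting a Glue job read access to one S3 prefix. The bucket name and prefix are placeholders, not from the posting.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/raw/*"
      ]
    }
  ]
}
```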
Work with other teams, such as CloudOps and DevOps, on code deployments, roles, etc.
Understanding of Jenkins and CI/CD automation is a must.
Experience working with Redshift and DMS.
Experience with AWS scheduling mechanisms; experience working with the ESP scheduler is preferred.
As most of the work relates to data and analytics, candidates with some Hadoop/big data knowledge can adjust easily.
Good SQL knowledge.
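The SQL knowledge in question is the day-to-day analytics kind: joins, aggregates, and filters on aggregates. A self-contained sketch using Python's built-in sqlite3 (the table and data are invented for illustration):

```python
import sqlite3

# Invented sample data; the query pattern (GROUP BY with an aggregate
# and a HAVING filter) is representative of routine analytics SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 120.0), ("east", 80.0), ("west", 50.0)],
)

rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
    """
).fetchall()
# rows == [("east", 200.0)]
```

The same query shape runs unchanged (modulo dialect details) on Athena or Redshift.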
Candidates should be experienced in handling application integration using AWS-native services.
Experience working in an agile methodology; should be a good team player and contribute to design and best practices.
Candidates should have worked in an onsite/offshore model.
Prior experience with Java and MuleSoft is a plus for application integration.
Contact: ssaxena03@judge.com
This job and many more are available through The Judge Group.
Find us on the web at www.Judge.Com