Job Summary
A data and analytics company has an open position for a Remote ML Data Engineer.
Candidates will be responsible for the following:
- Writing PySpark code to translate our model algorithms into production-ready code
- Designing and developing complex SQL or Python code to optimize data pipelines for ML analysis and modeling
- Working with the latest AWS cloud computing and data warehousing technologies such as Snowflake
Must meet the following requirements for consideration:
- 3+ years writing SQL ETL processes
- 3 to 5+ years of working experience with data management technologies
- 3+ years coding in Python or PySpark
- 2-3+ years in cloud computing, especially AWS
- Experience working in Linux and Windows environments
- 2-3+ years developing with Big Data and ML technologies to optimize ML model processing