Data Engineer

Atlanta, GA

We are in search of a technologist and problem solver who can wear many hats in our growing organization. The ideal candidate is a self-starter who works well both independently and with others, knowing when to figure things out on their own and when to ask questions and seek assistance. Strong communication skills, both written and verbal, are a must, along with the ability to gather information from a variety of sources.

The role involves providing monitoring support and system administration centered on predictive maintenance and reliability engineering. The ideal candidate will possess expertise in traditional data science analysis processes as well as the software engineering skills needed to build and deploy decision support aids. The team is looking for candidates with a full range of skills in programming, database design, software engineering, and cloud-native computing. Candidates experienced with process automation and optimization are also highly encouraged to apply.


Responsibilities
  • Designing schemas, data models, and data architecture for Hadoop environments
  • Implementing and maintaining data flow scripts using PySpark, HiveQL, and workflow management scripting
  • Designing and building data assets in Hive
  • Developing and executing quality assurance and test scripts  
  • Collaborating with analysts to drive code quality and reusability, data integrity, test design, analysis, validation, and documentation
  • Building scalable data pipelines for both real-time and batch processing, applying best practices in data modeling and ETL and using technologies such as PySpark, Kafka, and Airflow
  • Working with business analysts to understand business requirements and use cases   


Requirements
  • A Bachelor's degree in Industrial and Systems Engineering, Computer Science, or Computer Engineering
  • A minimum of 2 years of experience designing and building ETL code following best practices
  • Strong SQL experience with the ability to develop, tune, and debug complex SQL applications
  • At least 2 years of hands-on experience with object-oriented programming in Python


Preferred Qualifications
  • Knowledge of schema design and data modeling, and a proven ability to work with complex data
  • Hands-on experience with Hadoop, MongoDB, Hive, Kubernetes, Airflow, Elasticsearch, ClickHouse, NoSQL, Rancher, Grafana/Prometheus, Zeppelin, Jupyter
  • Understanding of Hadoop file formats and compression
  • Understanding of best practices for building a data lake and analytical architecture on Hadoop
  • Scripting/programming with UNIX, Java, Scala, Oozie, Stata, etc.
  • Knowledge of real-time data ingestion into Hadoop
  • Experience working in large data environments such as RDBMS, EDW, NoSQL, etc.
  • Experience with test-driven CI/CD, SCM tools such as Git and GitLab, and graph databases

This position works out of our headquarters in the Sandy Springs area of Atlanta. This is a full-time salaried position with full health benefits, a 401(k), paid vacation, and bonus opportunity.

Currently, Mather Economics is not hiring students who are part of the F-1 OPT under the STEM-designated degree program. We are unable to provide visa sponsorship now or in the future.

Applicants must be currently authorized to work in the United States full-time.

About Mather Economics -- 

Mather Economics is more than just a business consulting firm. We are a strategic partner, advising our multinational clients through a mixture of data products, econometric modeling, and data science as a service. Headquartered in Atlanta, GA, our staff of econometricians, data scientists, and data engineers increase subscriber revenue yield, operating margins, and reader engagement while reducing subscriber churn. Now is an exciting time to be part of a team helping transform our clients' businesses through analytics and data science.