Sysco (NYSE:SYY) is the global leader in selling, marketing and distributing food products to restaurants, healthcare and educational facilities, lodging establishments and other customers who prepare meals away from home. Its family of products also includes equipment and supplies for the foodservice and hospitality industries. With more than 57,000 associates, the company operates approximately 326 distribution facilities worldwide and serves more than 625,000 customer locations. For fiscal year 2020, which ended June 27, 2020, the company generated sales of more than $52 billion.
Sysco LABS supports Sysco’s digital transformation with engineering teams in Colombo, Sri Lanka, and Austin and Houston, Texas, in the USA. Operating with the agility and efficiency of a tech startup and backed by the domain expertise of the industry leader, Sysco LABS’ mission is to support the innovation of Sysco’s business across the entire foodservice journey.
We are currently on the lookout for a Senior Data Engineer to join our team. You will be part of a team of engineers responsible for developing and managing end-to-end data processing with automated process flows, spanning the integration of multiple complex source systems through to the consumption of data via multiple visualization tools. This team uses cutting-edge technologies that are updated frequently.
- Design and develop large-scale data processing solutions for one of the world’s largest corporations involved in the marketing and distribution of food products.
- Work collaboratively with agile, cross-functional development teams and be involved in the design and development of data structures, process flows, query/database optimizations, services/APIs, and visualizations while adhering to DevOps principles.
- Adhere to Continuous Integration and Continuous Delivery of solutions, and ensure artifacts are of the highest quality by following software/data engineering best practices.
- Be involved in projects throughout their full software lifecycle – from requirement gathering, development, QA, and deployment, to post-production support.
- A Bachelor’s Degree in Computer Science or equivalent, and 3+ years of experience in developing production enterprise applications and data integration solutions.
- Excellent communication skills.
- Hands-on experience in developing ETL pipelines and workflows to process large volumes of data using ETL tools (preferably Informatica) and cloud platforms such as AWS Data Pipeline, AWS Glue, EC2 and AWS Lambda.
- Hands-on experience in data modeling and scripting tools such as Python, shell scripts and SQL.
- Hands-on experience working with large relational storage engines (MySQL and AWS RDS) and distributed storage engines (HDFS/Hadoop and AWS tools such as S3 and Redshift).
- Experience working with NoSQL database technologies (Elasticsearch, DynamoDB, MongoDB) will be an added advantage.
- Experience working in a Scrum/Agile delivery environment and with DevOps practices.
- Hands-on experience in visual analytics tools such as Tableau.
- Experience in code management and CI/CD tools such as GitHub, GitLab, and Jenkins.
- Experience in API and user interface development with related tools (Node.js, AngularJS, HTML) will be an added advantage.
- Hands-on experience with distributed processing frameworks such as Apache Spark and Hive, and cloud parallel processing tools such as AWS EMR, would be an added advantage.