Sysco (NYSE:SYY) is the global leader in selling, marketing and distributing food products to restaurants, healthcare and educational facilities, lodging establishments and other customers who prepare meals away from home. Its family of products also includes equipment and supplies for the foodservice and hospitality industries. With more than 57,000 associates, the company operates approximately 326 distribution facilities worldwide and serves more than 625,000 customer locations. For fiscal year 2020, which ended June 27, 2020, the company generated sales of more than $52 billion.
Sysco LABS supports Sysco’s digital transformation with engineering teams in Colombo, Sri Lanka, and Austin and Houston, Texas, in the USA. Operating with the agility and efficiency of a tech startup, and backed by the domain expertise of the industry leader, Sysco LABS’ mission is to support the innovation of Sysco’s business across the entire foodservice journey.
We are currently on the lookout for a Data Engineer to join our team. Responsibilities of the role include:
- Design and develop data solutions for one of the world’s largest corporations involved in the marketing and distribution of food products.
- Implement distributed and highly available data processing applications that scale for enterprise demands.
- Practice Continuous Integration and Continuous Delivery (CI/CD) when delivering solutions.
- Ensure high code quality by following software engineering best practices.
- Work collaboratively in a cross-functional team in an Agile delivery environment.
- Adhere to DevOps principles and be involved in projects throughout their full software lifecycle: from development, QA, and deployment, to post-production support.
The ideal candidate will have:
- A Bachelor’s Degree in Computer Science or equivalent, and 1-2 years of experience in developing enterprise-grade data processing applications.
- A strong programming background in data operations (Python, shell scripting, SQL).
- Experience in processing large volumes of data.
- Hands-on experience working with relational/NoSQL databases and distributed storage engines (HDFS, S3, Redshift).
- Hands-on experience in ETL design and development using ETL tools (preferably Informatica, and cloud tools such as AWS Data Pipeline, Glue, Lambda, EMR, Spark, and Hive).
- Experience working with streaming data (using tools such as Kinesis, Kafka, Storm, or Spark) will be an added advantage.
- Experience in API and user interface development, and related tools (Node.js, AngularJS, HTML), will be an added advantage.
- Familiarity with DevOps practices and with working in a Scrum/Agile delivery environment.
- Experience with code management and CI/CD tools such as GitHub, GitLab, and Jenkins.
- Experience working in an Agile environment and aligning Pod members on the technical vision and the path to implementation.
- A strong desire to continue growing your skill set.
- Strong communication skills, with the ability to influence and convince others.