Sysco LABS is the innovation arm of Sysco Corporation (NYSE:SYY), the world’s largest foodservice company. Sysco is the global leader in marketing, selling and distributing food products as well as equipment and supplies to the hospitality industry. Sysco serves over 500,000 customer locations through its team of over 65,000 associates and operates 300 distribution facilities across the globe.
Sysco is re-imagining the global foodservice industry: our Sysco LABS engineering teams, based out of Colombo, Sri Lanka; San Mateo, CA; and Austin and Houston, TX, help drive innovation across the entire supply chain – sourcing of food products, merchandising, storage and warehouse operations, order placement and pricing algorithms, and delivery of food and supplies to Sysco’s global network – culminating in the in-restaurant dining experience for the end-customer. Operating with the agility and efficiency of a tech startup and backed by the domain expertise of the industry leader, Sysco and the Sysco LABS team are poised to reimagine one of the biggest industries in the world.
We are currently on the lookout for a Technical Lead / Associate Technical Lead – Data Engineering to join our team. You will be part of a team responsible for developing and managing end-to-end data processing with automated process flows that span from the integration of multiple complex source systems to the consumption of data through multiple visualization tools. This team uses cutting-edge technologies that are updated frequently.
- Design and develop large data processing solutions for one of the world’s largest corporations involved in the marketing and distribution of food products.
- Work collaboratively with agile, cross-functional development teams and provide guidance on the design and development of data structures, process flows, query/database optimizations, services/APIs, and visualizations while adhering to DevOps principles.
- Design and develop capacity/scalability plans for fast-growing data infrastructure.
- Adhere to Continuous Integration and Continuous Delivery of solutions, and ensure artifacts are of the highest quality by following software/data engineering best practices.
- Be involved in projects and guide the team throughout the full software life cycle – from requirement gathering, development, QA, and deployment to post-production support.
- A Bachelor’s Degree in Computer Science or equivalent, and 4-5+ years of experience in developing production enterprise applications, building data integration solutions, and managing teams.
- Excellent communication and leadership skills.
- Hands-on experience in the design and development of ETLs and workflows to process large volumes of data using ETL tools (preferably Informatica), and cloud platforms such as AWS Data Pipeline, AWS Glue, EC2 and AWS Lambda.
- Hands-on experience with distributed processing frameworks such as Apache Spark and Hive, and cloud parallel processing services such as AWS EMR.
- Hands-on experience in data modelling, and in scripting languages such as Python, shell scripts, and SQL.
- Hands-on experience working with large relational storage engines (MySQL, AWS RDS) and distributed storage engines (HDFS/Hadoop and AWS services such as S3 and Redshift).
- In-depth understanding of database design/optimization methodologies (OLAP/OLTP database design techniques, query plan analysis).
- Experience working with NoSQL database technologies (Elasticsearch, DynamoDB, MongoDB) will be an added advantage.
- Experience working in a Scrum Agile delivery environment and with DevOps practices.
- Hands-on experience with visual analytics tools such as Tableau.
- Experience with code management and CI/CD tools such as GitHub, GitLab, and Jenkins.
- Experience in API and user interface development, and with related tools (Node.js, AngularJS, HTML), will be an added advantage.