THE BIG PICTURE

Sysco LABS is the captive innovation arm of Sysco Corporation (NYSE: SYY), the world’s largest foodservice company. Sysco is a Fortune 500 company and the global leader in selling, marketing, and distributing food products to restaurants, healthcare and educational facilities, lodging establishments, and other customers who prepare meals away from home. Its family of products also includes equipment and supplies for the foodservice and hospitality industries. With more than 76,000 colleagues, the company operates 334 distribution facilities worldwide and serves approximately 730,000 customer locations. For fiscal year 2024, which ended June 29, 2024, the company generated sales of more than $78.8 billion.

Operating with the agility and tenacity of a tech startup, powered by the expertise of the industry leader, Sysco LABS is perfectly poised to transform one of the world’s largest industries.

Sysco LABS’s engineering teams, based in Colombo, Sri Lanka, and in Austin and Houston, TX, innovate across the entire foodservice journey – from the enterprise-grade technology that enables Sysco’s business, to the technology that revolutionizes the way Sysco connects with restaurants, to the technology that shapes the way those restaurants connect with their customers.

Sysco LABS technology is present in the sourcing of food products, merchandising, storage and warehouse operations, order placement and pricing algorithms, the delivery of food and supplies to Sysco’s global network, the in-restaurant dining experience of the end customer, and much more.

THE OPPORTUNITY

We are currently on the lookout for a Technical Lead – Data Engineering to join our team.

RESPONSIBILITIES

Designing and developing large-scale data processing solutions for one of the world's largest corporations involved in the marketing and distribution of food products

Working collaboratively with Agile, cross-functional development teams and providing guidance on database design, query optimization, and database performance tuning while adhering to DevOps principles

Designing and developing capacity and scalability plans for fast-growing data infrastructure

Adhering to Continuous Integration and Continuous Delivery practices, and ensuring high code quality by following software engineering best practices

Being involved in projects throughout their full software lifecycle - from development, QA, and deployment to post-production support

REQUIREMENTS

A Bachelor’s Degree in Computer Science or equivalent, and 5-6+ years of experience developing production enterprise applications and data integration solutions, including experience managing teams

Excellent communication and leadership skills

Hands-on experience working with large volumes of data and distributed processing frameworks (preferably Apache Spark and Kafka)

Strong Python programming skills for data processing and analysis

Proficiency in batch processing techniques and data pipeline development

Hands-on experience designing and developing ETL pipelines to process large volumes of data, including experience with Informatica

Expertise in data quality management and implementation of data quality frameworks

Familiarity with data lakehouse architectures and related technologies, including OLAP/OLTP database design techniques

Strong skills in query optimization and performance tuning, particularly for large-scale data warehouses and distributed systems

Experience with query plan analysis and execution plan optimization in various database systems, especially Amazon Redshift

Knowledge of indexing strategies, partitioning schemes, and other performance-enhancing techniques

Extensive experience with AWS services, particularly:

  • Amazon S3 for data storage and management
  • Amazon Redshift for data warehousing and query optimization
  • AWS Lambda for serverless computing and data processing
  • Amazon ECS (Elastic Container Service) for container orchestration
  • Proficiency in designing and implementing cloud-native data architectures on AWS
  • Experience with AWS data integration and ETL services (e.g., AWS Glue, AWS Data Pipeline)

DevOps practices for data platforms:

  • Extensive experience implementing DevOps practices for data platforms and workflows
  • Proficiency in automating data pipeline deployments, including CI/CD for ETL processes and database schema changes
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation) for provisioning and managing data infrastructure
  • Familiarity with monitoring and observability tools for data platforms (e.g., CloudWatch, DataDog)

Experience working in a Scrum/Agile delivery environment with DevOps practices, as well as prior experience with cloud IaaS or PaaS providers such as AWS, will be an added advantage

BENEFITS

US dollar-linked compensation 

Performance-based annual bonus 

Performance rewards and recognition 

Agile Benefits - special allowances for Health, Wellness & Academic purposes 

Paid birthday leave

Team engagement allowance 

Comprehensive Health & Life Insurance Cover - extendable to parents and in-laws  

Overseas travel opportunities and exposure to client environments  

Hybrid work arrangement

LIFE @ SYSCO LABS
At Sysco LABS, we always go the extra mile but know when to have some fun too – we never pass up an opportunity to celebrate or let our hair down, and we understand the importance of play in helping us do our best work.