THE BIG PICTURE
Sysco LABS is the captive innovation arm of Sysco Corporation (NYSE: SYY), the world’s largest foodservice company. Sysco is a Fortune 500 company and the global leader in selling, marketing, and distributing food products to restaurants, healthcare and educational facilities, lodging establishments, and other customers who prepare meals away from home. Its family of products also includes equipment and supplies for the foodservice and hospitality industries. With more than 76,000 colleagues, the company operates 334 distribution facilities worldwide and serves approximately 730,000 customer locations. For fiscal year 2024, which ended July 1, 2024, the company generated sales of more than $78.8 billion.
Operating with the agility and tenacity of a tech startup, powered by the expertise of the industry leader, Sysco LABS is perfectly poised to transform one of the world’s largest industries.
Sysco LABS’s engineering teams, based in Colombo, Sri Lanka, and Austin and Houston, TX, innovate across the entire foodservice journey: from the enterprise-grade technology that enables Sysco’s business, to the technology that revolutionizes the way Sysco connects with restaurants, to the technology that shapes the way those restaurants connect with their customers.
Sysco LABS technology is present in the sourcing of food products, merchandising, storage and warehouse operations, order placement and pricing algorithms, the delivery of food and supplies to Sysco’s global network, the in-restaurant dining experience of the end customer, and much more.
THE OPPORTUNITY
We are currently on the lookout for a Senior Engineer - Data Engineering to join our team.
RESPONSIBILITIES
Designing and developing data solutions for one of the world’s largest corporations involved in the marketing and distribution of food products
Implementing distributed and highly available data processing applications that scale for enterprise demands
Practicing continuous integration and continuous delivery (CI/CD) of solutions
Ensuring high code quality by following software engineering best practices
Working collaboratively in a cross-functional team in an Agile delivery environment
Adhering to DevOps principles and being involved in projects throughout their full software lifecycle: from development, QA, and deployment, to post-production support
REQUIREMENTS
A Bachelor’s Degree in Computer Science or equivalent, and 3-5 years of experience developing enterprise-grade data processing applications
A strong programming background in data operations (Python, shell scripting, SQL)
Experience in processing large volumes of data
Hands-on experience in working with relational/NoSQL databases and distributed storage engines (HDFS, S3, Redshift)
Hands-on experience in ETL design and development using ETL tools, preferably Informatica and AWS cloud tools such as AWS Data Pipeline, Glue, Lambda, EMR, Spark, and Hive
Experience working with streaming data (using tools such as Kinesis, Kafka, Storm, or Spark) will be an added advantage
Experience in API and user interface development and related tools (Node.js, AngularJS, HTML) will be an added advantage
Working experience in a Scrum Agile delivery environment and with DevOps practices
Experience with code management and CI/CD tools such as GitHub, GitLab, and Jenkins
Experience aligning pod members on a technical vision and the path to implementation in an agile environment
A strong desire to continue to grow your skillset
Strong, influential, and persuasive communication skills
BENEFITS
US dollar-linked compensation
Performance rewards and recognition
Agile Benefits - special allowances for Health, Wellness & Academic purposes
Paid birthday leave
Team engagement allowance
Comprehensive Health & Life Insurance Cover - extendable to parents and in-laws
Overseas travel opportunities and exposure to client environments
Hybrid work arrangement