# Data & AI Engineer

> HyperLight · Singapore, Singapore · Full-time · Posted 2026-05-12

**Workplace:** On-site

**Department:** Fab Ops

## Description

**HyperLight** is at the forefront of commercializing thin-film lithium niobate (TFLN) integrated photonics, a material and process technology enabling high-performance, scalable optical components across AI/datacom infrastructure, hyperscale computing, quantum computing, sensing, and beyond. Founded in 2018 and backed by leading venture capital, we’ve built a team and a platform focused on real-world mass deployment of TFLN photonics technology.

At the core of our work is the TFLN Chiplet™ platform: a modular, integrated architecture designed for scalability, manufacturability, and seamless integration into complex systems. It offers a rare combination of extraordinary performance and industrial readiness, enabling system developers across applications to deploy the technology quickly and reliably. We partner with our customers and suppliers from conceptualization through design, prototyping, and mass production to ensure smooth and rapid deployment of TFLN photonic technology.

We believe our platform is key to unlocking the golden age of integrated photonics and empowering what comes next. We have assembled a world-class team covering engineering, business, and operations, and we believe in the power of integrity, innovation, collaboration, and pragmatic solutions. Our diverse team thrives on challenges and is united by a shared commitment to excellence. We take pride in tackling complex problems with curiosity, humility, and a deep sense of care for one another.

Our growing team is looking for **a Data & AI Engineer** to join us. This role offers a unique opportunity to work at the intersection of software, data, cloud, and semiconductor manufacturing operations. We are seeking an engineer to build and scale our data platform supporting R&D, testing, and manufacturing operations. You will design and implement data pipelines, develop analytics and data visualization tools, and apply AI/ML techniques to improve product performance and operational efficiency.

As part of your responsibilities, you will:

-   Design, build, and maintain scalable data pipelines (ETL/ELT) to ingest data from lab equipment, test systems, and enterprise applications
-   Develop and manage data lake / data warehouse architecture for structured and unstructured data
-   Build, maintain, and optimize databases (SQL and NoSQL) for performance, scalability, and reliability
-   Develop dashboards and analytics tools to support engineering, testing, and operations teams (e.g., yield analysis, trend monitoring)
-   Analyze large-scale experimental and manufacturing datasets to identify patterns, anomalies, and optimization opportunities
-   Apply machine learning / AI techniques to real-world problems such as predictive maintenance, defect detection, and process optimization
-   Collaborate cross-functionally with engineering, fabrication, test, and business teams to define data requirements and deliver actionable insights
-   Contribute to data governance, data quality, and best practices across the organization
-   Own end-to-end delivery of data solutions from ingestion to analytics and AI applications
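To give a flavor of the ingestion-to-analytics work described above, here is a minimal, self-contained sketch of an ETL step: loading hypothetical die-level test records into a database and computing per-wafer yield. The schema, wafer IDs, and pass/fail values are illustrative assumptions, not HyperLight's actual data model; production pipelines would typically orchestrate steps like this with a framework such as Airflow.

```python
import sqlite3

# Hypothetical raw test records: (wafer_id, die_number, passed) -- illustrative only
raw_rows = [
    ("W001", 1, 1), ("W001", 2, 0), ("W001", 3, 1),
    ("W002", 1, 1), ("W002", 2, 1),
]

def run_pipeline(rows):
    """Minimal ETL sketch: load rows into SQLite and compute per-wafer yield."""
    conn = sqlite3.connect(":memory:")  # in-memory DB stands in for a warehouse
    conn.execute("CREATE TABLE die_test (wafer_id TEXT, die INTEGER, passed INTEGER)")
    conn.executemany("INSERT INTO die_test VALUES (?, ?, ?)", rows)
    # Transform/aggregate: yield = passing dies / total dies, grouped per wafer
    return dict(conn.execute(
        "SELECT wafer_id, 1.0 * SUM(passed) / COUNT(*) "
        "FROM die_test GROUP BY wafer_id"
    ).fetchall())

yields = run_pipeline(raw_rows)  # e.g., W001 -> 2 of 3 dies pass
```

In practice each stage (extract from lab equipment, transform, load, aggregate) would be a separate, monitored task, but the shape of the computation is the same.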

## Requirements

-   Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Physics, or a related field
-   Minimum 2 years of experience in software, data engineering, analytics, or related roles
-   Strong programming skills in Python for data processing, automation, or machine learning workflows
-   Solid experience with SQL and database design (relational and/or NoSQL systems)
-   Experience building and maintaining data pipelines (e.g., Airflow, Spark, or similar frameworks)
-   Understanding of data lake / data warehouse concepts and architectures
-   Experience with data visualization tools (e.g., Power BI, Spotfire, Tableau, or similar)
-   Experience with cloud platforms (AWS preferred), including services such as S3, EC2, Lambda, RDS
-   Familiarity with containerization (e.g., Docker)
-   Basic understanding of machine learning concepts and workflows
-   Strong problem-solving skills and ability to work with complex, noisy datasets
-   Effective communication skills and ability to collaborate in cross-functional teams
**Preferred Qualifications:**

-   Familiarity with semiconductor, manufacturing, or hardware-related data environments
-   Hands-on experience with AWS data and compute services such as S3, EC2, EKS, Lambda, Glue, or EMR
-   Experience designing data lake architectures on AWS
-   Experience deploying and managing applications on Kubernetes (K8s), including AWS EKS
-   Hands-on experience with Docker and containerized data/ML workflows
-   Hands-on experience with machine learning frameworks (e.g., PyTorch, TensorFlow, Scikit-learn)
-   Experience with LLMs, vector databases, or AI application frameworks
-   Experience building internal tools or data applications (e.g., dashboards, APIs)
-   Knowledge of JavaScript or TypeScript, and modern frontend frameworks (e.g., React, Next.js) is a plus
-   Experience with modern AI frameworks (e.g., LangChain, CrewAI) or AWS AI services (e.g., Bedrock) is a plus

## Benefits

-   Competitive market-based compensation and benefits
-   Professional growth and mentorship opportunities

## Apply

[Apply at HyperLight](https://apply.workable.com/hyperlight/j/38916E7AE8/apply)

---
Powered by [Workable](https://www.workable.com)
