What jobs are available for Data Engineers in Hong Kong?
Showing 127 Data Engineer jobs in Hong Kong
Data Engineer/ Senior Data Engineer
Posted today
Job Description
Our client is now looking for talent to join their team:
Goal of the Project
- Implement a scalable cloud-based data platform enabling sustainable, reusable, secure, and efficient ingestion, transformation, and storage of enterprise data across multiple sources.
- Improve data consistency, reliability and controlled accessibility by automating ETL/ELT workflows, enforcing data quality checks, and reducing manual intervention by 30–40%.
- Provide near real-time analytics capabilities to support business intelligence, predictive modeling, and faster decision-making for strategic initiatives.
Primary Responsibilities
- Optimize, operate at scale, and enhance a cloud-based data platform on Microsoft Azure (including Databricks)
- Participate in the cloud data platform enhancement project lifecycle with external vendors, including design, development, testing, deployment, and documentation
- Work with Data Analysts on data preparation and cleansing, and build optimized data models and visualizations in Power BI
- Perform proofs of concept for cloud data products
Secondary Responsibilities
- Validate data workflows and integration points through rigorous testing (e.g., pipeline validation, schema checks, and performance benchmarking) to ensure solutions meet business and technical requirements (a minimal schema-check sketch follows this list)
- Create operational playbooks and automation scripts to streamline deployment, monitoring, and troubleshooting, enabling efficient handover and long-term maintainability of data solutions
- Document data pipeline architecture, deployment processes, and operational runbooks to support effective troubleshooting and post-go-live maintenance by internal teams
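For illustration only, a minimal PySpark sketch of the kind of schema check such pipeline validation might involve, assuming a Databricks/Delta environment; the table path and expected columns are hypothetical, not part of the client's platform:

```python
# Minimal schema-check sketch -- the Delta table path and expected
# columns are hypothetical placeholders, not the client's actual schema.
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("schema-check").getOrCreate()

EXPECTED = StructType([
    StructField("record_id", StringType()),
    StructField("source_system", StringType()),
    StructField("ingested_at", TimestampType()),
])

df = spark.read.format("delta").load("/lake/bronze/records")

# Fail fast if the ingested data has drifted from the expected contract.
missing = set(EXPECTED.fieldNames()) - set(df.schema.fieldNames())
if missing:
    raise ValueError(f"Schema check failed; missing columns: {sorted(missing)}")
```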
Additional Beneficial Data Knowledge:
- ERP
- CRM
- Marketing
- Asset Management
- Construction
Requirements:
- Bachelor's degree in Computer Science, Information Technology, Data Science or a related field; related certification in a cloud data platform
- Fluency in English.
- 3+ years of professional experience related to data platforms, especially on the data engineering side. Experience in Python, SQL, Spark, Azure Data Factory, Event Hub / Kafka / Event Messaging, Azure Data Lake Storage Gen2, Databricks, Unity Catalog, Power BI
- Experience in the software development lifecycle (SDLC).
- Understanding of data lifecycle and governance principles (e.g., data quality, lineage, security, and compliance).
Advantageous experience (not a must):
- Databricks Certified Data Engineer Associate or Professional
- Prior work on scalable data platform implementations or data platform migrations (with 2-3 project-specific examples)
- Exposure to data monetization or advanced analytics use cases (predictive modeling, AI/ML pipelines)
- Knowledge of Data & Analytics multi-tenancy, Data Lakehouse platform architecture, DataOps, MLOps, AIOps, ModelOps and FinOps
- Ability to contribute as a strong individual contributor and to manage junior data engineers
- Ability to manage engineering tasks and contribute to planning for successful execution
Data Engineer, Data
Posted today
Job Description
We offer work from home (max. 2 days per week), 14-20 days' annual leave, double pay, discretionary bonus, overtime pay, medical/dental/life insurance, and a five-day work week.
As a Data Management Engineer, you will play a critical role in ensuring the integrity, security, and efficiency of our data platform. You will collaborate closely with cross-functional teams to implement governance frameworks, enforce data standards, and optimize resource usage. Your work will directly support the organization's data strategy and compliance posture.
Job Description:
- Lead the design, implementation and deployment of a master data management architecture that encompasses all customer source systems to enable data sharing across different regions, business units and departments 
- Operationalize the Enterprise Master Data Repository to enforce centralized governance controls at a global level
- Identify and build data quality rules, and investigate and remediate data quality issues (a minimal example follows this list)
- Design and build data quality dashboards with Power BI 
- Evaluate, select and implement appropriate data management technologies to address data governance challenges 
- Manage vendors to complete data governance activities, from vendor selection, data discovery, Proof of Concept (PoC) development, implementation to global adoption 
- Design and implement data governance solutions that incorporate AI-driven data management techniques to improve data quality and enhance data governance processes 
- Monitor data platform resource utilization and performance metrics 
- Identify and recommend opportunities for cost optimization and operational efficiency 
- Lead analysis of the current data platforms (e.g., logs) to detect critical deficiencies and recommend solutions for improvement 
- Engage with key data stakeholders to outline data objectives and gather data requirements. Execute solutions encompassing ownership, accountability, streamlined processes, robust procedures, stringent data quality measures, security protocols, and other pertinent areas to drive successful implementation 
- Implement the Architecture Governance Standard, Platform Design Principles, Platform Security, and Data Compliance Standard 
- Implement the Data Classification Standard to enhance data management and security measures within the organization 
- Take charge of the Global Data Quality Forum and establish regional forums if required to foster collaboration and knowledge sharing on data quality practices 
- Conduct market research and collaborate with vendors to evaluate cutting-edge data management technologies, trends, and products. Select and deploy the most suitable solutions for Global Data and Analytics Governance initiatives, ensuring seamless scalability 
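As a rough illustration of rule-based data quality checks feeding a dashboard, here is a minimal Python sketch; the columns and rules are hypothetical examples, not the company's actual standards:

```python
# Minimal data quality rule sketch -- the columns and rules are
# hypothetical examples for illustration only.
import pandas as pd

RULES = {
    "customer_id is unique": lambda df: df["customer_id"].is_unique,
    "email is populated": lambda df: df["email"].notna().all(),
    "country code is two letters": lambda df: df["country"].str.len().eq(2).all(),
}

def run_quality_checks(df: pd.DataFrame) -> dict:
    """Evaluate each rule and return a pass/fail map, e.g. as a feed for a Power BI dashboard."""
    return {name: bool(rule(df)) for name, rule in RULES.items()}

if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 2],
        "email": ["a@example.com", "b@example.com"],
        "country": ["HK", "SG"],
    })
    print(run_quality_checks(sample))
```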
Requirements:
- Bachelor's degree from a recognized university in Computer Science, Information Engineering, or related field 
- At least 6 years of experience in Data Engineering, IT, Data Governance, Data Management or related field 
- Knowledge of data management best practices and technologies 
- Knowledge of data governance, security and observability 
- Proven ability to identify innovation opportunities and deliver innovative data management solutions 
- Hands-on experience in SQL, Python, and Power BI 
- Experience in Azure Databricks Unity Catalog and DLT 
- Excellent analytical and problem-solving skills 
- Fluent in English speaking and writing 
- Willingness to travel, as needed 
The requirements below are considered advantages, but are not a must.
- Knowledge of data related regulatory requirements and emerging trends and issues 
- Experience in programming languages including PySpark, R, Java, Scala 
- Experience in working with cross-functional teams in global settings 
Interested parties please send full resume with employment history and expected salary to HRA Department, Yusen Logistics Global Management (Hong Kong) Limited by email.
Yusen Logistics Global Management (Hong Kong) Limited is an equal opportunity employer. All information collected will be used for recruitment purpose only.
About Yusen Logistics
Yusen Logistics is working to become the world's preferred supply chain logistics company. Our complete offer is designed to forge better connections between businesses, customers and communities – through innovative supply chain management, freight forwarding, warehousing and distribution services. As a company we're dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world's preferred choice.
AI Engineer/Research Programmers/Research Data Scientist/Big Data Engineer
Posted today
Job Description
AI Engineer/Research Programmers/Research Data Scientist/Big Data Engineer
Requirements:
- Knowledge of Artificial Intelligence, blockchain, cybersecurity and modern software development; extremely enthusiastic about programming and learning new technology. Smart contract development is an advantage.
- Knowledge of and research experience in various AI algorithms, Computer Vision, Recommender Systems, blockchain and Knowledge Graphs.
- Willingness to learn different programming languages and to participate in various software system development, including but not limited to AI, web apps, DApps, mobile apps, server systems and cybersecurity solutions.
- Ability to work as a team with young teammates and mature supervisors, locally and remotely.
- PhD, Master's or Bachelor's degree in Computer Science, Applied Mathematics, Mathematics, Statistics, Engineering, Technology, Operations Research, Management Science, Knowledge Graphs, or any IT or STEM-related discipline.
- Results-oriented; flexible working hours acceptable.
- Working experience: 0 to 5 years.
- Experience in education is a plus.
- Start-up mentality and being a self-driven learner is a must.
Working office: Cyberport
Job responsibilities:
(a) Collect internal and external customers' business requirements
(b) Prepare functional specifications and test plans
(c) Develop and implement backend applications
(d) Participate in management and operational meetings to report progress on assigned tasks, and prepare meeting reports
(e) Perform the full software development life cycle: design, coding, testing and implementation of backend and frontend systems
(f) Compile and analyze data, processes and code to troubleshoot problems
(g) Develop an administrative interface for administration and management access
(h) Research recent technology developments for adoption in company projects
(i) Research and publish papers and patents for AI algorithms, including but not limited to Computer Vision, Recommender Systems, blockchain and Knowledge Graphs
(j) Provide IT services to the company when required
(k) Code in languages and frameworks such as C#, Python, TensorFlow, R, Solidity, Java, JavaScript or Go
Interested parties, please apply with a detailed resume stating current and expected salary, and send it via
All personal information received will be treated in strict confidence and used for employment purposes only. If you have not been contacted within 3 months, you may consider your application unsuccessful.
Alternatively, your application may be transferred to other companies within our group for job openings. Unsuccessful applications will be destroyed after a period of not more than 12 months.
Job type: Part-time
Salary: $20,000.00 to $32,000.00 per month
Work location: In person
Expected start date: 2025/08/02
Data Engineer
Posted today
Job Description
What You'll Be Doing
- Architect ETL Platforms: Design and build scalable ETL pipelines using Airbyte and Apache Airflow to streamline data processing and transformation (a minimal DAG sketch follows this list).
- Optimize Database Performance: Lead the management and optimization of MongoDB, MySQL, and PostgreSQL databases for high performance and reliability.
- Build Robust Data Pipelines: Develop and maintain efficient, automated data pipelines to ensure seamless data flow across systems.
- Create Impactful Visualizations: Craft compelling data visualizations and application statistics to drive strategic business decisions.
- Develop BI Dashboards: Design and maintain intuitive Business Intelligence dashboards for actionable insights and reporting.
- Manage Log Systems: Leverage the ELK stack (Elasticsearch, Logstash, Kibana) for advanced log management and real-time analytics.
- Integrate Diverse Data Sources: Connect and unify data from third-party APIs to enhance accessibility and functionality.
- Collaborate for Success: Partner with cross-functional teams to understand data requirements and deliver impactful solutions.
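For context, a minimal sketch of an Airflow DAG of the kind this role would build; the task names and logic are hypothetical placeholders, and in practice an Airbyte sync would typically sit upstream of the transform step:

```python
# Minimal sketch of an Airflow DAG orchestrating an extract-then-transform
# hop -- task logic and names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    # e.g., pull raw records from a third-party API into staging
    ...

def transform_orders():
    # e.g., cleanse staged records and load them into the warehouse
    ...

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    extract >> transform  # transform runs only after a successful extract
```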
Requirements:
- Educational Background: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field. We value diverse educational paths and encourage all qualified candidates to apply.
- Experience Level: 4+ years of hands-on experience in data engineering, database administration, or related roles. We welcome candidates with varied career journeys who bring fresh perspectives.
- MongoDB Mastery: Deep expertise in MongoDB, including schema design, performance tuning, and scalability.
- ETL Expertise: Proven experience with ETL tools like Airbyte and Apache Airflow for building robust data workflows.
- Programming Prowess: Strong proficiency in Python, Golang, or similar languages for data processing and automation.
- Visualization Skills: Hands-on experience with BI tools like Tableau, Power BI, or similar for creating impactful visualizations.
- Cloud Proficiency: Working knowledge of cloud platforms such as AWS, Google Cloud, or Alibaba Cloud to support scalable data solutions.
- AI Advantage: Familiarity with Retrieval-Augmented Generation (RAG) and AI workflow automation is a plus but not required.
- Communication Skills: Fluency in English to collaborate effectively within our diverse, global team.
Our client offers an attractive remuneration package and other benefits, such as:
- Annual Leave
- Performance Bonus
- Grow your career with opportunities for professional development and impact
For further information, please contact us by WhatsApp.
Data Engineer
Posted today
Job Description
Responsibilities
- Design, build and maintain scalable and efficient ETL/ELT pipelines in Azure Databricks to process structured, semi-structured and unstructured insurance data from multiple internal and external sources.
- Collaborate with data architects, modelers, analysts and business stakeholders to gather data requirements and deliver fit-for-purpose data assets that support analytics, regulatory and operational needs.
- Develop, test and optimize data transformation routines and batch and streaming solutions (leveraging tools such as Azure Data Factory, Data Lake Storage Gen2, Azure Event Hubs and Kafka) to ensure timely and accurate data delivery (see the streaming sketch after this list).
- Implement rigorous data quality, validation and cleansing procedures — with a focus on enhancing reliability for high-stakes insurance use cases, reporting and regulatory outputs.
- Integrate Informatica tools to facilitate data governance, including the capture of data lineage, metadata and data cataloguing as required by regulatory and business frameworks.
- Ensure robust data security by following best practices for RBAC, managed identities, encryption and compliance with Hong Kong's PDPO, GDPR and other relevant regulatory requirements.
- Automate and maintain deployment pipelines using GitHub Actions to ensure efficient, repeatable and auditable data workflows and code releases.
- Conduct root cause analysis, troubleshoot pipeline failures and proactively identify and resolve data quality or performance issues.
- Produce and maintain comprehensive technical documentation for pipelines, transformation rules and operational procedures to ensure transparency, reuse and compliance.
- Apply subject matter expertise in Hong Kong Life and General Insurance to ensure that development captures local business needs and industry-specific standards.
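As an illustration of the streaming side, a minimal Spark Structured Streaming sketch reading from Kafka into a bronze Delta table, assuming a Databricks-style environment; the broker, topic and lake paths are hypothetical:

```python
# Minimal near-real-time ingest sketch with Spark Structured Streaming --
# the Kafka broker, topic, and Delta paths are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "claims-events")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Append raw events to a bronze Delta table; the checkpoint makes the
# stream restartable without duplicating data.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/claims")
    .outputMode("append")
    .start("/lake/bronze/claims")
)
```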
Requirements
- Bachelor's degree in Information Technology, Computer Science, Data Engineering or a related discipline.
- 3+ years of experience as a data engineer, building and maintaining ETL/ELT processes and data pipelines on Azure Databricks (using PySpark or Scala), with a focus on structured, semi-structured and unstructured insurance data.
- Strong experience orchestrating data ingestion, transformation and loading workflows using Azure Data Factory and Azure Data Lake Storage Gen2.
- Advanced proficiency in Python and Spark for data engineering, data cleaning, transformation and feature engineering in Databricks for analytics and machine learning.
- Experience integrating batch and streaming data sources via Kafka or Azure Event Hubs for real-time or near-real-time insurance applications.
- Hands-on use of Informatica for data quality, lineage and governance to support business and regulatory standards in insurance.
- Familiarity with automation and CI/CD of Databricks workflows using GitHub Actions.
- Understanding of data security, RBAC, Key Vault, encryption and best practices for compliance in the insurance sector.
- Experience optimizing data pipelines to support ML workflows and BI/reporting tools.
Data Engineer
Posted today
Job Description
Our client is a statutory body in Hong Kong. They are looking for experienced talent to implement a scalable cloud-based data platform that enables sustainable, reusable, secure and efficient ingestion, transformation, and storage of enterprise data across multiple sources. The individual should possess knowledge and experience of near real-time analytics capabilities to support business intelligence, predictive modelling, and faster decision-making for strategic initiatives.
Responsibilities
- Optimize, operate at scale, and enhance a cloud-based data platform on Microsoft Azure (including Databricks)
- Participate in the cloud data platform enhancement project lifecycle with external vendors, including design, development, testing, deployment, and documentation
- Work with Data Analysts on data preparation and cleansing, and build optimized data models and visualizations in Power BI
- Perform proofs of concept for cloud data products
- Validate data workflows and integration points through rigorous testing (e.g., pipeline validation, schema checks, and performance benchmarking) to ensure solutions meet business and technical requirements.
- Create operational playbooks and automation scripts to streamline deployment, monitoring, and troubleshooting, enabling efficient handover and long-term maintainability of data solutions.
- Document data pipeline architecture, deployment processes, and operational runbooks to support effective troubleshooting and post-go-live maintenance by internal teams
Requirements:
- Bachelor's degree in Computer Science, Information Technology, Data Science or a related field
- Minimum of 3 years of professional experience related to data platforms
- Experience in Python, SQL, Spark, Azure Data Factory, Event Hub / Kafka / Event Messaging, Azure Data Lake Storage Gen2, Databricks, Unity Catalog, Power BI
- Understanding of data lifecycle and governance principles (e.g., data quality, lineage, security, and compliance)
- Experience in the software development lifecycle (SDLC)
- Related certification in a cloud data platform
Data Engineer
Posted today
Job Description
Key Responsibilities
- Design and implement technical solutions to collect, integrate, secure, store, organize, and transform data into deliverable formats.
- Build and monitor data pipelines.
- Develop scripts and custom code to process and refine data.
- Collaborate with business users to improve existing tools used for reporting.
- Perform unit tests, analyze database queries, and troubleshoot issues.
- Prepare and maintain technical documentation for senior staff and team members.
- Work collaboratively within a close-knit team, maintaining professionalism in virtual and in-person settings.
- Propose and develop efficient, robust solutions based on requirements, in collaboration with IT and business stakeholders.
- Provide ongoing maintenance, issue investigation, and support.
- Create documentation, flowcharts, diagrams, and clear code to demonstrate and explain solutions.
- Occasionally travel to world-class factories as required.
General Skills & Experience Requirements
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related discipline.
- 2–4 years of experience in data engineering projects.
- Motivated, independent, and self-reliant, capable of completing tasks with minimal supervision.
- Ability to create detailed design documents, articulate vision, and defend proposed solutions.
- Fluency in English and Mandarin is an advantage.
Required Data Engineering Skills
- Proficiency in Cloud Data Platform solutions (certifications are a plus).
- Experience as a SQL/Oracle Database/Python Developer.
- Expertise in ETL processes for complex data projects.
- Proficiency in programming languages such as Python and Java.
- Experience with creating and managing data assets in a Cloud Data Platform.
- Strong knowledge of data pipeline optimization.
- Understanding of general data modeling concepts.
Preferred Technical Skills
- Exceptional written and verbal communication skills.
- Experience in machine learning and AI projects is a plus.
- Ability to collaborate with remote teammates and users effectively.
- Familiarity with the manufacturing sector is an advantage.
- Understanding of process engineering concepts and measurements is a bonus.
Data Engineer
Posted today
Job Description
Established in the 1970s, Sino Group is a leading property developer in Hong Kong, comprising private companies owned by the Ng Family as well as three listed companies. Our core business encompasses the development of residential properties, offices, industrial and retail properties for sale and investment in China (Hong Kong and Mainland), Singapore and Australia.
We are committed to Creating Better Lifescapes by promoting sustainable, green living in harmony with the environment, creating inspiring spaces through innovative design, while nurturing a sense of community in everything we do. Adhering to our core values of integrity, customer first, quality excellence, respect, teamwork, continuous improvement, preparedness, and sense of urgency, we work together closely to make Sino the preferred choice for customers, investors and employees.
To find out more about our commitment to Creating Better Lifescapes, please visit
Responsibilities:
The successful candidate will have a strong background in building and maintaining scalable data pipelines and will be proficient in leveraging modern data technologies on the Azure cloud platform. He/She will play a key role in designing, developing, and optimizing our data architecture to support our data-driven decision-making processes.
He/She will be expected to perform the following:
Design, construct, install, test, and maintain highly scalable and reliable data management and processing systems 
Develop and manage ETL/ELT pipelines to ingest data from a wide variety of data sources and systems, ensuring data quality and integrity
Build and optimize data models on Azure Synapse Analytics / Microsoft Fabric for analytical and reporting purposes
Implement and manage data storage solutions using the Medallion Architecture with Delta Lake, including creating and maintaining tables, handling schema evolution, and ensuring ACID compliance for data transactions (a minimal sketch follows this list)
Utilize PySpark, Spark SQL, Python and SQL for data transformation, manipulation, and analysis within our data platforms
Develop interactive dashboards, reports, and visualizations using Power BI, Qlik and SAC to provide actionable insights to business users
Collaborate with data analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions
Monitor and troubleshoot data pipeline performance, implementing optimizations and resolving issues in a timely manner
Ensure data governance and security best practices are implemented and adhered to throughout the data lifecycle
Stay current with the latest trends and technologies in data engineering and the Azure ecosystem
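For illustration, a minimal PySpark sketch of a bronze-to-silver hop in a Medallion layout with Delta Lake; the lake paths and column names are hypothetical, not Sino Group's actual data model:

```python
# Minimal bronze-to-silver sketch in a medallion layout with Delta Lake --
# paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/transactions")

silver = (
    bronze
    .dropDuplicates(["transaction_id"])          # basic cleansing
    .withColumn("ingest_date", F.to_date("ingested_at"))
)

(
    silver.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")               # tolerate additive schema evolution
    .save("/lake/silver/transactions")
)
```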
Requirements:
Degree holder in Computer Science, Engineering, Information Systems, or a related technical field 
Minimum of 2 years of proven experience as a Data Engineer or in a similar role, with a clear progression of responsibilities and accomplishments
Mastery of PySpark, Spark SQL, Python and SQL for large-scale data processing and analysis
Deep, hands-on experience with Microsoft Azure data services, particularly Azure Synapse Analytics, Azure Data Factory, Azure Data Lake Storage as well as Microsoft Fabric, including architectural design and cost management
In-depth, expert-level knowledge of Medallion Architecture with Delta Lake, including its architecture, advanced features, and practical implementation in enterprise-level data lakes
Strong proficiency in data visualization and business intelligence tools, specifically Power BI and Qlik, with experience in developing complex reports and data models
Expertise in data modeling, Data Warehousing, Data Lakehouse and Delta Lakehouse concepts, and building and orchestrating enterprise-grade ETL/ELT pipelines
Demonstrated experience with software engineering best practices, including leading code reviews, managing CI/CD pipelines (e.g., Azure DevOps), and working in an agile environment
Exceptional problem-solving, analytical, and communication skills
Good command of both spoken and written English and Chinese
We are an equal opportunity employer offering an inclusive and diverse workplace where people are valued and respected.
Before submitting your application, please read the Personal Data (Privacy) Policy and Personal Information Collection Statement at our Company website. Information provided will be treated in strict confidence and used for recruitment purposes only. If we have not contacted you within 4 weeks after your submission, you may consider your application unsuccessful.
Full-time, Permanent
Data Engineer
Posted today
Job Description
Responsibilities
- Design and implement robust technical systems for data collection, integration, safeguarding, storage, organization, and transformation to deliver optimized data solutions.
- Build, deploy, and monitor scalable data pipelines to ensure seamless data flow and reliability.
- Create custom scripts and code to cleanse, refine, and enhance data quality and usability.
- Collaborate with business stakeholders to improve reporting tools, perform unit testing and database querying for analysis and troubleshooting, and maintain comprehensive technical documentation as a key reference resource.
Skills Required
- A Bachelor's degree in Computer Science, Information Technology, Statistics, or a closely related field is highly preferred.
- Candidates must have at least 2 to 4 years of hands-on experience working on data engineering initiatives.
- Proven expertise in Cloud Data Platform solutions, with relevant certifications considered a strong advantage.
- Hands-on experience as an SQL/Oracle database or Python developer, coupled with proficiency in ETL processes for complex data projects and programming languages such as Python and Java.
- Demonstrated ability to build, manage, and maintain data assets within Cloud Data Platforms.
- Proficient in Cantonese and English
Data Engineer
Posted today
Job Description
Techtronic Industries Company Limited ("TTI", or the "Company"), founded in 1985 by German entrepreneur Horst Julius Pudwill, is a world leader in cordless technology. As a pioneer in Power Tools, Outdoor Power Equipment, Floorcare and Cleaning Products, TTI serves professional, industrial, Do It Yourself (DIY), and consumer markets worldwide. With more than 47,000 employees globally, the company's relentless focus on innovation and strategic growth has established its leading position in the industries it serves.
MILWAUKEE is at the forefront of TTI's professional tool portfolio. With global research and development headquartered in Brookfield, Wisconsin, the historic MILWAUKEE brand is renowned for driving innovation, safety, and jobsite productivity worldwide. The RYOBI brand, headquartered in Greenville, South Carolina, remains the top choice for DIYers and continues to set the standard in DIY tool innovation. TTI's diverse brand portfolio also includes trusted brands like AEG, EMPIRE, HOMELITE, and leading floorcare names HOOVER, ORECK, VAX, and DIRT DEVIL (based in Charlotte, North Carolina).
TTI's international recognition and renowned brand portfolio are supported by a strong ownership structure that underscores the company's global reach and stability. The Pudwill family remains the company's largest shareholder, with the remaining ownership held largely by institutional investors at North American and European-owned firms. TTI is publicly traded on the Hong Kong Stock Exchange and is a constituent stock of the Hang Seng Index, operating globally with a strong commitment to environmental, social, and corporate governance standards.
TTI is currently seeking a Data Engineer to help develop and maintain our Data Platform for Asia offices.
Responsibilities:
- Develop technical mechanisms to collect, integrate, protect, store, arrange and transform data as data delivery solutions
- Construct and monitor data pipelines
- Develop scripts and custom code to refine data
- Work closely with the business user to enhance existing tools supporting reporting
- Conduct unit tests and develop database queries to analyze the results and troubleshoot any issues that arise
- Develop and update technical documentation for senior members of staff and colleagues to serve as a reference guide
- Teamwork: Work as part of a tight-knit team; the ability to work professionally and virtually with others is vital to success in this job
- Recommend and develop robust solutions in a timely manner based on requirement specifications, working with IT and business stakeholders
- Provide ongoing maintenance, problem investigation and support
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments and clear code
- Occasional travel to our world-class factories is required as needed
General Skill & Experience Requirements:
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related discipline is desired
- Must possess 2-4 years of experience in data engineering projects
- Motivated, independent and self-sufficient. Able to receive an assigned task and see it through to completion with minimal supervision
- Ability to write detailed design documents, communicate the vision and defend the position is an advantage
- Fluency in English and Mandarin is a plus
Data Engineer Skillset:
- Demonstrated aptitude in: Cloud Data Platform solutions; Certification a plus
- Experience as SQL/Oracle DB/Python Developer
- Experience in ETL process on complex data projects
- Proficiency in languages like Python, Java
- Experience in creating and maintaining data assets in Cloud Data Platform
- Experience with optimization on data pipelines
- Knowledge of general data modeling concepts
Preferable Technical Skillset:
- Excellent written and verbal communication skills
- Experience in machine learning and AI projects is a plus
- Ability to work effectively with remote teammates and users
- Domain knowledge on manufacturing sector is a plus
- Understanding of process engineering concepts and measurements is a plus
We offer 5-day week, competitive remuneration package including double pay, medical, life & personal accident insurances, education sponsorship and good career prospects to the right candidate. Interested parties please send your resume with expected salary by clicking Quick Apply.
(All personal data collected would be used for recruitment purpose only)
Explore exciting Data Engineer opportunities. Data Engineers are in high demand, responsible for designing, building, and maintaining data pipelines and systems. These roles involve extracting, transforming, and loading (ETL) data from various sources into data warehouses or data lakes.
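As a rough sketch of that extract-transform-load pattern in plain Python (the file paths and cleaning rules are hypothetical stand-ins for real sources and warehouses):

```python
# Minimal extract-transform-load sketch -- file paths and the cleaning
# rules are hypothetical placeholders for real sources and warehouses.
import csv

def extract(path: str) -> list:
    # Extract: read raw rows from a source file.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list) -> list:
    # Transform: drop rows missing an id and normalize country codes.
    return [
        {**row, "country": row["country"].strip().upper()}
        for row in rows
        if row.get("id")
    ]

def load(rows: list, path: str) -> None:
    # Load: write cleaned rows to the destination.
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    load(transform(extract("raw_customers.csv")), "clean_customers.csv")
```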