218 Senior Data Engineer jobs in Hong Kong
AI Engineer/Research Programmers/Research Data Scientist/Big Data engineer
Posted today
Job Description
AI Engineer/Research Programmers/Research Data Scientist/Big Data engineer
Requirements:
Knowledge of Artificial Intelligence, blockchain, cybersecurity, and modern software development; extremely enthusiastic about programming and learning new technologies. Smart contract development is an advantage.
Knowledge of and research experience in various AI algorithms, Computer Vision, Recommender Systems, blockchain, and Knowledge Graphs.
Willing to learn different programming languages and participate in various software system development, including but not limited to AI, web apps, DApps, mobile apps, server systems, and cybersecurity solutions.
Able to work as a team with young teammates and mature supervisors, both locally and remotely.
PhD, Master's or Bachelor's degree in Computer Science, Applied Mathematics, Mathematics, Statistics, Engineering, Technology, Operations Research, Management Science, Knowledge Graph, or any IT or STEM related discipline.
Result oriented; flexible working hours acceptable.
Working experience: 0 to 5 years
Experience in education is a plus.
Start-up mentality and being a self-driven learner are a must.
Working office: Cyberport
Job responsibilities:
(a) Collect internal and external customers' business requirements
(b) Prepare functional specifications and test plans
(c) Develop and implement backend applications
(d) Participate in management and operational meetings to report progress on assigned tasks and prepare meeting reports
(e) Perform the full software development life cycle: design, coding, testing, and implementation of backend and frontend systems
(f) Compile and analyze data, processes, and code to troubleshoot problems
(g) Develop administrative interfaces for administration and management access
(h) Research recent technology developments for adoption in company projects
(i) Research and publish papers and patents on AI algorithms, including but not limited to Computer Vision, Recommender Systems, blockchain, and Knowledge Graphs
(j) Provide IT services to the company when required
(k) Code in languages such as C#, Python, TensorFlow, R, Solidity, Java, JavaScript, or Go
Interested parties, please apply with a detailed resume stating current and expected salary, and send it via
All personal information received will be treated in strict confidence and used for employment purposes only. If you have not been contacted within 3 months, you may consider your application unsuccessful. Your application may also be transferred to other companies within our group for suitable openings. Unsuccessful applications will be retained for not more than 12 months and then destroyed.
Job type: Part-time
Salary: $20,000.00–$32,000.00 per month
Work location: In person
Expected start date: 2025/08/02
Data Engineer / Senior Data Engineer
Posted 4 days ago
Job Description
Overview
Reporting to the Data & Analytics Director, this position is for a Data Engineer who is passionate about building robust, scalable data solutions and lightweight AI applications. While our ecosystem is built on the Google Cloud Platform (GCP), we value strong engineering fundamentals and welcome candidates with experience in similar technologies from other cloud environments (like AWS or Azure).
You will work closely with data analysts, media teams and business stakeholders to build the foundational technology that drives business growth and operational efficiency. A key focus of this role will be developing and deploying lightweight internal tools on Google Cloud Run that are powered by Generative AI. This role offers the opportunity to directly collaborate with clients and vendors, implementing data & analytics strategies for OMG’s clients. You will contribute to our mission of empowering our agencies with advanced data solutions.
Key Responsibilities
- Data Pipeline Architecture & Development: Design, build, and maintain resilient and scalable ETL/ELT pipelines on GCP to process data and load it into BigQuery.
- Workflow Automation & Solution Design: Proactively identify opportunities to automate day-to-day workflows and repetitive tasks across the business. Design and implement automated solutions that reduce manual effort, increase efficiency, and allow teams to focus on higher-value activities.
- Develop Lightweight AI-Powered Tools: Build simple, internal-use web tools using Python frameworks (e.g., Streamlit, Flask). The role involves writing scripts and developing lightweight applications that integrate Generative AI models (e.g., Google's Gemini via Vertex AI) to support tasks like natural language querying, report summarization, and basic insight generation (see the sketch after this list).
- Application Deployment: You will be responsible for containerizing these AI-powered applications with Docker and deploying them on Google Cloud Run, our primary service for hosting container-based applications and APIs.
- Data Governance & Quality: Implement and automate data quality checks to ensure the accuracy and consistency of data within our BigQuery data warehouse.
- Technical Strategy & Innovation: Lead the exploration and implementation of Generative AI use cases within our data platform. You will evaluate new models and services to build innovative solutions that create tangible business value.
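To illustrate the kind of lightweight tool the responsibilities above describe, below is a minimal sketch of a Streamlit app that forwards a natural-language question to Gemini through Vertex AI. This is not the employer's implementation: the project ID, region, and model name are placeholder assumptions, and it presumes the `streamlit` and `google-cloud-aiplatform` packages plus default GCP credentials.

```python
# Minimal sketch: a Streamlit tool that forwards a user question to Gemini
# via Vertex AI. PROJECT_ID, LOCATION, and the model name are placeholders.
import streamlit as st
import vertexai
from vertexai.generative_models import GenerativeModel

PROJECT_ID = "my-gcp-project"   # hypothetical project ID
LOCATION = "us-central1"        # hypothetical region

vertexai.init(project=PROJECT_ID, location=LOCATION)
model = GenerativeModel("gemini-1.5-flash")  # assumed available model

st.title("Ask the data team's assistant")
question = st.text_input("Ask a question about our reports:")

if st.button("Submit") and question:
    # generate_content returns a response whose .text holds the reply
    response = model.generate_content(question)
    st.write(response.text)
```

Containerized with Docker, an app like this fits the Cloud Run deployment path described in the Application Deployment responsibility above.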
Qualifications
- 3+ years of experience in a data engineering or similar software engineering role.
- Strong programming skills in Python, with experience using data-related libraries (e.g., Pandas, Polars).
- Proven experience with at least one major cloud platform (GCP, AWS, or Azure), with a willingness to specialize in the GCP ecosystem.
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical field is a plus.
While direct experience with our specific tools is a plus, we value transferable skills and a strong foundation in equivalent technologies.
- Data Warehousing: Google BigQuery (Equivalent experience: Snowflake, Amazon Redshift, Azure Synapse)
- Data Processing & Orchestration: Cloud Composer (Airflow), Cloud Dataflow (Spark), and Cloud Functions (Equivalent experience: AWS Lambda, Azure Functions)
- Application & API Deployment: Google Cloud Run, using Docker for containerization. (Equivalent experience: Kubernetes, AWS Fargate, Azure Container Apps)
- Generative AI: Experience or strong interest in integrating large language models (LLMs) via APIs (e.g., Google Vertex AI, OpenAI).
- Web Application Development: Experience in building lightweight data applications or internal tools with Python frameworks (e.g., Streamlit, Flask).
- Domain Knowledge: Familiarity with digital marketing tools and ad platforms (e.g., Google Ads, Meta Ads, Google Analytics) is a plus.
- Analytical Mindset: You have strong analytical and problem-solving abilities to tackle complex data challenges.
- Excellent Communicator: You can effectively partner with both technical and non-technical stakeholders to translate business needs into technical solutions.
- Strong Sense of Project Ownership: You can take technical projects from conception to completion with autonomy and accountability.
Employment type: Full-time
Job function: Engineering, Project Management, and Information Technology
Industries: Advertising Services
Data Engineer, Data
Posted today
Job Description
We offer work from home (Max. 2 days per week), 14-20 days' annual leave, double pay, discretionary bonus, overtime pay, medical/dental/life insurance, five-day work week.
As a Data Management Engineer, you will play a critical role in ensuring the integrity, security, and efficiency of our data platform. You will collaborate closely with cross-functional teams to implement governance frameworks, enforce data standards, and optimize resource usage. Your work will directly support the organization's data strategy and compliance posture.
Job Description:
Lead the design, implementation and deployment of a master data management architecture that encompasses all customer source systems to enable data sharing across different regions, business units and departments
Operationalize Enterprise Master Data Repository, to enforce centralized governance controls at a global level
Identify and build data quality rules, investigate and remediate data quality issues (an illustrative sketch follows this list)
Design and build data quality dashboards with Power BI
Evaluate, select and implement appropriate data management technologies to address data governance challenges
Manage vendors to complete data governance activities, from vendor selection, data discovery, Proof of Concept (PoC) development, implementation to global adoption
Design and implement data governance solutions that incorporate AI-driven data management techniques to improve data quality and enhance data governance processes
Monitor data platform resource utilization and performance metrics
Identify and recommend opportunities for cost optimization and operational efficiency
Lead analysis of the current data platforms (e.g., logs) to detect critical deficiencies and recommend solutions for improvement
Engage with key data stakeholders to outline data objectives and gather data requirements. Execute solutions encompassing ownership, accountability, streamlined processes, robust procedures, stringent data quality measures, security protocols, and other pertinent areas to drive successful implementation
Implement the Architecture Governance Standard, Platform Design Principles, Platform Security, and Data Compliance Standard
Implement the Data Classification Standard to enhance data management and security measures within the organization
Take charge of the Global Data Quality Forum and establish regional forums if required to foster collaboration and knowledge sharing on data quality practices
Conduct market research and collaborate with vendors to evaluate cutting-edge data management technologies, trends, and products. Select and deploy the most suitable solutions for Global Data and Analytics Governance initiatives, ensuring seamless scalability
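As an illustration of the data quality rules mentioned above, the sketch below encodes declarative expectations with Delta Live Tables (DLT) on Azure Databricks, which appears in this role's stack. The table, source, and rule definitions are hypothetical, and the code assumes it runs inside a DLT pipeline where `spark` is provided by the runtime.

```python
# Hypothetical sketch: declarative data quality rules with Delta Live Tables
# (DLT) on Databricks. Table names, columns, and rules are placeholders.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Customer records cleaned under basic quality rules")
@dlt.expect_or_drop("valid_customer_id", "customer_id IS NOT NULL")
@dlt.expect_or_drop("valid_region", "region IN ('APAC', 'EMEA', 'AMER')")
@dlt.expect("recent_record", "updated_at >= '2020-01-01'")  # warn, keep row
def clean_customers():
    # `spark` is injected by the DLT runtime; `raw.customers` is hypothetical
    return (
        spark.read.table("raw.customers")
        .withColumn("ingested_at", F.current_timestamp())
    )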
Requirements:
Bachelor's degree from a recognized university in Computer Science, Information Engineering, or related field
At least 6 years of experience in Data Engineering, IT, Data Governance, Data Management or related field
Knowledge of data management best practices and technologies
Knowledge of data governance, security and observability
Proven ability to identify innovation opportunities and deliver innovative data management solutions
Hands-on experience in SQL, Python, and Power BI
Experience in Azure Databricks Unity Catalog and DLT
Excellent analytical and problem-solving skills
Fluent in English speaking and writing
Willingness to travel, as needed
The requirements below are considered advantages, but are not a must.
Knowledge of data related regulatory requirements and emerging trends and issues
Experience in programming languages including PySpark, R, Java, Scala
Experience in working with cross-functional teams in global settings
Interested parties please send full resume with employment history and expected salary to HRA Department, Yusen Logistics Global Management (Hong Kong) Limited by email.
Yusen Logistics Global Management (Hong Kong) Limited is an equal opportunity employer. All information collected will be used for recruitment purpose only.
About Yusen Logistics
Yusen Logistics is working to become the world's preferred supply chain logistics company. Our complete offer is designed to forge better connections between businesses, customers and communities – through innovative supply chain management, freight forwarding, warehousing and distribution services. As a company we're dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world's preferred choice.
Data Engineer
Posted 10 days ago
Job Description
We are a cutting-edge AI startup based in Hong Kong, specializing in next-generation video generation technology. Our mission is to push the boundaries of what's possible in AI-driven video generation through innovation in foundation models. As a growing startup, we offer a dynamic environment where your research can have immediate impact on technology development.
Position Overview
We are seeking a skilled Data Engineer to design, build, and optimize our data pipelines and infrastructure. The ideal candidate will have strong experience in handling large-scale video datasets and building efficient data processing systems for machine learning applications.
Key Responsibilities
- Design and implement scalable data pipelines for processing, storing, and managing large-scale video datasets
- Build and maintain data infrastructure for training data preparation and feature engineering
- Develop efficient ETL processes for various data sources including videos, images, and metadata
- Create and optimize data storage solutions for high-performance data access
- Implement data quality monitoring and validation systems
- Collaborate with ML researchers to support model training and evaluation needs
- Ensure data security and compliance across all data operations
Required Qualifications
- Master's degree in Computer Science, Software Engineering, or related field
- 8+ years of experience in data engineering roles at tech companies
- Strong programming skills in Python and SQL
- Experience with big data technologies (Spark, Hadoop ecosystem)
- Proven track record in building and maintaining data pipelines
- Experience with cloud platforms (AWS/GCP/Azure or Alibaba Cloud/Tencent Cloud)
- Strong understanding of data modeling and database design
Preferred Qualifications
- Experience with video processing and storage systems
- Knowledge of ML/AI data pipeline requirements
- Familiarity with distributed computing systems
- Experience with streaming data processing
- Understanding of data privacy and security best practices
- Experience with Cloud services and data infrastructure
Technical Skills
Data Processing & Storage
- Big Data: Spark, Hadoop, Hive
- Data Warehousing: Snowflake, Amazon Redshift
- Infrastructure as Code: Terraform, Ansible
Programming & Tools
- Languages: Python, SQL, Shell scripting
- ETL Tools: Airflow, Luigi
- Version Control: Git
Video Processing
- FFmpeg, OpenCV
- Video compression and optimization techniques
- Video metadata extraction and management (see the sketch below)
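As a sketch of the metadata-extraction work listed above (not the company's actual tooling), the snippet below shells out to ffprobe, which ships with FFmpeg, to pull codec, resolution, and duration from a video file. It assumes ffprobe is on the PATH; the file name is a placeholder.

```python
# Minimal sketch: extract video metadata with ffprobe (part of FFmpeg).
# Assumes ffprobe is on the PATH; the input path is a placeholder.
import json
import subprocess

def probe_video(path: str) -> dict:
    """Return format- and stream-level metadata for a video file."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-print_format", "json",
            "-show_format", "-show_streams",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

meta = probe_video("sample.mp4")  # placeholder file
video = next(s for s in meta["streams"] if s["codec_type"] == "video")
print(video["codec_name"], f'{video["width"]}x{video["height"]}',
      meta["format"]["duration"], "seconds")
```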
What We Offer
- Opportunity to build critical infrastructure for cutting-edge AI technology
- Competitive salary and equity package
- Modern tech stack and tools
- Collaborative and innovative work environment
- Health and wellness benefits
Location
- Hong Kong (on-site, Hong Kong Science and Technology Park)
Expected Impact
- Shape the foundation of our data infrastructure
- Build and mentor a world-class technology team
To Apply:
Please submit:
- Detailed CV with publications and major projects
- Brief description of the most complex data pipeline you've built
- Links to any open-source contributions or technical blogs
To apply or learn more about this position, please contact
Seniority level: Mid-Senior level
Employment type: Full-time
Industries: Software Development
Data Engineer
Posted 21 days ago
Job Description
We are seeking a skilled and detail-oriented Data Pipelines Engineer to join the Data Warehouse for Trustee (DWT) team. The role liaises with different parties to support troubleshooting and problem solving of Data Warehouse issues and to identify improvement areas in ETL pipelines and data warehouse architecture. The ideal candidate will possess strong analytical skills, excellent communication abilities, and a deep understanding of the dependencies and relationships of upstream and downstream systems related to the data warehouse.
Responsibilities
- Translate data pipeline requirements into data pipeline design, guiding and directing the design by working closely with stakeholders including the architecture team, external developers, data consumers, data providers, and internal/external business users.
- Contribute to use case development (e.g., workshops) to gather and validate business requirements. Align expectations from different stakeholders to streamline, expedite, and resolve client queries. Experience in project management and business transformation in financial and insurance industries is an advantage.
- Model and design the ETL pipeline data structure, storage, integration, integrity checks, and reconciliation. Standardize exception control and ensure traceability during troubleshooting (a reconciliation sketch follows this list).
- Document and write technical specifications for functional and non-functional requirements of the solution.
- Design data models/platforms to enable scalable growth while minimizing risk and cost of changes for a large-scale data platform.
- Analyze new data sources with a structured data quality evaluation approach and collaborate with stakeholders on the impact of integrating new data into existing pipelines and models.
- Bridge the gap between business requirements and ETL logic by troubleshooting data discrepancies and implementing scalable solutions.
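To make the reconciliation responsibility above concrete, here is a minimal hypothetical sketch that compares row counts and a column checksum between a staging table and its warehouse target. The connection URL, table names, and column are placeholders; it assumes SQLAlchemy with a suitable Oracle or SQL Server driver installed.

```python
# Hypothetical sketch: reconcile a staging table against its warehouse target
# by comparing row counts and a numeric column checksum. Connection string,
# table names, and columns are placeholders.
from sqlalchemy import create_engine, text

engine = create_engine("oracle+oracledb://user:pass@host:1521/?service_name=DWT")  # placeholder

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "amount_sum": "SELECT SUM(txn_amount) FROM {table}",  # hypothetical column
}

def reconcile(source_table: str, target_table: str) -> list[str]:
    """Return a list of human-readable discrepancies between two tables."""
    issues = []
    with engine.connect() as conn:
        for name, sql in CHECKS.items():
            src = conn.execute(text(sql.format(table=source_table))).scalar()
            tgt = conn.execute(text(sql.format(table=target_table))).scalar()
            if src != tgt:
                issues.append(f"{name}: source={src}, target={tgt}")
    return issues

for issue in reconcile("stg.transactions", "dw.fact_transactions"):
    print("DISCREPANCY:", issue)
```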
Requirements
- Bachelor's degree (or higher) in project management, business management, mathematics, statistics, computer science, engineering, or a related field.
- At least 5 years of IT experience, with 2 years in data migration and/or data warehouse pipeline projects. At least 3 years of experience with Oracle or SQL Server SQL development.
- Strong technical understanding of data quality metrics, data modelling, design and architecture principles and techniques across master data, transaction data and data warehouse.
- Experience with Stored Procedures (e.g., Oracle PL/SQL) and SQL DDL/DML.
- Hadoop, Python, Java Spring Boot, Docker or OCP experience is advantageous.
- Experience with Power BI tool and its data objects, report objects, and service objects in different scenarios.
- Experience in star schema and snowflake schema design and dimensional modelling.
- Knowledge of Power BI Row Level Security.
- Experience using JSON API data services for rendering Power BI reports.
- Knowledge of Power Query M script is an advantage.
- Proficient in both spoken and written English and Chinese (Mandarin/Cantonese).
- Proactive with good problem-solving and multitasking skills.
All personal data provided by candidates will be used for recruitment purposes only by HKT Services Limited in accordance with HKT's Privacy Statement, which is available on our website. Unless otherwise instructed in writing, candidates may be considered for other suitable positions within the Group (HKT Limited, PCCW Limited and their subsidiaries, affiliates and associated companies). Personal data of unsuccessful candidates will normally be destroyed 24 months after rejection of the candidate's application. If you have any questions regarding your personal data held by HKT Services Limited, please refer to HKT's Privacy Statement or contact our Privacy Compliance Officer by writing to or GPO Box 9896, Hong Kong.
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology
Industries: Technology, Information and Internet
Data Engineer
Posted today
Job Description
Post date: 24 September 2025
Ref: DE
Department: Information Technology
Location: Kowloon Bay
RESPONSIBILITIES
The successful candidate will have a strong background in building and maintaining scalable data pipelines and will be proficient in leveraging modern data technologies on the Azure cloud platform. He/She will play a key role in designing, developing, and optimizing our data architecture to support our data-driven decision-making processes.
He/She will be expected to perform the following:
- Design, construct, install, test, and maintain highly scalable and reliable data management and processing systems
- Develop and manage ETL/ELT pipelines to ingest data from a wide variety of data sources and systems, ensuring data quality and integrity
- Build and optimize data models on Azure Synapse Analytics / Microsoft Fabric for analytical and reporting purposes
- Implement and manage data storage solutions using Medallion Architecture with Delta Lake, including creating and maintaining tables, handling schema evolution, and ensuring ACID compliance for data transactions (a PySpark sketch follows this list)
- Utilize PySpark, Spark SQL, Python and SQL for data transformation, manipulation, and analysis within our data platforms
- Develop interactive dashboards, reports, and visualizations using Power BI, Qlik and SAC to provide actionable insights to business users
- Collaborate with data analysts and business stakeholders to understand data requirements and deliver appropriate data solutions
- Monitor and troubleshoot data pipeline performance, implementing optimizations and resolving issues in a timely manner
- Ensure data governance and security best practices are implemented and adhered to throughout the data lifecycle
- Stay current with the latest trends and technologies in data engineering and the Azure ecosystem
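For illustration only, the sketch below shows a bronze-to-silver promotion of the kind the Medallion bullet above describes, written in PySpark against Delta tables. The table and column names are invented, and it assumes a Spark session with Delta Lake support (Synapse, Fabric, or Databricks).

```python
# Hypothetical sketch: bronze -> silver promotion in a Medallion architecture
# on Delta Lake. Table and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Bronze: raw ingested events, kept as delivered
bronze = spark.read.table("bronze.sales_events")

# Silver: deduplicated, typed, validated records
silver = (
    bronze
    .dropDuplicates(["event_id"])                          # assumed key
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("amount").isNotNull())
)

# Delta writes are ACID; overwriteSchema allows controlled schema evolution
(silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("silver.sales_events"))
```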
REQUIREMENTS
- Degree holder in Computer Science, Engineering, Information Systems, or a related technical field
- Minimum of 2 years of proven experience as a Data Engineer or in a similar role, with a clear progression of responsibilities and accomplishments
- Mastery of PySpark, Spark SQL, Python and SQL for large-scale data processing and analysis
- Deep, hands-on experience with Microsoft Azure data services, particularly Azure Synapse Analytics, Azure Data Factory, Azure Data Lake Storage as well as Microsoft Fabric, including architectural design and cost management
- In-depth, expert-level knowledge of Medallion Architecture with Delta Lake, including its architecture, advanced features, and practical implementation in enterprise-level data lakes
- Strong proficiency in data visualization and business intelligence tools, specifically Power BI and Qlik, with experience in developing complex reports and data models
- Expertise in data modeling, Data Warehousing, Data Lakehouse and Delta Lakehouse concepts, and building and orchestrating enterprise-grade ETL/ELT pipelines
- Demonstrated experience with software engineering best practices, including leading code reviews, managing CI/CD pipelines (e.g., Azure DevOps), and working in an agile environment
- Exceptional problem-solving, analytical, and communication skills
- Good command of both spoken and written English and Chinese
Data Engineer
Posted today
Job Description
Responsibilities
- Design and deploy Proof of Concepts (PoCs), Business Intelligence (BI) reports, and ETL processes
- Collaborate with internal IT teams and stakeholders to design and implement effective solutions
- Maintain and update technical documentation throughout the enhancement cycle to accurately reflect changes
Requirements
- Possess a university degree or a professional qualification in Information Technology, along with at least 3 years of substantial and relevant work experience
- Experience in the development and use of Microsoft Power BI, SSIS, SSRS, and SSAS
- Experience with Excel Power Query and Power Pivot
- Experience with ETL processes and tools, including C# and .NET
- Involvement in designing and developing SQL queries, stored procedures, and functions
- Knowledge of data modelling and dimensional data modelling concepts
- Understanding of data visualization practices and principles
- Familiarity with Oracle PL/SQL development and performance tuning
- Exposure to Java, Spring Boot, ReactJS
- A strong understanding of the regulatory environment within the securities sector would be considered an advantage
Data Engineer
Posted today
Job Description
Hutchison Telecom Hong Kong is a leading digital operator in Hong Kong, committed to channelling the latest technologies into innovations that set market trends and steer industry development. We offer diverse and advanced mobile services under the 3, 3SUPREME, MO+ and SoSIM brands in the consumer market, and are dedicated to developing enterprise solutions in the corporate market under the 3Business brand.
We are currently recruiting exceptional candidates to join our team as we enter the digital era powered by advanced 5G tech. To learn more about us, visit
Responsibilities:
- Identify data sources; transform, correlate, and aggregate information; load useful records into the Data Warehouse; and build Data Marts for users to access
- Ensure data integrity, monitor job alerts, and rescue/restore problem data records and jobs
- Analyze information and produce useful summary figures and reports for management
- Tune jobs, scripts, and SQL statements
- Automate user reports
- Provide system maintenance support
Requirements:
- Degree in Computer Science or related disciplines
- 2+ years of relevant working experience, preferably gained in the mobile network industry
- Good understanding of structured and unstructured data manipulation
- Previous exposure to statistical analysis and presentation is a plus
- Familiarity with AI, ML, LLM, and database applications is an advantage
- Operating system knowledge in Unix/Linux, Windows Server, IOS
- Database/network knowledge in Oracle, MongoDB, MS-SQL, MySQL
- Programming language knowledge in SQL, Python, Java, JavaScript
- Good command of spoken and written English and Chinese
Apart from competitive remuneration package and exciting opportunity for career development within the Group, we provide attractive employee benefits such as free company shuttle, free company SIM card, staff discount and preferential SIM plan offers, comprehensive medical & insurance schemes, as well as a full range of other employee well-being provisions.
We appreciate your interest in joining us. Please submit your full resume with present and expected salary to or click the "QUICK APPLY" button.
We believe a diverse workforce drives our goals and contributes to the overall success of the Group. We strive to create a work environment that is respectful, inclusive, and free from any form of discrimination, harassment, and intimidation.
Being an equal opportunity employer, we embrace diversity and inclusion, and welcome talents from any backgrounds and conditions. Personal data collected will be treated in the strictest confidence and handled confidentially by authorised personnel for recruitment-related purposes only within the CK Hutchison Group of companies. The personal data of unsuccessful applicants will be destroyed after the recruitment exercise pursuant to the requirements of the Personal Data (Privacy) Ordinance in Hong Kong.
Data Engineer
Posted today
Job Description
About RedotPay
RedotPay is a global crypto payment fintech integrating blockchain solutions into traditional banking and finance infrastructure. Our user-friendly crypto platform empowers millions globally to spend and send crypto assets, ensuring faster, more accessible, and inclusive financial services. RedotPay advances financial inclusion for the unbanked and supports crypto enthusiasts, driving the global adoption of secure and flexible crypto-powered financial solutions. Join us in shaping the future of finance and making a meaningful impact on a global scale.
Job Summary
As a Data Engineer, you will be a key member of our data team, responsible for building and maintaining our robust, scalable, and efficient data pipelines. You will work closely with data scientists, analysts, and software engineers to ensure data is accessible, reliable, and ready for use. The ideal candidate is passionate about big data technologies, has a strong foundation in software engineering principles, and thrives in a collaborative environment.
Key Responsibilities
Pipeline Development & Architecture:
- Design, construct, install, test, and maintain highly scalable data pipelines and data models.
- Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources using SQL and cloud-based 'big data' technologies.
- Develop and implement processes for data modeling, mining, and production.
- Select and integrate any new big data tools and frameworks required to provide requested capabilities.
Data Management & Quality:
- Implement systems for monitoring data quality, ensuring production data is always accurate and available for key stakeholders and business processes.
- Develop and maintain scalable and sustainable data warehousing solutions, including data lakes and data marts.
- Manage and orchestrate data workflows using modern tools (e.g., Airflow, dbt, Prefect); a minimal DAG sketch follows.
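By way of illustration, here is a minimal Airflow DAG wiring an extract-and-load task to a downstream transform task. The task bodies, DAG name, and schedule are placeholders, not a prescribed design.

```python
# Minimal sketch: a daily ETL workflow orchestrated with Airflow 2.x.
# Task bodies, DAG name, and the schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # placeholder: pull from source systems and load raw tables
    print("extract + load raw data")

def transform():
    # placeholder: run warehouse transformations (e.g., via dbt)
    print("run transformations")

with DAG(
    dag_id="daily_warehouse_refresh",   # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    el = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    t = PythonOperator(task_id="transform", python_callable=transform)
    el >> t  # transform runs only after extract-and-load succeeds
```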
Collaboration & Support:
- Collaborate with data scientists and analysts to support their data needs for advanced analytics, machine learning, and reporting.
- Work with software engineering teams to assist with data-related technical issues and support their data infrastructure needs.
- Translate complex business requirements into technical specifications.
Operational Excellence:
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Required Qualifications & Skills
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- 3+ years of proven experience as a Data Engineer or in a similar role.
- Strong programming skills in Python and SQL are essential.
- Deep experience with cloud data platforms such as AWS (Redshift, S3, Glue, EMR, Lambda), Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Cloud Composer), or Azure (Data Factory, Synapse Analytics, Databricks).
- Experience with big data tools and processing frameworks such as Spark (PySpark/SparkSQL) and Hadoop.
- Solid experience building and optimizing ETL/ELT pipelines and data architectures.
- Experience with relational SQL and NoSQL databases, including Postgres, MySQL, Cassandra, MongoDB.
- Experience with data pipeline and workflow management tools: Airflow, dbt, Luigi, etc.
- Understanding of data modeling, data warehousing, and data lake concepts (e.g., Star/Snowflake schema, Data Vault 2.0, Slowly Changing Dimensions).