189 Data Engineer jobs in Hong Kong

Data Engineer / Senior Data Engineer

Omnicom Media Group Hong Kong

Posted 3 days ago

Job Description

Overview

Reporting to the Data & Analytics Director, this position is for a Data Engineer who is passionate about building robust, scalable data solutions and lightweight AI applications. While our ecosystem is built on the Google Cloud Platform (GCP), we value strong engineering fundamentals and welcome candidates with experience in similar technologies from other cloud environments (like AWS or Azure).

You will work closely with data analysts, media teams and business stakeholders to build the foundational technology that drives business growth and operational efficiency. A key focus of this role will be developing and deploying lightweight internal tools on Google Cloud Run that are powered by Generative AI. This role offers the opportunity to directly collaborate with clients and vendors, implementing data & analytics strategies for OMG’s clients. You will contribute to our mission of empowering our agencies with advanced data solutions.

Key Responsibilities
  • Data Pipeline Architecture & Development: Design, build, and maintain resilient and scalable ETL/ELT pipelines on GCP to process data and load it into BigQuery.
  • Workflow Automation & Solution Design: Proactively identify opportunities to automate day-to-day workflows and repetitive tasks across the business. Design and implement automated solutions that reduce manual effort, increase efficiency, and allow teams to focus on higher-value activities.
  • Develop Lightweight AI-Powered Tools: Build simple, internal-use web tools using Python frameworks (e.g., Streamlit, Flask). The role involves writing scripts and developing lightweight applications that integrate Generative AI models (e.g., Google's Gemini via Vertex AI) to support tasks like natural language querying, report summarization, and basic insight generation.
  • Application Deployment: You will be responsible for containerizing these AI-powered applications with Docker and deploying them on Google Cloud Run, our primary service for hosting container-based applications and APIs.
  • Data Governance & Quality: Implement and automate data quality checks to ensure the accuracy and consistency of data within our BigQuery data warehouse.
  • Technical Strategy & Innovation: Lead the exploration and implementation of Generative AI use cases within our data platform. You will evaluate new models and services to build innovative solutions that create tangible business value.
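The data-quality responsibility above can be sketched in plain Python. This is a minimal illustration only: the function names and the media-spend fields are hypothetical, and in practice such checks would typically run inside BigQuery rather than in application code.

```python
# Illustrative data-quality checks of the kind the role automates.
# Function names and fields are hypothetical, not from the posting.

def check_nulls(rows, column):
    """Return row indices where `column` is missing or None."""
    return [i for i, row in enumerate(rows) if row.get(column) is None]

def check_unique(rows, column):
    """Return values of `column` that appear more than once."""
    seen, dupes = set(), set()
    for row in rows:
        value = row.get(column)
        if value in seen:
            dupes.add(value)
        seen.add(value)
    return sorted(dupes)

rows = [
    {"campaign_id": "A1", "spend": 120.0},
    {"campaign_id": "A1", "spend": None},
    {"campaign_id": "B2", "spend": 80.5},
]
print(check_nulls(rows, "spend"))        # -> [1]
print(check_unique(rows, "campaign_id")) # -> ['A1']
```

Checks like these are usually scheduled alongside the pipeline run so that bad loads are caught before reports are refreshed.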
Expected Qualifications
  • 3+ years of experience in a data engineering or similar software engineering role.
  • Strong programming skills in Python, with experience using data-related libraries (e.g., Pandas, Polars).
  • Proven experience with at least one major cloud platform (GCP, AWS, or Azure), with a willingness to specialize in the GCP ecosystem.
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical field is a plus.
Technical Proficiency & Our Stack

While direct experience with our specific tools is a plus, we value transferable skills and a strong foundation in equivalent technologies.

  • Data Warehousing: Google BigQuery (Equivalent experience: Snowflake, Amazon Redshift, Azure Synapse)
  • Data Processing & Orchestration: Cloud Composer (Airflow), Cloud Dataflow (Spark), and Cloud Functions (Equivalent experience: AWS Lambda, Azure Functions)
  • Application & API Deployment: Google Cloud Run, using Docker for containerization. (Equivalent experience: Kubernetes, AWS Fargate, Azure Container Apps)
  • Generative AI: Experience or strong interest in integrating large language models (LLMs) via APIs (e.g., Google Vertex AI, OpenAI).
  • Web Application Development: Experience in building lightweight data applications or internal tools with Python frameworks (e.g., Streamlit, Flask).
  • Domain Knowledge: Familiarity with digital marketing tools and ad platforms (e.g., Google Ads, Meta Ads, Google Analytics) is a plus.
Who You Are
  • Analytical Mindset: You have strong analytical and problem-solving abilities to tackle complex data challenges.
  • Excellent Communicator: You can effectively partner with both technical and non-technical stakeholders to translate business needs into technical solutions.
  • Strong Sense of Project Ownership: You can take technical projects from conception to completion with autonomy and accountability.
Employment Type
  • Full-time
Job Function
  • Engineering, Project Management, and Information Technology
Industries
  • Advertising Services

Data Engineer, Data

$1,200,000 - $2,400,000 · Yusen Logistics Global Management (Hong Kong) Limited

Posted today

Job Description

We offer work from home (Max. 2 days per week), 14-20 days' annual leave, double pay, discretionary bonus, overtime pay, medical/dental/life insurance, five-day work week.

As a Data Management Engineer, you will play a critical role in ensuring the integrity, security, and efficiency of our data platform. You will collaborate closely with cross-functional teams to implement governance frameworks, enforce data standards, and optimize resource usage. Your work will directly support the organization's data strategy and compliance posture.

Job Description:

  • Lead the design, implementation and deployment of a master data management architecture that encompasses all customer source systems to enable data sharing across different regions, business units and departments

  • Operationalize Enterprise Master Data Repository, to enforce centralized governance controls at a global level

  • Identify and build data quality rules, investigate and remediate data quality issues

  • Design and build data quality dashboards with Power BI

  • Evaluate, select and implement appropriate data management technologies to address data governance challenges

  • Manage vendors to complete data governance activities, from vendor selection, data discovery, Proof of Concept (PoC) development, implementation to global adoption

  • Design and implement data governance solutions that incorporate AI-driven data management techniques to improve data quality and enhance data governance processes

  • Monitor data platform resource utilization and performance metrics

  • Identify and recommend opportunities for cost optimization and operational efficiency

  • Lead analysis of the current data platforms (e.g., logs) to detect critical deficiencies and recommend solutions for improvement

  • Engage with key data stakeholders to outline data objectives and gather data requirements. Execute solutions encompassing ownership, accountability, streamlined processes, robust procedures, stringent data quality measures, security protocols, and other pertinent areas to drive successful implementation

  • Implement the Architecture Governance Standard, Platform Design Principles, Platform Security, and Data Compliance Standard

  • Implement the Data Classification Standard to enhance data management and security measures within the organization

  • Take charge of the Global Data Quality Forum and establish regional forums if required to foster collaboration and knowledge sharing on data quality practices

  • Conduct market research and collaborate with vendors to evaluate cutting-edge data management technologies, trends, and products. Select and deploy the most suitable solutions for Global Data and Analytics Governance initiatives, ensuring seamless scalability
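The "identify and build data quality rules" responsibility above can be sketched as declarative rules evaluated over records. This is a hypothetical illustration; the rule names and fields are invented, not part of Yusen's actual platform.

```python
# Hypothetical sketch of declarative data-quality rules of the kind
# described above; rule names and fields are illustrative only.

RULES = [
    {"name": "customer_id_present", "field": "customer_id",
     "check": lambda v: v not in (None, "")},
    {"name": "country_is_iso2", "field": "country",
     "check": lambda v: isinstance(v, str) and len(v) == 2},
]

def evaluate(record, rules=RULES):
    """Return the names of rules the record violates."""
    return [r["name"] for r in rules if not r["check"](record.get(r["field"]))]

good = {"customer_id": "C001", "country": "HK"}
bad = {"customer_id": "", "country": "Hong Kong"}
print(evaluate(good))  # -> []
print(evaluate(bad))   # -> ['customer_id_present', 'country_is_iso2']
```

Keeping rules as data rather than code makes them easy to report on, which is what feeds a Power BI data-quality dashboard like the one mentioned above.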

Requirements:

  • Bachelor's degree from a recognized university in Computer Science, Information Engineering, or related field

  • At least 6 years of experience in Data Engineering, IT, Data Governance, Data Management or related field

  • Knowledge of data management best practices and technologies

  • Knowledge of data governance, security and observability

  • Proven ability to identify innovation opportunities and deliver innovative data management solutions

  • Hands-on experience in SQL, Python, and Power BI

  • Experience in Azure Databricks Unity Catalog and DLT

  • Excellent analytical and problem-solving skills

  • Fluent in English speaking and writing

  • Willingness to travel, as needed

The requirements below are considered advantages, but are not a must.

  • Knowledge of data related regulatory requirements and emerging trends and issues

  • Experience in programming languages including PySpark, R, Java, Scala

  • Experience in working with cross-functional teams in global settings

Interested parties please send full resume with employment history and expected salary to HRA Department, Yusen Logistics Global Management (Hong Kong) Limited by email.

Yusen Logistics Global Management (Hong Kong) Limited is an equal opportunity employer. All information collected will be used for recruitment purpose only.

About Yusen Logistics

Yusen Logistics is working to become the world's preferred supply chain logistics company. Our complete offer is designed to forge better connections between businesses, customers and communities – through innovative supply chain management, freight forwarding, warehousing and distribution services. As a company we're dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world's preferred choice.

Data Engineer

Video Rebirth

Posted 9 days ago

Job Description

We are a cutting-edge AI startup specializing in next-generation video generation technology based in Hong Kong. Our mission is to push the boundaries of what's possible in AI-driven video generation through innovation of foundation model. As a growing startup, we offer a dynamic environment where your research can have immediate impact on technology development.

Position Overview

We are seeking a skilled Data Engineer to design, build, and optimize our data pipelines and infrastructure. The ideal candidate will have strong experience in handling large-scale video datasets and building efficient data processing systems for machine learning applications.

Key Responsibilities

  • Design and implement scalable data pipelines for processing, storing, and managing large-scale video datasets
  • Build and maintain data infrastructure for training data preparation and feature engineering
  • Develop efficient ETL processes for various data sources including videos, images, and metadata
  • Create and optimize data storage solutions for high-performance data access
  • Implement data quality monitoring and validation systems
  • Collaborate with ML researchers to support model training and evaluation needs
  • Ensure data security and compliance across all data operations
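One concrete piece of the "optimize data storage solutions" responsibility above is choosing a partitioned layout for video assets. The sketch below is an assumption for illustration only; the key scheme, dataset names, and fields are not from the posting.

```python
# Illustrative sketch of a date-partitioned object-store layout for
# video assets; the key scheme and field names are assumptions.
from datetime import date

def storage_key(dataset, video_id, captured_at, ext="mp4"):
    """Build a date-partitioned object key so scans can prune by day."""
    return (f"{dataset}/year={captured_at:%Y}/month={captured_at:%m}/"
            f"day={captured_at:%d}/{video_id}.{ext}")

key = storage_key("raw_clips", "vid_0001", date(2025, 9, 24))
print(key)  # -> raw_clips/year=2025/month=09/day=24/vid_0001.mp4
```

A layout like this lets training-data preparation jobs list only the partitions they need instead of scanning the whole bucket.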

Required Qualifications

  • Master's degree in Computer Science, Software Engineering, or related field
  • 8+ years of experience in data engineering roles at tech companies
  • Strong programming skills in Python and SQL
  • Experience with big data technologies (Spark, Hadoop ecosystem)
  • Proven track record in building and maintaining data pipelines
  • Experience with cloud platforms (AWS/GCP/Azure or Alibaba Cloud/Tencent Cloud)
  • Strong understanding of data modeling and database design

Preferred Qualifications

  • Experience with video processing and storage systems
  • Knowledge of ML/AI data pipeline requirements
  • Familiarity with distributed computing systems
  • Experience with streaming data processing
  • Understanding of data privacy and security best practices
  • Experience with Cloud services and data infrastructure

Technical Skills

Data Processing & Storage

  • Big Data: Spark, Hadoop, Hive
  • Data Warehousing: Snowflake, Amazon Redshift
  • Infrastructure as Code: Terraform, Ansible

Programming & Tools

  • Languages: Python, SQL, Shell scripting
  • ETL Tools: Airflow, Luigi
  • Version Control: Git

Video Processing

  • FFmpeg, OpenCV
  • Video compression and optimization techniques
  • Video metadata extraction and management

What We Offer

  • Opportunity to build critical infrastructure for cutting-edge AI technology
  • Competitive salary and equity package
  • Modern tech stack and tools
  • Collaborative and innovative work environment
  • Health and wellness benefits

Location

  • Hong Kong (on-site, Hong Kong Science and Technology Park)

Expected Impact

  • Shape the foundation of our data infrastructure
  • Build and mentor a world-class technology team

To Apply:

Please submit:

  • Detailed CV with publications and major projects
  • Brief description of the most complex data pipeline you've built
  • Links to any open-source contributions or technical blogs

To apply or learn more about this position, please contact

Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Industries
  • Software Development


Data Engineer

PCCW Solutions

Posted 20 days ago

Job Description

We are seeking a skilled and detail-oriented Data Pipelines Engineer to join the Data Warehouse for Trustee (DWT) team. The role liaises with different parties to troubleshoot and resolve Data Warehouse issues and to improve the ETL pipelines and data warehouse architecture. The ideal candidate will possess strong analytical skills, excellent communication abilities, and a deep understanding of the dependencies and relationships between upstream and downstream systems related to the data warehouse.

Responsibilities
  • Translate data pipeline requirements into data pipeline design, guiding and directing the design by working closely with stakeholders including the architecture team, external developers, data consumers, data providers, and internal/external business users.
  • Contribute to use case development (e.g., workshops) to gather and validate business requirements. Align expectations from different stakeholders to streamline, expedite, and resolve client queries. Experience in project management and business transformation in financial and insurance industries is an advantage.
  • Model and design the ETL pipeline data structure, storage, integration, integrity checks, and reconciliation. Standardize exception control and ensure traceability during troubleshooting.
  • Document and write technical specifications for functional and non-functional requirements of the solution.
  • Design data models/platforms to enable scalable growth while minimizing risk and cost of changes for a large-scale data platform.
  • Analyze new data sources with a structured data quality evaluation approach and collaborate with stakeholders on the impact of integrating new data into existing pipelines and models.
  • Bridge the gap between business requirements and ETL logic by troubleshooting data discrepancies and implementing scalable solutions.
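The reconciliation work described above (integrity checks between upstream sources and the warehouse) can be sketched as a simple source-to-target comparison. This is a minimal, hypothetical illustration in plain Python; in the DWT environment such checks would run in SQL against Oracle or SQL Server.

```python
# Minimal sketch of a source-to-target reconciliation of the kind used
# to keep ETL loads traceable; field names are illustrative.

def reconcile(source_rows, target_rows, amount_field="amount"):
    """Compare row counts and summed amounts; return discrepancies."""
    issues = []
    if len(source_rows) != len(target_rows):
        issues.append(f"row count {len(source_rows)} != {len(target_rows)}")
    src_sum = sum(r[amount_field] for r in source_rows)
    tgt_sum = sum(r[amount_field] for r in target_rows)
    if src_sum != tgt_sum:
        issues.append(f"sum {src_sum} != {tgt_sum}")
    return issues

source = [{"amount": 100}, {"amount": 250}]
target = [{"amount": 100}, {"amount": 200}]
print(reconcile(source, target))  # -> ['sum 350 != 300']
```

Logging the discrepancy list per load gives the standardized exception control and traceability the role calls for.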
Qualifications
  • Bachelor's degree (or higher) in project management, business management, mathematics, statistics, computer science, engineering, or related field.
  • At least 5 years of IT experience, including 2 years in data migration and/or data warehouse pipeline projects, and at least 3 years of experience with Oracle or SQL Server SQL development.
  • Strong technical understanding of data quality metrics, data modelling, design and architecture principles and techniques across master data, transaction data and data warehouse.
  • Experience with Stored Procedures (e.g., Oracle PL/SQL) and SQL DDL/DML.
  • Hadoop, Python, Java Spring Boot, Docker or OCP experience is advantageous.
  • Experience with Power BI tool and its data objects, report objects, and service objects in different scenarios.
  • Experience in Star Schema and Snowflake design and dimensional modelling.
  • Knowledge of Power BI Row Level Security.
  • Experience using JSON API data services for rendering Power BI reports.
  • Knowledge of Power Query M script is an advantage.
  • Proficient in both spoken and written English and Chinese (Mandarin/Cantonese).
  • Proactive with good problem-solving and multitasking skills.

All personal data provided by candidates will be used for recruitment purposes only by HKT Services Limited in accordance with HKT's Privacy Statement, which is available on our website. Unless otherwise instructed in writing, candidates may be considered for other suitable positions within the Group (HKT Limited, PCCW Limited and their subsidiaries, affiliates and associated companies). Personal data of unsuccessful candidates will normally be destroyed 24 months after rejection of the candidate's application. If you have any questions regarding your personal data held by HKT Services Limited, please refer to HKT's Privacy Statement or contact our Privacy Compliance Officer by writing to or GPO Box 9896, Hong Kong.

Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Technology, Information and Internet


Data Engineer

Kowloon Bay · $90,000 - $120,000 · 信合集團

Posted today

Job Description

Post date: 24 September 2025

Ref: DE

Department: Information Technology

Location: Kowloon Bay

RESPONSIBILITIES

The successful candidate will have a strong background in building and maintaining scalable data pipelines and will be proficient in leveraging modern data technologies on the Azure cloud platform. He/she will play a key role in designing, developing, and optimizing our data architecture to support our data-driven decision-making processes.

He/she will be expected to perform the following:

  • Design, construct, install, test, and maintain highly scalable and reliable data management and processing systems
  • Develop and manage ETL/ELT pipelines to ingest data from a wide variety of data sources and systems, ensuring data quality and integrity
  • Build and optimize data models on Azure Synapse Analytics / Microsoft Fabric for analytical and reporting purposes
  • Implement and manage data storage solutions using Medallion Architecture with Delta Lake, including creating and maintaining tables, handling schema evolution, and ensuring ACID compliance for data transactions
  • Utilize PySpark, Spark SQL, Python and SQL for data transformation, manipulation, and analysis within our data platforms
  • Develop interactive dashboards, reports, and visualizations using Power BI, Qlik and SAC to provide actionable insights to business users
  • Collaborate with data analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions
  • Monitor and troubleshoot data pipeline performance, implementing optimizations and resolving issues in a timely manner
  • Ensure data governance and security best practices are implemented and adhered to throughout the data lifecycle
  • Stay current with the latest trends and technologies in data engineering and the Azure ecosystem
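The Medallion Architecture responsibilities above follow a bronze → silver → gold flow. The sketch below illustrates the idea in plain Python for readability; the actual implementation would use PySpark and Delta Lake as the posting describes, and the field names here are invented.

```python
# Conceptual bronze -> silver -> gold flow from the Medallion pattern,
# sketched in plain Python; field names are illustrative only.

bronze = [  # raw ingested records, kept as-is
    {"order_id": "1", "qty": "2", "price": "10.0"},
    {"order_id": "1", "qty": "2", "price": "10.0"},   # duplicate
    {"order_id": "2", "qty": "1", "price": "5.5"},
]

def to_silver(rows):
    """Deduplicate and cast types (the cleansed, conformed layer)."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"],
                    "qty": int(r["qty"]), "price": float(r["price"])})
    return out

def to_gold(rows):
    """Aggregate to a business-level metric (revenue)."""
    return {"revenue": sum(r["qty"] * r["price"] for r in rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # -> {'revenue': 25.5}
```

In Delta Lake each layer is a table, so the silver and gold steps also gain ACID guarantees and schema-evolution handling for free.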
REQUIREMENTS
  • Degree holder in Computer Science, Engineering, Information Systems, or a related technical field
  • Minimum of 2 years of proven experience as a Data Engineer or in a similar role, with a clear progression of responsibilities and accomplishments
  • Mastery of PySpark, Spark SQL, Python and SQL for large-scale data processing and analysis
  • Deep, hands-on experience with Microsoft Azure data services, particularly Azure Synapse Analytics, Azure Data Factory, Azure Data Lake Storage as well as Microsoft Fabric, including architectural design and cost management
  • In-depth, expert-level knowledge of Medallion Architecture with Delta Lake, including its architecture, advanced features, and practical implementation in enterprise-level data lakes
  • Strong proficiency in data visualization and business intelligence tools, specifically Power BI and Qlik, with experience in developing complex reports and data models
  • Expertise in data modeling, Data Warehousing, Data Lakehouse and Delta Lakehouse concepts, and building and orchestrating enterprise-grade ETL/ELT pipelines
  • Demonstrated experience with software engineering best practices, including leading code reviews, managing CI/CD pipelines (e.g., Azure DevOps), and working in an agile environment
  • Exceptional problem-solving, analytical, and communication skills
  • Good command of both spoken and written English and Chinese

Data Engineer

$80,000 - $200,000 · One Advisors

Posted today

Job Description

Responsibilities

  • Design and deploy Proof of Concepts (PoCs), Business Intelligence (BI) reports, and ETL processes
  • Collaborate with internal IT teams and stakeholders to design and implement effective solutions
  • Maintain and update technical documentation throughout the enhancement cycle to accurately reflect changes

Requirements

  • Possess a university degree or a professional qualification in Information Technology, along with at least 3 years of substantial and relevant work experience
  • Experience in the development and use of Microsoft Power BI, SSIS, SSRS, and SSAS
  • Experience with Excel PowerQuery, and PowerPivot
  • Experience with ETL processes and tools, including C# and .NET
  • Involvement in designing and developing SQL queries, stored procedures, and functions
  • Knowledge of data modelling and dimensional data modelling concepts
  • Understanding of data visualization practices and principles
  • Familiarity with Oracle PL/SQL development and performance tuning
  • Exposure to Java, Spring Boot, ReactJS
  • A strong understanding of the regulatory environment within the securities sector would be considered an advantage

Data Engineer

$40,000 - $80,000 · Hutchison Telecommunications (Hong Kong) Limited

Posted today

Job Description

Hutchison Telecom Hong Kong is a leading digital operator in Hong Kong, committed to channelling the latest technologies into innovations that set market trends and steer industry development. We offer diverse and advanced mobile services under the 3, 3SUPREME, MO+ and SoSIM brands in the consumer market, and are dedicated to developing enterprise solutions in the corporate market under the 3Business brand.

We are currently recruiting exceptional candidates to join our team as we enter the digital era powered by advanced 5G tech. To learn more about us, visit 

Responsibilities:

  • Identify data sources, transform, correlate and aggregate information, load useful records to Data Warehouse, and formulate Data Marts for users to access
  • Ensure data integrity, monitor job alerts, and rescue/restore problem data records and jobs
  • Analyze information and produce useful summarized figures/reports for management
  • Tune jobs, scripts, and SQL statements
  • Automate user reports
  • Provide system maintenance support
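The "summarized figures for management" responsibility above amounts to aggregating warehouse records into report figures. The sketch below illustrates the idea; the plan names and usage fields are hypothetical, and real work here would be SQL against the Data Marts.

```python
# Illustrative sketch of summarizing warehouse records into a report
# figure of the kind described above; fields are hypothetical.
from collections import defaultdict

def usage_by_plan(records):
    """Aggregate data usage (MB) per subscription plan."""
    totals = defaultdict(float)
    for r in records:
        totals[r["plan"]] += r["usage_mb"]
    return dict(totals)

records = [
    {"plan": "3SUPREME", "usage_mb": 512.0},
    {"plan": "SoSIM", "usage_mb": 128.0},
    {"plan": "3SUPREME", "usage_mb": 256.0},
]
print(usage_by_plan(records))  # -> {'3SUPREME': 768.0, 'SoSIM': 128.0}
```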

Requirements:

  • Degree in Computer Sciences or related disciplines
  • 2+ years relevant working experience, preferably gained in mobile network industry
  • Good understanding of structured and unstructured data manipulation
  • Previous exposure to statistical analysis and presentations is a plus
  • Familiarity with AI, ML, LLM and database applications is an advantage
  • Operating system knowledge in Unix/Linux, Windows Server, iOS
  • Database/network knowledge in Oracle, MongoDB, MS-SQL, MySQL
  • Programming language knowledge in SQL, Python, Java, JavaScript
  • Good command in spoken and written English and Chinese

Apart from competitive remuneration package and exciting opportunity for career development within the Group, we provide attractive employee benefits such as free company shuttle, free company SIM card, staff discount and preferential SIM plan offers, comprehensive medical & insurance schemes, as well as a full range of other employee well-being provisions.

We appreciate your interest in joining us. To apply, submit your full resume with present and expected salary to  or click the "QUICK APPLY" button.

We promote a diverse workforce that drives our goals and contributes to the overall success of the Group. We strive to create a work environment that is respectful, inclusive, and free from any form of discrimination, harassment and intimidation.

Being an equal opportunity employer, we embrace diversity and inclusion, and welcome talents from any backgrounds and conditions. Personal data collected will be treated in the strictest confidence and handled confidentially by authorised personnel for recruitment-related purposes only within the CK Hutchison Group of companies. The personal data of unsuccessful applicants will be destroyed after the recruitment exercise pursuant to the requirements of the Personal Data (Privacy) Ordinance in Hong Kong.


Data Engineer

$120,000 - $150,000 · Red Begonia Limited

Posted today

Job Description

About RedotPay

RedotPay is a global crypto payment fintech integrating blockchain solutions into traditional banking and finance infrastructure. Our user-friendly crypto platform empowers millions globally to spend and send crypto assets, ensuring faster, more accessible, and inclusive financial services. RedotPay advances financial inclusion for the unbanked and supports crypto enthusiasts, driving the global adoption of secure and flexible crypto-powered financial solutions. Join us in shaping the future of finance and making a meaningful impact on a global scale.

Job Summary

As a Data Engineer, you will be a key member of our data team, responsible for building and maintaining our robust, scalable, and efficient data pipelines. You will work closely with data scientists, analysts, and software engineers to ensure data is accessible, reliable, and ready for use. The ideal candidate is passionate about big data technologies, has a strong foundation in software engineering principles, and thrives in a collaborative environment.

Key Responsibilities

  • Pipeline Development & Architecture:
    • Design, construct, install, test, and maintain highly scalable data pipelines and data models.
    • Build the infrastructure required for optimal extraction, transformation, and loading (ETL) of data from a wide variety of data sources using SQL and cloud-based 'big data' technologies.
    • Develop and implement processes for data modeling, mining, and production.
    • Select and integrate any new big data tools and frameworks required to provide requested capabilities.
  • Data Management & Quality:
    • Implement systems for monitoring data quality, ensuring production data is always accurate and available for key stakeholders and business processes.
    • Develop and maintain scalable and sustainable data warehousing solutions, including data lakes and data marts.
    • Manage and orchestrate data workflows using modern tools (e.g., Airflow, dbt, Prefect).
  • Collaboration & Support:
    • Collaborate with data scientists and analysts to support their data needs for advanced analytics, machine learning, and reporting.
    • Work with software engineering teams to assist with data-related technical issues and support their data infrastructure needs.
    • Translate complex business requirements into technical specifications.
  • Operational Excellence:
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
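
The responsibilities above centre on ETL: extract from a source, transform, and load into a warehouse. As a minimal sketch only, here is the shape of such a pipeline in plain Python using the standard library's sqlite3 as a stand-in warehouse; the `payments` table and the sample records are hypothetical, and a production pipeline would instead pull from real sources and load into a cloud warehouse under an orchestrator such as Airflow.

```python
import sqlite3

def extract():
    # Simulated source records; a real pipeline would read from an API,
    # object store, or upstream database.
    return [
        {"user": "alice", "amount_cents": 1250},
        {"user": "bob", "amount_cents": 300},
    ]

def transform(rows):
    # Normalise cents into a decimal amount per row.
    return [(r["user"], r["amount_cents"] / 100.0) for r in rows]

def load(rows, conn):
    # Create the target table if needed, then bulk-insert.
    conn.execute("CREATE TABLE IF NOT EXISTS payments (user TEXT, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
print(total)  # 15.5
```

The extract/transform/load split keeps each stage independently testable, which is the same property orchestration tools exploit when they retry a single failed task.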

Required Qualifications & Skills

  • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
  • 3+ years of proven experience as a Data Engineer or in a similar role.
  • Strong programming skills in Python and SQL are essential.
  • Deep experience with cloud data platforms such as AWS (Redshift, S3, Glue, EMR, Lambda), Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Cloud Composer), or Azure (Data Factory, Synapse Analytics, Databricks).
  • Experience with big data tools and processing frameworks such as Spark (PySpark/SparkSQL) and Hadoop.
  • Solid experience building and optimizing ETL/ELT pipelines and data architectures.
  • Experience with relational SQL and NoSQL databases, including Postgres, MySQL, Cassandra, MongoDB.
  • Experience with data pipeline and workflow management tools: Airflow, dbt, Luigi, etc.
  • Understanding of data modeling, data warehousing, and data lake concepts (e.g., Star/Snowflake schema, Data Vault 2.0, Slowly Changing Dimensions).
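
Of the data-modeling concepts listed above, Slowly Changing Dimensions are the most mechanical to demonstrate. The sketch below implements a Type 2 SCD upsert (close the current row, insert a new version) against an in-memory sqlite3 table; the `dim_customer` schema and sample data are hypothetical, and real warehouses would express the same logic as a MERGE statement or a dbt snapshot.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dim_customer (
        customer_id INTEGER,
        city        TEXT,
        valid_from  TEXT,
        valid_to    TEXT,     -- NULL means the row is still open
        is_current  INTEGER
    )
""")

def scd2_upsert(conn, customer_id, city, as_of):
    # Type 2 SCD: if the tracked attribute changed, close out the
    # current row and insert a new version; otherwise do nothing.
    cur = conn.execute(
        "SELECT city FROM dim_customer WHERE customer_id=? AND is_current=1",
        (customer_id,),
    ).fetchone()
    if cur is not None and cur[0] == city:
        return  # no change, keep history as-is
    if cur is not None:
        conn.execute(
            "UPDATE dim_customer SET valid_to=?, is_current=0 "
            "WHERE customer_id=? AND is_current=1",
            (as_of, customer_id),
        )
    conn.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
        (customer_id, city, as_of),
    )
    conn.commit()

scd2_upsert(conn, 1, "Hong Kong", "2024-01-01")
scd2_upsert(conn, 1, "Singapore", "2024-06-01")
rows = conn.execute(
    "SELECT city, is_current FROM dim_customer "
    "WHERE customer_id=1 ORDER BY valid_from"
).fetchall()
print(rows)  # [('Hong Kong', 0), ('Singapore', 1)]
```

Both versions of the customer survive, so fact tables can join to the version that was current at transaction time rather than only the latest one.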

Data Engineer

$80,000 - $120,000 per year · SmartHire by SEEK

Posted today

Job Description

Our client "Real Corporation Limited" is seeking a Data Engineer (DBA) – MongoDB Expert to join their team.
What you'll be doing:
  • Design ETL platform: Develop and implement robust ETL solutions using Airbyte and Airflow to ensure efficient data processing and transformation.
  • Optimize databases: Manage and enhance database performance, with a primary focus on MongoDB, MySQL, and PostgreSQL.
  • Implement data pipelines: Create and maintain efficient data pipelines to ensure smooth data flow and processing across systems.
  • Create visualizations: Develop insightful application statistics and data visualizations to support business decision-making.
  • Develop BI dashboards: Build and maintain Business Intelligence dashboards for comprehensive data analysis and reporting.
  • Manage logs: Utilize the ELK stack (Elasticsearch, Logstash, Kibana) for effective log management and data visualization.
  • Integrate data sources: Connect and integrate various data sources with third-party APIs to enhance data accessibility and utility.
  • Collaborate across teams: Work closely with cross-functional teams to understand data needs and provide valuable insights.
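
The log-management bullet above maps to the kind of transformation Logstash performs before records reach Elasticsearch: turning raw log lines into structured documents. As an illustrative sketch only, the pattern below parses a hypothetical timestamp/level/message format with the standard library's re module; the format and sample lines are assumptions, not Real Corporation's actual logs.

```python
import re

# Hypothetical log format: "2024-06-01T12:00:00 ERROR payment failed user=42"
LOG_PATTERN = re.compile(r"(?P<ts>\S+)\s+(?P<level>[A-Z]+)\s+(?P<message>.*)")

def parse_line(line):
    # Convert one raw line into a structured record (a dict),
    # returning None for lines that don't match the format.
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

lines = [
    "2024-06-01T12:00:00 ERROR payment failed user=42",
    "2024-06-01T12:00:05 INFO health check ok",
]
records = [r for r in (parse_line(l) for l in lines) if r]
errors = [r for r in records if r["level"] == "ERROR"]
print(len(errors))  # 1
```

Once lines are dicts, filtering by level or aggregating per timestamp window becomes ordinary data processing, which is exactly what Kibana dashboards do downstream.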
Who are they looking for?
  • Educational background: Candidates with a Bachelor's degree in Computer Science, Data Science, or a related field. We welcome applications from individuals with diverse educational backgrounds.
  • Relevant experience: Ideally, 4+ years of experience in data engineering or related roles. However, we encourage applications from candidates at various career stages.
  • MongoDB expertise: Strong knowledge and hands-on experience with MongoDB and database management.
  • ETL proficiency: Familiarity with ETL tools, particularly Airbyte and Apache Airflow.
  • Programming skills: Strong programming abilities in Python, Golang, or similar languages.
  • Data visualization: Experience with data visualization tools such as Tableau or Power BI.
  • Cloud services: Familiarity with cloud services (e.g., AWS, Google Cloud, Alibaba Cloud) is essential for this role.
  • AI knowledge: Good understanding of RAG and AI workflow automation is advantageous but not mandatory.
  • Language skills: Proficiency in English to facilitate effective communication within our diverse team.
Why should you consider this opportunity?

Our client offers an attractive remuneration package and other benefits, such as:

  • Annual Leave
How to apply

Ready to join this role? Click "Apply now" to submit your resume and share your availability and expected salary with us.

We value diversity and encourage applications from candidates of all backgrounds, regardless of specific qualifications or experience.

All information received will be kept strictly confidential and will be used only for employment-related purposes.

Refer a Candidate and Earn $2,000

#SmartReward #SmartHire