What Jobs are available for Data Acquisition in Hong Kong?
Showing 124 Data Acquisition jobs in Hong Kong
Data Analysis Specialist
Posted today
Job Description
Job Responsibilities
- Assist in building data models and the company's data platform, optimizing analysis processes, and promoting data standardization.
- Perform data analysis and reporting to support product strategy and customer management.
- Responsible for the collection, analysis, and maintenance of product-related data.
Job Requirements
- 1-3 years of work experience; master's degree or above.
- Strong data analysis and processing skills; proficient with common data analysis tools such as SQL and Python (see the sketch after this list).
- Clear thinking, logical rigor, attention to detail, strong sense of responsibility, and strong data insight ability.
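As a rough illustration of the SQL-plus-Python skill set this listing asks for, here is a minimal sketch; the database, table and column names are invented for the example.

import sqlite3
import pandas as pd

# Stand-in for the company's database; "orders" and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 200.0);
""")

# Aggregate in SQL, then refine and report in pandas.
df = pd.read_sql_query(
    "SELECT customer_id, SUM(amount) AS total_spend FROM orders GROUP BY customer_id",
    conn,
)
df["share_of_revenue"] = df["total_spend"] / df["total_spend"].sum()
print(df)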
Manager, AI Model Development and Data Analysis
Posted today
Job Description
Responsibility:
- Develop, train and deploy artificial intelligence and machine learning models to solve complex business challenges;
- Responsible for various data preprocessing activities including collection, cleansing and validation of large datasets, and transforming the data into a format suitable for consumption by artificial intelligence and machine learning models;
- Conduct in-depth data analysis on large datasets, and apply statistical and machine learning algorithms for extracting meaning from data and identifying actionable insights;
- Continuously monitor model performance and iterate to improve the performance and efficiency of models, with a view to optimizing business functions;
- Responsible for preparing and maintaining documentation throughout the model development lifecycle;
- Assist in building a compliance data warehouse and continuously expanding data scope according to the need of AML monitoring, optimizing database performance and improving data usage experience;
- Assist in management and maintenance of the compliance data warehouse, including formulating database management policy and procedure, developing and maintaining data dictionary, monitoring data quality and ensuring data security;
- Assist in data management within the department, including data accountability management, data governance, data quality and security, etc.;
- Collaborate closely with cross-functional teams and external parties, including business stakeholders, in-house IT, the data science team and solution vendors;
- Research emerging artificial intelligence trends and models, and integrate new techniques to enhance model capabilities
Requirements:
- Bachelor's degree or above in Computer Science, Data Science, Financial Mathematics, Statistics, Finance, Economics, or related disciplines
- 3-5 years or more of relevant experience in the implementation of artificial intelligence and machine learning models; experience in banking data management, analysis and application is an advantage (for Senior Manager, at least 8 years of relevant experience)
- Proficiency in programming languages, including Python and R, and machine learning libraries (see the sketch after this list)
- Strong understanding of machine learning algorithms, deep learning and statistical methods
- Strong problem-solving ability and a passion for analytic excellence; able to manage multiple complex challenges simultaneously and to work independently and under pressure
- Experience in the financial services industry or the financial crime compliance domain, or a relevant qualification such as CAMS, ECF (AML/CFT) Core Level, FRM, CPA or ACCA, is an advantage
- Good command of written and spoken English and Chinese
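As a hedged sketch of the develop-train-evaluate loop this role describes, the snippet below uses scikit-learn on synthetic data; the posting names Python and machine learning libraries but no specific stack, so the library choice here is an assumption.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the "large datasets" the role works with.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Track a hold-out metric over time; retrain or tune when it degrades.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")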
AML Manager, AI Development and Data Analysis
Posted today
Job Description
Responsibility:
- Develop, train and deploy artificial intelligence and machine learning models to solve complex business challenges;
- Responsible for various data preprocessing activities including collection, cleansing and validation of large datasets, and transforming the data into a format suitable for consumption by artificial intelligence and machine learning models;
- Conduct in-depth data analysis on large datasets, and apply statistical and machine learning algorithms for extracting meaning from data and identifying actionable insights;
- Continuously monitor model performance and iterate to improve the performance and efficiency of models, with a view to optimizing business functions (see the sketch after this list);
- Responsible for preparing and maintaining documentation throughout the model development lifecycle;
- Assist in building a compliance data warehouse and continuously expanding data scope according to the need of AML monitoring, optimizing database performance and improving data usage experience;
- Assist in management and maintenance of the compliance data warehouse, including formulating database management policy and procedure, developing and maintaining data dictionary, monitoring data quality and ensuring data security;
- Assist in data management within the department, including data accountability management, data governance, data quality and security, etc.;
- Collaborate closely with cross-functional teams and external parties, including business stakeholders, in-house IT, the data science team and solution vendors;
- Research emerging artificial intelligence trends and models, and integrate new techniques to enhance model capabilities
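One common way to make the "continuously monitor model performance" duty concrete is a drift statistic such as the Population Stability Index; this is an illustrative assumption, not the employer's stated method.

import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index; values above ~0.25 are often read as major drift."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf  # cover the full real line
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
today = rng.normal(0.3, 1.0, 10_000)     # shifted production scores
print(f"PSI: {psi(baseline, today):.3f}")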
Requirements:
- Bachelor's degree or above in Computer Science, Data Science, Financial Mathematics, Statistics, Finance, Economics, or related disciplines
- 3-5 years or more of relevant experience in the implementation of artificial intelligence and machine learning models; experience in banking data management, analysis and application is an advantage (for Senior Manager, at least 8 years of relevant experience)
- Proficiency in programming languages, including Python and R, and machine learning libraries
- Strong understanding of machine learning algorithms, deep learning and statistical methods
- Strong problem-solving ability and a passion for analytic excellence; able to manage multiple complex challenges simultaneously and to work independently and under pressure
- Experience in the financial services industry or the financial crime compliance domain, or a relevant qualification such as CAMS, ECF (AML/CFT) Core Level, FRM, CPA or ACCA, is an advantage
- Good command of written and spoken English and Chinese
Bilingual Chinese Data Collection Specialist
Posted today
Job Description
Join our client's team and put your bilingual Chinese-English skills to work! They are seeking detail-oriented professionals with strong data entry and spreadsheet experience. If you have an Android phone, a reliable computer with internet access, and the ability to follow clear instructions with precision, this opportunity is for you. Bring your accuracy, focus, and dedication to a role where your skills make a real impact. Apply today to take the next step in your career!
Job Highlights
- Hourly Rate: USD 5, or the equivalent in your local currency
- Paid Hours per Week: 40 hours
- Schedule: Monday – Friday, 9:00 AM – 6:00 PM Pacific (includes a 1-hour unpaid break); client timezone: Pacific Time
- Work Arrangement: Work from home
- Contract: Independent Contractor
Side note: Since this is a permanent work-from-home position under an "Independent Contractor" arrangement, candidates must have their own computer and internet connection, and will handle their own benefits and taxes. Professional fees are paid at hourly rates, and the rate depends on your performance in the application process.
Scope
- 6-month project (ongoing until March 2026)
- Remote work with direct client management
- Work as part of a 4–5 person team collecting ~20,000 data points
Responsibilities
- Navigate Chinese-language mobile apps (Meituan and Alibaba's Ele.me)
- Collect and extract specific data points from food delivery platforms
- Enter collected data into structured spreadsheets
- Follow established data collection processes and procedures
- Report daily progress to client management
- Ensure accuracy and consistency across all data entry tasks
Requirements
- Fluent in Chinese and English (bilingual capability essential)
- Own an Android phone for platform access
- Reliable computer/laptop with internet connection
- Experience with basic data entry and spreadsheet software
- Strong attention to detail and ability to follow instructions precisely
Independent Contractor Perks
- HMO coverage (eligible locations)
- Permanent work from home
- Immediate hiring
Reminder:
Kindly apply directly via the link provided; you will be redirected to BruntWork's Career Site. Complete the initial requirements, including the voice recording, pre-screening assessment, and technical check of your computer/device.
ZR_28099_JOB
Data Engineer/Senior Data Engineer
Posted today
Job Description
Our client is looking for talent to join their team:
Goal of the Project
- Implement a scalable cloud-based data platform to enable sustainable, reusable, secure, efficient ingestion, transformation, and storage of enterprise data across multiple sources.
- Improve data consistency, reliability and controlled accessibility by automating ETL/ELT workflows, enforcing data quality checks, and reducing manual intervention by 30–40%.
- Enable near-real-time analytics capabilities to support business intelligence, predictive modeling, and faster decision-making for strategic initiatives.
Primary Responsibilities
- Optimize, operate at scale and enhance a cloud-based data platform on Microsoft Azure (incl. Databricks)
- Participate in the cloud data platform enhancement project lifecycle with an external vendor, including design, development, testing, deployment, and documentation.
- Work with Data Analysts to perform data preparation and cleansing, and to build optimized data models and visualizations in Power BI
- Perform proofs of concept for cloud data products
Secondary Responsibilities
- Validate data workflows and integration points through rigorous testing (e.g., pipeline validation, schema checks, and performance benchmarking) to ensure solutions meet business and technical requirements.
- Create operational playbooks and automation scripts to streamline deployment, monitoring, and troubleshooting, enabling efficient handover and long-term maintainability of data solutions.
- Document data pipeline architecture, deployment processes, and operational runbooks to support effective troubleshooting and post-go-live maintenance by internal teams
Additional Beneficial Data Knowledge:
- ERP
- CRM
- Marketing
- Asset Management
- Construction
Requirements:
- Bachelor's degree in Computer Science, Information Technology, Data Science or a related field; related certification in a cloud data platform
- Fluency in English.
- 3+ years of professional experience with data platforms, especially on the data engineering side. Experience in Python, SQL, Spark, Azure Data Factory, Event Hub / Kafka / Event Messaging, Azure Data Lake Storage Gen2, Databricks, Unity Catalog and Power BI (see the sketch after this list)
- Experience in the software development lifecycle (SDLC).
- Understanding of data lifecycle and governance principles (e.g., data quality, lineage, security, and compliance).
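To make the stack above concrete, here is a minimal PySpark sketch of the kind of ingestion-and-transform job the role involves; the ADLS Gen2 path, column names and target table are invented, and a Databricks cluster with Delta support is assumed.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Hypothetical landing zone on Azure Data Lake Storage Gen2.
raw = (spark.read
       .option("header", "true")
       .csv("abfss://landing@example.dfs.core.windows.net/sales/"))

cleaned = (raw
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())        # basic quality gate
           .withColumn("ingest_date", F.current_date()))

# Land as a Delta table that Power BI models can build on.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.sales")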
Advantageous experience (not a must-have)
- Databricks Certified Data Engineer Associate or Professional
- Prior work on scalable data platform implementations or data platform migrations (or 2-3 project-specific examples).
- Exposure to data monetization or advanced analytics use cases (predictive modeling, AI/ML pipelines).
- Knowledge of Data & Analytics multi-tenancy, Data Lakehouse platform architecture, DataOps, MLOps, AIOps, ModelOps and FinOps.
- Ability to contribute as a strong individual contributor and to manage junior data engineers.
- Ability to manage engineering tasks and contribute to planning for successful execution.
Data Engineer, Data
Posted today
Job Description
We offer work from home (max. 2 days per week), 14-20 days' annual leave, double pay, discretionary bonus, overtime pay, medical/dental/life insurance, and a five-day work week.
As a Data Management Engineer, you will play a critical role in ensuring the integrity, security, and efficiency of our data platform. You will collaborate closely with cross-functional teams to implement governance frameworks, enforce data standards, and optimize resource usage. Your work will directly support the organization's data strategy and compliance posture.
Job Description:
- Lead the design, implementation and deployment of a master data management architecture that encompasses all customer source systems to enable data sharing across different regions, business units and departments 
- Operationalize Enterprise Master Data Repository, to enforce centralized governance controls at a global level 
- Identify and build data quality rules; investigate and remediate data quality issues (see the sketch after this list)
- Design and build data quality dashboards with Power BI 
- Evaluate, select and implement appropriate data management technologies to address data governance challenges 
- Manage vendors to complete data governance activities, from vendor selection, data discovery, Proof of Concept (PoC) development, implementation to global adoption 
- Design and implement data governance solutions that incorporate AI-driven data management techniques to improve data quality and enhance data governance processes 
- Monitor data platform resource utilization and performance metrics 
- Identify and recommend opportunities for cost optimization and operational efficiency 
- Lead analysis of the current data platforms (e.g., logs) to detect critical deficiencies and recommend solutions for improvement 
- Engage with key data stakeholders to outline data objectives and gather data requirements. Execute solutions encompassing ownership, accountability, streamlined processes, robust procedures, stringent data quality measures, security protocols, and other pertinent areas to drive successful implementation 
- Implement the Architecture Governance Standard, Platform Design Principles, Platform Security, and Data Compliance Standard 
- Implement the Data Classification Standard to enhance data management and security measures within the organization 
- Take charge of the Global Data Quality Forum and establish regional forums if required to foster collaboration and knowledge sharing on data quality practices 
- Conduct market research and collaborate with vendors to evaluate cutting-edge data management technologies, trends, and products. Select and deploy the most suitable solutions for Global Data and Analytics Governance initiatives, ensuring seamless scalability 
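As a minimal sketch (an assumption, not the employer's actual framework) of what "data quality rules" can look like in code: each rule is a named row-level predicate, and the pass rates form a table a Power BI quality dashboard could consume.

import pandas as pd

# Toy customer extract; real rules would run against the master data repository.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "b@x.com", "not-an-email"],
})

rules = {
    "customer_id_unique": ~customers["customer_id"].duplicated(keep=False),
    "email_present": customers["email"].notna(),
    "email_has_at_sign": customers["email"].fillna("").str.contains("@", regex=False),
}

# One pass rate per rule, ready to plot on a quality dashboard.
report = pd.DataFrame({"pass_rate": {name: float(mask.mean()) for name, mask in rules.items()}})
print(report)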
Requirements:
- Bachelor's degree from a recognized university in Computer Science, Information Engineering, or related field 
- At least 6 years of experience in Data Engineering, IT, Data Governance, Data Management or related field 
- Knowledge of data management best practices and technologies 
- Knowledge of data governance, security and observability 
- Proven ability to identify innovation opportunities and deliver innovative data management solutions 
- Hands-on experience in SQL, Python, and Power BI 
- Experience in Azure Databricks, Unity Catalog and DLT (Delta Live Tables) 
- Excellent analytical and problem-solving skills 
- Fluent in English speaking and writing 
- Willingness to travel, as needed 
The requirements below are considered advantages, but not a must.
- Knowledge of data related regulatory requirements and emerging trends and issues 
- Experience in programming languages including PySpark, R, Java, Scala 
- Experience in working with cross-functional teams in global settings 
Interested parties please send full resume with employment history and expected salary to HRA Department, Yusen Logistics Global Management (Hong Kong) Limited by email.
Yusen Logistics Global Management (Hong Kong) Limited is an equal opportunity employer. All information collected will be used for recruitment purpose only.
About Yusen Logistics
Yusen Logistics is working to become the world's preferred supply chain logistics company. Our complete offer is designed to forge better connections between businesses, customers and communities – through innovative supply chain management, freight forwarding, warehousing and distribution services. As a company we're dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world's preferred choice.
Data Engineer
Posted today
Job Description
What You'll Be Doing
- Architect ETL Platforms: Design and build scalable ETL pipelines using Airbyte and Apache Airflow to streamline data processing and transformation (see the sketch after this list).
- Optimize Database Performance: Lead the management and optimization of MongoDB, MySQL, and PostgreSQL databases for high performance and reliability.
- Build Robust Data Pipelines: Develop and maintain efficient, automated data pipelines to ensure seamless data flow across systems.
- Create Impactful Visualizations: Craft compelling data visualizations and application statistics to drive strategic business decisions.
- Develop BI Dashboards: Design and maintain intuitive Business Intelligence dashboards for actionable insights and reporting.
- Manage Log Systems: Leverage the ELK stack (Elasticsearch, Logstash, Kibana) for advanced log management and real-time analytics.
- Integrate Diverse Data Sources: Connect and unify data from third-party APIs to enhance accessibility and functionality.
- Collaborate for Success: Partner with cross-functional teams to understand data requirements and deliver impactful solutions.
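A skeletal Airflow DAG (Airflow 2.4+ assumed; all names invented) showing the orchestration shape this role describes; in practice the extract step would trigger an Airbyte connection sync rather than a plain Python task.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():    # stand-in for triggering an Airbyte sync
    print("pulling source data")

def transform():  # stand-in for refinement logic
    print("validating and reshaping rows")

def load():       # stand-in for the warehouse write
    print("loading to the target database")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t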
Requirements
- Educational Background: Bachelor's degree in Computer Science, Data Science, Engineering, or a related field. We value diverse educational paths and encourage all qualified candidates to apply.
- Experience Level: 4+ years of hands-on experience in data engineering, database administration, or related roles. We welcome candidates with varied career journeys who bring fresh perspectives.
- MongoDB Mastery: Deep expertise in MongoDB, including schema design, performance tuning, and scalability.
- ETL Expertise: Proven experience with ETL tools like Airbyte and Apache Airflow for building robust data workflows.
- Programming Prowess: Strong proficiency in Python, Golang, or similar languages for data processing and automation.
- Visualization Skills: Hands-on experience with BI tools like Tableau, Power BI, or similar for creating impactful visualizations.
- Cloud Proficiency: Working knowledge of cloud platforms such as AWS, Google Cloud, or Alibaba Cloud to support scalable data solutions.
- AI Advantage: Familiarity with Retrieval-Augmented Generation (RAG) and AI workflow automation is a plus but not required.
- Communication Skills: Fluency in English to collaborate effectively within our diverse, global team.
Our client offers an attractive remuneration package and other benefits, such as:
- Annual Leave
- Performance Bonus
- Grow your career with opportunities for professional development and impact
For further information, please contact us via WhatsApp.
Data Engineer
Posted today
Job Viewed
Job Description
Responsibility
- Design, build and maintain scalable and efficient ETL/ELT pipelines in Azure Databricks to process structured, semi-structured and unstructured insurance data from multiple internal and external sources.
- Collaborate with data architects, modelers, analysts and business stakeholders to gather data requirements and deliver fit-for-purpose data assets that support analytics, regulatory and operational needs.
- Develop, test and optimize data transformation routines, batch and streaming solutions (leveraging tools such as Azure Data Factory, Data Lake Storage Gen2, Azure Event Hubs and Kafka) to ensure timely and accurate data delivery (see the sketch after this list).
- Implement rigorous data quality, validation and cleansing procedures, with a focus on enhancing reliability for high-stakes insurance use cases, reporting and regulatory outputs.
- Integrate Informatica tools to facilitate data governance, including the capture of data lineage, metadata and data cataloguing as required by regulatory and business frameworks.
- Ensure robust data security by following best practices for RBAC, managed identities, encryption and compliance with Hong Kong's PDPO, GDPR and other relevant regulatory requirements.
- Automate and maintain deployment pipelines using GitHub Actions to ensure efficient, repeatable and auditable data workflows and code releases.
- Conduct root cause analysis, troubleshoot pipeline failures and proactively identify and resolve data quality or performance issues.
- Produce and maintain comprehensive technical documentation for pipelines, transformation rules and operational procedures to ensure transparency, reuse and compliance.
- Apply subject matter expertise in Hong Kong Life and General Insurance to ensure that development captures local business needs and industry-specific standards.
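For the streaming side, a hedged PySpark Structured Streaming sketch (Spark 3.1+/Databricks assumed) of near-real-time ingestion from Kafka, to which Event Hubs also exposes a compatible endpoint; the broker, topic, schema and table names are all placeholders.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("policy-stream").getOrCreate()

# Hypothetical event schema for incoming policy records.
schema = StructType([
    StructField("policy_id", StringType()),
    StructField("premium", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "policy-events")              # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Validate in flight, then land the stream as a Delta table for downstream use.
query = (events.filter(F.col("premium") > 0)
         .writeStream.format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/policy-events")
         .toTable("bronze.policy_events"))
query.awaitTermination()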
Requirement
- Bachelor's degree in Information Technology, Computer Science, Data Engineering or a related discipline.
- 3+ years of experience as a data engineer, building and maintaining ETL/ELT processes and data pipelines on Azure Databricks (using PySpark or Scala), with a focus on structured, semi-structured and unstructured insurance data.
- Strong experience orchestrating data ingestion, transformation and loading workflows using Azure Data Factory and Azure Data Lake Storage Gen2.
- Advanced proficiency in Python and Spark for data engineering, data cleaning, transformation and feature engineering in Databricks for analytics and machine learning.
- Experience integrating batch and streaming data sources via Kafka or Azure Event Hubs for real-time or near-real-time insurance applications.
- Hands-on use of Informatica for data quality, lineage and governance to support business and regulatory standards in insurance.
- Familiarity with automation and CI/CD of Databricks workflows using GitHub Actions.
- Understanding of data security, RBAC, Key Vault, encryption and best practices for compliance in the insurance sector.
- Experience optimizing data pipelines to support ML workflows and BI/reporting tools.
Data Engineer
Posted today
Job Description
Our client is a statutory body in Hong Kong. They are looking for experienced talent to implement a scalable cloud-based data platform that enables sustainable, reusable, secure and efficient ingestion, transformation, and storage of enterprise data across multiple sources. The individual should possess knowledge and experience of near-real-time analytics capabilities to support business intelligence, predictive modelling, and faster decision-making for strategic initiatives.
Responsibilities
- Optimize, operate at scale and enhance a cloud-based data platform on Microsoft Azure (incl. Databricks)
- Participate in the cloud data platform enhancement project lifecycle with an external vendor, including design, development, testing, deployment, and documentation.
- Work with Data Analysts to perform data preparation and cleansing, and to build optimized data models and visualizations in Power BI
- Perform proofs of concept for cloud data products
- Validate data workflows and integration points through rigorous testing (e.g., pipeline validation, schema checks, and performance benchmarking) to ensure solutions meet business and technical requirements.
- Create operational playbooks and automation scripts to streamline deployment, monitoring, and troubleshooting, enabling efficient handover and long-term maintainability of data solutions.
- Document data pipeline architecture, deployment processes, and operational runbooks to support effective troubleshooting and post-go-live maintenance by internal teams
Requirements:
- Bachelor's degree in computer science, Information Technology, Data Science or related field
- Minimum 3+ years of professional experience related to data platform
- Experience in Python, SQL, Spark, Azure Data Factory, Event Hub / Kafka / Event Messaging, Azure Data Lake Storage Gen2, Databricks, Unity Catalog, Power BI
- Understanding of data lifecycle and governance principles (e.g., data quality, lineage, security, and compliance)
- Experience in software development lifecycle (SDLC)
- Related Certification in Cloud Data platform
Data Engineer
Posted today
Job Description
Key Responsibilities
- Design and implement technical solutions to collect, integrate, secure, store, organize, and transform data into deliverable formats.
- Build and monitor data pipelines.
- Develop scripts and custom code to process and refine data (see the sketch after this list).
- Collaborate with business users to improve existing tools used for reporting.
- Perform unit tests, analyze database queries, and troubleshoot issues.
- Prepare and maintain technical documentation for senior staff and team members.
- Work collaboratively within a close-knit team, maintaining professionalism in virtual and in-person settings.
- Propose and develop efficient, robust solutions based on requirements, in collaboration with IT and business stakeholders.
- Provide ongoing maintenance, issue investigation, and support.
- Create documentation, flowcharts, diagrams, and clear code to demonstrate and explain solutions.
- Occasionally travel to world-class factories as required.
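A small illustration of the "scripts and custom code to process and refine data" duty, using pandas; the file and column names are assumptions made for the example.

import pandas as pd

raw = pd.read_csv("measurements.csv")  # hypothetical factory export

refined = (raw
           .rename(columns=str.strip)                     # tidy header whitespace
           .dropna(subset=["sensor_id"])                  # drop unusable rows
           .assign(reading=lambda d: pd.to_numeric(d["reading"], errors="coerce"))
           .query("reading >= 0"))                        # discard negative noise

refined.to_csv("measurements_clean.csv", index=False)
print(f"kept {len(refined)} of {len(raw)} rows")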
General Skills & Experience Requirements
- Bachelor's degree in Computer Science, Information Technology, Statistics, or a related discipline.
- 2–4 years of experience in data engineering projects.
- Motivated, independent, and self-reliant, capable of completing tasks with minimal supervision.
- Ability to create detailed design documents, articulate vision, and defend proposed solutions.
- Fluency in English and Mandarin is an advantage.
Required Data Engineering Skills
- Proficiency in Cloud Data Platform solutions (certifications are a plus).
- Experience as a SQL/Oracle Database/Python Developer.
- Expertise in ETL processes for complex data projects.
- Proficiency in programming languages such as Python and Java.
- Experience with creating and managing data assets in a Cloud Data Platform.
- Strong knowledge of data pipeline optimization.
- Understanding of general data modeling concepts.
Preferred Technical Skills
- Exceptional written and verbal communication skills.
- Experience in machine learning and AI projects is a plus.
- Ability to collaborate with remote teammates and users effectively.
- Familiarity with the manufacturing sector is an advantage.
- Understanding of process engineering concepts and measurements is a bonus.