1,018 Team Data jobs in Hong Kong

Data Engineer, Data

$1,200,000 – $2,400,000 per year · Yusen Logistics Global Management (Hong Kong) Limited

Posted today

Job Description

We offer work from home (max. 2 days per week), 14–20 days' annual leave, double pay, discretionary bonus, overtime pay, medical/dental/life insurance, and a five-day work week.

As a Data Management Engineer, you will play a critical role in ensuring the integrity, security, and efficiency of our data platform. You will collaborate closely with cross-functional teams to implement governance frameworks, enforce data standards, and optimize resource usage. Your work will directly support the organization's data strategy and compliance posture.

Job Description:

  • Lead the design, implementation and deployment of a master data management architecture that encompasses all customer source systems to enable data sharing across different regions, business units and departments

  • Operationalize the Enterprise Master Data Repository to enforce centralized governance controls at a global level

  • Identify and build data quality rules, investigate and remediate data quality issues (a minimal rule sketch follows this list)

  • Design and build data quality dashboards with Power BI

  • Evaluate, select and implement appropriate data management technologies to address data governance challenges

  • Manage vendors to complete data governance activities, from vendor selection, data discovery, and Proof of Concept (PoC) development through implementation and global adoption

  • Design and implement data governance solutions that incorporate AI-driven data management techniques to improve data quality and enhance data governance processes

  • Monitor data platform resource utilization and performance metrics

  • Identify and recommend opportunities for cost optimization and operational efficiency

  • Lead analysis of the current data platforms (e.g., logs) to detect critical deficiencies and recommend solutions for improvement

  • Engage with key data stakeholders to outline data objectives and gather data requirements. Execute solutions encompassing ownership, accountability, streamlined processes, robust procedures, stringent data quality measures, security protocols, and other pertinent areas to drive successful implementation

  • Implement the Architecture Governance Standard, Platform Design Principles, Platform Security, and Data Compliance Standard

  • Implement the Data Classification Standard to enhance data management and security measures within the organization

  • Take charge of the Global Data Quality Forum and establish regional forums if required to foster collaboration and knowledge sharing on data quality practices

  • Conduct market research and collaborate with vendors to evaluate cutting-edge data management technologies, trends, and products. Select and deploy the most suitable solutions for Global Data and Analytics Governance initiatives, ensuring seamless scalability
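
As an illustration of the data quality rules mentioned in this list, here is a minimal, hypothetical sketch in Python using pandas; the column names and rules are invented for demonstration and are not Yusen's actual standards. In practice, the resulting failure counts could feed a Power BI dashboard of the kind this role builds.

```python
import pandas as pd

# Hypothetical customer master records; column names are illustrative only.
customers = pd.DataFrame({
    "customer_id": [101, 102, 102, None],
    "country":     ["HK", "JP", "JP", "SG"],
    "email":       ["a@x.com", None, "b@y.com", "c@z.com"],
})

# Each rule maps a name to a predicate that flags failing rows.
rules = {
    "customer_id_not_null": lambda df: df["customer_id"].isna(),
    "customer_id_unique":   lambda df: df["customer_id"].duplicated(keep=False),
    "email_present":        lambda df: df["email"].isna(),
}

# Evaluate every rule and collect failure counts for reporting.
report = pd.DataFrame(
    [(name, int(check(customers).sum())) for name, check in rules.items()],
    columns=["rule", "failed_rows"],
)
print(report)
```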

Requirements:

  • Bachelor's degree from a recognized university in Computer Science, Information Engineering, or related field

  • At least 6 years of experience in Data Engineering, IT, Data Governance, Data Management or related field

  • Knowledge of data management best practices and technologies

  • Knowledge of data governance, security and observability

  • Proven ability to identify innovation opportunities and deliver innovative data management solutions

  • Hands-on experience with SQL, Python, and Power BI

  • Experience with Azure Databricks Unity Catalog and Delta Live Tables (DLT)

  • Excellent analytical and problem-solving skills

  • Fluent in spoken and written English

  • Willingness to travel, as needed

The requirements below are considered advantages, but are not a must.

  • Knowledge of data related regulatory requirements and emerging trends and issues

  • Experience with programming languages such as PySpark, R, Java, or Scala

  • Experience in working with cross-functional teams in global settings

Interested parties please send full resume with employment history and expected salary to HRA Department, Yusen Logistics Global Management (Hong Kong) Limited by email.

Yusen Logistics Global Management (Hong Kong) Limited is an equal opportunity employer. All information collected will be used for recruitment purpose only.

About Yusen Logistics

Yusen Logistics is working to become the world's preferred supply chain logistics company. Our complete offer is designed to forge better connections between businesses, customers and communities – through innovative supply chain management, freight forwarding, warehousing and distribution services. As a company we're dedicated to a culture of continuous improvement, ensuring everyone who works with us is committed, connected and creative in making us the world's preferred choice.

Senior / Data Analyst (Data Management - External Data)

$60,000 – $80,000 per year · Bank of China (Hong Kong) Limited

Posted today

Job Description

Responsibilities:

  • Handle, integrate and govern external data procurement requests
  • Lead and support data management activities in the external data acquisition process
  • Support vendor management (service quality and performance monitoring)
  • Document external data sources, usage and other information to support management reporting
  • Support other data management tasks as assigned

Requirements:

  • Degree or above in finance, computer science, IT management, statistics, business administration, project management or relevant disciplines
  • Minimum 3 years' working experience, preferably in data management, particularly external data
  • Experience in external data procurement would be desirable
  • Excellent communication skills and a team-oriented spirit
  • A can-do attitude and attention to detail
  • Familiar with Microsoft Office (including Chinese typing)

Candidates with less experience will be considered for the Associate Data Analyst position.

Data Scientist (Marketing Data Science)

Klook

Posted 3 days ago

Job Description

Join to apply for the Data Scientist (Marketing Data Science) role at Klook.

About Klook

We are Asia’s leading platform for experiences and travel services, and we believe that we can help bring the world closer together through experiences. Founded in 2014 by three travelers, Klook inspires and enables moments of joy for travelers with over half a million curated experiences across 2,700 destinations. Our international community includes over 1,800 employees in 30+ locations, guided daily by our 6 core values: Customer First, Push Boundaries, Critical Thinking, Build for Scale, Less is More, Win as One.

We never settle and strive to achieve greater heights in the dynamic travel era. If you share our belief in the wonders of travel, consider joining us.

What You'll Do
  • Develop and implement advanced marketing measurement models, including Media Mix Modeling (MMM) and Multi-Touch Attribution (MTA), to evaluate and improve ROI across channels (search, social, display, etc.); a minimal MMM sketch follows this list.
  • Analyze campaign performance using data from Google Analytics, Meta Ads (Facebook/Instagram), and other sources to provide insights on channel effectiveness and customer behavior.
  • Apply view-through attribution modeling for display and social campaigns to credit conversions from ad views (not just clicks).
  • Design and execute experiments (e.g., A/B tests) to measure campaign lift and optimize marketing spend.
  • Collaborate with cross-functional teams (Marketing, Data, Product) to translate findings into strategic recommendations and actionable plans.
  • Communicate insights and reports to stakeholders to guide data-driven decision-making for marketing and budgeting.
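
To make the MMM bullet above concrete, here is a toy media-mix sketch on synthetic data: channel spend is carried over with a geometric adstock, and ordinary least squares recovers each channel's effect. This is a minimal illustration only, not Klook's actual measurement stack, and the decay rate and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def adstock(spend, decay=0.5):
    """Geometric adstock: carry over a fraction of past spend effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

# Synthetic weekly spend for two channels and the conversions they drive.
search = rng.uniform(0, 100, 52)
social = rng.uniform(0, 100, 52)
conversions = 50 + 0.8 * adstock(search) + 0.3 * adstock(social) + rng.normal(0, 5, 52)

# Ordinary least squares on the adstocked spends recovers channel effects.
X = np.column_stack([np.ones(52), adstock(search), adstock(social)])
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)
print("base, search, social:", np.round(coef, 2))
```

In a production MMM these effects would be estimated with more care (saturation curves, seasonality, Bayesian priors), but the regression-on-adstocked-spend core is the same idea.
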
What You'll Need
  • Education & Experience: Bachelor’s degree or higher in Statistics, Data Science, Marketing, or related field. Minimum 5 years of relevant experience in marketing analytics or data science.
  • Technical Skills: Proficiency in Python, R, and SQL for data analysis and model development. Hands-on experience with Google Analytics and Meta Ads platforms.
  • Domain Expertise: Demonstrated expertise in MMM, MTA, or view-through attribution modeling to assess marketing performance. Familiarity with digital advertising metrics and attribution models.
  • Analytical & Communication: Strong statistical analysis skills and the ability to derive clear insights from complex data. Excellent communication skills to present findings to both technical and non-technical stakeholders.
  • Teamwork: Ability to work in an in-office, cross-functional team environment. Self-motivated with strong problem-solving skills and attention to detail.

Klook is proud to be an equal opportunity employer. We hire talented and passionate people of all backgrounds and are dedicated to creating a welcoming and supportive culture where everyone belongs.

Note: Klook does not accept unsolicited resumes from agencies. Any submission must follow our internal processes and agreements.

Data Engineer / Senior Data Engineer

Omnicom Media Group Hong Kong

Posted 3 days ago

Job Description

Overview

Reporting to the Data & Analytics Director, this position is for a Data Engineer who is passionate about building robust, scalable data solutions and lightweight AI applications. While our ecosystem is built on the Google Cloud Platform (GCP), we value strong engineering fundamentals and welcome candidates with experience in similar technologies from other cloud environments (like AWS or Azure).

You will work closely with data analysts, media teams and business stakeholders to build the foundational technology that drives business growth and operational efficiency. A key focus of this role will be developing and deploying lightweight internal tools on Google Cloud Run that are powered by Generative AI. This role offers the opportunity to directly collaborate with clients and vendors, implementing data & analytics strategies for OMG’s clients. You will contribute to our mission of empowering our agencies with advanced data solutions.

Key Responsibilities
  • Data Pipeline Architecture & Development: Design, build, and maintain resilient and scalable ETL/ELT pipelines on GCP to process data and load it into BigQuery.
  • Workflow Automation & Solution Design: Proactively identify opportunities to automate day-to-day workflows and repetitive tasks across the business. Design and implement automated solutions that reduce manual effort, increase efficiency, and allow teams to focus on higher-value activities.
  • Develop Lightweight AI-Powered Tools: Build simple, internal-use web tools using Python frameworks (e.g., Streamlit, Flask). The role involves writing scripts and developing lightweight applications that integrate Generative AI models (e.g., Google's Gemini via Vertex AI) to support tasks like natural language querying, report summarization, and basic insight generation; a minimal sketch follows this list.
  • Application Deployment: You will be responsible for containerizing these AI-powered applications with Docker and deploying them on Google Cloud Run, our primary service for hosting container-based applications and APIs.
  • Data Governance & Quality: Implement and automate data quality checks to ensure the accuracy and consistency of data within our BigQuery data warehouse.
  • Technical Strategy & Innovation: Lead the exploration and implementation of Generative AI use cases within our data platform. You will evaluate new models and services to build innovative solutions that create tangible business value.
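
As a rough sketch of the "lightweight AI-powered tools" described above, the snippet below outlines a Streamlit app. The LLM call is deliberately stubbed out: in a real deployment it would go through a hosted model such as Gemini on Vertex AI. This is an assumption-laden illustration, not OMG's actual tooling, and the `summarize` helper is hypothetical.

```python
import streamlit as st

def summarize(report_text: str) -> str:
    # Placeholder: in a real deployment this would call a hosted LLM,
    # e.g. Gemini via the Vertex AI SDK. Kept as a stub so the sketch
    # runs locally without cloud credentials.
    return report_text[:200] + "..."

st.title("Report summarizer (internal tool sketch)")
text = st.text_area("Paste a campaign report")
if st.button("Summarize") and text:
    st.write(summarize(text))
```

Run locally with `streamlit run app.py`; a tool like this would then be containerized with Docker and deployed on Cloud Run, per the Application Deployment responsibility above.
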
Expected Qualifications
  • 3+ years of experience in a data engineering or similar software engineering role.
  • Strong programming skills in Python, with experience using data-related libraries (e.g., Pandas, Polars).
  • Proven experience with at least one major cloud platform (GCP, AWS, or Azure), with a willingness to specialize in the GCP ecosystem.
  • Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related technical field is a plus.
Technical Proficiency & Our Stack

While direct experience with our specific tools is a plus, we value transferable skills and a strong foundation in equivalent technologies.

  • Data Warehousing: Google BigQuery (Equivalent experience: Snowflake, Amazon Redshift, Azure Synapse)
  • Data Processing & Orchestration: Cloud Composer (Airflow), Cloud Dataflow (Apache Beam), and Cloud Functions (Equivalent experience: AWS Lambda, Azure Functions)
  • Application & API Deployment: Google Cloud Run, using Docker for containerization. (Equivalent experience: Kubernetes, AWS Fargate, Azure Container Apps)
  • Generative AI: Experience or strong interest in integrating large language models (LLMs) via APIs (e.g., Google Vertex AI, OpenAI).
  • Web Application Development: Experience in building lightweight data applications or internal tools with Python frameworks (e.g., Streamlit, Flask).
  • Domain Knowledge: Familiarity with digital marketing tools and ad platforms (e.g., Google Ads, Meta Ads, Google Analytics) is a plus.
Who You Are
  • Analytical Mindset: You have strong analytical and problem-solving abilities to tackle complex data challenges.
  • Excellent Communicator: You can effectively partner with both technical and non-technical stakeholders to translate business needs into technical solutions.
  • Strong Sense of Project Ownership: You can take technical projects from conception to completion with autonomy and accountability.
Employment Type
  • Full-time
Job Function
  • Engineering, Project Management, and Information Technology
Industries
  • Advertising Services

Data Analyst/ Data Analytics Specialist

$600,000 – $1,200,000 per year · Optimum Solutions (Singapore) Pte Ltd

Posted today

Job Description

Responsibilities
  • Work in the Customer Data InnoStation to assist business teams in data exploration.
  • Understand available datasets on travel patterns, loyalty programs, app browsing history, and video analytics.
  • Facilitate business teams in utilizing datasets for ETL, data visualization, and segmentation (a brief segmentation sketch follows this list).
  • Support supervisors and managers in conducting data analytics, preparing reports, and maintaining dashboards.
  • Assist in operating and monitoring the Customer Data InnoStation, including activity logging and usage monitoring.
  • Assist in the development of innovative data-driven services.
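
As a hypothetical illustration of the segmentation work mentioned above, the sketch below clusters synthetic customer features with scikit-learn's KMeans. The features and segment count are invented for demonstration and do not reflect the Customer Data InnoStation's actual datasets.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic customer features: trips per year and app sessions per week.
features = np.column_stack([
    rng.poisson(4, 300),   # travel frequency
    rng.poisson(10, 300),  # app engagement
]).astype(float)

# Three illustrative segments; in practice k would be chosen with business input.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for label in range(3):
    seg = features[model.labels_ == label]
    print(f"segment {label}: n={len(seg)}, mean trips={seg[:, 0].mean():.1f}, "
          f"mean sessions={seg[:, 1].mean():.1f}")
```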
Requirements
  • Degree in Computer Science, Data Science, Information Technology, or related disciplines.
  • Strong interpersonal and communication skills in both written and spoken English and Chinese.
  • 3+ years of hands-on experience in data analytics is preferred.
  • Experience with one or more of the following tools and technologies is an advantage:
      • Data visualization tools (e.g., Power BI)
      • Database programming (e.g., SQL)
      • Programming languages (e.g., Python, R)
      • Frontend web technologies (e.g., HTML5, web scraping)
      • Process automation (e.g., Power Automate)

Data Analyst, Data Service Team

$900,000 – $1,200,000 per year · Yusen Logistics Global Management (Hong Kong) Limited

Posted today

Job Description

  • Create and implement analytical models that yield meaningful insights to promote business growth and reduce risk.
  • Explore various data sources and perform exploratory data analysis to identify relevant data for addressing specific challenges and business needs. Create valuable features that enhance analytics capabilities (a short example follows this list).
  • Collaborate with stakeholders to propose and assess actions for performance improvement.
  • Build business analytics capabilities aligned with identified use cases.
  • Develop visualizations and dashboards to effectively communicate analytics results to stakeholders.
  • Act as a persuasive storyteller, translating analytics findings into clear, understandable language for stakeholders.
  • Stay up to date with the latest Microsoft Fabric / Power BI features and functionalities.
  • Offer technical and functional support to internal users for BI tools and reporting solutions.
  • Collaborate with the data engineering and governance teams to adhere to schedules, fulfill requirements, and address any changes, issues, or risks that may arise.
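
For a flavor of the exploratory analysis and feature creation described above, here is a minimal pandas sketch on invented shipment records; the schema is hypothetical and not Yusen's actual data.

```python
import pandas as pd

# Illustrative shipment records; fields are hypothetical.
shipments = pd.DataFrame({
    "lane":         ["HK-TYO", "HK-TYO", "HK-SIN", "HK-SIN", "HK-LAX"],
    "transit_days": [3, 5, 2, 2, 11],
    "on_time":      [1, 0, 1, 1, 0],
})

# Quick exploratory summary per lane.
summary = shipments.groupby("lane").agg(
    shipments=("on_time", "size"),
    on_time_rate=("on_time", "mean"),
    avg_transit=("transit_days", "mean"),
)

# Example engineered feature: flag lanes with below-par reliability.
summary["at_risk"] = summary["on_time_rate"] < 0.9
print(summary)
```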

Requirements

  • Bachelor's degree in Business Analytics, Mathematics, Statistics, Computer Science, Information Engineering, or related disciplines.
  • 5+ years of experience in a data analysis or business intelligence role, with a strong understanding of business operations and data requirements.
  • Sound knowledge of statistics and data analytics techniques such as predictive analytics, prescriptive analytics, and machine learning.
  • Proficient in SQL, Python and/or R to extract, clean, and manipulate data from various sources for analysis and reporting.
  • Experience with Azure Databricks and using AI for data analytics would be an advantage, but is not a must.
  • Experience developing, maintaining, and managing advanced Power BI dashboards, visualizations, and reports that provide key business insights.
  • Ability to analyze large datasets to identify trends, patterns, and business insights that inform strategy and decision-making.
  • Experience in data modeling and analysis techniques, with the ability to understand and construct critical business factors and characteristics.
  • Proven ability to manage project development from inception to deployment, demonstrating leadership and collaboration in a team-oriented environment.
  • Proven experience as a data analyst delivering successful data products.
  • Ability to navigate a complex environment and drive outcomes with stakeholders.
  • Energetic, proactive, and adaptive problem solver; organized, flexible, and comfortable navigating ambiguity.
  • Proficient in English, with excellent verbal and written communication.

Senior / Data Analyst (Data Management)

$60,000 – $80,000 per year · Bank of China (Hong Kong) Limited

Posted today

Job Description

Responsibilities:

  • Lead and support data management activities in the areas of data security, data standards, data dictionary management, etc.
  • Work with data stewards in various business areas to drive data quality and customer data management
  • Conduct research for regulatory requirements and best market practices of data governance
  • Support other data management tasks as assigned

Requirements:

  • Degree or above in computer science, IT management, statistics, business administration or relevant disciplines
  • Minimum 3 years' working experience, preferably in the data management field.
  • Familiarity with Microsoft Office (including Chinese typing); SQL / Python preferred.
  • Candidates with less experience will be considered for the Associate Data Analyst position.

Manager, Data Engineer, Data Transformation

$180,000 – $250,000 per year · Prudential Hong Kong

Posted today

Job Description

Prudential's purpose is to be partners for every life and protectors for every future. Our purpose encourages everything we do by creating a culture in which diversity is celebrated and inclusion assured, for our people, customers, and partners. We provide a platform for our people to do their best work and make an impact on the business, and we support our people's career ambitions. We pledge to make Prudential a place where you can Connect, Grow, and Succeed.

Job Responsibilities

  • Design, build, and maintain scalable and efficient ETL/ELT pipelines in Azure Databricks to process structured, semi-structured, and unstructured insurance data from multiple internal and external sources.
  • Collaborate with data architects, modellers, analysts, and business stakeholders to gather data requirements and deliver fit-for-purpose data assets that support analytics, regulatory, and operational needs.
  • Develop, test, and optimize data transformation routines, batch and streaming solutions (leveraging tools such as Azure Data Factory, Data Lake Storage Gen2, Azure Event Hubs, and Kafka) to ensure timely and accurate data delivery.
  • Implement rigorous data quality, validation, and cleansing procedures, with a focus on enhancing reliability for high-stakes insurance use cases, reporting, and regulatory outputs (a brief validation sketch follows this list).
  • Integrate Informatica tools to facilitate data governance, including the capture of data lineage, metadata, and data cataloguing as required by regulatory and business frameworks.
  • Ensure robust data security by following best practices for RBAC, managed identities, encryption, and compliance with Hong Kong's PDPO, GDPR, and other relevant regulatory requirements.
  • Automate and maintain deployment pipelines using GitHub Actions to ensure efficient, repeatable, and auditable data workflows and code releases.
  • Conduct root cause analysis, troubleshoot pipeline failures, and proactively identify and resolve data quality or performance issues.
  • Produce and maintain comprehensive technical documentation for pipelines, transformation rules, and operational procedures to ensure transparency, reuse, and compliance.
  • Apply subject matter expertise in Hong Kong Life and General Insurance to ensure that development captures local business needs and industry-specific standards.
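
As a generic illustration of the validation step referenced above, the sketch below runs two simple batch checks in PySpark on invented policy rows. The schema and rules are assumptions for demonstration, not Prudential's actual pipeline; in production, failing batches could be quarantined before loading to the curated zone.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("policy-quality-check").getOrCreate()

# Illustrative policy records; the schema is hypothetical.
policies = spark.createDataFrame(
    [("P001", "LIFE", 1200.0), ("P002", "GI", None), ("P002", "GI", 800.0)],
    ["policy_no", "line_of_business", "annual_premium"],
)

# Validation rules: premiums must be present, and policy numbers unique.
null_premium = policies.filter(F.col("annual_premium").isNull()).count()
dupes = (policies.groupBy("policy_no").count()
         .filter(F.col("count") > 1).count())

print(f"rows with missing premium: {null_premium}, "
      f"duplicated policy numbers: {dupes}")
spark.stop()
```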

Job Requirements

  • Bachelor's degree in Information Technology, Computer Science, Data Engineering, or a related discipline.
  • 6+ years of experience as a data engineer, building and maintaining ETL/ELT processes and data pipelines on Azure Databricks (using PySpark or Scala), with a focus on structured, semi-structured, and unstructured insurance data.
  • Strong experience orchestrating data ingestion, transformation, and loading workflows using Azure Data Factory and Azure Data Lake Storage Gen2.
  • Advanced proficiency in Python and Spark for data engineering, data cleaning, transformation, and feature engineering in Databricks for analytics and machine learning.
  • Experience integrating batch and streaming data sources via Kafka or Azure Event Hubs for real-time or near-real-time insurance applications.
  • Hands-on use of Informatica for data quality, lineage, and governance to support business and regulatory standards in insurance.
  • Familiarity with automation and CI/CD of Databricks workflows using GitHub Actions.
  • Understanding of data security, RBAC, Key Vault, encryption, and best practices for compliance in the insurance sector.
  • Experience optimizing data pipelines to support ML workflows and BI/reporting tools.
  • Excellent command of English and Chinese – both written and spoken.

Prudential is an equal opportunity employer.
We provide equality of opportunity and benefits for all who apply and who perform work for our organisation irrespective of sex, race, age, ethnic origin, educational, social and cultural background, marital status, pregnancy and maternity, religion or belief, disability or part-time / fixed-term work, or any other status protected by applicable law. We encourage the same standards from our recruitment and third-party suppliers taking into account the context of grade, job and location. We also allow for reasonable adjustments to support people with individual physical or mental health requirements.

Manager, Data Engineer, Data Transformation

$1,200,000 – $2,400,000 per year · Prudential Plc

Posted today

Job Description

Prudential's purpose is to be partners for every life and protectors for every future. Our purpose encourages everything we do by creating a culture in which diversity is celebrated and inclusion assured, for our people, customers, and partners. We provide a platform for our people to do their best work and make an impact on the business, and we support our people's career ambitions. We pledge to make Prudential a place where you can Connect, Grow, and Succeed.

Job Responsibilities

  • Design, build, and maintain scalable and efficient ETL/ELT pipelines in Azure Databricks to process structured, semi-structured, and unstructured insurance data from multiple internal and external sources.
  • Collaborate with data architects, modellers, analysts, and business stakeholders to gather data requirements and deliver fit-for-purpose data assets that support analytics, regulatory, and operational needs.
  • Develop, test, and optimize data transformation routines, batch and streaming solutions (leveraging tools such as Azure Data Factory, Data Lake Storage Gen2, Azure Event Hubs, and Kafka) to ensure timely and accurate data delivery.
  • Implement rigorous data quality, validation, and cleansing procedures—with a focus on enhancing reliability for high-stakes insurance use cases, reporting, and regulatory outputs.
  • Integrate Informatica tools to facilitate data governance, including the capture of data lineage, metadata, and data cataloguing as required by regulatory and business frameworks.
  • Ensure robust data security by following best practices for RBAC, managed identities, encryption, and compliance with Hong Kong's PDPO, GDPR, and other relevant regulatory requirements.
  • Automate and maintain deployment pipelines using GitHub Actions to ensure efficient, repeatable, and auditable data workflows and code releases.
  • Conduct root cause analysis, troubleshoot pipeline failures, and proactively identify and resolve data quality or performance issues.
  • Produce and maintain comprehensive technical documentation for pipelines, transformation rules, and operational procedures to ensure transparency, reuse, and compliance.
  • Apply subject matter expertise in Hong Kong Life and General Insurance to ensure that development captures local business needs and industry-specific standards.

Job Requirements

  • Bachelor's degree in Information Technology, Computer Science, Data Engineering, or a related discipline.
  • 6+ years of experience as a data engineer, building and maintaining ETL/ELT processes and data pipelines on Azure Databricks (using PySpark or Scala), with a focus on structured, semi-structured, and unstructured insurance data.
  • Strong experience orchestrating data ingestion, transformation, and loading workflows using Azure Data Factory and Azure Data Lake Storage Gen2.
  • Advanced proficiency in Python and Spark for data engineering, data cleaning, transformation, and feature engineering in Databricks for analytics and machine learning.
  • Experience integrating batch and streaming data sources via Kafka or Azure Event Hubs for real-time or near-real-time insurance applications.
  • Hands-on use of Informatica for data quality, lineage, and governance to support business and regulatory standards in insurance.
  • Familiarity with automation and CI/CD of Databricks workflows using GitHub Actions.
  • Understanding of data security, RBAC, Key Vault, encryption, and best practices for compliance in the insurance sector.
  • Experience optimizing data pipelines to support ML workflows and BI/reporting tools.
  • Excellent command of English and Chinese – both written and spoken.

Prudential is an equal opportunity employer. We provide equality of opportunity and benefits for all who apply and who perform work for our organisation irrespective of sex, race, age, ethnic origin, educational, social and cultural background, marital status, pregnancy and maternity, religion or belief, disability or part-time / fixed-term work, or any other status protected by applicable law. We encourage the same standards from our recruitment and third-party suppliers taking into account the context of grade, job and location. We also allow for reasonable adjustments to support people with individual physical or mental health requirements.

