This job advert has expired and applications have closed.
Data Engineer
Posting date: 04 October 2024
Salary: Not specified
Additional salary information: Performance bonus
Hours: Full time
Closing date: 03 November 2024
Location: Bristol, South West England
Remote working: Hybrid - work remotely up to 3 days per week
Company: Worldwide Online Ltd
Job type: Permanent
Job reference: Data Engineer Sysnia
Summary
Job Description:
We are seeking a skilled Data Engineer to join our dynamic team. The ideal candidate will play a critical role in building, optimizing, and maintaining the infrastructure and pipelines that enable efficient data processing and analysis.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes to collect, process, and store large datasets from various sources.
- Develop, optimize, and manage data architecture that supports business intelligence, analytics, and cloud data migration.
- Collaborate with cross-functional teams to define data requirements and ensure high data quality and integrity.
- Implement and manage data warehouses, data lakes, and other data storage solutions.
- Work with structured and unstructured data, ensuring it is accessible, consistent, and secure.
- Monitor and troubleshoot data pipelines to resolve any issues with data ingestion and processing.
- Continuously improve the performance, scalability, and efficiency of data systems and pipelines.
- Stay updated on new technologies, tools, and methodologies in data engineering and implement relevant advancements.
Requirements:
- Bachelor's degree in IT, Data Engineering, or a related field.
- 3+ years of experience in data engineering or a related field, ideally with Snowflake or Databricks.
- Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL).
- Experience with cloud platforms (AWS, Azure, or Google Cloud) and data storage technologies (e.g., S3, Redshift, BigQuery).
- Good programming skills in Python, Java, or Scala.
- Experience with data pipeline and workflow management tools (e.g., Airflow, dbt, Terraform, Luigi).
- Excellent problem-solving and debugging skills.
- Strong understanding of data modeling, ETL best practices, and performance tuning.
Preferred Qualifications:
- Experience with real-time data processing and streaming architectures.
- Knowledge of containerization (Docker, Kubernetes).
- Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
- Knowledge of machine learning pipelines and model deployment is a plus.
Join us to shape the future of data-driven decision-making!