
Data Engineer

Job details
Posting date: 13 May 2024
Salary: £39,000 to £45,000 per year
Hours: Full time
Closing date: 12 June 2024
Location: RG2 6UB
Remote working: Hybrid - work remotely up to 3 days per week
Company: Olive Technologies Ltd
Job type: Permanent
Job reference: Olive02


Summary


Job Description: Data Engineer
We are currently looking for a Data Engineer for our Cloud and Enterprise Information
Management Practice. The role involves developing architectural models for Cloud-based
data management solutions, leveraging Microsoft Azure, AWS, GCP, and Snowflake
technologies, that operate at large scale and high performance.
MAIN DUTIES AND RESPONSIBILITIES
1. Responsible for the functional design requirements of a cloud-based Data Management
solution; design conceptual, logical, and physical data models that meet current and
future business needs.
2. Provide and support Cloud and Data Management environments; able to deep-dive and
identify the root cause of issues.
3. Evaluate and plan data warehouse (DWH) migrations to the Cloud.
4. Manage the data engineering roadmap and help move the organisation towards an
automated, scalable, and fault-tolerant infrastructure.
5. Write and build data pipelines and data lakes, manage ETL processes, and perform a
range of transformations (see the sketch after this list).
6. Understand relational and big data models to both store data and access it from data
visualization and other query tools.
7. Build data management platforms using Cloud technologies such as Azure Data Factory,
Data Lake, Databricks, Cosmos DB, Blob Storage, Redshift, Lambda, RDS, S3, and EC2.

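To illustrate the pipeline and ETL work in duty 5, a minimal sketch in Python, assuming pandas is available (and pyarrow for Parquet output); the file names, column names, and transformation are hypothetical:

```python
# Minimal ETL sketch: extract from CSV, transform, load to Parquet.
# File names and column names are hypothetical examples.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Extract: read the raw source data.
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transform: drop incomplete rows and normalise the amount column.
    df = df.dropna(subset=["order_id", "amount"])
    df["amount"] = df["amount"].astype(float).round(2)
    return df

def load(df: pd.DataFrame, path: str) -> None:
    # Load: write the cleaned data in a columnar format for the lake.
    df.to_parquet(path, index=False)  # requires pyarrow or fastparquet

if __name__ == "__main__":
    load(transform(extract("raw_orders.csv")), "orders_curated.parquet")
```
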
Required Skills and Experience
• Experience in Data Architecture, Data Management, and the Analytical Technologies of
modern Cloud data platforms.
• Experience with Extract, Transform & Load (ETL) and ELT development is required.
• Familiarity with and a good understanding of Data Models (relational and dimensional)
is required.
• Proven expertise in Data Modeling, Data Profiling, Data Analysis, Data Quality, Data
Governance and Data Lineage.
• Experience in leveraging Snowflake for building data lakes / data warehouses is
highly preferred.
• Excellent SQL skills, along with ETL/data processing tools such as Informatica,
Talend, Pentaho, Databricks, Spark, and Alteryx.
• Experience with Snowflake utilities such as SnowSQL, Snowpipe, Python, Tasks,
Streams, Time Travel, the Optimizer, Metadata Manager, data sharing, and stored
procedures (see the Snowflake sketch after this list).
• Experience in Programming Languages such as Python.
• Experience using Python libraries for Data Mining, Data Modeling and Processing, and
Data Visualization.
• Experience in SQL / NoSQL databases such as Oracle, MS SQL Server, Cassandra,
MongoDB, HBase, Zen, Elastic Stack, CouchDB, DynamoDB, and others.
• Experience with data pipelines such as Apache Kafka or Apache Airflow (see the
Airflow sketch after this list).
• Experience in Big Data Ecosystem (Apache Hadoop, Spark, Kafka) and/or the IaaS or
PaaS Ecosystem (Microsoft Azure, Google Cloud, AWS).
• Should be strong in Azure Data Factory, Data Lake, Databricks, Cosmos DB, Blob
Storage, Redshift, Lambda, RDS, S3, EC2, Kinesis, AWS/Azure/Snowflake Data Warehouse,
and other services.
• Experience in BI tools such as Tableau, Power BI.
• Experience in Continuous Integration and Delivery.
• Experience with JIRA/Confluence and source control environments such as Git and
GitHub.
• Experience with software development in a Windows, Linux/Unix environment.
• Experience in Agile methodologies (Kanban and Scrum).
• Ability to work autonomously and with small cross-functional teams.
• Communication skills, including technical documentation, the development and delivery
of demonstrations and presentations, and the ability to listen and communicate
effectively with functional leaders.
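
To illustrate the Snowflake utilities bullet above, a minimal sketch using the snowflake-connector-python package; the connection parameters, table name, and stream name are placeholders:

```python
# Minimal Snowflake sketch: a change-tracking Stream plus a Time Travel query.
# Connection parameters and object names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
cur = conn.cursor()
try:
    # A Stream records inserts, updates, and deletes on its source table.
    cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders")
    # Reading the stream returns the change rows accumulated so far.
    cur.execute("SELECT * FROM orders_stream")
    print(cur.fetchmany(10))
    # Time Travel: query the table as it looked five minutes ago (offset in seconds).
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -300)")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```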
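
Similarly, for the data pipelines bullet, a minimal Apache Airflow sketch; the DAG id, task ids, and callables are hypothetical:

```python
# Minimal Apache Airflow sketch: a daily two-step extract/transform DAG.
# The dag_id, task ids, and callables are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pulling source data")  # placeholder for real extraction logic

def transform() -> None:
    print("cleaning and reshaping")  # placeholder for real transformation logic

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs after extract succeeds
```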

Apply for this job