About the role
Data Engineer, Newcastle (Hybrid)
Engineer scalable data platforms.
Apply promptly! A high volume of applicants is expected for this role, so do not wait to send your CV.
Power analytics and AI at scale.
An established global consultancy is looking for a Data Engineer to join its Advanced Technology team.
You'll play a key role in building robust, scalable data pipelines that underpin analytics and AI solutions.
This role is ideal for someone who enjoys working with real-time data, distributed systems, and modern cloud architectures.
What you'll be doing
- Design and build scalable data pipelines (batch and streaming)
- Work with technologies such as Kafka, Flink, Spark, and AWS services
- Develop real-time, event-driven data solutions
- Contribute to data architecture and modelling best practices
- Implement CI/CD pipelines and Infrastructure-as-Code
- Collaborate with analytics and AI teams to deliver clean datasets

What we're looking for
- Strong programming skills in Java (preferred) or Python
- Experience with streaming technologies (Kafka/Flink/Spark)
- Knowledge of distributed systems and large-scale data processing
- Experience with cloud platforms (AWS preferred)
- Familiarity with DevOps tools and containerisation (Docker/Kubernetes)

The offer
- Hands-on exposure to modern data engineering at scale
- Work within a collaborative, innovation-led environment
- Clear progression and upskilling opportunities
- Hybrid working (Newcastle-based)

Requirements: 3 years' experience and eligibility for UK security clearance.
About this listing
Screened by Joboru
This role passed our automated spam and quality filters and was active in our feed when last checked. Joboru is an aggregator — here is how we screen listings. If anything looks off, tell us.
Similar jobs you may like
Jib Electrician
1 day ago · Odin Recruitment Group
Enterprise Architect
1 day ago · Yolk Recruitment Ltd
Systems Design Integrator
1 day ago · AMS CWS
Project Manager
1 day ago · Ambis Resourcing
Senior Electronics And Software Product Engineer
1 day ago · Redline Group Ltd
Space Mission Systems Engineer
1 day ago · BAE Systems
Service Desk Engineer
1 day ago · Context Recruitment Limited
Head of Data Warehouse / Data Engineering - Insurance
1 day ago · I3 Resourcing Limited
C#/.NET Full Stack Engineer
1 day ago · 83zero Ltd