Warning
This job advertisement has expired and applications have closed.
Machine Learning Compiler Engineer
| Date posted | 11 December 2024 |
|---|---|
| Salary | £45,000 to £70,000 per year |
| Hours | Full time |
| Closing date | 10 January 2025 |
| Location | Birmingham, West Midlands |
| Remote working | Hybrid - remote work up to 3 days per week |
| Company | Devi Technologies |
| Job type | Permanent |
| Job reference | |
Summary
Responsibilities:
Design and develop machine learning compilers that optimize performance on various hardware platforms.
Implement advanced compiler techniques for optimizing deep learning models and AI workloads.
Collaborate with AI researchers and engineers to integrate new machine learning models with compiler tools.
Work on optimizing model inference time and memory usage for deployment on edge devices and cloud infrastructure.
Contribute to the development of compilers that support model training, inference, and multi-platform execution.
Requirements:
Strong experience in compiler design, optimization techniques, and performance tuning.
Proficiency in programming languages like C++, Python, and CUDA for GPU-based optimizations.
Experience with machine learning frameworks like TensorFlow, PyTorch, or similar.
Familiarity with hardware acceleration (e.g., GPUs, TPUs) and optimization for cloud and edge deployments.
Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related field.