At CERN, the European Organisation for Nuclear Research, physicists and engineers are probing the fundamental structure of the universe. Using the world's largest and most complex scientific instruments, they study the basic constituents of matter: fundamental particles that are made to collide at close to the speed of light. The process gives physicists clues about how particles interact, and provides insights into the fundamental laws of nature.
Introduction
Are you an experienced C++ programmer with a passion for computing hardware and emerging technologies? The ATLAS Data-Acquisition team operates and maintains a state-of-the-art heterogeneous computing infrastructure with in-house developed high-performance software. Join the unique challenge of managing, evolving and upgrading all aspects of the distributed data-conveyance and storage components. Make use of your skills to meet the demanding performance and efficiency requirements of one of the largest particle-physics experiments in the world.
You will join the:
Experimental Physics Department (EP) which carries out basic research in the field of experimental physics. It aims at providing a stimulating scientific atmosphere and remains an important reference centre for the European physics community. It contributes to the education and training of young scientists.

ATLAS Trigger and Data Acquisition Group (ADT) which has major responsibilities in the ATLAS trigger and data acquisition system. Its activities span every aspect, from development and operation of custom trigger hardware to high-level physics selection algorithms via data conveyance and control infrastructures.

Data Acquisition (DQ) Section of the ATLAS experiment, which is in charge of developing, installing, maintaining, operating and upgrading the DAQ system for the overall detector. This position is specifically in the DataFlow area of the DAQ system.

The DataFlow infrastructure includes distributed high-performance software, operating over a large heterogeneous computer system interconnected by a high-speed network, as well as a storage system as large as 10 PB. The DataFlow infrastructure is responsible for reliable and effective conveyance and storage of physics data from the detector electronics to the off-line computing facilities.
Functions
As a Computing Engineer you will work on the operation and maintenance of the existing data-conveyance infrastructure and, more importantly, on the development, procurement, installation and commissioning of the upgraded system.
In particular you will:
Take part in the operation and maintenance of the DataFlow hardware and software infrastructures. This involves the active follow-up of operational problems and requirements, including performance and reliability reach.

Contribute to the hardware upkeep, replacement and upgrade of DAQ components. This involves hardware evaluation and testing and fostering contacts with major IT manufacturers. You will participate in the installation and (de-)commissioning of large computing infrastructures, including as many as several thousand servers.

Take on responsibilities in the ATLAS data-acquisition software, in particular concerning the implementation, testing and performance reach of the next-generation high-throughput distributed data-conveyance infrastructure. It is anticipated that the system will be required to efficiently and reliably transport about 50 TB/s of physics data.

Participate in the design, procurement, installation and commissioning of a multi-PB storage system, operating at roughly 100 GB/s aggregate throughput.

In due time, participate in the overall DAQ and ATLAS integration and commissioning, as well as in beam operations. You will join the team of DAQ experts that assures the smooth functioning of the DAQ system 24 hours a day, seven days a week.

Qualification:

Master's degree or equivalent relevant experience in the field of Physics, Computer Science or Engineering, or a related field.
Experience:
The experience required for this post is:
Multi-threading, parallel and networking programming techniques in C++ on distributed systems;

Good knowledge in the use of tools and methods that support all phases of the software life cycle, in particular design, coding and testing, on the Linux operating system;

Performance profiling, tuning and optimisation, especially oriented towards data throughput on networked and/or storage systems;

Practical knowledge of computing hardware and architectures, and networking equipment, as well as Linux OS internals, with emphasis on the storage and networking stack.

Experience that is considered an advantage:
Configuration, operation and optimisation of storage systems;

Work in the context of large-scale computing organisations or High-Energy Physics experiments.

Technical competencies:
Knowledge and application of software life-cycle tools and procedures (including integration, build and test): code repositories (Git), CI/CD frameworks;

Knowledge of programming techniques and languages: parallel and distributed programming, C++, the Linux operating system, scripting languages (Python/shell);

Testing, diagnosing and optimisation of software: proven record of end-to-end performance optimisation, in particular for high-throughput I/O applications, including tuning of the Linux OS network and storage stacks;

Knowledge of communication technologies and protocols: practical understanding of the TCP/IP protocol and its nuances; knowledge of RDMA technologies would be beneficial;

Re-use, refactoring, integration and porting of existing software.

Behavioural competencies:
Achieving results: having a structured and organised approach towards work; being able to set priorities and plan tasks with results in mind; defining clear objectives, milestones and deliverables before initiating work/projects; following through on new ideas and innovations, planning and implementing their application.

Demonstrating flexibility: actively participating in the implementation of new processes and technologies; demonstrating openness to new ideas and situations; readily absorbing new techniques and working practices; proposing new or improved ways of working.

Learning and sharing knowledge: keeping up-to-date with developments in own field of expertise, and readily absorbing new information; seeking feedback from colleagues and other stakeholders about ways of increasing competence.

Solving problems: addressing complex problems by breaking them down into manageable components; adopting a pragmatic approach; understanding the value of adopting generic rather than 'gold-plated' technical solutions; identifying, defining and assessing problems, taking action to address them.

Working in teams: building and maintaining constructive and effective work relationships; debating at the table and engaging in constructive confrontation of ideas; being ready to concede in the interest of the team; seeking agreement.

Language skills:
Spoken and written English, with a commitment to learn French.
Eligibility and closing date:
Diversity has been an integral part of CERN's mission since its foundation and is an established value of the Organisation. Employing a diverse workforce is central to our success. We welcome applications from all Member States and Associate Member States.
This vacancy will be filled as soon as possible, and applications should normally reach us no later than 17.07.2025 at 23:59 CEST.
Employment Conditions
Contract type: Limited duration contract (5 years). Subject to certain conditions, holders of limited-duration contracts may apply for an indefinite position.
Working Hours: 40 hours per week
This position involves:
Work in Radiation Areas.

Interventions in underground installations.

Stand-by duty and work during nights, Sundays and official holidays, when required by the needs of the Organisation.

Job grade: 6-7
Job reference: EP-ADT-DQ-2025-129-LD
Benchmark Job Title: Computing Engineer