
The Data-Logging System of the Trigger and Data Acquisition for the ATLAS Experiment at CERN

Battaglia, A. ; Beck, H.P. ; Dobson, M. ; Gadomski, S. ; Kordas, K. ; Vandelli, W.

IEEE transactions on nuclear science, 2008-10, Vol.55 (5), p.2607-2612 [Peer-reviewed journal]

New York: IEEE

  • Title:
    The Data-Logging System of the Trigger and Data Acquisition for the ATLAS Experiment at CERN
  • Author: Battaglia, A. ; Beck, H.P. ; Dobson, M. ; Gadomski, S. ; Kordas, K. ; Vandelli, W.
  • Subjects: CERN ; Collisions ; Control systems ; Controllers ; Data acquisition ; data handling ; Detectors ; Disks ; Energy (nuclear) ; Event detection ; Filters ; Information storage ; Laboratories ; Magnetic separation ; Networks ; Object detection ; Physics ; Product introduction ; Protons ; Raid disc systems ; Raids ; Storage facilities
  • Is part of: IEEE transactions on nuclear science, 2008-10, Vol.55 (5), p.2607-2612
  • Description: The ATLAS experiment is getting ready to observe collisions between protons at a centre-of-mass energy of 14 TeV. These will be the highest-energy collisions in a controlled environment to date, to be provided by the Large Hadron Collider at CERN by mid-2008. The ATLAS Trigger and Data Acquisition (TDAQ) system selects events online in a three-level trigger system in order to keep the events most promising for unveiling new physics, at a budgeted rate of ~200 Hz for an event size of ~1.5 MB. This paper focuses on the data-logging system on the TDAQ side, the so-called "Sub-Farm Output" (SFO) system. It takes data from the third-level trigger and streams and indexes the events into different files according to each event's trigger path. The data files are then moved to CASTOR, the central mass-storage facility at CERN. The final TDAQ data-logging system has been installed on 6 Linux PCs, holding a total of 144 disks of 500 GB each, managed by three RAID controllers per PC. Data writing is managed in a controlled round-robin fashion among three independent filesystems, each associated with a distinct set of disks managed by its own RAID controller. This novel design allows fast I/O, which, together with a high-speed network, minimizes the number of SFO nodes needed. We report here on the functionality and performance requirements of the system, our experience commissioning it, and the performance achieved. (A minimal, hypothetical sketch of the round-robin writing scheme follows this record.)
  • Publisher: New York: IEEE
  • Language: English
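
Below is a minimal, hypothetical Python sketch of the round-robin writing scheme described in the abstract: events are appended to one file per trigger stream, and newly opened stream files are assigned to three independent filesystems in turn, spreading the I/O load over the three RAID controllers of an SFO node. The class name, mount points, stream names, and file naming are illustrative assumptions, not the actual ATLAS SFO implementation.

    # Hypothetical sketch of round-robin event logging across three filesystems,
    # with one output file per trigger stream (not the ATLAS SFO code).
    import itertools
    import os
    import tempfile


    class RoundRobinLogger:
        """Writes serialized events into per-stream files, rotating filesystems."""

        def __init__(self, filesystems):
            self._fs_cycle = itertools.cycle(filesystems)
            self._files = {}  # trigger stream name -> open file handle

        def _open_stream(self, stream):
            # Each newly opened stream file lands on the next filesystem in the
            # cycle, so writes are spread over the underlying RAID controllers.
            filesystem = next(self._fs_cycle)
            return open(os.path.join(filesystem, f"{stream}.data"), "ab")

        def write_event(self, stream, raw_event):
            # 'stream' is the event's trigger path; 'raw_event' is the serialized
            # event payload (~1.5 MB per event in ATLAS).
            if stream not in self._files:
                self._files[stream] = self._open_stream(stream)
            self._files[stream].write(raw_event)

        def close(self):
            for handle in self._files.values():
                handle.close()


    if __name__ == "__main__":
        # Stand-ins for the three RAID-backed mount points of a real SFO node.
        mounts = [tempfile.mkdtemp(prefix=f"sfo_fs{i}_") for i in range(3)]
        logger = RoundRobinLogger(mounts)
        logger.write_event("physics_Muons", b"\x00" * 1024)   # hypothetical stream names
        logger.write_event("physics_Egamma", b"\x00" * 1024)
        logger.close()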
