
Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects

  • Title:
    Large scale and low latency analysis facilities for the CMS experiment: development and operational aspects
  • Author: Riahi, H ; Gowdy, S ; Kreuzer, P ; Bakken, J ; Cinquilli, M ; Evans, D ; Foulkes, S ; Kaselis, R ; Metson, S ; Spiga, D ; Vaandering, E
  • Subjects: CERN ; Commissioning ; Computation ; Computer networks ; Computer programs ; Data analysis ; Data collection ; Detectors ; Distributed processing ; Infrastructure ; Large Hadron Collider ; Solenoids ; Workflow ; Workload ; Workloads
  • Is part of: Journal of physics. Conference series, 2011-12, Vol.331 (7), p.072030-8 [Peer-reviewed journal]
  • Description: While the majority of CMS data analysis activities rely on the distributed computing infrastructure of the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analyses requiring fast turnaround. To meet the fast-turnaround goal, the Workload Management group has designed a CRABServer-based system to address two main needs: to provide the user with a simple, familiar interface (the same one used in the CRAB Analysis Tool [7]) and to allow an easy transition to the Tier-0 system. Although the CRABServer component was initially designed for Grid analysis by CMS end-users, with a few modifications it also turned out to be a very powerful service for managing and monitoring local submissions on the CAF. The transition to the Tier-0 is ensured by the use of WMCore, a library developed by CMS as the common core of its workload management tools, to handle data-driven workflow dependencies. The system is now running its first use cases, and important operational experience is being acquired. In addition to the CERN CAF facility, FNAL provides CMS-dedicated analysis resources at the FNAL LHC Physics Center (LPC). In the first few years of data collection, FNAL has been able to accept a large fraction of CMS data. The remote centre is not well suited to the extremely low-latency work expected of the CAF, but the presence of substantial analysis resources, a large resident community, and a large fraction of the data makes the LPC a strong facility for resource-intensive analysis. We present the building, commissioning and operation of these dedicated analysis facilities in the first year of LHC collisions; we also present the specific software developments needed to support these computing facilities in the special use case of fast-turnaround analyses.
  • Publisher: Bristol: IOP Publishing
  • Language: English
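
The description above notes that WMCore handles data-driven workflow dependencies: a processing step is released for submission only once all of the data it depends on exists. The following is a minimal illustrative sketch of that idea in Python; every name in it (Step, run_ready_steps, the example dataset labels) is hypothetical and not WMCore's actual API.

    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str       # workflow step identifier (hypothetical)
        inputs: set     # dataset labels this step consumes
        outputs: set    # dataset labels this step produces
        done: bool = False

    def run_ready_steps(steps, available):
        # Repeatedly release every step whose input datasets are all
        # present, adding its outputs to the available pool, until no
        # further progress can be made.
        progress = True
        while progress:
            progress = False
            for step in steps:
                if not step.done and step.inputs <= available:
                    print(f"submitting {step.name}")  # stand-in for real job submission
                    available |= step.outputs
                    step.done = True
                    progress = True
        return [s.name for s in steps if not s.done]  # steps still waiting on data

    if __name__ == "__main__":
        workflow = [
            Step("reco",   inputs={"RAW"},             outputs={"RECO"}),
            Step("alca",   inputs={"RECO"},            outputs={"ALCARECO"}),
            Step("rereco", inputs={"RAW", "ALCARECO"}, outputs={"RECO-v2"}),
        ]
        blocked = run_ready_steps(workflow, available={"RAW"})
        print("still blocked:", blocked)

Run as a script, this releases "reco" first, then "alca", then "rereco", mirroring the data-driven chaining described in the abstract; a step whose input data never appears would simply remain in the returned list.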
