Contact


  • Please contact us preferably by e-mail at one of the addresses given below, primarily our support address.
  • If you prefer to talk on the phone, feel free to call us at the numbers below, or arrange an appointment by e-mail.
  • Due to the coronavirus pandemic, we are currently no longer in the office every day.

Dr. Stefan Harfst

+49 (0)441 798-3147

W3 1-139

Fynn Schwietzer

+49 (0)441 798-3287

W3 1-139

HPC-Support

Address

Carl von Ossietzky Universität Oldenburg
Fakultät V - Geschäftsstelle
Ammerländer Heerstr. 114-118
26129 Oldenburg

CARL

CARL (named after Carl von Ossietzky)

CARL, funded by the Deutsche Forschungsgemeinschaft (DFG) and the Ministry of Science and Culture (MWK) of the State of Lower Saxony, is a multi-purpose cluster designed to meet the needs of compute-intensive and data-driven research projects in the main areas of

  • Quantum Chemistry and Quantum Dynamics,
  • Theoretical Physics,
  • The Neurosciences (including Hearing Research),
  • Oceanic and Marine Research,
  • Biodiversity, and
  • Computer Science.

Like its sister cluster EDDY, CARL is operated by the IT Services of the University of Oldenburg. The system is used by more than 20 research groups from the Faculty of Mathematics and Science, and by several research groups from the Department of Computing Science (School of Computing Science, Business Administration, Economics and Law).

Overview of Hardware

  • Management and Login Nodes
    • 2 administration nodes in an active/passive high-availability (HA) configuration
      The master nodes are shared between CARL and its sister cluster EDDY and run all vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, job management system, etc.). They also provide monitoring functions for both clusters (with automated alerting). Monitoring covers hardware components (health states of all servers, temperature, power consumption, etc.) as well as basic cluster services (with an automated restart if a service has died).
    • 2 login nodes for user access to the system, software development (programming environment), and job submission and control. These nodes have the same specifications as the administration nodes described above.
  • Internal networks
    • InfiniBand network consisting of 2 spine and 11 leaf switches with an 8:1 blocking factor; each leaf switch is connected to 32 MPC nodes. The maximum data transfer rate is 56.25 Gb/s (4x FDR).
    • A second, physically separated Gigabit Ethernet network ("base network") for vital cluster services (node provisioning, DHCP, DNS, LDAP, NFS, job management system, etc.)
    • 10Gb Ethernet backbone network connecting the management and login nodes, the storage system, and the Gigabit Ethernet (MPI and base network) leaf switches
    • IPMI network for hardware monitoring and control, including access to VGA console (KVM functionality), allowing full remote management of the cluster 
  • Storage System
    • General Parallel File System (GPFS) with 1.392 PB of total storage space, of which about 926 TB are usable. Four declustered arrays secure the availability of the data: the failure of a single hard drive is not noticeable, and even if two hard drives fail, the "critical rebuild" takes only about 45 minutes. The high-memory and the pre- and post-processing nodes have additional local storage (up to 1 TB).
    • Enterprise-class scalable NAS cluster (manufacturer: EMC Isilon). This is where the home directories are stored. All data saved on the Isilon is backed up, and it is possible to work with snapshots. The data is accessible via a 10 GbE connection.

      The Isilon is the central storage system of the IT Services, which is why it is also used for HPC. Disk space is allocated to the two clusters depending on how much of the storage hardware was paid for out of the FLOW and HERO project funds, respectively.
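The numbers above can be checked with a little arithmetic. As an illustration (the 4 uplinks per leaf switch are an assumption consistent with 32 node ports and the stated 8:1 ratio, not a figure given on this page):

```python
# Sketch: how an 8:1 blocking factor arises from the leaf-switch port
# counts, and what fraction of the GPFS capacity is usable.

NODE_PORTS_PER_LEAF = 32   # MPC nodes attached to each leaf switch (from the text)
UPLINKS_PER_LEAF = 4       # assumed uplinks toward the spine layer (hypothetical)

blocking_factor = NODE_PORTS_PER_LEAF / UPLINKS_PER_LEAF
print(f"blocking factor: {blocking_factor:.0f}:1")   # prints "blocking factor: 8:1"

# Usable fraction of the GPFS storage (926 TB usable of 1.392 PB total)
total_tb = 1392
usable_tb = 926
print(f"usable fraction: {usable_tb / total_tb:.1%}")
```

The gap between total and usable capacity is the parity and spare space consumed by the four declustered arrays.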
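The login nodes are the entry point for job submission and control. As a minimal sketch only (assuming a SLURM-based job management system; the partition and module names below are hypothetical and must be replaced with the values documented for the cluster), a batch job might look like:

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --ntasks=24               # number of MPI ranks (example value)
#SBATCH --time=01:00:00           # wall-clock time limit
#SBATCH --partition=carl.p        # partition name is an assumption

# Load a compiler/MPI environment (module name is hypothetical)
module load intel/2019a

# Launch the MPI program; inter-node traffic uses the InfiniBand fabric
srun ./my_mpi_program
```

Such a script would be submitted from a login node with `sbatch job.sh` and monitored with `squeue`.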

System Software and Middleware

Selected Applications Running on CARL

(Due to licensing, some applications are only accessible for specific users or research groups.)


(Last updated: 19.01.2024)