BeeGFS: History

BeeGFS (formerly FhGFS) is a parallel file system developed and optimized for high-performance computing. BeeGFS uses a distributed metadata architecture for scalability and flexibility, and it is best known for its high data throughput. BeeGFS was originally developed at the Fraunhofer Center for High Performance Computing in Germany by a team around Sven Breuner, who later (2014-2018) became CEO of ThinkParQ, the spin-off company founded in 2014 to maintain BeeGFS and offer professional services. While the Community Edition of BeeGFS can be downloaded and used free of charge, the Enterprise Edition must be used under a professional support subscription contract.

1. History and Usage

BeeGFS started in 2005 as an in-house development at the Fraunhofer Center for HPC to replace the existing file system on the institute's new compute cluster, with the goal of using it in a production environment.

In 2007, the first beta version of the software was announced at ISC07 in Dresden, Germany, and introduced to the public at SC07 in Reno, NV. One year later, the first stable major release became available.

In 2014, Fraunhofer founded ThinkParQ,[1] a spin-off company for BeeGFS. In this process, FhGFS was renamed and became BeeGFS®.[2] While ThinkParQ maintains the software and offers professional services, further feature development continues in cooperation between ThinkParQ and Fraunhofer.

Because BeeGFS is free of charge, the number of active installations is unknown. However, in 2014 there were already around 100 customers worldwide using BeeGFS with commercial support from ThinkParQ and Fraunhofer. Among them are academic users such as universities and research facilities[3] as well as commercial companies in fields such as finance and the oil & gas industry.

Notable installations include several TOP500 computers, such as the Loewe-CSC[4] cluster at the Goethe University Frankfurt, Germany (#22 at the time of installation), the Vienna Scientific Cluster[5] at the University of Vienna, Austria (#56 at the time of installation), and the Abel[6] cluster at the University of Oslo, Norway (#96 at the time of installation).

2. Key Concepts and Features

When developing BeeGFS, Fraunhofer aimed to create software focused on scalability, flexibility, and usability.

BeeGFS runs on any Linux machine and consists of several components that include services for clients, metadata servers and storage servers. In addition, there is a service for the management host as well as one for a graphical administration and monitoring system.[7]

BeeGFS System Overview. By Silbersu - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=58485162

To run BeeGFS, at least one instance each of the metadata server and the storage server is required. However, BeeGFS allows multiple instances of each service to distribute the load from a large number of clients. Because each component scales independently, the system as a whole is scalable.
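
As an illustration of how a minimal system can be assembled, the following sketch starts one instance of every service on a single machine using the init scripts mentioned later in this section; the service names match the classic BeeGFS packages, but the single-host layout and the Python wrapper are purely illustrative.

    import subprocess

    # Minimal single-machine BeeGFS bring-up: one management, one metadata,
    # and one storage service, plus the client helper daemon and the
    # kernel-module client that mounts the file system. A real deployment
    # would spread these services over several machines.
    SERVICES = [
        "beegfs-mgmtd",    # management service (registry of all other services)
        "beegfs-meta",     # metadata server
        "beegfs-storage",  # storage server
        "beegfs-helperd",  # client helper daemon (logging, hostname resolution)
        "beegfs-client",   # client kernel module, mounts the file system
    ]

    for service in SERVICES:
        # Equivalent to running "/etc/init.d/<service> start" by hand.
        subprocess.check_call(["/etc/init.d/" + service, "start"])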

File contents are distributed over several storage servers using striping: each file is split into chunks of a given size, and these chunks are distributed over the existing storage servers. The size of these chunks can be defined by the file system administrator. In addition, the metadata is also distributed over several metadata servers at the directory level, with each server storing a part of the complete file system tree. This approach allows fast access to the data.
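
The striping logic can be made concrete with a small sketch; this is not BeeGFS code, and the chunk size and target names are arbitrary examples. It maps a byte offset within a file to the storage target holding that chunk.

    # Example chunk size; in BeeGFS this is configurable by the administrator.
    CHUNK_SIZE = 512 * 1024
    # Example storage targets over which chunks are distributed.
    TARGETS = ["storage1", "storage2", "storage3", "storage4"]

    def chunk_location(offset):
        """Map a byte offset within a file to (storage target, chunk index).

        Chunks are assigned round-robin, so consecutive chunks land on
        different servers and large sequential reads and writes are
        served by all servers in parallel.
        """
        chunk_index = offset // CHUNK_SIZE
        return TARGETS[chunk_index % len(TARGETS)], chunk_index

    # A 2 MB file with 512 KB chunks spans four chunks on four targets:
    for offset in range(0, 2 * 1024 * 1024, CHUNK_SIZE):
        print(offset, chunk_location(offset))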

Clients, as well as metadata or storage servers, can be added to an existing system without any downtime. The client itself is a lightweight kernel module that does not require any kernel patches. The servers run in userspace, on top of an existing local file system. There are no restrictions on the type of underlying file system as long as it is POSIX-compliant; the recommendation is ext4 for the metadata servers and XFS for the storage servers.

There is also no strict requirement for dedicated hardware for individual services. The design allows a file system administrator to start the services in any combination on a given set of machines and to expand in the future. A common way for BeeGFS users to take advantage of this is to run metadata servers and storage servers on the same machines.

BeeGFS supports various network interconnects with dynamic failover, such as Ethernet or InfiniBand, as well as many different Linux distributions and kernels (from 2.6.16 up to the latest vanilla kernel). The software has a simple setup and startup mechanism based on init scripts. For users who prefer a graphical interface over the command line, a Java-based GUI (AdMon) is available. Besides managing and administering the BeeGFS installation, it provides monitoring of the BeeGFS state and offers several options to help identify performance issues within the system.

2.1. BeeOND (BeeGFS On-Demand)

BeeOND (BeeGFS on demand) allows the creation of BeeGFS file system instances on a set of nodes with a single command. Possible use cases for the tool are manifold; a few include setting up a dedicated parallel file system for a cluster job (often referred to as burst buffering), cloud computing, or fast and easy temporary setups for testing purposes.
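
A typical lifecycle wraps a compute job between "beeond start" and "beeond stop", as in the following sketch. The invocation is modeled on the BeeOND command-line tool, but the node file and paths are placeholders, and the exact flags may vary between versions.

    import subprocess

    NODEFILE = "/tmp/nodefile"         # one hostname per line, e.g. from the scheduler
    STORAGE_DIR = "/local_ssd/beeond"  # fast local storage on each node
    MOUNTPOINT = "/mnt/beeond"         # where the temporary file system appears

    # Create a BeeGFS instance across all nodes in NODEFILE and mount it.
    subprocess.check_call(
        ["beeond", "start", "-n", NODEFILE, "-d", STORAGE_DIR, "-c", MOUNTPOINT]
    )

    # ... run the job against MOUNTPOINT and stage results out ...

    # Tear the instance down again after the job has finished.
    subprocess.check_call(["beeond", "stop", "-n", NODEFILE, "-L", "-d"])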

2.2. BeeGFS and Containers

An open source container storage interface (CSI) driver enables BeeGFS to be used with container orchestrators like Kubernetes.[8] The driver is designed to support environments where containers running in Kubernetes and jobs running in traditional HPC workload managers need to share access to the same BeeGFS file system. There are two main workflows enabled by the driver:

  • Static provisioning allows administrators to grant containers access to existing directories in BeeGFS.
  • Dynamic provisioning allows containers to request BeeGFS storage on-demand (represented as a new directory).

Container access to and visibility into the file system are restricted to the intended directory. Dynamic provisioning takes BeeGFS features such as storage pools and striping into account when creating the corresponding directory in BeeGFS. General features of a POSIX file system, such as the ability to specify permissions on new directories, are also exposed, easing the integration of global shared storage and containers. This notably simplifies tracking and limiting container consumption of the shared file system using BeeGFS quotas.[9]
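
For illustration, a dynamic-provisioning setup might look roughly like the following, expressed here as Python dictionaries that mirror the Kubernetes YAML manifests. The provisioner name and parameter keys follow the beegfs-csi-driver documentation; hostnames, paths, and sizes are placeholders.

    import json

    # StorageClass: tells Kubernetes how to create new BeeGFS-backed volumes
    # (each volume is represented as a new directory in the file system).
    storage_class = {
        "apiVersion": "storage.k8s.io/v1",
        "kind": "StorageClass",
        "metadata": {"name": "beegfs-dynamic"},
        "provisioner": "beegfs.csi.netapp.com",
        "parameters": {
            # BeeGFS management host and the base directory under which
            # per-volume directories are created.
            "sysMgmtdHost": "mgmtd.example.com",
            "volDirBasePath": "/k8s/dynamic",
            # BeeGFS striping settings applied to the new directory.
            "stripePattern/storagePoolID": "1",
            "stripePattern/chunkSize": "512k",
            "stripePattern/numTargets": "4",
        },
    }

    # PersistentVolumeClaim: a container requests storage from that class.
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "job-scratch"},
        "spec": {
            "accessModes": ["ReadWriteMany"],
            "storageClassName": "beegfs-dynamic",
            "resources": {"requests": {"storage": "100Gi"}},
        },
    }

    # Kubernetes also accepts JSON, so the manifests can be inspected as-is.
    print(json.dumps(storage_class, indent=2))
    print(json.dumps(pvc, indent=2))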

3. Benchmarks

The following benchmarks were performed on Fraunhofer Seislab,[10] a test and experimental cluster at Fraunhofer ITWM with 25 nodes (20 compute + 5 storage) and a three-tier storage hierarchy: 1 TB of RAM, 20 TB of SSD, and 120 TB of HDD. Single-node performance on the local file system, without BeeGFS, is 1,332 MB/s (write) and 1,317 MB/s (read).

The nodes are equipped with 2x Intel Xeon X5660, 48 GB RAM, 4x Intel 510 Series SSD (RAID 0) with ext4, and QDR InfiniBand, and run Scientific Linux 6.3 with kernel 2.6.32-279 and FhGFS 2012.10-beta1.

Read/Write Throughput. By Tobias.goetz at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=39804745

File Creates. By Tobias.goetz at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=39804695

IOPS. By Tobias.goetz at English Wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=39804719

4. BeeGFS and Exascale

Fraunhofer ITWM is participating in the Dynamic Exascale Entry Platform – Extended Reach (DEEP-ER) project of the European Union,[11] which addresses the growing gap between compute speed and I/O bandwidth as well as system resiliency for large-scale systems.

Some of the aspects that BeeGFS developers are working on under the scope of this project are:

  • support for tiered storage,
  • POSIX interface extensions,
  • fault tolerance and high availability (HA), and
  • improved monitoring and diagnostic tools.

The plan is to keep the POSIX interface for backward compatibility, while also giving applications more control over how the file system handles aspects such as data placement and coherency through API extensions.

The content is sourced from: https://handwiki.org/wiki/Software:BeeGFS

References

  1. "ThinkParQ website". http://www.thinkparq.com. 
  2. Rich Brueckner (March 13, 2014). "Fraunhofer to Spin Off Renamed BeeGFS File System". insideHPC. http://insidehpc.com/2014/03/13/fraunhofer-spin-renamed-beegfs-file-system/. 
  3. "FraunhoferFS High-Performance Parallel File System". ClusterVision eNews. November 2012. http://www.clustervision.com/eNews/Nov2012/Technology/Fraunhofer. 
  4. "... And Fraunhofer". StorageNewsletter.com. June 18, 2010. http://www.storagenewsletter.com/rubriques/business-others/fraunhofer-goethe-university-of-frankfurt-hpc/. 
  5. "VSC-2". Top500 List. June 20, 2011. http://www.top500.org/system/177280. 
  6. "Abel". Top500 List. June 18, 2012. http://www.top500.org/system/177801. 
  7. "BeeGFS - The Leading Parallel Cluster File System" (in en-US). https://www.beegfs.io/content/. 
  8. "Drivers - Kubernetes CSI Developer Documentation". https://kubernetes-csi.github.io/docs/drivers.html. 
  9. "BeeGFS CSI Driver". 11 October 2021. https://github.com/NetApp/beegfs-csi-driver/. 
  10. Christian, Mohrbacher (September 24, 2015). "BeeGFS - Not only for HPC". https://www.itwm.fraunhofer.de/content/dam/itwm/de/documents/HPC_Infomaterial/GreenbyIT/hpc_Praesentation_BeeGFS_EN.pdf. 
  11. "DEEP-ER Project Website". http://www.deep-er.eu/project/partners. 