HandWiki. Ceph. Encyclopedia. Available online: https://encyclopedia.pub/entry/29297 (accessed on 17 November 2024).
Ceph

Ceph (pronounced /ˈsɛf/) is an open-source software-defined storage platform that implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure, scalability to the exabyte level, and free availability. Since version 12, Ceph no longer relies on other filesystems: it manages HDDs and SSDs directly with its own storage backend, BlueStore, and exposes a POSIX filesystem entirely on its own. Ceph replicates data for fault tolerance, using commodity hardware and standard Ethernet/IP networking, and requires no specific hardware support. Ceph offers disaster recovery and data redundancy through techniques such as replication, erasure coding, snapshots and storage cloning. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs. Administrators thus get a single, consolidated system that avoids silos and gathers storage within a common management framework. Ceph consolidates several storage use cases, improves resource utilization, and lets an organization deploy servers wherever needed.

Keywords: software-defined storage; self-healing; silos

1. Design

Figure: A high-level overview of Ceph's internal organization[1]

Ceph employs five distinct kinds of daemons:[1]

  • Cluster monitors (ceph-mon) that keep track of active and failed cluster nodes, cluster configuration, and information about data placement and global cluster state.
  • Object storage devices (ceph-osd) that use direct, journaled disk storage (named BlueStore,[2] which since the v12.x release replaces the legacy FileStore,[3] which stored objects on a conventional filesystem)
  • Metadata servers (ceph-mds) that cache and broker access to inodes and directories inside a CephFS filesystem.
  • HTTP gateways (ceph-rgw) that expose the object storage layer as an interface compatible with Amazon S3 or OpenStack Swift APIs
  • Managers (ceph-mgr) that perform cluster monitoring, bookkeeping, and maintenance tasks, and interface to external monitoring systems and management (e.g. balancer, dashboard, Prometheus, Zabbix plugin)[4]

All of these are fully distributed, and may run on the same set of servers. Clients with different needs can directly interact with different subsets of them.[5]
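
A key consequence of this design is that any client can compute where data lives without consulting a central server. The toy sketch below illustrates the idea only; it is not Ceph's actual CRUSH algorithm, and the PG count, OSD list, and function names are made up for the example: an object name hashes to a placement group, and the placement group maps deterministically to a set of OSDs.

```python
import hashlib

PG_COUNT = 64          # illustrative number of placement groups
OSDS = list(range(6))  # illustrative cluster of six OSDs
REPLICAS = 3

def pg_for_object(name: str) -> int:
    """Hash an object name to a placement group id (stable across clients)."""
    digest = hashlib.md5(name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % PG_COUNT

def osds_for_pg(pg: int) -> list[int]:
    """Deterministically pick REPLICAS distinct OSDs for a placement group.
    Real Ceph uses CRUSH, which also weights devices and respects failure
    domains; this toy version just walks the OSD list from a hashed start."""
    start = pg % len(OSDS)
    return [OSDS[(start + i) % len(OSDS)] for i in range(REPLICAS)]

pg = pg_for_object("my-object")
print(pg, osds_for_pg(pg))  # every client computes the same placement
```

Because the mapping is a pure function of the object name and the cluster map, adding or removing OSDs only requires distributing a new map, not migrating a central index.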

Ceph stripes individual files across multiple nodes to achieve higher throughput, similarly to how RAID0 stripes partitions across multiple hard drives. Adaptive load balancing is supported, whereby frequently accessed objects are replicated over more nodes. As of September 2017, BlueStore is the default and recommended storage backend for production environments.[6] It is Ceph's own storage implementation, providing better latency and configurability than the FileStore backend and avoiding the shortcomings of filesystem-based storage, which involves additional processing and caching layers. The FileStore backend is still considered useful and very stable; XFS used to be the recommended underlying filesystem type for production environments, while Btrfs was recommended for non-production environments. ext4 filesystems were not recommended because of the resulting limitations on the maximum RADOS object length.[7] Even with BlueStore, XFS is used for a small partition of metadata.[8]
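
The RAID0 analogy can be made concrete with a small sketch (the stripe-unit size and node count are illustrative, not Ceph's internal code): a byte stream is cut into fixed-size stripe units that are assigned round-robin across nodes, so sequential reads and writes fan out over the cluster.

```python
def stripe(data: bytes, nodes: int, unit: int = 4):
    """Split data into fixed-size stripe units and assign them round-robin
    to nodes, RAID0-style. Returns {node_index: [units...]}."""
    placement = {n: [] for n in range(nodes)}
    units = [data[i:i + unit] for i in range(0, len(data), unit)]
    for idx, u in enumerate(units):
        placement[idx % nodes].append(u)
    return placement

layout = stripe(b"ABCDEFGHIJKLMNOP", nodes=3, unit=4)
# Units ABCD, EFGH, IJKL, MNOP land on nodes 0, 1, 2, 0 in turn,
# so a sequential read touches all three nodes in parallel.
```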

1.1. Object Storage S3

Figure: An architecture diagram showing the relations between components of the Ceph storage platform

Ceph implements distributed object storage through RADOS, backed on disk by BlueStore. The RADOS gateway (ceph-rgw) exposes the object storage layer as an interface compatible with Amazon S3.

These are often high-capacity disks, which are associated with Ceph's S3 object storage for use cases such as big data (data lakes), backup and archival, IoT, media, and video recording.

Ceph's software libraries provide client applications with direct access to the reliable autonomic distributed object store (RADOS) object-based storage system, and also provide a foundation for some of Ceph's features, including RADOS Block Device (RBD), RADOS Gateway, and the Ceph File System. In this way, administrators can maintain their storage devices as a unified system, which makes it easier to replicate and protect the data.

The "librados" software libraries provide access in C, C++, Java, PHP, and Python. The RADOS Gateway also exposes the object store as a RESTful interface that can present itself as both native Amazon S3 and OpenStack Swift APIs.
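
Because the gateway speaks the S3 protocol, stock S3 clients work against it unchanged. As an illustration of what "S3-compatible" entails, the sketch below builds an AWS signature-version-2 Authorization header of the kind classic S3 clients send; the access key, secret, bucket, and object names are made-up placeholders.

```python
import base64
import hashlib
import hmac

def s3_v2_auth_header(access_key, secret_key, verb, resource, date,
                      content_md5="", content_type=""):
    """Build an AWS signature-v2 Authorization header, as classic S3
    clients do when talking to an S3-compatible endpoint such as RGW:
    HMAC-SHA1 over the canonical request string, base64-encoded."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return f"AWS {access_key}:{base64.b64encode(digest).decode()}"

# Placeholder credentials; a real client also sends the Date header and
# issues the HTTP request to the ceph-rgw endpoint.
hdr = s3_v2_auth_header("ACCESSKEY", "secretkey", "GET",
                        "/mybucket/myobject",
                        "Mon, 17 Nov 2014 12:00:00 +0000")
print(hdr)
```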

1.2. Block Storage

Ceph's object storage system allows users to mount Ceph as a thin-provisioned block device. When an application writes data to Ceph using a block device, Ceph automatically stripes and replicates the data across the cluster. Ceph's RADOS Block Device (RBD) also integrates with Kernel-based Virtual Machines (KVMs).

These are often fast disks (NVMe, SSD) which are associated with Ceph's block storage for use cases, including databases, virtual machines, data analytics, artificial intelligence, and machine learning.

"Ceph-RBD" interfaces with the same Ceph object storage system that provides the librados interface and the CephFS file system, and it stores block device images as objects. Since RBD is built on librados, RBD inherits librados's abilities, including read-only snapshots and revert to snapshot. By striping images across the cluster, Ceph improves read access performance for large block device images.
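
Since an RBD image is stored as a sequence of RADOS objects, a byte offset in the block device maps to an object index plus an offset inside that object. The sketch below mirrors that mapping, assuming RBD's default 4 MiB object size; the image-name prefix is made up for the example, while real images use an internal prefix of the form rbd_data.<id>.

```python
OBJECT_SIZE = 4 * 1024 * 1024   # RBD's default object size (4 MiB)

def rbd_locate(prefix: str, byte_offset: int):
    """Map a byte offset in a block-device image to the backing RADOS
    object name and the offset within that object. Object names carry a
    zero-padded hex index, echoing the rbd_data.<id>.<index> scheme."""
    obj_no, in_obj = divmod(byte_offset, OBJECT_SIZE)
    return f"{prefix}.{obj_no:016x}", in_obj

name, off = rbd_locate("rbd_data.abc123", 5 * 1024 * 1024)
# 5 MiB falls 1 MiB into the second backing object (index 1)
```

This per-object layout is what lets the cluster stripe, replicate, and snapshot a block image with the same machinery it uses for any other RADOS objects.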

"Ceph-iSCSI" is a gateway which enables access to distributed, highly available block storage from any Microsoft Windows or VMware vSphere server or client capable of speaking the iSCSI protocol. By using ceph-iscsi on one or more iSCSI gateway hosts, Ceph RBD images become available as Logical Units (LUs) associated with iSCSI targets, which can be accessed in an optionally load-balanced, highly available fashion.

Since all ceph-iscsi configuration is stored in the Ceph RADOS object store, ceph-iscsi gateway hosts are inherently stateless and can thus be replaced, augmented, or reduced at will. As a result, Ceph Storage enables customers to run a truly distributed, highly available, resilient, and self-healing enterprise storage technology on commodity hardware and an entirely open-source platform.

The block device can be virtualized, providing block storage to virtual machines in virtualization platforms such as OpenShift, OpenStack, Kubernetes, OpenNebula, Ganeti, Apache CloudStack and Proxmox Virtual Environment.

1.3. File System Storage

Ceph's file system (CephFS) runs on top of the same object storage system that provides object storage and block device interfaces. The Ceph metadata server cluster provides a service that maps the directories and file names of the file system to objects stored within RADOS clusters. The metadata server cluster can expand or contract, and it can rebalance the file system dynamically to distribute data evenly among cluster hosts. This ensures high performance and prevents heavy loads on specific hosts within the cluster.

Clients mount the POSIX-compatible file system using a Linux kernel client. An older FUSE-based client is also available. The servers run as regular Unix daemons.

Ceph's file storage is often associated with log collection, messaging, and file storage.

2. History

Ceph was initially created by Sage Weil for his doctoral dissertation,[9] which was advised by Professor Scott A. Brandt at the Jack Baskin School of Engineering, University of California, Santa Cruz (UCSC), and sponsored by the Advanced Simulation and Computing Program (ASC), including Los Alamos National Laboratory (LANL), Sandia National Laboratories (SNL), and Lawrence Livermore National Laboratory (LLNL).[10] The first line of code that ended up being part of Ceph was written by Sage Weil in 2004 while at a summer internship at LLNL, working on scalable filesystem metadata management (known today as Ceph's MDS).[11] In 2005, as part of a summer project initiated by Scott A. Brandt and led by Carlos Maltzahn, Sage Weil created a fully functional file system prototype which adopted the name Ceph. Ceph made its debut with Sage Weil giving two presentations in November 2006, one at USENIX OSDI 2006[12] and another at SC'06.[13]

After his graduation in autumn 2007, Weil continued to work on Ceph full-time, and the core development team expanded to include Yehuda Sadeh Weinraub and Gregory Farnum. On March 19, 2010, Linus Torvalds merged the Ceph client into Linux kernel version 2.6.34[14][15] which was released on May 16, 2010. In 2012, Weil created Inktank Storage for professional services and support for Ceph.[16][17]

In April 2014, Red Hat purchased Inktank, bringing the majority of Ceph development in-house to make it a production version for enterprises with support (hotline) and continuous maintenance (new versions).[18]

In October 2015, the Ceph Community Advisory Board was formed to assist the community in driving the direction of open source software-defined storage technology. The charter advisory board includes Ceph community members from global IT organizations that are committed to the Ceph project, including individuals from Red Hat, Intel, Canonical, CERN, Cisco, Fujitsu, SanDisk, and SUSE.[19]

In November 2018, the Linux Foundation launched the Ceph Foundation as a successor to the Ceph Community Advisory Board. Founding members of the Ceph Foundation included Amihan, Canonical, China Mobile, DigitalOcean, Intel, OVH, ProphetStor Data Services, Red Hat, SoftIron, SUSE, Western Digital, XSKY Data Technology, and ZTE.[20]

In March 2021, SUSE discontinued its Enterprise Storage product incorporating Ceph in favor of Longhorn,[21] and the former Enterprise Storage website was updated to state that "SUSE has refocused the storage efforts around serving our strategic SUSE Enterprise Storage Customers and are no longer actively selling SUSE Enterprise Storage."[22]

2.1. Release History

Release history
Name       Version  First release       End of life   Milestones
Argonaut   0.48     July 3, 2012        --            First major "stable" release
Bobtail    0.56     January 1, 2013     --            --
Cuttlefish 0.61     May 7, 2013         --            ceph-deploy is stable
Dumpling   0.67     August 14, 2013     May 2015      namespace, region, monitoring REST API
Emperor    0.72     November 9, 2013    May 2014      multi-datacenter replication for the radosgw
Firefly    0.80     May 7, 2014         April 2016    erasure coding, cache tiering, primary affinity, key/value OSD backend (experimental), standalone radosgw (experimental)
Giant      0.87     October 29, 2014    April 2015    --
Hammer     0.94     April 7, 2015       August 2017   --
Infernalis 9.2.0    November 6, 2015    April 2016    --
Jewel      10.2.0   April 21, 2016      June 2018     stable CephFS, experimental RADOS backend named BlueStore
Kraken     11.2.0   January 20, 2017    August 2017   BlueStore is stable
Luminous   12.2.0   August 29, 2017     March 2020    --
Mimic      13.2.0   June 1, 2018        July 2020     snapshots are stable, Beast is stable
Nautilus   14.2.0   March 19, 2019      June 2021     --
Octopus    15.2.0   March 23, 2020      June 2022     --
Pacific    16.2.0   March 31, 2021[23]  June 2023     --
Quincy     17.2.0   April 19, 2022[24]  June 2024     --

3. Etymology

The name "Ceph" is an abbreviation of "cephalopod", a class of molluscs that includes the octopus. The name (emphasized by the logo) suggests the highly parallel behavior of an octopus and was chosen to associate the file system with "Sammy", the banana slug mascot of UCSC.[1] Both cephalopods and banana slugs are molluscs.

4. Prominent Incidents

4.1. 2022 Freedesktop.org SSD Failure

At 12:12 on 2022-06-12, users found that the freedesktop.org GitLab instance was unavailable. It turned out that two SSD drives had failed simultaneously, putting the storage cluster into a degraded mode that required manual recovery.[25][26]

References

  1. M. Tim Jones (2010-06-04). "Ceph: A Linux petabyte-scale distributed file system". IBM. http://www.ibm.com/developerworks/library/l-ceph/l-ceph-pdf.pdf. Retrieved 2014-12-03. 
  2. "BlueStore". Ceph. http://docs.ceph.com/docs/master/rados/configuration/storage-devices/#bluestore. Retrieved 2017-09-29. 
  3. "BlueStore Migration". https://docs.ceph.com/docs/mimic/rados/operations/bluestore-migration/. Retrieved 2020-04-12. 
  4. "Ceph Manager Daemon — Ceph Documentation". http://docs.ceph.com/docs/mimic/mgr/.  archive link
  5. Jake Edge (2007-11-14). "The Ceph filesystem". LWN.net. https://lwn.net/Articles/258516/. 
  6. Sage Weil (2017-08-29). "v12.2.0 Luminous Released". Ceph Blog. http://ceph.com/releases/v12-2-0-luminous-released/. 
  7. "Hard Disk and File System Recommendations". ceph.com. http://docs.ceph.com/docs/master/rados/configuration/filesystem-recommendations/. Retrieved 2017-06-26. 
  8. "BlueStore Config Reference". https://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/. Retrieved April 12, 2020. 
  9. Sage Weil (2007-12-01). "Ceph: Reliable, Scalable, and High-Performance Distributed Storage". University of California, Santa Cruz. https://ceph.com/wp-content/uploads/2016/08/weil-thesis.pdf. 
  10. Gary Grider (2004-05-01). "The ASCI/DOD Scalable I/O History and Strategy" (in en-US). https://www.dtc.umn.edu/resources/grider1.pdf. 
  11. Dynamic Metadata Management for Petabyte-Scale File Systems, SA Weil, KT Pollack, SA Brandt, EL Miller, Proc. SC'04, Pittsburgh, PA, November, 2004
  12. "Ceph: A scalable, high-performance distributed file system," SA Weil, SA Brandt, EL Miller, DDE Long, C Maltzahn, Proc. OSDI, Seattle, WA, November, 2006
  13. "CRUSH: Controlled, scalable, decentralized placement of replicated data," SA Weil, SA Brandt, EL Miller, DDE Long, C Maltzahn, SC'06, Tampa, FL, November, 2006
  14. Sage Weil (2010-02-19). "Client merged for 2.6.34". ceph.newdream.net. http://ceph.newdream.net/2010/03/client-merged-for-2-6-34/. 
  15. Tim Stephens (2010-05-20). "New version of Linux OS includes Ceph file system developed at UCSC". news.ucsc.edu. https://news.ucsc.edu/2010/05/3807.html. 
  16. Bryan Bogensberger (2012-05-03). "And It All Comes Together". Inktank Blog. http://www.inktank.com/uncategorized/and-it-all-comes-together-2/. 
  17. Joseph F. Kovar (July 10, 2012). "The 10 Coolest Storage Startups Of 2012 (So Far)". CRN. http://www.crn.com/slide-shows/storage/240003163/the-10-coolest-storage-startups-of-2012-so-far.htm?pgno=5. Retrieved July 19, 2013. 
  18. Red Hat Inc (2014-04-30). "Red Hat to Acquire Inktank, Provider of Ceph". Red Hat. http://www.redhat.com/en/about/press-releases/red-hat-acquire-inktank-provider-ceph. Retrieved 2014-08-19. 
  19. "Ceph Community Forms Advisory Board". 2015-10-28. http://www.storagereview.com/ceph_community_forms_advisory_board. Retrieved 2016-01-20. 
  20. "The Linux Foundation Launches Ceph Foundation To Advance Open Source Storage". 2018-11-12. https://www.linuxfoundation.org/en/press-release/the-linux-foundation-launches-ceph-foundation/. 
  21. "SUSE says tschüss to Ceph-based enterprise storage product – it's Rancher's Longhorn from here on out". https://www.theregister.com/2021/03/25/suse_kisses_ceph_goodbye/. 
  22. "SUSE Enterprise Software-Defined Storage". https://www.suse.com/products/suse-enterprise-storage/. 
  23. "v16.2.0 Pacific released". Ceph.io. https://ceph.io/releases/v16-2-0-pacific-released/. 
  24. "v17.2.0 Quincy released". Ceph.io. https://ceph.com/en/news/blog/2022/v17-2-0-quincy-released/. 
  25. "IRC Logs of #freedesktop on irc.freenode.net for 2022-06-12". https://people.freedesktop.org/~cbrill/dri-log/?channel=freedesktop&highlight_names=&date=2022-06-12&show_html=true. 
  26. xorg-devel mailing list, June 2022. https://lists.x.org/archives/xorg-devel/2022-June/058833.html. 
Subjects: Others
Entry Collection: HandWiki
Update Date: 14 Oct 2022