Ceph

Hardware Sizing - Ceph Radosgw (RGW)

I’m often asked, “what’s the best hardware to use for Ceph?” The answer is simple - it depends. A Ceph deployment has many moving parts: Monitor nodes, RGW nodes, Mgr nodes, OSD nodes and MDS nodes. In addition to those nodes, you may also have software or hardware load balancers in front of your RGW nodes. This discussion centers on RGW nodes only.

Continue reading

AWS S3 vs On-Premises

When you own enterprise storage, or are asked to build enterprise-class storage, you increasingly have to cost-justify against cloud storage vendors like AWS S3, GCE or Azure. So how do you do that, when you’re making capital expenditures while these providers talk so much about OpEx? When you break down the pricing of AWS S3 and the others, you find that the raw storage cost is not too bad (depending on how much data you store).
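
As a back-of-the-envelope illustration, here is a minimal sketch of the monthly math. Every rate and volume below is assumed purely for illustration - check current provider pricing before drawing conclusions:

```rust
// Hypothetical rates and volumes for illustration only.
fn main() {
    let stored_gb = 100_000.0; // 100 TB kept in S3
    let storage_rate = 0.023;  // assumed $/GB-month for standard storage
    let egress_gb = 10_000.0;  // 10 TB read back out per month
    let egress_rate = 0.09;    // assumed $/GB transferred out

    let storage_cost = stored_gb * storage_rate;
    let egress_cost = egress_gb * egress_rate;
    println!(
        "storage: ${:.0}/mo, egress: ${:.0}/mo, total: ${:.0}/mo",
        storage_cost, egress_cost, storage_cost + egress_cost
    );
}
```

Notice how quickly the non-storage line items (data transfer, requests) can rival the storage line itself - that breakdown is where the comparison gets interesting.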

Continue reading

Erasure Coding vs Replica

Ceph RadosGW (RGW), Ceph’s S3 object store, supports both replication and erasure coding. Almost all RGW examples show replicas because they are the easiest to set up, manage and get your head around. With replication, the default size of 3 means RGW stores the original plus two more copies, spread throughout the cluster according to the CRUSH map, Ceph’s way of calculating where to store objects.
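
The practical difference between the two is storage efficiency. Here is a minimal sketch of the capacity math, where the 1 PB raw figure and the 4+2 erasure-code profile are hypothetical examples:

```rust
// Usable capacity from raw capacity:
//   replication:     usable = raw / copies
//   erasure coding:  usable = raw * k / (k + m), with k data and m coding chunks
fn replica_usable(raw_tb: f64, copies: f64) -> f64 {
    raw_tb / copies
}

fn ec_usable(raw_tb: f64, k: f64, m: f64) -> f64 {
    raw_tb * k / (k + m)
}

fn main() {
    let raw_tb = 1000.0; // hypothetical 1 PB of raw disk
    println!("replica 3: {:.0} TB usable", replica_usable(raw_tb, 3.0)); // ~333 TB
    println!("EC 4+2:    {:.0} TB usable", ec_usable(raw_tb, 4.0, 2.0)); // ~667 TB
}
```

Both layouts survive the loss of two copies/chunks, but erasure coding delivers roughly twice the usable capacity from the same raw disk, at the cost of extra CPU and reconstruction work.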

Continue reading

Ceph and SMR Drives - NO

Don’t use Ceph with SMR drives! A while back, at Ceph Day in NYC, I saw a representative from a drive manufacturer talk about SMR drives. SMR stands for Shingled Magnetic Recording, and the analogy given was to ‘think about the shingles on a house’. It sounded very interesting, but the drives were not on the market at the time. Wido den Hollander, a very active community member, posted his findings here.

Continue reading

S3LSIO - S3 Utility

The initial release of s3lsio is now available for both MacOSX and Linux (RHEL/Fedora/CentOS), with Ubuntu coming soon. In theory Windows should work as well, but OpenSSL can complicate the initial setup of the Rust build environment for s3lsio. Enhancements are being made on a weekly basis. S3lsio is a command-line tool that can be used from a script, called from an app, or run stand-alone to easily manipulate your AWS S3 and Ceph Rados Gateway (S3) environments.

Continue reading

Building Large Ceph Clusters

Ceph is a very complex distributed storage system that provides an object store, block storage devices and a distributed file system. It has a built-in installation program called Ceph-Deploy, but its design targets very simple, small installations. There are two official automated installation and maintenance systems for Ceph: Ceph-Ansible and Ceph-Chef. As the names imply, Ceph-Ansible is built for Ansible while Ceph-Chef is built for Chef. I will focus on Ceph-Chef for the Chef environment.

Continue reading

Rust AWS S3 SDK

The aws-sdk-rust library has been officially released, supporting both V2 and V4 API signatures. This is important for those who wish to use the SDK to access storage products that implement the S3 interface, such as Ceph’s Rados Gateway (RGW): Ceph’s Hammer release only supports V2, while Jewel and later releases support V4. Enterprise-level proxy support has also been added, so if the http_proxy, https_proxy and no_proxy environment variables are set, aws-sdk-rust will use them to access the S3 resource.
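
To show what that proxy behavior means in practice, here is a minimal sketch of the standard http_proxy/https_proxy/no_proxy convention the SDK follows. This is an illustration of the convention in plain Rust, not aws-sdk-rust’s internal code:

```rust
use std::env;

// Returns the proxy URL to use for `host`, or None for a direct connection,
// following the common http_proxy/https_proxy/no_proxy convention.
fn proxy_for(host: &str, https: bool) -> Option<String> {
    // no_proxy is a comma-separated list of hosts/domain suffixes to bypass.
    if let Ok(no_proxy) = env::var("no_proxy") {
        for entry in no_proxy.split(',').map(str::trim) {
            if !entry.is_empty() && (host == entry || host.ends_with(entry)) {
                return None;
            }
        }
    }
    let var = if https { "https_proxy" } else { "http_proxy" };
    env::var(var).ok()
}

fn main() {
    // e.g. with https_proxy=http://proxy.example.com:3128 and
    // no_proxy=.internal.example.com set in the environment:
    match proxy_for("rgw.internal.example.com", true) {
        Some(p) => println!("connect via proxy {}", p),
        None => println!("connect directly"),
    }
}
```

With no_proxy pointed at your internal domain, the same binary reaches an on-premises RGW directly while routing AWS S3 traffic through the corporate proxy.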

Continue reading

Ceph Librados for Rust

The official Ceph librados Rust API, called ceph-rust, has been released. Ceph-rust can be found on crates.io and at https://github.com/ceph/ceph-rust. It is a very thin layer above the C librados library that drives Ceph, plus some higher-level APIs that wrap the low-level C interface with Rust-specific protection. Working with Ceph is now fast and safe.
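
As a rough illustration of what “Rust-specific protection” over a C handle looks like, here is a minimal sketch of the RAII pattern such a wrapper uses. The type and function names below are hypothetical stand-ins, not the ceph-rust API:

```rust
// Hypothetical stand-in for the raw librados cluster handle (rados_t in C).
struct RadosHandle;

// Safe wrapper: the connection can only be used while this value is alive.
struct Cluster {
    _handle: RadosHandle,
}

impl Cluster {
    // In the real crate this step would call rados_create, rados_conf_read_file
    // and rados_connect from librados and check each return code.
    fn connect(user: &str, conf: &str) -> Result<Cluster, String> {
        println!("connecting as {} using {}", user, conf);
        Ok(Cluster { _handle: RadosHandle })
    }
}

impl Drop for Cluster {
    // Dropping the wrapper shuts the connection down exactly once,
    // so a dangling or double-freed handle can't leak into safe code.
    fn drop(&mut self) {
        println!("rados_shutdown on cluster handle");
    }
}

fn main() {
    let cluster = Cluster::connect("admin", "/etc/ceph/ceph.conf").unwrap();
    // ... perform I/O through safe methods on `cluster` ...
    drop(cluster); // or simply let it fall out of scope
}
```

Tying the handle’s lifetime to a Rust value is what makes the library “safe”: the compiler, not the programmer, guarantees cleanup happens exactly once.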

Continue reading