Bastian Blank - 09 September 2013
Almost all existing documentation tells me how to set up Ceph with one or two layers of abstraction. This entry shows how to set it up by hand, without needing root permissions.
Ceph consists of two main daemons. One is the monitoring daemon, which monitors the health of the cluster and provides location information. The second is the storage daemon, which maintains the actual storage. Both are needed in a minimal setup.
The monitor daemons are the heart of the cluster. They maintain quorum within the cluster and track whether everything can be used. They provide referrals to clients so they can find the data they seek. Without a majority of monitors, nothing in the cluster will work.
The storage daemons maintain the actual storage. One daemon maintains one backend storage device.
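In ceph.conf this maps to one numbered [osd.N] section per storage daemon. A sketch with hypothetical paths (two OSDs on two backing directories; the values are illustrative only, not from my setup):

```
[osd.0]
    osd data = /srv/ceph/osd.0

[osd.1]
    osd data = /srv/ceph/osd.1
```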
The default config is understandable, but several things will just not work with it.
By default the monitor daemon will not work on localhost. There is an (undocumented) override to force it to work on localhost:
[mon.noname-admin]
    mon addr = [::1]:6789
The monitor will be renamed to mon.admin internally.
Ceph supports IP (IPv6) or legacy-IP (IPv4), but never both. I don't really use legacy-IP any longer, so I have to configure Ceph accordingly:
[global]
    ms bind ipv6 = true
For testing purposes I wanted to create a cluster with exactly one OSD, but it never reached a clean state. So I asked around and found the answer in #ceph:
[global]
    osd crush chooseleaf type = 0
While deprecated, the following seems to work for disabling authentication:
[global]
    auth supported = none
Putting it all together, the complete config looks like this:

[global]
    auth supported = none
    log file = $name.log
    run dir = …
    osd pool default size = 1
    osd crush chooseleaf type = 0
    ms bind ipv6 = true

[mon]
    mon data = …/$name

[mon.noname-admin]
    mon addr = [::1]:6789

[osd]
    osd data = …/$name
    osd journal = …/$name/journal
    osd journal size = 100

[osd.0]
    host = devel
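For repeated testing it is convenient to generate the whole file from a small script. A sketch, where $base is a placeholder scratch directory standing in for the paths I left out above:

```shell
#!/bin/sh
# Sketch: write a minimal single-OSD ceph.conf into a scratch directory.
# $base is a hypothetical location; substitute your own.
base="$PWD/ceph-test"
mkdir -p "$base"

# Ceph expands $name itself, so it must stay literal in the file;
# escape it as \$name inside the heredoc.
cat > "$base/ceph.conf" <<EOF
[global]
    auth supported = none
    log file = \$name.log
    run dir = $base/run
    osd pool default size = 1
    osd crush chooseleaf type = 0
    ms bind ipv6 = true

[mon]
    mon data = $base/\$name

[mon.noname-admin]
    mon addr = [::1]:6789

[osd]
    osd data = $base/\$name
    osd journal = $base/\$name/journal
    osd journal size = 100

[osd.0]
    host = devel
EOF
```

All later commands then take this file via -c "$base/ceph.conf".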
This is currently based on my updated packages, which are still pretty unclean from my point of view.
All the documentation only talks about ceph-deploy and ceph-disk. These tools are abstractions that need root to mount things and do all the work. Here I show how to do a minimal setup without needing root.
For some reason the monitor setup wants a keyring even with authentication disabled, so just create one:
$ ceph-authtool --create-keyring keyring --gen-key -n mon.
$ ceph-authtool keyring --gen-key -n client.admin
Monitor setup by hand is easy:
$ mkdir $mon_data
$ ceph-mon -c ceph.conf --mkfs --fsid $(uuidgen) --keyring keyring
After that just start it:
$ ceph-mon -c ceph.conf
$
First properly add the new OSD to the internal state:
$ ceph -c ceph.conf osd create
$ ceph -c ceph.conf osd crush set osd.0 1.0 root=default
Then set up the OSD itself:
$ mkdir $osd_data
$ ceph-osd -c ceph.conf -i 0 --mkfs --mkkey --keyring keyring
And start it:
$ ceph-osd -c ceph.conf -i 0
starting osd.0 at :/0 osd_data $osd_data $osd_data/journal
$
The health check should return ok after some time:
$ ceph -c ceph.conf health
HEALTH_OK
$
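Right after startup the health check may still report a warning while placement groups peer, so in a script it helps to poll until HEALTH_OK instead of checking once. A sketch of such a wait loop; since it assumes nothing about a running cluster, it is demonstrated here against a stand-in echo command rather than the real ceph call:

```shell
#!/bin/sh
# Sketch: poll a command until it prints the expected output.
# Usage: wait_for <expected> <tries> <delay-seconds> <command...>
wait_for() {
    expected="$1"; tries="$2"; delay="$3"; shift 3
    i=0
    while [ "$i" -lt "$tries" ]; do
        out="$("$@" 2>/dev/null)"
        [ "$out" = "$expected" ] && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Against a real cluster this would be:
#   wait_for HEALTH_OK 30 2 ceph -c ceph.conf health
# Stand-in demonstration:
wait_for HEALTH_OK 3 0 echo HEALTH_OK && echo cluster-ready
```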