
Is anyone using Ceph?



shocker

Hello,

   Does anyone have any success stories with Ceph? :)

 

As of now I'm using two pools, one on ZFS and the other on BTRFS.

- ZFS is very robust and everything works as intended. But as things advance quickly, it's very hard to keep up with vdev planning. Also, once the pool gets old, it's hard to find parity HDDs when a failure occurs, as you need to use the same size (see the disk-swap sketch after this list).

- BTRFS is very nice and has all the functionality needed this century. Unfortunately I'm using RAID6, and with a large pool an HDD failure is when the fun begins :) there is no way to tune the performance, recovery can take weeks or months, and during this period reads/writes are degraded by more than 90%, as HDD utilization (backlog) sits at 85-90%.
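For context, the disk-swap mechanics I'm comparing are below; a minimal sketch, where the pool names and device paths (tank, /mnt/pool, /dev/sdb, /dev/sdc) are placeholders, not my actual layout:

    # ZFS: swap a failed disk for a replacement of the same size (or larger)
    zpool replace tank /dev/sdb /dev/sdc
    zpool status tank                       # watch resilver progress

    # BTRFS RAID6: replace the failed device in place and watch the rebuild
    btrfs replace start -B /dev/sdb /dev/sdc /mnt/pool
    btrfs replace status /mnt/pool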

 

Ceph seems to address all of those issues, but I'm keen to know if anyone is actually using it and has some success stories.

- How is it implemented?

- Performance with 1 bare-metal node, then 2, 3, etc.

- How is recovery handled? Performance, degradation.

- Metadata on NVMe, BlueStore, etc. :)

- Ceph on top of CephFS/XFS, or something else? (rough sketch of what I have in mind below)
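To be concrete, this is the kind of layout I have in mind; a rough sketch only, and the device paths and values are assumptions, not a tested config:

    # BlueStore OSD with data on an HDD and the RocksDB metadata on an NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

    # Throttle backfill/recovery so client I/O stays usable during a rebuild
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1

    # Watch cluster health and recovery progress
    ceph status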

 

Cheers!


mastrmind11

I am also using ZFS, in a mirror configuration.  While it's true your disks *should* match in size, it isn't a requirement.  I've had a failure or two where I replaced the failed disk in the mirror with a larger disk, and while that's not ideal, since the vdev will only use the size of the smaller disk, it will still get you out of a degraded state.  Once the resilver finishes and you're in a happy place again, you can take your time replacing the smaller of the two disks in the mirror to reclaim your space.
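In commands, that flow looks roughly like this; a minimal sketch, where the pool name (tank) and device paths are placeholders rather than my actual layout:

    # Swap the failed disk for the (equal or larger) replacement; resilver starts automatically
    zpool replace tank /dev/sdb /dev/sdc
    zpool status tank                       # watch the resilver

    # Later, replace the remaining smaller disk the same way, then let the
    # vdev grow into the new capacity
    zpool set autoexpand=on tank
    zpool online -e tank /dev/sdc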

 

I have not tried Ceph.


shocker


Thanks for the feedback. Unfortunately I’m using raidz3 and this is not an option for me :)


  • 3 years later...
shadowsbane0

Don’t know how dead this topic is, but as of this post I cannot get Emby, Jellyfin, or Plex to run with a CephFS mount point. The config is not on CephFS, only the media directory. They all just boot-loop constantly and I can’t figure it out. None of my other containers have an issue, and if I remove the CephFS volume the container runs as expected.
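For reference, the wiring is roughly the pattern below; a sketch with assumed values (the monitor address, paths, and the Jellyfin image stand in for the real config). One thing worth checking in this situation is whether the container's UID can actually read the CephFS path, since a permissions mismatch is a plausible cause of a crash loop:

    # Kernel CephFS mount on the Docker host (monitor address and secret file are placeholders)
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # Check that the container's UID can read the media path
    ls -ln /mnt/cephfs/media

    # Bind only the media directory into the container; config stays on local disk
    docker run -d --name jellyfin \
      -v /srv/jellyfin/config:/config \
      -v /mnt/cephfs/media:/media \
      jellyfin/jellyfin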

 
