Here are the most useful Ceph commands that every Ceph administrator should know, grouped by purpose and explained with context, especially when working with Proxmox VE.
Cluster Health and Status
1. ceph -s
Shows overall cluster health summary.
ceph -s
Output includes:
- Health status (HEALTH_OK / WARN / ERR)
- Number of OSDs, MONs, MGRs
- I/O rates
- PG state
2. ceph health detail
Gives detailed information about any health warnings.
ceph health detail
3. ceph df
Shows cluster-wide disk usage and a per-pool breakdown, including object counts.
ceph df
4. ceph status
The long form of ceph -s; in current releases both produce the same output.
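ceph status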
OSDs (Object Storage Daemons)
5. ceph osd tree
Displays the hierarchical layout of OSDs (nodes, devices).
ceph osd tree
6. ceph osd status
Gives the up/in status of all OSDs.
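ceph osd status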
7. ceph osd df
Detailed per-OSD usage report (size, utilization, weight, and PG count).
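ceph osd df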
8. ceph osd pool ls
Lists all storage pools.
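ceph osd pool ls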
9. ceph osd pool stats
Stats per pool — useful for performance monitoring.
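ceph osd pool stats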
10. ceph osd out <osd-id>
Mark an OSD out (for maintenance or replacement).
ceph osd out osd.3
11. ceph osd in <osd-id>
Bring an OSD back into the cluster.
ceph osd in osd.3
12. ceph osd crush reweight <osd-id> <weight>
Manually adjust an OSD’s relative data weight.
ceph osd crush reweight osd.2 0.8
MONs and MGRs
13. ceph quorum_status
Shows the current monitor quorum — useful for troubleshooting MON issues.
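ceph quorum_status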
14. ceph mgr dump
Displays manager daemon status and modules.
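ceph mgr dump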
PGs (Placement Groups)
15. ceph pg stat
Summary of PG states (active+clean, degraded, etc.).
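ceph pg stat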
16. ceph pg dump
Detailed info about all placement groups (very verbose).
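ceph pg dump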
17. ceph pg ls-by-pool <pool-name>
List PGs belonging to a specific pool.
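ceph pg ls-by-pool vm-storage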
Pools and Images
18. rbd ls -p <pool>
List RBD images in a pool.
rbd ls -p vm-storage
19. rbd info <image>
Show details of a block image.
rbd info -p vm-storage vm-100-disk-0
20. rbd snap ls <image>
List all snapshots for a given image.
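rbd snap ls -p vm-storage vm-100-disk-0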
21. rbd resize <image> --size <MB>
Resize (grow) an image; shrinking additionally requires --allow-shrink. Include the pool, otherwise the command targets the default rbd pool.
rbd resize -p vm-storage vm-100-disk-0 --size 20480
Maintenance & Troubleshooting
22. ceph tell osd.* bench
Benchmark all OSDs.
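ceph tell osd.* bench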
23. ceph osd scrub <osd-id>
Manually trigger a scrub on an OSD (data consistency check).
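ceph osd scrub osd.3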
24. ceph balancer status
Check if automatic balancing is active.
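ceph balancer status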
25. ceph mgr module enable <module>
Enable optional modules like dashboard, prometheus, etc.
ceph mgr module enable dashboard
Authentication
26. ceph auth list
List all Ceph auth keys (recent releases also accept ceph auth ls).
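ceph auth list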
27. ceph auth get <client>
Get a specific auth key.
ceph auth get client.admin
28. ceph auth del <client>
Remove a client key.
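For example, assuming a no-longer-needed key named client.backup (a hypothetical client):
ceph auth del client.backup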
Cleaning Up
29. ceph osd purge <osd-id> --yes-i-really-mean-it
Completely remove an OSD (its CRUSH entry, auth key, and OSD id).
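For example, after the OSD daemon has been stopped (using osd.3 as in the earlier examples):
ceph osd purge osd.3 --yes-i-really-mean-it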
30. ceph mon remove <mon-id>
Remove a monitor (e.g. after its node has been decommissioned or physically removed).
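For example, assuming a decommissioned monitor named pve2 (a hypothetical node name):
ceph mon remove pve2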
Proxmox Specific
If you’re managing Ceph from within Proxmox VE, some tasks are best handled using:
pveceph status
pveceph pool ls
pveceph osd create <disk>
pveceph install --version <nautilus|octopus|quincy>
These commands integrate with Proxmox-specific configuration and keep the web UI in sync.
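For example, to create an OSD on an empty disk (assuming /dev/sdb is the target device; adjust for your hardware):
pveceph osd create /dev/sdb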
Get in touch with Saturn ME today for a free Ceph consulting session—no strings attached.