Hi,
the OSDs log to the journal, so you should be able to capture the
logs during startup with 'journalctl -fu
ceph-<FSID>@osd.<ID>.service' or check after the failure with
'journalctl -u ceph-<FSID>@osd.<ID>.service'.
Quoting 7ba335c6-fb20-4041-8c18-1b00efb78...@anonaddy.me:
Hello,
I've bootstrapped
On Fri, Apr 29, 2022 at 10:43:54AM +0200, Rainer Krienke
wrote:
> # ceph device ls
> DEVICE                         HOST:DEV   DAEMONS  LIFE EXPECTANCY
> SEAGATE_ST4000NM017A_WS23WKJ4 ceph4:sdb osd.49
> SEAGATE_ST4000NM0295_ZC13XK9P ceph6:sdo osd.92
> SEAGATE_ST4000NM0295_ZC141B3S ce
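In case the empty LIFE EXPECTANCY column is the point here, a rough sketch
of the device health commands from the docs (the device id is copied from
the listing above; predictions only appear after health metrics have been
scraped):

ceph device monitoring on
ceph config set global device_failure_prediction_mode local
ceph device scrape-health-metrics SEAGATE_ST4000NM017A_WS23WKJ4
ceph device get-health-metrics SEAGATE_ST4000NM017A_WS23WKJ4
ceph device predict-life-expectancy SEAGATE_ST4000NM017A_WS23WKJ4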
On Tue, May 3, 2022 at 9:31 PM Steve Taylor wrote:
>
> Just curious, is there any updated ETA on the 16.2.8 release? This
> note implied that it was pretty close a couple of weeks ago, but the
> release task seems to have several outstanding items before it's
> wrapped up.
>
> I'm just wondering i
Hi,
Looking to take our Octopus Ceph up to Pacific in the coming days.
All the machines (physical - osd,mon,admin,meta) are running Debian
'buster' and the setup was originally done with ceph-deploy (~2016).
Previously I've been able to upgrade the core OS, keeping the ceph
packages at the sa
On 04.05.22 12:41, Luke Hall wrote:
Secondarily, would anyone make a strong case for taking this opportunity
to move to cephadm management or to use another deployment tool?
Yes, as this is the path the Ceph project has decided to go.
The ceph-deploy tool has been deprecated. You would have
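In case it helps, a rough sketch of the cephadm adoption flow described in
the docs (hostnames and daemon ids are placeholders; the "Converting an
existing cluster to cephadm" guide has the full procedure):

# on each host, after installing the cephadm package for your release
cephadm adopt --style legacy --name mon.ceph1      # repeat for every mon host
cephadm adopt --style legacy --name mgr.ceph1      # repeat for every mgr host
ceph mgr module enable cephadm
ceph orch set backend cephadm
cephadm adopt --style legacy --name osd.0          # repeat for every OSD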
Hi Everyone,
I'm looking for a bit of guidance on a 9 servers * 16 OSDs per server =
144 OSD system.
This cluster has 143 OSDs in it but ceph osd df shows that they are very
unbalanced in their utilization. Some are around 50% full and yet
others are pushing 85% full. The balancer was on and
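For what it's worth, a sketch of the usual first checks and of enabling the
upmap balancer (verify client compatibility first; this assumes all clients
are luminous or newer):

ceph balancer status                               # current mode and whether it is active
ceph features                                      # confirm there are no pre-luminous clients
ceph osd set-require-min-compat-client luminous    # required for upmap
ceph balancer mode upmap
ceph balancer on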
Hi Josh,
Thanks for getting back to me so soon!
This cluster was upgraded from 13.x to 14.2.9 some time ago. The entire
cluster was installed on 13.x and was upgraded together, so all
OSDs should have the same formatting etc.
Below is pasted the ceph osd df tree output.
Sincerely
-D
Hi Josh,
We do have an old pool that is empty, so there are 4611 empty PGs, but the
rest seem fairly close:
# ceph pg ls|awk '{print $7/1024/1024/10}'|cut -d "." -f 1|sed -e
's/$/0/'|sort -n|uniq -c
4611 00
1 1170
8 1180
10 1190
28 1200
51 1210
54 1220
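A variant of the same pipeline grouped by pool, which makes the empty pool
stand out (a sketch, assuming the BYTES column is still field 7 of
'ceph pg ls' on this release and that field 1 is the PG id in pool.seq form):

ceph pg ls | awk 'NR>1 {split($1,p,"."); bytes[p[1]]+=$7; cnt[p[1]]++}
  END {for (id in cnt) printf "pool %s: %d PGs, %.1f GiB\n", id, cnt[id], bytes[id]/2^30}'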
Hi David,
I think that part of the problem with unbalanced OSDs is that your EC
rule k=7,m=2 gives 9 total chunks and you have 9 total servers. This
is essentially tying Ceph's hands, as it has no choice about where to put
the PGs. Assuming a failure domain of host, each EC shard needs to be
on a diff
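One way to confirm that (a sketch; the pool, profile and rule names below
are placeholders):

ceph osd pool get ecpool erasure_code_profile      # which EC profile the pool uses
ceph osd erasure-code-profile get ecprofile        # shows k, m and crush-failure-domain
ceph osd crush rule dump ecpool_rule               # check the failure-domain steps in the rule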
On Wed, May 4, 2022 at 1:25 AM Eneko Lacunza wrote:
> Hi Gregory,
>
> On 3/5/22 at 22:30, Gregory Farnum wrote:
>
> On Mon, Apr 25, 2022 at 12:57 AM Eneko Lacunza
> wrote:
>
> We're looking to deploy a stretch cluster for a 2-CPD (two data center) deployment. I have
> read the following
> docs:https://do
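In case a concrete outline helps, a sketch of the stretch-mode setup from
those docs (mon names a..e and datacenter names site1/site2/site3 are
placeholders, and the 'stretch_rule' CRUSH rule that places two copies in
each datacenter has to exist first):

ceph mon set election_strategy connectivity
ceph mon set_location a datacenter=site1
ceph mon set_location b datacenter=site1
ceph mon set_location c datacenter=site2
ceph mon set_location d datacenter=site2
ceph mon set_location e datacenter=site3            # tiebreaker mon in a third location
ceph mon enable_stretch_mode e stretch_rule datacenter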