Hi Lindsay,
On 29.06.20 at 15:37, Lindsay Mathieson wrote:
> Nautilus - Bluestore OSDs created with everything on disk. Now I have
> some spare SSDs - can I move the location of the existing WAL and/or DB
> to SSD partitions without recreating the OSD?
>
> I suspect not - saw emails from 2018,
Hi Ben,
Yes, we have the same issues and switched to Seagate for those reasons.
You can fix at least a big part of it by disabling the write cache of
those drives - generally speaking, the Toshiba firmware seems to be broken.
I was not able to find a newer one.
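For reference, a minimal sketch of disabling the volatile write cache (the
device path /dev/sdX is a placeholder; whether the setting survives a reboot
depends on the drive, so many people persist it via a udev rule or boot script):

# SATA drives
hdparm -W 0 /dev/sdX
# SAS/SCSI drives
sdparm --clear WCE --save /dev/sdX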
Greets,
Stefan
On 24.06.20 at 09:4
o SMR).
Stefan
>
> Thanks,
>
> Igor
>
>
> On 5/11/2020 9:44 AM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> where to post the logs?
>>
>> On 06.05.20 at 09:23, Stefan Priebe - Profihost AG wrote:
>>> Hi Igor,
>>>
Hi Igor,
where to post the logs?
On 06.05.20 at 09:23, Stefan Priebe - Profihost AG wrote:
> Hi Igor,
>
> On 05.05.20 at 16:10, Igor Fedotov wrote:
>> Hi Stefan,
>>
>> so (surprise!) some DB access counters show a significant difference, e.g.
t.sum: 0.003866224
kv_sync_lat.sum: 2.667407139
bytes_written_sst: 34904457
> If that's particularly true for "kv_flush_lat" counter - please rerun with
> debug-bluefs set to 20 and collect OSD logs for both cases
Yes it's still true for kv_flush_lat - see above. Where
0 seconds, can cause ill effects on osd.
Please adjust 'osd_bench_small_size_max_iops' with a higher value if you
wish to use a higher 'count'.
Stefan
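For the record, a sketch of how such a rerun can be captured (osd.0 is just an
example id; the resulting log ends up in the usual /var/log/ceph/ceph-osd.0.log):

ceph tell osd.0 injectargs '--debug_bluefs=20'
ceph tell osd.0 bench
ceph tell osd.0 injectargs '--debug_bluefs=1/5'   # back to the default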
>
>
> Thanks,
>
> Igor
>
> On 4/28/2020 8:42 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
KiB in 10.7454 sec at 1.1 MiB/sec 279 IOPS
both backed by the same SAMSUNG SSD as block.db.
Greets,
Stefan
On 28.04.20 at 19:12, Stefan Priebe - Profihost AG wrote:
> Hi Igor,
> On 27.04.20 at 15:03, Igor Fedotov wrote:
>> Just left a comment at https://tracker.ceph.com/
grate to do actual migration.
>
> And I think that's the root cause for the above ticket.
Perfect - this removed all spillover in seconds.
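Assuming the truncated suggestion above refers to BlueFS device migration, a
hedged sketch of the command (the OSD must be stopped first; ${OSD} and the
default paths are placeholders):

systemctl stop ceph-osd@${OSD}
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
  --devs-source /var/lib/ceph/osd/ceph-${OSD}/block \
  --dev-target /var/lib/ceph/osd/ceph-${OSD}/block.db \
  bluefs-bdev-migrate
systemctl start ceph-osd@${OSD}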
Greets,
Stefan
> Thanks,
>
> Igor
>
> On 4/24/2020 2:37 PM, Stefan Priebe - Profihost AG wrote:
>> No, not a standalone WAL, I wa
runs.
>
>
> Thanks,
>
> Igor
>
>
>
> On 4/24/2020 12:32 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> there must be a difference. I purged osd.0 and recreated it.
>>
>> Now it gives:
>> ceph tell osd.0 bench
ase - I presume all the benchmark results are persistent, i.e.
> you can see the same results for multiple runs.
Yes, I did 10 runs for each posted benchmark.
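(A trivial sketch of how such repeated runs can be driven - assuming osd.0, and
not necessarily how the numbers above were produced:)

for i in $(seq 10); do ceph tell osd.0 bench; done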
Thanks,
Stefan
>
>
> Thanks,
>
> Igor
>
>
>
>> On 4/24/2020 12:32 PM, Stefan Priebe - Profihost A
>
> On 4/24/2020 1:58 PM, Stefan Priebe - Profihost AG wrote:
>> Is the WAL device missing? Do I need to run bluefs-bdev-new-db and WAL?
>>
>> Greets,
>> Stefan
>>
>>> On 24.04.2020 at 11:32, Stefan Priebe - Profihost AG wrote:
Is the WAL device missing? Do I need to run bluefs-bdev-new-db and WAL?
Greets,
Stefan
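(As far as I understand, with block.db on the faster device BlueStore places
the WAL there automatically, so a separate WAL volume only helps if an even
faster device is available. If one is added anyway, the analogous command would
be something like the following - the LV name is a made-up placeholder:)

ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
  bluefs-bdev-new-wal --dev-target /dev/vgroup/lvwal-1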
> On 24.04.2020 at 11:32, Stefan Priebe - Profihost AG wrote:
>
> Hi Igor,
>
> there must be a difference. I purged osd.0 and recreated it.
>
> Now it gives:
"iops": 31.389961354303033
}
What's wrong with adding a block.db device later?
Stefan
On 23.04.20 at 20:34, Stefan Priebe - Profihost AG wrote:
Hi,
if the OSDs are idle the difference is even worse:
# ceph tell osd.0 bench
{
"bytes_written": 1073741824,
t;: 16.626931034761871
}
# ceph tell osd.38 bench
{
"bytes_written": 1073741824,
"blocksize": 4194304,
"elapsed_sec": 6.890398517004,
"bytes_per_sec": 155831599.77624846,
"iops": 37.153148597776521
}
Stefan
On 23.04.20 at 14:39,
uld
give the same performance according to their specs. The only other difference is
that OSD 36 was directly created with the block.db device (Nautilus
14.2.7) and OSD 0 (14.2.8) was not.
Stefan
On 4/23/2020 8:35 AM, Stefan Priebe - Profihost AG wrote:
Hello,
is there anything else needed beside runni
Hello,
is there anything else needed besides running:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD}
bluefs-bdev-new-db --dev-target /dev/vgroup/lvdb-1
I did so some weeks ago and currently I'm seeing that all OSDs
originally deployed with --block-db show 10-20% I/O waits while all
Igor Fedotov wrote:
> On 4/21/2020 4:59 PM, Stefan Priebe - Profihost AG wrote:
>> Hi Igor,
>>
>> On 21.04.20 at 15:52, Igor Fedotov wrote:
>>> Hi Stefan,
>>>
>>> I think that's the cause:
>>>
>>> https://tracker.ceph.com/issues/42
Greets,
Stefan
>
> On 4/21/2020 4:02 PM, Stefan Priebe - Profihost AG wrote:
>> Hi there,
>>
>> I've a bunch of hosts where I migrated HDD-only OSDs to hybrid ones
>> using:
>> sudo -E -u ceph -- bash -c 'ceph-bluestore-tool --path
>> /var/lib/ceph/o
Hi there,
I've a bunch of hosts where I migrated HDD-only OSDs to hybrid ones using:
sudo -E -u ceph -- bash -c 'ceph-bluestore-tool --path
/var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-new-db --dev-target
/dev/bluefs_db1/db-osd${OSD}'
While this worked fine and each OSD was running fine.
It loses
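(For completeness, a hedged sketch of the per-OSD sequence around that command;
noout is optional and the VG/LV names are the ones from the command above:)

ceph osd set noout                                # avoid rebalancing while the OSD is down
systemctl stop ceph-osd@${OSD}
sudo -E -u ceph -- ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
  bluefs-bdev-new-db --dev-target /dev/bluefs_db1/db-osd${OSD}
chown -R ceph:ceph /var/lib/ceph/osd/ceph-${OSD}  # the new block.db link must be owned by ceph
systemctl start ceph-osd@${OSD}
ceph osd unset noout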
Hello,
is there any way to reset the deep-scrub timestamps for PGs?
The cluster was accidentally in state nodeep-scrub and is now unable to
deep scrub fast enough.
Is there any way to force-mark all PGs as deep-scrubbed to start from 0
again?
Greets,
Stefan
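(As far as I know there is no direct "mark as deep-scrubbed" command; what can
be done is to kick off deep scrubs explicitly, oldest first - a rough sketch,
assuming jq is installed and the JSON field names match your release:)

ceph pg dump pgs --format json 2>/dev/null \
  | jq -r '.pg_stats | sort_by(.last_deep_scrub_stamp) | .[].pgid' \
  | head -n 50 \
  | while read pg; do ceph pg deep-scrub "$pg"; done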
On 04.03.20 at 16:02, Wido den Hollander wrote:
>
>
> On 3/4/20 3:49 PM, Lars Marowsky-Bree wrote:
>> On 2020-03-04T15:44:34, Wido den Hollander wrote:
>>
>>> I understand what you are trying to do, but it's a trade-off. Endless
>>> snapshots are also a danger because bit-rot can sneak in somew
On 04.03.20 at 15:49, Lars Marowsky-Bree wrote:
> On 2020-03-04T15:44:34, Wido den Hollander wrote:
>
>> I understand what you are trying to do, but it's a trade-off. Endless
>> snapshots are also a danger because bit-rot can sneak in somewhere which
>> you might not notice.
>>
>> A fresh expo
On 04.03.20 at 15:44, Wido den Hollander wrote:
>
>
> On 3/3/20 8:46 PM, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> does anybody know whether there is any mechanism to make sure an image
>> looks like the original after an import-diff?
>>
>
>
> On Wed, Mar 4, 2020 at 11:05 AM Stefan Priebe - Profihost AG
> wrote:
>>
>> Hello,
>>
>> is there any way to switch to pg_upmap without triggering heavy
>> rebalancing two times?
>>
>> 1.) happens at:
>> ceph osd crush weig
Hello,
is there any way to switch to pg_upmap without triggering heavy
rebalancing two times?
1.) happens at:
ceph osd crush weight-set rm-compat
2.) happens after running the balancer in pg_upmap mode
Greets,
Stefan
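(For context, a hedged sketch of the usual switch to the upmap balancer; it
does not by itself avoid the two data movements described above, it just spells
out the steps:)

ceph osd set-require-min-compat-client luminous   # upmap needs luminous+ clients
ceph osd crush weight-set rm-compat               # step 1 above
ceph balancer mode upmap
ceph balancer on                                  # step 2 above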
d export on
the source and target snapshot every time to compare hashes? Which is
slow if you talk about hundreds of terabytes of data, isn't it?
Stefan
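(A hedged sketch of that kind of check - streaming a full export of the same
snapshot on both clusters and comparing checksums; cluster conf paths, pool,
image and snapshot names are placeholders:)

rbd -c /etc/ceph/source.conf export pool/image@snap - | md5sum
rbd -c /etc/ceph/backup.conf export pool/image@snap - | md5sum
# identical digests mean the backup matches the source at that snapshot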
>
> Regards,
>
> [1] https://github.com/JackSlateur/backurne
>
> On 3/3/20 8:46 PM, Stefan Priebe - Profihost AG wrote
Hello,
does anybody know whether there is any mechanism to make sure an image
looks like the original after an import-diff?
While doing Ceph backups on another Ceph cluster I currently do a fresh
import every 7 days. So I'm sure if something went wrong with
import-diff I have a fresh one every 7
On 03.03.20 at 15:34, Rafał Wądołowski wrote:
> Stefan,
>
> What version are you running?
14.2.7
> You wrote "Ceph automatically started to
> migrate all data from the hdd to the ssd db device", is that normal auto
> compaction or did ceph develop a trigger to do it?
Normal - after running
ceph-b
On 03.03.20 at 08:38, Thomas Lamprecht wrote:
> Hi,
>
> On 3/3/20 8:01 AM, Stefan Priebe - Profihost AG wrote:
>> does anybody have a guide to build ceph Nautilus for Debian stretch? I
>> wasn't able to find a backported gcc-8 for stretch.
>
> That's be
Nobody who has an idea? Ceph automatically started to migrate all data
from the HDD to the SSD DB device but has stopped at 128 KiB on nearly all
OSDs.
Greets,
Stefan
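(A hedged sketch of how the leftover spillover can be inspected per OSD and a
manual RocksDB compaction triggered, which is what usually moves the remaining
data; run on the OSD host, osd.0 is a placeholder and the perf counter names
may vary slightly by release:)

ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'
ceph daemon osd.0 compact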
On 02.03.20 at 10:32, Stefan Priebe - Profihost AG wrote:
> Hello,
>
> i added a db device to my osds running nautilu
Hello list,
does anybody have a guide to build ceph Nautilus for Debian stretch? I
wasn't able to find a backported gcc-8 for stretch.
Otherwise I would start one.
Greets,
Stefan
fter osd creation a ssd device.
Greets,
Stefan
>
> Reed
>
>> On Mar 2, 2020, at 3:32 AM, Stefan Priebe - Profihost AG
>> <s.pri...@profihost.ag> wrote:
>>
>> Hello,
>>
>> i added a db device to my osds running nautilus. The DB data migra
Hello,
I added a db device to my OSDs running Nautilus. The DB data migrated
over some days from the HDD to the SSD (db device).
But now it seems all are stuck at:
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 8 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 8 OSD(s)
osd.0
on crash with negative index starting at - up to -1 as a prefix.
>
> -1> 2020-01-16 01:10:13.404090 7f3350a14700 -1 rocksdb:
>
>
> It would be great if you share several log snippets for different
> crashes containing these last 1 lines.
>
>
> Thanks,
>