> So hdparm -W 0 /dev/sdx doesn't work, or it makes no difference?
I wrote "We found the raw throughput in fio benchmarks to be very different for
write-cache enabled and disabled, exactly as explained in the performance
article.", so yes, it makes a huge difference.
> Also I am not sure I und
Hi everyone,
I'm trying to understand the difference between the command:
ceph df details
and the result I'm getting when I run this script:
total_bytes=0
while read user; do
  echo $user
  bytes=$(radosgw-admin user stats --uid=${user} | grep total_bytes_rounded \
          | tr -dc "0-9")
  if
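The preview cuts the script off here; a possible completion, assuming the
goal is to sum total_bytes_rounded over every RGW user and compare the total
against ceph df detail (the user list via radosgw-admin user list and the jq
parsing are my additions, not part of the original script):

  #!/bin/bash
  total_bytes=0
  for user in $(radosgw-admin user list | jq -r '.[]'); do
      echo "$user"
      # add --sync-stats to user stats if you want freshly synced numbers
      bytes=$(radosgw-admin user stats --uid="$user" 2>/dev/null \
              | grep total_bytes_rounded | tr -dc "0-9")
      if [ -n "$bytes" ]; then
          total_bytes=$((total_bytes + bytes))
      fi
  done
  echo "Total: $total_bytes bytes"

Bear in mind that ceph df detail reports usage at the pool level (and,
depending on the release, raw usage including replication), while user stats
are logical per-bucket totals, so some difference is expected.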
Frank,
Sorry for the confusion. I thought that turning off the cache with hdparm -W
0 /dev/sdx takes effect right away, and that with non-RAID controllers and
Seagate or Micron SSDs I would see a difference when starting an fio benchmark
right after executing hdparm. So I wonder it makes a difference wheth
Hello Igor,
thanks for all your feedback and all your help.
The first thing I'll try is to upgrade a bunch of systems from
kernel 4.19.66 to 4.19.97 and see what happens.
I'll report back in 7-10 days to verify whether this helps.
Greets,
Stefan
On 20.01.20 at 13:12, Igor Fedotov wrote:
> Hi S
OK, now I understand. Yes, the cache setting will take effect immediately. It's
more a question of whether you trust the disk firmware to apply the change
correctly in all situations while production IO is active at the same time
(will the volatile cache be flushed correctly or not)? I would not, and would rather change
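The message is cut off before the actual recommendation; one common way to
make the setting persistent across device reconnects and reboots (an
assumption on my part, not necessarily what the author goes on to suggest) is
a udev rule that disables the volatile write cache on non-rotational disks:

  # /etc/udev/rules.d/99-disable-write-cache.rules (illustrative; hdparm path may differ per distro)
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", RUN+="/usr/sbin/hdparm -W 0 /dev/%k"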
Hi,
I'm having trouble changing osd_memory_target on my test cluster. I've
upgraded the whole cluster from Luminous to Nautilus, and all OSDs are running
BlueStore. Because this test lab is short on RAM, I wanted to lower
osd_memory_target to save some memory.
# ceph version
ceph version 14.2.6 (f0aa067
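For context, lowering the target on a Nautilus cluster usually looks roughly
like this (the 2 GiB value and the OSD id are only examples):

  # Cluster-wide default for all OSDs
  ceph config set osd osd_memory_target 2147483648
  # Or per OSD
  ceph config set osd.3 osd_memory_target 2147483648
  # Verify what is stored for that OSD
  ceph config get osd.3 osd_memory_target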
Quoting Martin Mlynář (nexus+c...@smoula.net):
>
> When I remove this option:
> # ceph config rm osd osd_memory_target
>
> OSD starts without any trouble. I've seen the same behaviour when I wrote
> this parameter into /etc/ceph/ceph.conf
>
> Is this a known bug? Am I doing something wrong?
I wond
On 14.2.5 but also present in Luminous, buffer_anon memory use spirals
out of control when scanning many thousands of files. The use case is
more or less "look up this file and if it exists append this chunk to
it, otherwise create it with this chunk." The memory is recovered as
soon as the workloa
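For anyone following along, the buffer_anon pool can be watched directly via
the admin socket while reproducing the workload; a sketch (mds.a is a
placeholder, point it at whichever daemon's memory is growing):

  ceph daemon mds.a dump_mempools | grep -A 2 buffer_anon
  # or poll it while the scan is running
  watch -n 5 'ceph daemon mds.a dump_mempools | grep -A 2 buffer_anon'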
I am trying to set up a CephFS with a Cache Tier (for data) on a mini test
cluster, but a kernel-mount CephFS client is unable to write. Cache tier
setup alone seems to be working fine (I tested it with `rados put` and `osd
map` commands to verify on which OSDs the objects are placed) and setting
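As a rough illustration of the test described above (pool and object names
are made up, not from the original message):

  # Write an object through the base pool and see where it lands
  rados -p cephfs_data put testobj /tmp/testfile
  ceph osd map cephfs_data testobj   # placement in the base pool
  ceph osd map cache_pool testobj    # placement in the cache pool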
On Mon, Jan 20, 2020 at 12:57:51 PM, EDH - Manuel Rios wrote:
> Hi Cephs
>
> Several nodes of our Ceph 14.2.5 cluster are fully dedicated to hosting cold storage /
> backup information.
>
> Today, while checking the data usage with a customer, we found that radosgw-admin is
> reporting:
...
> That's nearly 5 TB used
On Tue, Jan 21, 2020 at 6:02 PM Hayashida, Mami wrote:
>
> I am trying to set up a CephFS with a Cache Tier (for data) on a mini test
> cluster, but a kernel-mount CephFS client is unable to write. Cache tier
> setup alone seems to be working fine (I tested it with `rados put` and `osd
> map`
Ilya,
Thank you for your suggestions!
`dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789
socket error on write`. No further detail. But using the admin key
(client.admin) for mounting CephFS solved my problem. I was able to write
successfully! :-)
$ sudo mount -t ceph 10.33
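The command line is cut off above; the working mount presumably looked
roughly like this (the mount point and secret file path are illustrative):

  sudo mount -t ceph 10.33.70.222:6789:/ /mnt/cephfs \
       -o name=admin,secretfile=/etc/ceph/admin.secret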
Hi Robin,
- What are the external tools? CloudBerry S3 Explorer and S3 Browser
- How many objects do the external tools report as existing? The tools report 72142
keys (approx. 6 TB) vs. Ceph num_objects 180981 (9 TB)
- Do the external tools include incomplete multipart uploads in their size
data? I
On Tue, Jan 21, 2020 at 7:51 PM Hayashida, Mami wrote:
>
> Ilya,
>
> Thank you for your suggestions!
>
> `dmesg` (on the client node) only had `libceph: mon0 10.33.70.222:6789 socket
> error on write`. No further detail. But using the admin key (client.admin)
> for mounting CephFS solved my pro
On Tue, 21 Jan 2020 at 17:09, Stefan Kooman wrote:
> Quoting Martin Mlynář (nexus+c...@smoula.net):
>
> >
> > When I remove this option:
> > # ceph config rm osd osd_memory_target
> >
> > OSD starts without any trouble. I've seen the same behaviour when I wrote
> > this parameter into /etc/ceph/
Quoting Martin Mlynář (nexus+c...@smoula.net):
> Do you think this could help? The OSD does not even start; I'm getting a little
> lost as to how flushing caches could help.
I might have misunderstood. I thought the OSDs crashed when you set the
config option.
> According to trace I suspect something aro
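A few commands that might help narrow down where the option is coming from
and capture the failure (illustrative; osd.0 is a placeholder):

  ceph config dump | grep osd_memory_target           # what the mon config db holds
  ceph daemon osd.0 config show | grep memory_target  # effective value on a running OSD
  # Start one OSD in the foreground with extra logging (run as the ceph
  # user, or add --setuser ceph --setgroup ceph)
  ceph-osd -f -i 0 --debug_osd 20 2>&1 | tee osd.0.log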
We were able to isolate an individual Micron 5200 and perform Vitaliy's
tests in his spreadsheet.
An interesting item - write cache changes do NOT require a power cycle
to take effect, at least on a Micron 5200.
The complete results from fio are included at the end of this message
for the individ
Hi! Thanks.
The parameter gets reset when you reconnect the SSD, so in fact it requires that
you do not power cycle it after changing the parameter :-)
OK, this case seems lucky; a ~2x change isn't a lot. Can you tell me the exact model
and capacity of this Micron, and what controller was used in this test? I
Hi Vitaliy,
The drive is a Micron 5200 ECO 3.84TB
This is from the msecli utility:
Device Name : /dev/sde
Model No : Micron_5200_MTFDDAK3T8TDC
Serial No:
FW-Rev : D1MU404
Total Size : 3840.00GB
Drive Status : Drive is in good heal
On Tue, Jan 21, 2020 at 8:32 AM John Madden wrote:
>
> On 14.2.5 but also present in Luminous, buffer_anon memory use spirals
> out of control when scanning many thousands of files. The use case is
> more or less "look up this file and if it exists append this chunk to
> it, otherwise create it wi
Hi Cbodley,
As you requested via IRC, we tested directly with the AWS CLI.
Results:
aws --endpoint=http://XX --profile=ceph s3api list-multipart-uploads
--bucket Evol6
It reports nearly 170 uploads.
We used the last one:
{
"Initiator": {
"DisplayName": "x",
Hi,
We upgraded our Ceph cluster from Hammer to Luminous and it is running
fine. Post-upgrade we live-migrated all our OpenStack instances (not 100%
sure). Currently we see 1658 clients still on the Hammer version. To track the
clients we increased debugging to debug_mon=10/10, debug_ms=1/5,
debug
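Besides raising the debug levels, two commands report connected client
versions directly (a sketch; the mon ID is a placeholder):

  ceph features                 # summary of connected clients by release/feature bits
  ceph daemon mon.a sessions    # per-session details, including client addresses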
On Wed, Jan 22, 2020 at 12:24 AM Patrick Donnelly wrote:
> On Tue, Jan 21, 2020 at 8:32 AM John Madden wrote:
> >
> > On 14.2.5 but also present in Luminous, buffer_anon memory use spirals
> > out of control when scanning many thousands of files. The use case is
> > more or less "look up this fi