All as it should be, except for compression. Am I overlooking something?
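For completeness, this is roughly how the settings can be inspected (pool name and OSD id are placeholders):

  ceph osd pool get <pool-name> compression_mode
  ceph osd pool get <pool-name> compression_algorithm
  ceph daemon osd.0 config show | grep bluestore_compression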
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
possibly provide a source or some sample
commands?
Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 09 October 2018 17:42
To: Frank Schilder
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users
ssion happening. If
you know about something other than the "ceph osd pool set" commands, please let
me know.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 12 October 2018 15:47:20
To:
ou know.
Thanks and have a nice weekend,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: David Turner
Sent: 12 October 2018 16:50:31
To: Frank Schilder
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] bluestore compression enabled but n
e questions, this would be great. The most
important thing right now is that I got it to work.
Thanks for your help,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Frank
Schilder
Sent: 12 October 2018 17:00
(see the question marks
in the table above: what is the resulting mode?).
What I would like to do is enable compression on all OSDs, enable compression
on all data pools, and disable compression on all metadata pools. Data and
metadata pools might share OSDs in the future. The above ta
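If it makes the intention clearer, the commands I have in mind are roughly these (pool names are placeholders; the OSD-wide setting could equally be "bluestore compression mode = aggressive" in ceph.conf):

  ceph config set osd bluestore_compression_mode aggressive
  ceph osd pool set <data-pool> compression_mode aggressive
  ceph osd pool set <meta-pool> compression_mode none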
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Ragan, Tj
(Dr.)
Sent: 14 March 2019 11:22:07
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] bluestore compression enabled but no
helps,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Rhian Resnick
Sent: 16 November 2018 16:58:04
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Checking cephfs compression is working
How do you confirm
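One way to check is to look at the bluestore compression counters on an OSD (osd.0 is just an example; the counters should be non-zero on OSDs that actually compressed data):

  ceph daemon osd.0 perf dump | grep compress

In particular, comparing bluestore_compressed_original with bluestore_compressed_allocated gives a rough idea of the achieved ratio.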
ser running the benchmark. Only IO to particular files/a particular
directory stopped, so this problem seems to remain isolated. Also, the load on
the servers was not high during the test. The fs remained responsive to other
users. Also, the MDS daemons never crashed. There was no fail-over e
"time": "2019-05-15 11:38:36.511381",
"event": "header_read"
},
{
"time": "2019-05-15 11:38:36.511383",
"event": "throttled"
relevant if multiple MDS daemons are active on a file system.
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 16 May 2019 05:50
To: Frank Schilder
Cc: Stefan Kooman; ceph-users@lists.ceph.com
Subject: Re: [ceph
single-file-read load on it.
I hope it doesn't take too long.
Thanks for your input!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 16 May 2019 09:35
To: Frank Schilder
Subject: Re: [ceph-users] mimic
be, keeping in mind that we are in a pilot production phase already and
need to maintain integrity of user data?
Is there any counter showing if such operations happened at all?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
0b~1,10d~1,10f~1,111~1]
The relevant pools are con-fs-meta and con-fs-data.
Best regards,
Frank
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
[root@ceph-08 ~]# cat /etc/tuned/ceph/tuned.conf
[main]
summary=Settings for ceph cluster. Derived from throughput-performance.
inc
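Only the beginning of the file is shown above. A minimal sketch of such a derived tuned profile looks like this (site-specific settings omitted):

  [main]
  summary=Settings for ceph cluster. Derived from throughput-performance.
  include=throughput-performance

  [sysctl]
  # site-specific sysctl overrides go here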
versioned encoding,6=dirfrag is stored in omap,8=no
anchor table,9=file layout v2,10=snaprealm v2}
Sorry, I should have checked this first.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
00 PGs per OSD. I actually plan to give the cephfs
a somewhat higher share for performance reasons. It's on the list.
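(Rule-of-thumb arithmetic with made-up numbers: with 100 OSDs and a target of 100 PGs per OSD there is room for about 100 x 100 = 10000 PG placements; a replicated pool with size 3 uses 3 placements per PG, so all size-3 pools together should stay around 10000 / 3, roughly 3300 PGs; for EC pools the divisor is k+m.)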
Thanks again and have a good weekend,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Stefan Kooman
Sent: 18 May 201
Dear Yan,
thank you for taking care of this. I removed all snapshots and stopped snapshot
creation.
Please keep me posted.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 20 May 2019 13:34:07
sion. Either min_size=k is safe or not. If
it is not, it should never be used anywhere in the documentation.
I hope I marked my opinions and hypotheses clearly and that the links are
helpful. If anyone could shed some light on why exactly min_size=k+1 is
important, I would be grateful.
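(For concreteness, with an 8+2 profile the choice is between min_size=9, i.e. k+1, and min_size=8, i.e. k; the setting itself is just

  ceph osd pool set <ec-pool> min_size 9

with the pool name as a placeholder.)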
Best regards,
an interesting feature? Is there any reason for not
remapping all PGs (if possible) prior to starting recovery? It would eliminate
the lack of redundancy for new writes (at least for new objects).
Thanks again and best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
10/18/surviving-a-ceph-cluster-outage-the-hard-way/
. You will easily find more. The deeper problem here is called "split-brain"
and there is no real solution to it except to avoid it at all cost.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Dear Maged,
thanks for elaborating on this question. Is there already information in which
release this patch will be deployed?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
that cannot be questioned by a single OSD
trying to mark itself as in.
At least, the only context in which I have heard of OSD flapping was in
connection with 2/1 pools. I have never seen such a report for, say, 3/2 pools.
Am I overlooking something here?
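(By 2/1 and 3/2 I mean size/min_size of replicated pools; this is easy to audit and change:

  ceph osd pool ls detail | grep size
  ceph osd pool set <pool> size 3
  ceph osd pool set <pool> min_size 2

with pool names as placeholders.)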
Best regards,
=====
Frank Schilder
AIT Risø Campus
gh-network-load scheduled tasks on your machines (host or VM) or
somewhere else affecting relevant network traffic (backups etc?)
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Marc Roos
Se
ng crush
rules to adjust locations of pools, etc.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
stable)
I can't see anything unusual in the logs or health reports.
Thanks for your help!
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
Please ignore the message below, it has nothing to do with ceph.
Sorry for the spam.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Frank
Schilder
Sent: 17 June 2019 20:33
To: ceph
replicated pools, the aggregated IOPs might be heavily affected. I have,
however, no data on that case.
Hope that helps,
Frank
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Dan van der
Ster
Sent: 20
Typo below, I meant "I doubled bluestore_compression_min_blob_size_hdd ..."
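(Concretely, assuming the default of 128K for HDDs, doubling means something like

  ceph config set osd bluestore_compression_min_blob_size_hdd 262144

or the equivalent line in the [osd] section of ceph.conf.)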
____
From: Frank Schilder
Sent: 20 June 2019 19:02
To: Dan van der Ster; ceph-users
Subject: Re: [ceph-users] understanding the bluestore blob, chunk and
compression para
Dear Yan, Zheng,
does mimic 13.2.6 fix the snapshot issue? If not, could you please send me a
link to the issue tracker?
Thanks and best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Yan, Zheng
Sent: 20 May 2019
e works well for the majority of our use cases. We
can still build small expensive pools to accommodate special performance
requests.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of David
Sent
_size=object_size/k. Coincidentally, for
spinning disks this also seems to imply best performance.
If this is wrong, maybe a disk IO expert can provide a better explanation as a
guide for EC profile choices?
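(Worked example with made-up numbers: with 4 MiB objects and k=8, each of the k data shards stores object_size/k = 4 MiB / 8 = 512 KiB of a full object.)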
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
integer.
alloc_size should be an integer multiple of object_size/k.
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: Frank Schilder
Sent: 09 July 2019 09:22
To: Nathan Fish; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] What
fig, kernel
parameters etc, etc. One needs to test what one has.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Lars
Marowsky-Bree
Sent: 11 July 2019 10:14:04
To: ceph-users@lists.ceph.com
being powers of 2.
Yes, the 6+2 is a bit surprising. I have no explanation for the observation. It
just seems a good argument for "do not trust what you believe, gather facts".
And to try things that seem non-obvious - just to be sure.
Best regards,
=====
Frank Schilder
node) against a running cluster with mons in quorum.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Oscar Segarra
Sent: 15 July 2019 11:55
To: ceph-users
Subject: [ceph-users] What if
pshots due to a not yet fixed bug; see this
thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg54233.html
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Robert Ruge
Sen
On CentOS 7, the option "secretfile" requires installation of ceph-fuse.
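An example of the kind of mount where this matters (monitor address, client name and paths are placeholders):

  mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=client1,secretfile=/etc/ceph/client1.secret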
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Yan, Zheng
Sent: 07 August 2019 10:10:19
password: "!"
comment: "ceph-container daemons"
uid: 167
group: ceph
shell: "/sbin/nologin"
home: "/var/lib/ceph"
create_home: no
local: yes
state: present
system: yes
This should err if a group and user ceph already exist with IDs
e and what compromises are you willing
to make with regards to sleep and sanity.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Salsa
Sent: 21 October 2019 17:31
To: Martin Verges
Cc: ceph-use
ing.
Best regards,
=====
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of John Hearns
Sent: 24 October 2019 08:21:47
To: ceph-users
Subject: [ceph-users] Erasure coded pools on Ambedded - advice please
I am se
Is this issue now a no-go for updating to 13.2.7 or are there only some
specific unsafe scenarios?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Dan van der
Ster
Sent: 03 December
worst-case situations.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
From: ceph-users on behalf of Bastiaan
Visser
Sent: 17 January 2020 06:55:25
To: Dave Hall
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-
disk_activate" && -n "${OSD_DEVICE}" ]] ; then
echo "Disabling write cache on ${OSD_DEVICE}"
/usr/sbin/smartctl -s wcache=off "${OSD_DEVICE}"
fi
This works for both SAS and SATA drives and ensures that the write cache is
disabled before an OSD daemon starts.
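To verify afterwards that the cache really is off, something like the following should work (device name is a placeholder):

  smartctl -g wcache /dev/sdX

and, for SATA drives, hdparm -W /dev/sdX shows the same information.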
the OSD is started. Why and how
else would one want this to happen?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
the setting while
the OSD is down.
During benchmarks on raw disks I just switched the cache on and off whenever I
needed to. There was nothing running on the disks, and the fio benchmark is
destructive anyway.
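For completeness, the kind of raw-device job I mean, which overwrites the disk (device and parameters are only an example):

  fio --name=rawtest --filename=/dev/sdX --direct=1 --ioengine=libaio --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based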
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14