Hi David,
thanks for your answer. I did enable compression on the pools as described in
the link you sent below (ceph osd pool set sr-fs-data-test compression_mode
aggressive; I also tried force, to no avail). However, I could not find anything
on enabling compression per OSD. Could you possibly
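For reference, a minimal sketch of the two levels at which compression can be set, assuming the pool name sr-fs-data-test from above; the ceph.conf keys are the BlueStore inline-compression options covered in the documentation linked later in this thread:

  # per pool (what was already tried), plus the algorithm key
  ceph osd pool set sr-fs-data-test compression_mode aggressive
  ceph osd pool set sr-fs-data-test compression_algorithm snappy

  # per OSD, in ceph.conf under [osd] (or [global]), followed by an OSD restart
  bluestore compression mode = aggressive
  bluestore compression algorithm = snappy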
I rebooted a Ceph host and logged `ceph status` & `ceph health detail`
every 5 seconds. During this I encountered 'PG_AVAILABILITY Reduced data
availability: pgs peering'. At the same time some VMs hung as described
before.
See the log here: https://pastebin.com/wxUKzhgB
PG_AVAILABILITY is noted
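The polling itself can be done with a simple loop; a sketch, assuming a bash shell and a working admin keyring on the node:

  while true; do
    date
    ceph status
    ceph health detail
    sleep 5
  done >> ceph-status.log 2>&1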
Hi,
On 10/12/2018 01:55 PM, Nils Fahldieck - Profihost AG wrote:
I rebooted a Ceph host and logged `ceph status` & `ceph health detail`
every 5 seconds. During this I encountered 'PG_AVAILABILITY Reduced data
availability: pgs peering'. At the same time some VMs hung as described
before.
Just
Hi, in our `ceph.conf` we have:
mon_max_pg_per_osd = 300
While the host is offline (9 OSDs down):
4352 PGs * 3 / 62 OSDs ~ 210 PGs per OSD
If all OSDs are online:
4352 PGs * 3 / 71 OSDs ~ 183 PGs per OSD
... so this doesn't seem to be the issue.
If I understood you right, that's what y
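As a cross-check of the arithmetic above, the per-OSD PG count can also be read directly from the cluster; a sketch:

  ceph osd df tree   # the PGS column shows how many PGs each OSD currently holds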
Hi all,
we are running a Luminous 12.2.8 cluster with a 3-fold replicated cache
pool with a min_size of 2. We recently encountered an "object unfound"
error in one of our pgs in this pool. After marking this object lost,
two of the acting osds crashed and were unable to start up again, with
o
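For readers following the thread: the "marking this object lost" step is presumably the usual unfound-object procedure, sketched below with a placeholder PG id (2.5 is not from the original report):

  ceph pg 2.5 query                     # inspect the PG reporting the unfound object
  ceph pg 2.5 mark_unfound_lost revert  # or 'delete' if no previous version exists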
It's all of the settings that you found in your first email when you dumped
the configurations and such.
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression
On Fri, Oct 12, 2018 at 7:36 AM Frank Schilder wrote:
> Hi David,
>
> thanks for your answer. I d
Cephers:
As the subject suggests, has anyone tested Samsung 860 DCT SSDs? They
are really inexpensive and we are considering buying some to test.
Thanks,
--
Kenneth Van Alstyne
Systems Architect
Knight Point Systems, LLC
Service-Disabled Veteran-Owned Business
1775 Wiehle Avenue Suite 1
The number of PGs per OSD does not change unless the OSDs are marked out. You have
noout set, so that doesn't change at all during this test. All of your PGs
peered quickly at the beginning and then were active+undersized the rest of
the time, you never had any blocked requests, and you always had 100MB/s+
What do you want to use these for? "5 Year or 0.2 DWPD" is the durability
rating of this drive, which is absolutely awful for almost every use in Ceph.
Possibly if you're using these for data disks (not DB or WAL) and you plan
to have more durable media to host the DB+WAL on... this could work. Or
if you
I haven't tested them, but be careful of the DWPD:
0.2 DWPD
:/
- Original Mail -
From: "Kenneth Van Alstyne"
To: "ceph-users"
Sent: Friday, October 12, 2018 15:53:43
Subject: [ceph-users] Anyone tested Samsung 860 DCT SSDs?
Cephers:
As the subject suggests, has anyone tested Samsung 860 DCT
Hi
It has a TBW of only 349 TB, so it might die quite soon. But what about the
"Seagate Nytro 1551 DuraWrite 3DWPD Mainstream Endurance 960GB, SATA"?
It seems really cheap too and has a TBW of 5.25 PB. Has anybody tested that? What
about (RBD) performance?
Cheers
Corin
On Fri, 2018-10-12 at 13:53 +, Kenneth
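To make the endurance figures in this sub-thread comparable, a rough back-of-the-envelope conversion from TBW to DWPD, assuming the 960 GB models and a 5-year warranty period:

  DWPD = TBW / (capacity * 365 * warranty_years)
  Samsung 860 DCT 960 GB:    349 TB  / (0.96 TB * 365 * 5) ~ 0.2 DWPD
  Seagate Nytro 1551 960 GB: 5250 TB / (0.96 TB * 365 * 5) ~ 3.0 DWPD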
Hi David,
thanks for your quick answer. When I look at both references, I see exactly the
same commands:
ceph osd pool set {pool-name} {key} {value}
where one page only describes the keys specific to compression. This is the
command I found and used. However, I can't see any compression ha
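One hedged way to see whether any compression is actually happening is to look at the BlueStore perf counters on an OSD (osd.0 below is just a placeholder; run it on the host carrying that OSD):

  ceph daemon osd.0 perf dump | grep -i compress
  # bluestore_compressed, bluestore_compressed_allocated and
  # bluestore_compressed_original should grow once new writes get compressed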
If you go down just a little farther you'll see the settings that you put
into your ceph.conf under the osd section (although I'd probably do
global). That's where the OSDs get the settings from. As a note, once
these are set, future writes will be compressed (if they match the
compression settin
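A sketch of what that ceph.conf section might look like; the algorithm, mode and ratio values below are only examples, not settings taken from the original mails:

  [osd]   # or [global], as suggested above
  bluestore compression mode = aggressive
  bluestore compression algorithm = snappy
  bluestore compression required ratio = .875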
Thanks for the feedback everyone. Based on the TBW figures, it sounds like
these drives are terrible for us, as the idea is NOT to use them simply for
archive. This will be a high read/write workload, so that's a total showstopper.
I’m interested in the Seagate Nytro myself.
Thanks,
--
Kenneth V
Hi David,
thanks, now I see what you mean. If you are right, that would mean that the
documentation is wrong. Under
"http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values"; is
stated that "Sets inline compression algorithm to use for underlying BlueStore.
This setting overri
I set up a new Mimic cluster recently and have just enabled the Dashboard.
I first tried to add a (Dashboard) user with the "ac-user-create" command
following this version of documentation (
http://docs.ceph.com/docs/master/mgr/dashboard/), but the command did not
work. Following the /mimic/
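For anyone hitting the same issue: ac-user-create belongs to the newer access-control code described in the master docs; on Mimic the documented way to create the dashboard login appears to be the following (username and password are placeholders):

  ceph dashboard set-login-credentials <username> <password>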
On Wed, Oct 10, 2018 at 5:42 PM Brady Deetz wrote:
>
> Looks like that may have recently been broken.
>
> Unfortunately no real logs of use in rbd-target-api.log or rbd-target-gw.log.
> Is there an increased log level I can enable for whatever web-service is
> handling this?
>
> [root@dc1srviscs
Happens to me too, on gmail. I'm on half a dozen other mailman lists with
no issues at all. I've escalated this problem to the ceph mailing list
maintainer and they said it's an issue with their provider, but this was
probably a year ago.
On Tue, Oct 9, 2018 at 7:04 AM Elias Abacioglu <
elias.abacio
Hi David,
Am 12.10.2018 um 15:59 schrieb David Turner:
> The PGs per OSD does not change unless the OSDs are marked out. You
> have noout set, so that doesn't change at all during this test. All of
> your PGs peered quickly at the beginning and then were active+undersized
> the rest of the time,
PGs switching to the peering state after a failure is normal and
expected. The important thing is how long they stay in that state; it
shouldn't be longer than a few seconds. It looks like less than 5
seconds from your log.
What might help here is the ceph -w log (or mon cluster log file)
during a
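A sketch of how that could be captured, assuming a default installation where the mon cluster log lives under /var/log/ceph/ on a monitor node:

  ceph -w | tee ceph-w.log                 # live cluster log while the host reboots
  grep -i peering /var/log/ceph/ceph.log   # mon cluster log, checked afterwards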
It would be helpful to have a full crash log with debug osd = 0/20, and
to know in which pool and PG you marked the object as lost.
You might be able to use ceph-objectstore-tool to remove the bad
object from the OSD if it still exists in either the cache pool or
underlying pool.
Ugly fix i
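A sketch of both steps; the OSD id, PG id and object name below are placeholders, and the OSD has to be stopped before ceph-objectstore-tool is used on it:

  # ceph.conf on the crashing OSD's host, then try starting the OSD again
  [osd]
  debug osd = 0/20

  # with the OSD stopped, list the PG's objects and remove the bad one
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 2.5 --op list
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 '<object-from-list>' remove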
What was the object name that you marked lost? Was it one of the cache tier
hit_sets?
The trace you have does seem to be failing when the OSD is trying to remove
a hit set that is no longer needed. I ran into a similar problem, which
might have been why that bug you listed was created. Maybe provid