Hi everyone,
We are facing a problem where we cannot read logs sent to Graylog because one
mandatory field is missing.
GELF message (received from
) has empty mandatory "host" field.
Does anyone know what we are missing?
I know there was someone facing the same issue but it seems that he
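For reference, the GELF spec makes "version", "host" and "short_message" mandatory, so the sender has to fill in "host" itself. A quick way to test the input is to hand-craft a message and push it to the GELF UDP port (the hostname and port below are only examples):
```
echo -n '{ "version": "1.1", "host": "app01.example.com", "short_message": "gelf test", "level": 6 }' \
  | nc -w1 -u graylog.example.com 12201
```
If that message shows up, the input is fine and the log shipper on the sending side is simply not setting the host field.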
It's sometimes faster if you reduce the object size, but I wouldn't go
below 1 MB. Depends on your hardware and use case, 4 MB is a very good
default, though.
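If it helps, a couple of hypothetical commands for experimenting with object size (pool/image names are placeholders):
```
# create an RBD image with an explicit object size
rbd create testpool/testimage --size 100G --object-size 4M
# or benchmark raw RADOS writes with a given object size (4 MiB here)
rados -p testpool bench 60 write -b 4194304 -t 16
```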
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
Hello,
we run a multisite setup between Berlin (master) and Amsterdam (slave) on
Mimic. We had a huge bucket of around 40 TB which was deleted a while ago.
However, the data does not seem to have been deleted on the slave:
from rados df:
POOL_NAME                USED    OBJECTS   CLONES  COPIES    MISSING_ON_PRIMARY
berlin.rgw.buckets.data  32 TiB  31638448  0       94915344  0
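One thing that might be worth checking on the Amsterdam side is whether the deleted objects are simply still queued for RGW garbage collection there; a rough sketch of the commands (run in the secondary zone):
```
# replication state of the secondary zone
radosgw-admin sync status
# are the deleted objects still sitting in the garbage collector?
radosgw-admin gc list --include-all | head
# optionally kick off GC processing manually
radosgw-admin gc process
```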
It's just a display bug in ceph -s:
https://tracker.ceph.com/issues/40011
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
On Sun, Sep 29, 2019 at 4:41 PM Lazuardi Nasution
In Nautilus, ceph status prints "rgw: 50 daemons active" and then lists
all 50 names of the rgw daemons.
This takes up significant space in the terminal.
Is it possible to disable the list of names and make the output like in
Luminous, i.e. only the number of active daemons?
Thanks
Aleksei
Update: I managed to limit the user's privileges by setting the user's
op-mask to "read", as follows:
```
radosgw-admin user modify --uid= --op-mask=read
```
And to rollback its default privileges:
```
radosgw-admin user modify --uid= --op-mask="read,write,delete"
```
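For anyone else doing this, the current op-mask can be verified in the user metadata, e.g. (the uid below is a placeholder):
```
radosgw-admin user info --uid=exampleuser | grep op_mask
```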
Kind regards,
Charles Alva
Hi Massimo,
On 9/29/2019 9:13 AM, Massimo Sgaravatto wrote:
In my Ceph cluster I use spinning disks for BlueStore OSDs and SSDs
just for the block.db.
If I have got it right, right now:
a) only 3/30/300 GB can be used on the SSD, the rest of RocksDB spills over to
the slow device, so you don't have any ben
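(For what it's worth, whether RocksDB has actually spilled over to the slow device can be checked per OSD from the BlueFS perf counters; osd.0 below is just an example:)
```
ceph daemon osd.0 perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'
```
On Nautilus, ceph health detail will also raise a BLUEFS_SPILLOVER warning when this happens.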
Hi Paul,
Thank you for this straightforward explanation. It is very helpful while
waiting for the fix.
Best regards,
On Mon, Sep 30, 2019, 16:38 Paul Emmerich wrote:
> It's just a display bug in ceph -s:
>
> https://tracker.ceph.com/issues/40011
>
> --
> Paul Emmerich
>
> Looking for help with your C
Hi!
What happens when the cluster network goes down completely?
Does the cluster silently use the public network without interruption, or does
the admin have to act?
Thanks
Lars
On Fri, Sep 27, 2019 at 5:18 AM Matthias Leopold
wrote:
>
>
> Hi,
>
> I was positively surprised to see ceph-iscsi-3.3 available today.
> Unfortunately there's an error when trying to install it from yum repo:
>
> ceph-iscsi-3.3-1.el7.noarch.rp FAILED
Hi,
On 9/30/19 2:46 PM, Lars Täuber wrote:
Hi!
What happens when the cluster network goes down completely?
Does the cluster silently use the public network without interruption, or does
the admin have to act?
The cluster network is used for OSD heartbeats and backfilling/recovery
traffic. If
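(For context, the split between the two networks is defined in ceph.conf; the subnets below are only examples:)
```
# /etc/ceph/ceph.conf (excerpt, example subnets)
[global]
public network  = 192.168.1.0/24
cluster network = 192.168.2.0/24
```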
Mon, 30 Sep 2019 14:49:48 +0200
Burkhard Linke ==>
ceph-users@lists.ceph.com :
> Hi,
>
> On 9/30/19 2:46 PM, Lars Täuber wrote:
> > Hi!
> >
> > What happens when the cluster network goes down completely?
> > Does the cluster silently use the public network without interruption, or
> > does the
Hi Paul,
I have done some RBD benchmarks on 7 nodes of 10 SATA HDDs and 3 nodes of
SATA SSDs with various object sizes; the results are shared at the URL below.
https://drive.google.com/drive/folders/1tTqCR9Tu-jSjVDl1Ls4rTev6gQlT8-03?usp=sharing
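For reference, the kind of rbd bench invocation such a test typically uses (pool/image name, sizes and thread count below are placeholders):
```
rbd bench --io-type write --io-size 4M --io-threads 16 \
          --io-total 10G --io-pattern seq testpool/testimage
```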
Any thoughts?
Best regards,
On Mon, Sep 30, 2019 a
>
> I don't remember where I read it, but I was told that the cluster migrates
> its complete traffic over to the public network when the cluster network goes
> down. So this seems not to be the case?
>
Be careful with generalizations like "when a network acts up, it will be
completely down
What parameters exactly are you using? I want to do a similar test on
Luminous before I upgrade to Nautilus. I have quite a lot (74+)
type_instance=Osd.opBeforeDequeueOpLat
type_instance=Osd.opBeforeQueueOpLat
type_instance=Osd.opLatency
type_instance=Osd.opPrepareLatency
type_instance=Osd.opP
In my case, I am using premade Prometheus-sourced dashboards in Grafana.
For individual latency, the query looks like this:
irate(ceph_osd_op_r_latency_sum{ceph_daemon=~"$osd"}[1m]) / on
(ceph_daemon) irate(ceph_osd_op_r_latency_count[1m])
irate(ceph_osd_op_w_latency_sum{ceph_daemon=~"$osd"}[1m])
Wondering if there are any documents for standing up NFS with an existing
ceph cluster. We don't use ceph-ansible or any other tools besides
ceph-deploy. The iscsi directions were pretty good once I got past the
dependencies.
I saw the one based on Rook, but it doesn't seem to apply to our
Just install these from http://download.ceph.com/nfs-ganesha/:
nfs-ganesha-rgw-2.7.1-0.1.el7.x86_64
nfs-ganesha-vfs-2.7.1-0.1.el7.x86_64
libnfsidmap-0.25-19.el7.x86_64
nfs-ganesha-mem-2.7.1-0.1.el7.x86_64
nfs-ganesha-xfs-2.7.1-0.1.el7.x86_64
nfs-ganesha-2.7.1-0.1.el7.x86_64
nfs-ganesha-ceph-2.7.1-0.1.
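After installing the packages, the export itself is defined in ganesha.conf. A minimal CephFS export sketch (Export_ID, paths and the cephx user are placeholders; the NFS host also needs a ceph.conf and a keyring for that user):
```
# /etc/ganesha/ganesha.conf (excerpt)
EXPORT {
    Export_ID = 1;
    Path = "/";              # path inside CephFS
    Pseudo = "/cephfs";      # NFSv4 pseudo path the clients mount
    Access_Type = RW;
    Squash = No_Root_Squash;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = CEPH;         # for an RGW export use the nfs-ganesha-rgw FSAL instead
        User_Id = "admin";
    }
}
```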
BTW: commit and apply latency are the exact same thing since
BlueStore, so don't bother looking at both.
In fact you should mostly be looking at the op_*_latency counters
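(If you are not scraping Prometheus, the same counters can be read straight from the admin socket; osd.0 is just an example:)
```
ceph daemon osd.0 perf dump | jq '.osd | {op_latency, op_r_latency, op_w_latency}'
```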
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31
I am wondering what the best way is to delete a cluster, remove all the
OSDs, and basically start over. I plan to create a few Ceph test
clusters to determine what works best for our use case. There is no real
data being stored, so I don't care about data loss.
I have a cephfs setup on top of t
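Since there is no data to preserve, one common route with ceph-deploy is roughly the following sketch (host and device names are placeholders):
```
# remove packages and wipe /var/lib/ceph on every node
ceph-deploy purge node1 node2 node3
ceph-deploy purgedata node1 node2 node3
ceph-deploy forgetkeys
rm -f ceph.conf ceph*.keyring ceph-deploy-ceph.log
# on each OSD host, zap the disks that held OSDs before redeploying
ceph-volume lvm zap /dev/sdb --destroy
```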
At this point, I ran out of ideas. I changed nr_requests from 128 to 1024 and
readahead from 128 to 4096, and tuned the nodes for throughput performance.
However, I still get high latency during benchmark testing. I attempted to
disable the cache on the SSDs:
for i in {a..f}; do hdparm -W 0 -A 0 /dev/sd$i; done
Mon, 30 Sep 2019 15:21:18 +0200
Janne Johansson ==> Lars Täuber :
> >
> > I don't remember where I read it, but I was told that the cluster migrates
> > its complete traffic over to the public network when the cluster network
> > goes down. So this seems not to be the case?
> >
>
> Be ca
Hello everyone,
Can you shed some light on the cause of this crash? Could a client request
actually trigger it?
Sep 30 22:52:58 storage2n2-la ceph-osd-17[10770]: 2019-09-30 22:52:58.867
7f093d71e700 -1 bdev(0x55b72c156000 /var/lib/ceph/osd/ceph-17/block) aio_submit
retries 16
Sep 30 22:52:58 sto