Hello,
I see this in my logs:
2025-01-22T09:14:43.063966+ mgr.node1.joznex (mgr.584732) 337151 : cluster [DBG] pgmap v300985: 497 pgs: 497 active+clean; 9.5 TiB data, 29 TiB used, 48 TiB / 76 TiB avail
2025-01-22T09:14:45.066685+ mgr.node1.joznex (mgr.584732) 337154 : cluster [DBG] pgm
NFS - HA and Ingress: [ https://docs.ceph.com/en/latest/mgr/nfs/#ingress ]
Referring to Note#2, is NFS high-availability functionality considered complete
(and stable)?
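For reference, the ingress deployment described there would look roughly like
this with cephadm (a sketch only; cluster name, placement and virtual IP are
made-up examples):

# NFS cluster on two hosts, fronted by the ingress service on a virtual IP
ceph nfs cluster create mynfs "2 host1,host2" --ingress --virtual_ip 192.0.2.100/24
ceph nfs cluster info mynfs    # should report the virtual IP and backend daemons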
repaired in a pool Count
# TYPE ceph_pg_objects_repaired counter
ceph_pg_objects_repaired{poolid="32"} 0.0
[...]
This annoys our exporter_exporter service, so it rejects the export of the ceph
metrics. Is this a known issue? Will this be fixed in the next update?
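For reference, we scrape the mgr prometheus module directly, roughly like this
(the mgr host name is a placeholder, 9283 is the module's default port):

curl -s http://mgr-host.example.com:9283/metrics | grep -A1 ceph_pg_objects_repaired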
Cheers,
Andreas
--
|
to avoid having all snapshots being synced? We only need
the latest version of the image on the destination cluster and the
snapshots add around 200% disk space overhead on average.
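For context, the per-snapshot overhead on the destination side can be seen with
something like this (pool/image names are placeholders):

rbd du mypool/myimage       # provisioned vs. used size per snapshot
rbd snap ls mypool/myimage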
Best regards,
Andreas
gards,
Andreas
On 19.01.23 12:50, Frank Schilder wrote:
Hi Ilya,
thanks for the info, it did help. I agree, it's the orchestration layer's
responsibility to handle things right. I have a case open already with support
and it looks like there is indeed a bug on that side. I was mainly after a
lock and with "-oexclusive"
the RBD client is not going to release it. So this is not a bug.
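For anyone hitting the same thing, the behaviour can be observed roughly like
this (a sketch; pool/image names are placeholders):

rbd status mypool/myimage      # watchers / exclusive lock owner
rbd lock ls mypool/myimage
# mapped without -oexclusive the lock is handed over cooperatively,
# so creating a snapshot works:
rbd snap create mypool/myimage@snap1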
Best regards,
Andreas
On 30.11.22 12:58, Andreas Teuchert wrote:
Hello,
creating snapshots of RBD images that are mapped with -oexclusive seems
not to be possible:
# rbd map -oexclusiv
mention this.
Is this on purpose or a bug?
Ceph version is 17.2.5, RBD client is Ubuntu 22.04 with kernel
5.15.0-52-generic.
Best regards,
Andreas
due to some missing python modules ...
Something suspicious in the output of "ceph crash ls" ?
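I.e. something along these lines (the crash ID is a placeholder):

ceph crash ls
ceph crash info <crash-id>    # full backtrace of a single entry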
Cheers,
Andreas
--
| Andreas Haupt | E-Mail: andreas.ha...@desy.de
| DESY Zeuthen | WWW: http://www-zeuthen.desy.de/~ahaupt
| Platanenallee 6 | Phone:
device class
only in Pacific in order to get a functional autoscaler?
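For reference, this is roughly what I would look at (a sketch; OSD id and
class are just examples):

ceph osd pool autoscale-status
ceph osd crush set-device-class hdd osd.0    # assign a class where it is missing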
Thanks,
Andreas
--
| Andreas Haupt | E-Mail: andreas.ha...@desy.de
| DESY Zeuthen | WWW: http://www-zeuthen.desy.de/~ahaupt
| Platanenallee 6 | Phone: +49/33762/7-7359
| D-15738 Zeuthen
ally no problem
compiling it on our own. But it would be much more convenient to have
it in EPEL-8, as probably no one will run production iSCSI gateways
under Fedora ;-)
Cheers,
Andreas
--
| Andreas Haupt | E-Mail: andreas.ha...@desy.de
| DESY Zeuthen | WWW: http:/
Hi all,
I've set up a 6-node ceph cluster to learn how ceph works and what I can
do with it. However, I'm new to ceph, so if the answer to one of my
questions is RTFM, point me to the right place.
My problem is this:
The cluster consists of 3 mons and 3 osds. Even though the dashboard
shows
few insights to that. Spent way too much time to
switch to some other solution.
Best regards,
Andreas
Dear all,
ceph-mgr-dashboard-15.2.13-0.el7.noarch contains three rpm dependencies
that cannot be resolved here (not part of CentOS & EPEL 7):
python3-cherrypy
python3-routes
python3-jwt
Does anybody know where they are expected to come from?
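In case it helps, the dependencies and candidate providers can be checked
roughly like this (CentOS 7 tooling; repo names may differ on your side):

yum deplist ceph-mgr-dashboard | grep -E 'cherrypy|routes|jwt'
yum provides python3-cherrypy python3-routes python3-jwt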
Thanks,
Andreas
--
| Andreas Haupt
1Gb/10s so I shut them down again.
>>
>> Any idea what is going on? Or how can I shrink the db back down?
>>
>>
>>
reasonably sized).
I might be totally wrong, though. If you just do it, because you don't
want to re-create (or modify) the OSDs, it's not worth the effort IMHO.
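If the goal is just to shrink the DB back down, an online compaction might be
worth a try (an untested suggestion on my side):

ceph tell osd.0 compact    # or osd.* for all OSDs; one at a time is gentler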
rgds,
derjohn
On 02.03.21 10:48, Norman.Kern wrote:
> On 2021/3/2 5:09 AM, Andreas John wrote:
>> Hello,
>>
s anyone have any best practices for it? Thanks.
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Geschaefts
have linux bonding with mode slb, but in
my experience that didn't work very well with COTS switches, maybe due
to ARP learning issues. (We ended up buying Juniper QFX-5100 with MLAG
support).
Best Regards,
Andreas
P.S. I haven't tried out the setup above yet. If anyone did already
fannes, Fabian wrote:
> failed: (22) Invalid argument
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Managing Director: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax: -2 | http://www.net-lab.net
Facebook: https://www.facebook.com/netlabdotnet
Twi
Hello Alwin,
do you know if it makes a difference to disable "all green computing" in
the BIOS vs. setting the governor to "performance" in the OS?
If not, I think I will have some service cycles to set our
proxmox-ceph nodes correctly.
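If it helps, setting it from the OS side is roughly this (needs the cpupower
tool from kernel-tools):

cpupower frequency-info                 # current driver and governor
cpupower frequency-set -g performance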
Best Regards,
Andreas
On 1
ng don't know why. Disk itself is capable of delivering well
>> above 50 KIOPS. The difference is an order of magnitude. Any info is more than welcome.
>> Daniel Mezentsev, founder
>> (+1) 604 313 8592.
>> Soleks Data Group.
>> Shaping the clouds.
so tried doing a 'ceph pg
>>>> force-recovery' on
>>>> the affected PGs, but only one seems to have been tagged accordingly
>>>> (see ceph -s output below).
>>>>
>>>> The guide also says "Sometimes it simply takes some t
on db size
increased drastically.
We have 14.2.11, 10 OSD @ 2TB and cephfs in use.
Is this a known issue? Should we avoid noout?
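For context, the pattern we use around maintenance is roughly:

ceph osd set noout      # before taking OSDs down
ceph osd unset noout    # as soon as they are back up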
TIA,
derjohn
--
Andreas John
net-lab GmbH | Frankfurter Str. 99 | 63067 Offenbach
Managing Director: Andreas John | AG Offenbach, HRB40832
Tel: +49 69 8570033-1 | Fax:
.
Is this assumption correct? The documentation
(https://docs.ceph.com/projects/ceph-ansible/en/latest/day-2/upgrade.html) is
short on this.
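For what it's worth, the upgrade itself would then be driven by the
rolling-update playbook from the ceph-ansible checkout, roughly (the inventory
path is an example):

ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml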
Thanks!
- Andreas
> it's not clear to me if this can only move a WAL device or if it can be
> used to remove it ...
>
> Regards,
> Michael
On 22.09.20 22:09, Nico Schottelius wrote:
[...]
> All nodes are connected with 2x 10 Gbit/s bonded/LACP, so I'd expect at
> least a couple of hundred MB/s network bandwidth per OSD.
>
> On one server I just restarted the OSDs and now the read performance
> dropped down to 1-4 MB/s per OSD with be
Hey Nico,
maybe you "pinned" the IP of the OSDs in question in ceph.conf to the IP
of the old chassis?
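I mean something like this left over in ceph.conf (addresses are made-up
examples):

[osd.12]
public_addr = 192.0.2.10      # still the address of the old chassis
cluster_addr = 198.51.100.10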
Good Luck,
derjohn
P.S. < 100 MB/s is terrible performance for recovery with 85 OSDs.
Is it rotational media on a 1 Gbit/s network? You could run "ceph osd set
nodeep-scrub" to prevent too much
Hello,
On 22.09.20 20:45, Nico Schottelius wrote:
> Hello,
>
> after having moved 4 ssds to another host (+ the ceph tell hanging issue
> - see previous mail), we ran into 241 unknown pgs:
You mean that you re-seated the OSDs in another chassis/host? Is the
crush map aware of that?
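I.e. I would first check roughly this (the OSD id is an example):

ceph osd tree     # do the moved OSDs show up under the new host bucket?
ceph osd find 17  # the crush location ceph currently has for osd.17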
I didn'
Hello,
https://docs.ceph.com/en/latest/rados/operations/erasure-code/
but you could probably intervene manually if you want an erasure-coded
pool.
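By manual intervention I mean roughly this (profile, pool names and k/m are
examples only):

ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 128 128 erasure myprofile
ceph osd pool set ecpool allow_ec_overwrites true
# image metadata stays in a replicated pool, data goes to the EC pool
rbd create --size 100G --data-pool ecpool rbd/vm-disk-0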
rgds,
j.
On 22.09.20 14:55, René Bartsch wrote:
> On Tuesday, 22.09.2020 at 14:43 +0200, Andreas John wrote:
>> Hello,
>>
ph cluster?
> Does Proxmox support snapshots, backups and thin provisioning with RBD-
> VM images?
>
> Regards,
>
> Renne
9986 bytes, 0/0 manifest objects, 0/0
hit_set_archive bytes.
Aug 6 08:28:44 krake08 ceph-osd: 2020-08-06 08:28:44.477 7fb6b2b9d700 -1
log_channel(cluster) log [ERR] : 12.38 repair 1 errors, 1 fixed
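In case it is relevant, the affected PG can be inspected further with roughly
(a sketch):

rados list-inconsistent-obj 12.38 --format=json-pretty
ceph pg deep-scrub 12.38     # re-verify after the repair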
Thanks in advance,
Andreas
--
| Andreas Haupt | E-Mail: andreas.ha...@desy.de
| DESY Zeu
Hello,
if I understand correctly:
if we upgrade a running nautilus cluster to octopus, we will have
downtime during the MDS update.
Is this correct?
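As far as I understand the documented procedure, the active MDS count is
reduced to one before the MDS update, roughly (filesystem name is an example):

ceph fs set cephfs max_mds 1    # wait until only rank 0 is active
ceph status                     # then update/restart that MDS, standbys afterwards
ceph fs set cephfs max_mds 2    # restore the old value once all MDS run octopus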
Mit freundlichen Grüßen / Kind regards
Andreas Schiefer
Leiter Systemadministration / Head of System Administration
---
HOME OF LOYALTY
CRM
dpoint
2020-04-23T07:02:17.745+0200 7f5aab2af700 1 handler->ERRORHANDLER:
err_no=-2003 new_err_no=-2003
2020-04-23T07:02:17.745+0200 7f5aab2af700 2 req 1 0s http status=405
2020-04-23T07:02:17.745+0200 7f5aab2af700 1 == req done
req=0x7f5aab2a6d50 op status=0 http_status=405 latency=0s ==
Best Regards,
Andreas
mqp://rabbitmquser:rabbitmqp...@rabbitmq.example.com:5672
And then the bucket-notification works like it should.
But I don't think the documentation is wrong, or is it?
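For completeness, topic creation with those endpoint args can look roughly
like this with an SNS-compatible client pointed at the RGW (endpoint, names,
credentials and exchange are placeholders):

aws --endpoint-url http://rgw.example.com sns create-topic --name mytopic \
  --attributes '{"push-endpoint": "amqp://rabbitmquser:PASSWORD@rabbitmq.example.com:5672", "amqp-exchange": "ex1", "amqp-ack-level": "broker"}'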
Cheers,
Andreas
[1] https://docs.ceph.com/docs/master/radosgw/notificati
EndpointArgs, right?
Or am I missing it somewhere else?
Best Regards,
Andreas
Sorry for the noise - problem was introduced by a missing iptables rule
:-(
On Fri, 2020-02-21 at 09:04 +0100, Andreas Haupt wrote:
> Dear all,
>
> we recently added two additional RGWs to our Ceph cluster (version
> 14.2.7). They work flawlessly; however, they do not show up in
On Fri, 2020-02-21 at 15:19 +0700, Konstantin Shalygin wrote:
> On 2/21/20 3:04 PM, Andreas Haupt wrote:
> > As you can see, only the first, old RGW (ceph-s3) is listed. Is there
> > any place where the RGWs need to get "announced"? Any idea, how to
> > debug th
Ws need to get "announced"? Any idea how to
debug this?
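For debugging, the mgr service map is probably the first thing to compare
(a sketch):

ceph service dump -f json-pretty    # which rgw daemons have registered with the mgr?
ceph -s | grep -i rgw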
Thanks,
Andreas
--
| Andreas Haupt | E-Mail: andreas.ha...@desy.de
| DESY Zeuthen | WWW: http://www-zeuthen.desy.de/~ahaupt
| Platanenallee 6 | Phone: +49/33762/7-7359
| D-15738 Zeuthen |
>> OS: Centos7
>> Ceph: 10.2.5
>>
>> Hi, everyone
>>
>> The cluster is used for VM image storage and object storage.
>> And I have a bucket which has more than 20 million objects.
>>
>> Now I have a problem where the cluster blocks operations.
>>
Hello,
answering myself in case someone else stumbles upon this thread in the
future. I was able to remove the unexpected snap; here is the recipe:
How to remove the unexpected snapshots:
1.) Stop the OSD
ceph-osd -i 14 --flush-journal
... flushed journal /var/lib/ceph/osd/ceph-14/journal fo
:20, Andreas John wrote:
> Hello,
>
> for those stumbling upon a similar issue: I was able to mitigate the
> issue by setting
>
>
> === 8< ===
>
> [osd.14]
> osd_pg_max_concurrent_snap_trims = 0
>
> =
>
>
> in ceph.conf. You don't need to re
correctly that in PG 7.374 there is an object with rbd prefix
59cb9c679e2a9e3 that ends with ..3096 and has snap ID
29c44 ... ? What does the A29AAB74__7 part mean?
I was not able to find in the docs how the directory / filename is structured.
Best Regrads,
j.
On 31.01.20 16:04, Andreas J
Hello,
in my cluster one OSD after the other dies; eventually I recognized that it
was simply an "abort" in the daemon, probably caused by
2020-01-31 15:54:42.535930 7faf8f716700 -1 log_channel(cluster) log
[ERR] : trim_object Snap 29c44 not in clones
Close to this msg I get a stack trace:
ceph ver