Re: [ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc.

2019-08-06 Thread Félix Barbeira
10. > But at least in our case we switched back to civetweb because it doesn't provide a clear log without being very verbose. > Regards, > Manuel > From: ceph-users On behalf of Félix Barbeira > Sent:
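A minimal sketch of one way to get per-request detail out of radosgw itself rather than from the frontend (not taken from this thread; assuming a Nautilus-era cluster and a hypothetical daemon name client.rgw.ceph-rgw01): raising debug_rgw logs each request in the rgw log, at the cost of a lot of noise.

    # at runtime, via the config database (Mimic and newer); hypothetical daemon name
    ceph config set client.rgw.ceph-rgw01 debug_rgw 10
    # or statically in ceph.conf under the rgw section:
    #   debug rgw = 10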

[ceph-users] radosgw (beast): how to enable verbose log? request, user-agent, etc.

2019-08-06 Thread Félix Barbeira
docs.ceph.com/docs/nautilus/radosgw/frontends/#id3 > The only way I found is to put an nginx server or an haproxy in front, running as a proxy, but I really don't like that solution because it would be an extra component used only to log requests. Is anyone in the same situation? Thanks
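One alternative that avoids an extra proxy component is radosgw's own ops log, which writes one entry per request; a hedged sketch (the option names are real rgw settings, the section name and socket path are hypothetical, and whether the logged fields cover the needed user-agent/request detail is worth verifying):

    [client.rgw.ceph-rgw01]                  # hypothetical rgw section in ceph.conf
    rgw enable ops log = true                # one log entry per request
    rgw ops log rados = false                # stream to a local socket instead of storing in RADOS
    rgw ops log socket path = /var/run/ceph/rgw-ops.sock
    # the entries can then be read from that socket on the rgw host as a JSON stream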

Re: [ceph-users] bluestore block.db on SSD, where block.wal?

2019-06-05 Thread Félix Barbeira
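For context on the question in the subject: when only a block.db device is given, BlueStore keeps the WAL inside the DB device, so a separate block.wal only pays off if there is a device faster than the DB device. A minimal ceph-volume sketch with hypothetical device names:

    # DB (and implicitly the WAL) on the NVMe partition, data on the HDD
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
    # only if an even faster device than the DB device exists:
    # ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme1n1p1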

Re: [ceph-users] Fwd: Planning all flash cluster

2019-01-30 Thread Félix Barbeira
ion. >> Doing 20k IOPS at a 1kB block size is totally different from doing it at a 1MB block size... > Is there anything that obviously stands out as severely unbalanced? The R720XD comes with an H710 - instead of putting them in RAID0, I'm thinking of a different H
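To make the IOPS-versus-block-size point above concrete, a small fio sketch (the test file path is hypothetical; parameters are illustrative, not a recommended benchmark methodology):

    # small random writes: IOPS-bound
    fio --name=rand1k --filename=/mnt/test/fio.dat --size=1G --rw=randwrite --bs=1k \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
    # large sequential writes on the same target: bandwidth-bound
    fio --name=seq1m --filename=/mnt/test/fio.dat --size=1G --rw=write --bs=1M \
        --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based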

[ceph-users] Mix hardware on object storage cluster

2019-01-27 Thread Félix Barbeira
loads". - Change osd weight: I think this is more oriented to disk space on every node. Do I have some other options? -- Félix Barbeira. ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] How to reduce min_size of an EC pool?

2019-01-17 Thread Félix Barbeira
> Bryan > From: ceph-users on behalf of Félix Barbeira > Date: Thursday, January 17, 2019 at 1:27 PM > To: Ceph Users > Subject: [ceph-users] How to reduce min_size of an EC pool? > I want to bring back my cluster

[ceph-users] How to reduce min_size of an EC pool?

2019-01-17 Thread Félix Barbeira
data min_size from 3 may help; search ceph.com/docs for 'incomplete')
pg 10.1fe is incomplete, acting [46,52,33,34,9] (reducing pool default.rgw.buckets.data min_size from 3 may help; search ceph.com/docs for 'incomplete')
pg 10.1ff is incomplete, acting [33,21,7,19,52] (reducing pool default.rgw.buckets.data min_size from 3 may help; search ceph.com/docs for 'incomplete')
root@ceph-monitor02:~#
Does somebody have an idea of how to fix this? Maybe copying the data to a replicated pool with min_size=1? Is all the data hopelessly lost? Thanks in advance. -- Félix Barbeira.
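For readers hitting the same health message, the temporary measure it hints at is sketched below, assuming an EC profile of k=2, m=3 so that the current min_size of 3 is k+1 (the actual profile is not shown in the snippet); min_size cannot go below k on an EC pool, and it should be raised back once the PGs are no longer incomplete:

    ceph osd pool get default.rgw.buckets.data min_size
    ceph osd pool set default.rgw.buckets.data min_size 2   # = k, temporary
    # wait for the incomplete PGs to peer and recover, then restore:
    ceph osd pool set default.rgw.buckets.data min_size 3   # back to k+1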

Re: [ceph-users] Boot volume on OSD device

2019-01-12 Thread Félix Barbeira

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-30 Thread Félix Barbeira
> root@ceph-node03:~# ping6 -c 3 -M do -s 8952 ceph-node01
> PING ceph-node01(2a02:x:x:x:x:x:x:x) 8952 data bytes
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=1 ttl=64 time=0.271 ms
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=2 ttl=64 time=0.216 ms
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq
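For anyone reproducing the test above: with a 9000-byte MTU, the IPv6 header takes 40 bytes and the ICMPv6 header 8 bytes, so 9000 - 40 - 8 = 8952 is the largest ping payload that still fits in a single frame; that is why -s 8952 combined with -M do (don't fragment) is the right probe for a 9000 MTU path.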

Re: [ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-30 Thread Félix Barbeira
2002ms
rtt min/avg/max/mdev = 0.216/0.255/0.280/0.033 ms
root@ceph-node03:~#
2017-10-27 16:02 GMT+02:00 Wido den Hollander: > On 27 October 2017 at 14:22, Félix Barbeira wrote: > Hi, > I'm trying to configure a ceph cluster

[ceph-users] How to enable jumbo frames on IPv6 only cluster?

2017-10-27 Thread Félix Barbeira
inet loopback
# The primary network interface
auto eno1
iface eno1 inet6 auto
    post-up ifconfig eno1 mtu 9000
root@ceph-node01:#
Please help! -- Félix Barbeira.
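A minimal sketch of the same stanza using ip link instead of the deprecated ifconfig (assuming Debian/Ubuntu ifupdown); note that the switch ports along the path also have to accept 9000-byte frames, and that with SLAAC (iface ... inet6 auto) a router advertisement can carry its own MTU option, which is worth checking if the interface does not keep the value:

    auto eno1
    iface eno1 inet6 auto
        post-up /sbin/ip link set dev eno1 mtu 9000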

Re: [ceph-users] Grafana Dasboard

2017-08-29 Thread Félix Barbeira
e is having the dashboard json file. > Thanks, > Saravans -- Félix Barbeira.

Re: [ceph-users] RGW lifecycle not expiring objects

2017-06-29 Thread Félix Barbeira
>> and it looks encouraging on the server side:
>> # radosgw-admin lc list
>> [
>>     {
>>         "bucket": ":gta:default.6985397.1",
>>         "status": "UNINITIAL"
>>     },
>>     {
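A hedged sketch of how a stuck lifecycle is usually poked at (assuming the bucket really has an LC policy applied and that the release in use already ships the lc process subcommand; rgw_lc_debug_interval is strictly for testing):

    radosgw-admin lc process          # trigger lifecycle processing now instead of waiting for the work window
    radosgw-admin lc list             # 'status' should move from UNINITIAL to COMPLETE
    # for testing only, in ceph.conf on the rgw host: treat 10 seconds as one "day" for expiration
    #   rgw lc debug interval = 10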

Re: [ceph-users] handling different disk sizes

2017-06-06 Thread Félix Barbeira
but >>> then the weights will be utilized to distribute the data between the 2TB, 3TB, and 8TB drives much more evenly. >>> On Mon, Jun 5, 2017 at 9:21 AM Loic Dachary wrote: >>
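For reference, the commands this kind of weight tuning usually boils down to (hypothetical OSD id and value; CRUSH weights are conventionally the device size in TiB, so an 8 TB drive is roughly 7.28):

    ceph osd df tree                       # CRUSH weight, size and utilization per OSD
    ceph osd crush reweight osd.7 7.28     # hypothetical: set an 8 TB drive's CRUSH weight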

[ceph-users] handling different disk sizes

2017-06-05 Thread Félix Barbeira
5269G 3
default.rgw.buckets.data    11    2879G    35.34    5269G    746848
default.rgw.users.email     12    13       0        5269G    1
#
-- Félix Barbeira.

Re: [ceph-users] Ceph OSD network with IPv6 SLAAC networks?

2017-04-17 Thread Félix Barbeira
k switch running in Layer 3 and a /64 is assigned per rack. > Layer 3 routing is used between the racks, so based on the IPv6 address we can even determine in which rack the host/OSD is

[ceph-users] radosgw bucket name performance

2016-09-21 Thread Félix Barbeira
data is spread across the placement groups on all the OSD nodes, no matter what bucket name it gets. Can anyone confirm this? Thanks in advance. -- Félix Barbeira.
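A quick way to check this (hedged sketch; the pool and object names are hypothetical): ceph osd map prints the placement group and OSD set a RADOS object hashes to, and the bucket name only influences the object name that goes into the hash, not the placement logic itself:

    ceph osd map default.rgw.buckets.data mybucket_someobject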

Re: [ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-16 Thread Félix Barbeira
, then I >>> would recommend using SATADOM flash modules plugged directly into an internal SATA port in the machine. That saves you 2 slots for OSDs and they are quite reliable. You could even use 2 SD cards if your machine has

[ceph-users] what happen to the OSDs if the OS disk dies?

2016-08-12 Thread Félix Barbeira
must have an optimal solution; maybe somebody could help me. Thanks in advance. -- Félix Barbeira.

[ceph-users] rgw (infernalis docker) with hammer cluster

2016-03-09 Thread Félix Barbeira
version? -- Félix Barbeira.