>
>
>
> But at least in our case we switched back to civetweb, because it doesn't
> provide a clear log without a lot of verbosity.
>
>
>
> Regards
>
>
>
> Manuel
>
>
>
>
>
> *From:* ceph-users *On behalf of *Félix
> Barbeira
> *Sent:*
docs.ceph.com/docs/nautilus/radosgw/frontends/#id3>
The only way I found is to put an nginx server or an haproxy in front, running
as a proxy, but I really don't like that solution because it would be an
overhead component used only to log requests. Is anyone in the same situation?
Thanks
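For anyone who finds this thread later, two hedged sketches (the port, paths
and section name below are assumptions, adjust them to your setup). Civetweb
itself can write a plain access log through its frontend options in ceph.conf:

    [client.rgw.gateway]
    rgw frontends = civetweb port=7480 access_log_file=/var/log/ceph/civetweb.access.log

And if you do go the proxy route, a minimal nginx vhost whose only job is
logging, forwarding everything to the local radosgw:

    server {
        listen 80;
        # the whole point of this vhost: a plain access log
        access_log /var/log/nginx/rgw_access.log combined;
        location / {
            proxy_pass http://127.0.0.1:7480;   # default civetweb port
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }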
>
--
Félix Barbeira.
ion.
> >> Doing 20k IOPS at 1kB block is totally different at 1MB block...
> >> >
> >> > Is there anything that obviously stands out as severely unbalanced?
> The R720XD comes with a H710 - instead of putting them in RAID0, I'm
> thinking a different H
loads".
- Change osd weight: I think this is more oriented to the disk space on each
node.
Do I have any other options?
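For the record, a hedged sketch of the knobs I am aware of (the OSD ids and
values below are made-up examples, not a recommendation):

    # permanent CRUSH weight, by convention proportional to the disk size in TiB
    ceph osd crush reweight osd.12 1.8
    # temporary 0..1 override applied on top of the CRUSH weight
    ceph osd reweight 12 0.9
    # make this OSD less likely to be chosen as primary (read load), range 0.0-1.0
    ceph osd primary-affinity osd.12 0.5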
--
Félix Barbeira.
>
>
>
> Bryan
>
>
>
>
>
> *From: *ceph-users on behalf of Félix
> Barbeira
> *Date: *Thursday, January 17, 2019 at 1:27 PM
> *To: *Ceph Users
> *Subject: *[ceph-users] How to reduce min_size of an EC pool?
>
>
>
> I want to bring back my cluste
data min_size from 3 may help; search ceph.com/docs for
'incomplete')
pg 10.1fe is incomplete, acting [46,52,33,34,9] (reducing pool
default.rgw.buckets.data min_size from 3 may help; search ceph.com/docs for
'incomplete')
pg 10.1ff is incomplete, acting [33,21,7,19,52] (reducing pool
default.rgw.buckets.data min_size from 3 may help; search ceph.com/docs for
'incomplete')
root@ceph-monitor02:~#
Does somebody have an idea of how to fix this?
Maybe by copying the data to a replicated pool with min_size=1?
Is all the data hopelessly lost?
Thanks in advance.
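In case it helps the next person who hits this, a minimal sketch of the change
the health message hints at (the pool name is taken from the output above; note
that an EC pool can never serve I/O with fewer than k shards, so this only
helps if min_size is currently k+1, and raising it back afterwards is advised):

    # check the current value
    ceph osd pool get default.rgw.buckets.data min_size
    # temporarily lower it by one, as the warning suggests
    ceph osd pool set default.rgw.buckets.data min_size 2
    # once the PGs are active+clean again, raise it back
    ceph osd pool set default.rgw.buckets.data min_size 3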
--
Félix Barbeira.
root@ceph-node03:~# ping6 -c 3 -M do -s 8952 ceph-node01
> PING ceph-node01(2a02:x:x:x:x:x:x:x) 8952 data bytes
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=1 ttl=64 time=0.271 ms
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq=2 ttl=64 time=0.216 ms
> 8960 bytes from 2a02:x:x:x:x:x:x:x: icmp_seq
2002ms
rtt min/avg/max/mdev = 0.216/0.255/0.280/0.033 ms
root@ceph-node03:~#
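(A quick aside on the numbers, in case it is not obvious: with IPv6 the on-wire
size is the payload plus the 8-byte ICMPv6 header plus the 40-byte IPv6 header,

    8952 + 8 + 40 = 9000 bytes

which is exactly the jumbo MTU being tested, and -M do forbids fragmentation,
so getting replies at this size proves the 9000-byte path works end to end.)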
2017-10-27 16:02 GMT+02:00 Wido den Hollander :
>
> > On 27 October 2017 at 14:22, Félix Barbeira wrote:
> >
> >
> > Hi,
> >
> > I'm trying to configure a ceph cluster
inet loopback
# The primary network interface
auto eno1
iface eno1 inet6 auto
post-up ifconfig eno1 mtu 9000
root@ceph-node01:#
Please help!
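A minimal sanity check on each node, assuming the interface is eno1 as in the
snippet above (interface names may differ per host):

    # the reported mtu should be 9000 once the post-up hook has run
    ip link show eno1
    # set it at runtime without rebooting, if it is still 1500
    ip link set dev eno1 mtu 9000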
--
Félix Barbeira.
e is having the dashboard json file.
>
> Thanks,
> Saravans
>
>
>
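I am not sure which dashboard this refers to, but as a hedged pointer: once you
have the JSON file, it can be pushed into Grafana through its HTTP API (the
host, credentials and file name below are placeholders, and the exported JSON
has to be wrapped as {"dashboard": ..., "overwrite": true} first):

    curl -s -X POST -u admin:admin \
         -H 'Content-Type: application/json' \
         -d @wrapped-dashboard.json \
         http://localhost:3000/api/dashboards/db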
--
Félix Barbeira.
>> and looks encouraging at the server side:
>>
>> # radosgw-admin lc list
>> [
>> {
>> "bucket": ":gta:default.6985397.1",
>> "status": "UNINITIAL"
>> },
>> {
>>
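As a hedged aside for anyone debugging the same thing: the lifecycle pass can
be kicked off by hand instead of waiting for the daily work window, and the
per-bucket status re-checked afterwards:

    # force a lifecycle processing run now
    radosgw-admin lc process
    # UNINITIAL entries should move on once they have been processed
    radosgw-admin lc list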
but
>>> then your cluster will be able to use the weights and distribute the
>>> data between the 2TB, 3TB, and 8TB drives much more evenly.
>>>
>>> On Mon, Jun 5, 2017 at 9:21 AM Loic Dachary wrote:
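A hedged worked example of what size-proportional weights usually look like
(the OSD ids are made up; by convention the CRUSH weight is the capacity in
TiB, so 2 TB ≈ 1.82, 3 TB ≈ 2.73 and 8 TB ≈ 7.28):

    ceph osd crush reweight osd.0 1.82
    ceph osd crush reweight osd.1 2.73
    ceph osd crush reweight osd.2 7.28
    # then compare per-OSD utilization against the new weights
    ceph osd df tree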
                                                     5269G        3
default.rgw.buckets.data     11     2879G    35.34   5269G   746848
default.rgw.users.email      12        13        0   5269G        1
#
--
Félix Barbeira.
k switch
> > > running in Layer 3 and a /64 is assigned per rack.
> > >
> > > Layer 3 routing is used between the racks, so based on the IPv6 address
> > > we can even determine in which rack the host/OSD is
data
is spread across the placement groups on all the OSD nodes, no matter what
bucket name it has. Can anyone confirm this?
Thanks in advance.
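A hedged way to convince yourself (the pool and object name below are
placeholders): ceph can print which PG and OSDs a given RADOS object maps to,
and CRUSH simply hashes the object name into a PG, so objects end up spread
over the whole pool no matter which bucket they came from:

    # map an arbitrary object name to its PG and acting OSD set
    ceph osd map default.rgw.buckets.data some-object-name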
--
Félix Barbeira.
, then I
> > >>> would recommend using SATADOM flash modules plugged directly into an
> > >>> internal SATA port in the machine. It saves you 2 slots for OSDs and they
> > >>> are quite reliable. You could even use 2 SD cards if your machine has
> > >>>
must have an optimal solution,
maybe somebody could help me.
Thanks in advance.
--
Félix Barbeira.
rsion?
--
Félix Barbeira.