> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Gregory Farnum
> Sent: 15 July 2017 00:09
> To: Ruben Rodriguez
> Cc: ceph-users
> Subject: Re: [ceph-users] RBD cache being filled up in small increases instead
> of 4MB
>
> On Fri, Jul 14,
Hi,
Short version: I broke my cluster!
Long version, with context:
I have a 4-node Proxmox cluster. The nodes all run Proxmox 5.05 + Ceph Luminous with filestore:
- 3x mon + OSD
- 1x LXC + OSD
It was working fine. I added a fifth node (Proxmox + Ceph) today and broke everything, even though every node can ping each other.
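For a first triage after adding the node, something along these lines should show whether it is Ceph or the Proxmox cluster layer that is unhappy (a sketch; names and IDs are placeholders):

  ceph -s              # overall cluster health and mon quorum
  ceph health detail   # which checks are failing and why
  ceph osd tree        # are the new node's OSDs up/in and under the right host bucket?
  pvecm status         # Proxmox/corosync quorum and member list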
When do fixes for bugs like this one (http://tracker.ceph.com/issues/20563) become
available in the rpm repository
(https://download.ceph.com/rpm-luminous/el7/x86_64/)?
I can't quite tell from this page:
http://docs.ceph.com/docs/master/releases/. Maybe the availability of fixes in the
package repositories could be mentioned there explicitly.
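One way to see what the repository actually ships, rather than relying on the release-notes page, is to ask the package manager directly (assuming a CentOS 7 node pointed at that repo):

  yum clean expire-cache
  yum --showduplicates list ceph   # every ceph build the repo currently offers
  rpm -q ceph                      # the build installed locally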
I debugged a little and found that this might have something to do with the
"cache evict" and "list_snaps" operations.
I debugged the core file of the process with gdb, and confirmed that the
object that caused the segmentation fault is
rbd_data.d18d71b948ac7.062e, just as the fol
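For reference, a minimal gdb session to pull a backtrace out of such a core file looks roughly like this (the ceph-osd binary and the core path are assumptions; adjust to whichever daemon actually crashed):

  gdb /usr/bin/ceph-osd /tmp/core.ceph-osd.12345   # binary plus its core dump
  (gdb) bt                    # backtrace of the faulting thread
  (gdb) thread apply all bt   # backtraces of every thread, useful for the tracker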
On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk wrote:
> Unless you tell the rbd client not to disable readahead after reading the first
> X bytes (rbd readahead disable after bytes = 0), it will stop reading
> ahead and will only cache exactly what is requested by the client.
The default is t
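If I read that suggestion correctly, keeping readahead alive for the whole life of the client would be a [client] setting along these lines (a sketch, not tested here):

  [client]
  rbd cache = true
  rbd readahead disable after bytes = 0   # 0 = never switch readahead off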
Hi,
On 15.07.2017 16:01, Phil Schwarz wrote:
> Hi,
> ...
>
> While investigating, I wondered about my config:
> A question about the /etc/hosts file:
> Should I use the private replication LAN IPs or the public ones?
The private replication LAN IPs!! And the pve-cluster should use another network
(NICs) if possible.
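For context, the split being described usually ends up in ceph.conf as two subnets, with corosync/pve-cluster traffic ideally on a third set of NICs (the subnets below are placeholders):

  [global]
  public network  = 192.168.10.0/24   # client and mon traffic
  cluster network = 10.10.10.0/24     # OSD replication/backfill traffic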
On 15/07/17 15:33, Jason Dillaman wrote:
> On Sat, Jul 15, 2017 at 9:43 AM, Nick Fisk wrote:
>> Unless you tell the rbd client not to disable readahead after reading the
>> first X bytes (rbd readahead disable after bytes = 0), it will stop
>> reading ahead and will only cache exactly what is requested by the client.
On 15/07/17 09:43, Nick Fisk wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Gregory Farnum
>> Sent: 15 July 2017 00:09
>> To: Ruben Rodriguez
>> Cc: ceph-users
>> Subject: Re: [ceph-users] RBD cache being filled up in small increases instead of 4MB
On 14/07/17 18:43, Ruben Rodriguez wrote:
> How to reproduce...
I'll provide more concise details on how to test this behavior:
Ceph config:
[client]
rbd readahead max bytes = 0 # we don't want forced readahead to fool us
rbd cache = true
Start a qemu VM with an rbd image attached via virtio, for example as sketched below.
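A concrete way to drive that test, assuming a pool/image named rbd/testimg and the config above (the exact qemu flags may need adapting to your setup):

  qemu-system-x86_64 -m 2048 -enable-kvm \
    -drive format=raw,if=virtio,cache=writeback,file=rbd:rbd/testimg:id=admin

  # inside the guest, issue large sequential reads and watch how the cache fills:
  dd if=/dev/vda of=/dev/null bs=4M count=256 iflag=direct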
Hi all,
After updating to 10.2.9, some of our SSD-based OSDs get put into "down"
state and die as in [1].
After bringing these OSDs back up, they sit at 100% CPU utilization and
never become up/in. From the log I see (from [2]):
heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7f1cfad0d7
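When an OSD spins at 100% like that, two quick checks that usually narrow it down (pids are placeholders, and gdb will pause the daemon while attached):

  top -H -p <osd-pid>   # which thread is burning the CPU
  gdb -p <osd-pid>      # then: thread apply all bt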
Hi guys,
does anyone know in which release the iSCSI interface is going to be
production ready, if it isn't already?
I mean without the use of a gateway, i.e. a different endpoint connector to
a Ceph cluster.
Thanks in advance.
Best.
--
ATTE. Alvaro Soto Escobar
Hi,
has anyone experienced, or does anyone know, why deleting an RBD volume takes
longer than creating one?
My test was this:
- Create a 1 PB volume -> less than a minute
- Delete the volume just created -> about 2 days
The result was unexpected to me and so far I don't know the reason; the pr
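My (hedged) understanding of why the asymmetry is so large: RBD images are thin provisioned, so creating a 1 PB image only writes a small header, while deletion has to issue a remove for every object the image could contain. With the default 4 MB object size that is 2^50 / 2^22 = 2^28, roughly 268 million potential objects, unless the object-map feature lets the client skip objects that were never written:

  rbd info <pool>/<image>   # check whether the object-map feature is enabled
  rbd rm <pool>/<image>     # without object-map this walks every possible object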