>>OK, please report back.
>>
>>Note: we can include the patch if nothing else helps ;-)
Ok, I need to do some "perf report" profiling this week and talk about this with the ceph
devs.
I'll keep you posted.
----- Original Message -----
From: "Dietmar Maurer"
To: "Alexandre DERUMIER"
Cc: pve-devel@pve.proxmox.com
> >>> Thought a bit more about that, and it seems to be relatively easy to
> >>> implement.
> >>> But maybe 'per user group' is good enough?
> >>
> >> Sounds good - but still not sure if I'm able to implement it ;-(
> >
> > I already have a prototype (but I am out of office this week).
Hello everybody,
> >>But this is a bug inside librbd, so they should fix it there?
>
> Yeah, I'll discuss this with the ceph devs today, at the weekly performance
> meeting.
> http://pad.ceph.com/p/performance_weekly.
OK, please report back.
Note: we can include the patch if nothing else helps ;-)
___
Hi all,
Is there a quick and easy way to get the name of the cluster, or do I
have to parse cluster.conf?
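For what it's worth, parsing cluster.conf directly is straightforward; a minimal Python sketch, assuming the PVE 3.x cman-style XML layout where the name is an attribute of the root `<cluster>` element (the file path and layout are assumptions, not from this thread):

```python
# Hypothetical sketch: assumes a PVE 3.x cman-style cluster.conf whose root
# element looks like <cluster name="mycluster" config_version="3">.
import xml.etree.ElementTree as ET

def cluster_name(path="/etc/pve/cluster.conf"):
    """Return the name attribute of the root <cluster> element."""
    root = ET.parse(path).getroot()
    return root.get("name")
```

Whether there is a supported API call for this instead, the devs would have to confirm.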
--
Hilsen/Regards
Michael Rasmussen
Get my public GnuPG keys:
michael rasmussen cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir datanom net
http://pgp.mit.edu:1137
>>But this is a bug inside librbd, so they should fix it there?
Yeah, I'll discuss this with the ceph devs today, at the weekly performance
meeting.
http://pad.ceph.com/p/performance_weekly.
>>And this does not work with our 2.6.32 kernel...
That's why I set it as an optional feature.
> CPU usage is 4x lower than with librbd, so it's possible to reach a lot more
> iops (I have benched 10 iops with multiple iothreads).
But this is a bug inside librbd, so they should fix it there?
And this does not work with our 2.6.32 kernel...
___
Signed-off-by: Alexandre Derumier
---
PVE/Storage/RBDPlugin.pm | 25 +
1 file changed, 25 insertions(+)
diff --git a/PVE/Storage/RBDPlugin.pm b/PVE/Storage/RBDPlugin.pm
index 1026d81..eee661f 100644
--- a/PVE/Storage/RBDPlugin.pm
+++ b/PVE/Storage/RBDPlugin.pm
@@ -226,
This allows using the kernel rbd module (available in the 3.10 kernel - works
with giant).
CPU usage is 4x lower than with librbd, so it's possible to reach a lot more
iops (I have benched 10 iops with multiple iothreads).
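If the patch lands, the storage definition would presumably gain an opt-in switch; a hypothetical /etc/pve/storage.cfg entry might look like this (the `krbd` option name, value, and the monitor addresses are assumptions inferred from the patch description, since the diff above is truncated):

```
rbd: ceph-store
	monhost 10.0.0.1;10.0.0.2;10.0.0.3
	pool rbd
	username admin
	krbd 1
```

With the flag set, QEMU would open a /dev/rbd* block device mapped by the 3.10+ kernel module instead of going through librbd; with it unset, the existing librbd path stays the default.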
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On Wed, 12 Nov 2014 09:38:10 AM Dietmar Maurer wrote:
> > Coming to proxmox?
>
> We will keep firefly packages by default. But I will also try to support
> giant for ceph server GUI in future.
Thanks
--
Lindsay
signature.asc
Description: This is a digitally signed message part.
__
>>Coming to proxmox?
I think it's not an LTS release, so we should wait for the next Hammer release.
(But you can update the repository yourself if you want.)
----- Original Message -----
From: "Lindsay Mathieson"
To: pve-devel@pve.proxmox.com
Sent: Wednesday, 12 November 2014 10:08:32
Subject: [pve-devel] ceph g
> Coming to proxmox?
We will keep firefly packages by default. But I will also try to support giant
for ceph server GUI in future.
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Coming to proxmox?
--
Lindsay
___