Nice job Haomai!
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
On 25 Nov 2013, at 02:50, Haomai Wang wrote:
Hi,
1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/)
This has been in production for more than a year now and was heavily tested beforehand.
High performance was not expected, since the frontend servers mainly do reads (90%).
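Roughly, the setup follows this pattern (a generic sketch with placeholder image and mount names, not the exact production layout):
$ rbd create nfs-share --size 102400      # 100 GB image in the default pool
$ rbd map nfs-share                       # shows up as /dev/rbd0 on the NFS head
$ mkfs.xfs /dev/rbd0
$ mount /dev/rbd0 /srv/share
$ echo "/srv/share *(rw,no_root_squash)" >> /etc/exports
$ exportfs -ra                            # the web frontends then mount this over NFS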
Cheers.
Having a configurable option would be ideal. Users should be made aware of
the need for super-caps via documentation in that case.
Quickly eyeballing the code... could this be patched via journaller.cc
for testing?
Hi James,
after having some discussion with the kernel guys and after digging
through the kernel code and sending a patch today ;-)
It is quite easy to do this via the kernel using this one:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=39c60a0948cc06139e2fbfe084f
Hello,
I can retrieve a container's usage (the sum of the sizes of the objects inside) via
the Swift API:
$ swift -V 1.0 -A http://localhost/auth -U test:swift -K xxx stat
test_container
Account: v1
Container: test_container
Objects: 549
Bytes: 31665126
How can I get this information via the S3 API?
Thanks
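The S3 API itself has no stat call that returns a bucket's total byte count, so one
common client-side workaround (sketched here with s3cmd, assuming it is configured
against the radosgw endpoint) is to sum the object sizes:
$ s3cmd du s3://test_container
$ # or sum a listing manually; the third column of "s3cmd ls" is the object size
$ s3cmd ls s3://test_container | awk '{sum += $3} END {print sum}'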
Hi,
I'm trying to list the snapshots of a pool using ceph-rest-api. The JSON
format only displays the last snapshot of the pool, not all of them.
The Ceph version is 0.67.3
(408cd61584c72c0d97b774b3d8f95c6b1b06341a)
http://@ip/api/v0.1/osd/dump :
[truncated output showing snapid 3, stamp 2013-11-25 11:37:34.695874, name ericsnap1, then snapid 4, 2013-11...]
Hi Sebastien.
Thanks! When you say "performance was not expected", can you elaborate a
little? Specifically, what did you notice in terms of performance?
On Mon, Nov 25, 2013 at 4:39 AM, Sebastien Han wrote:
> Hi,
>
> 1) nfs over rbd (http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
>
Recently, I wanted to enable the RBD cache to identify its performance benefit. I added
the rbd_cache=true option to my Ceph configuration file, and I used 'virsh attach-device' to
attach the RBD to a VM; below is my vdb XML file.
[vdb XML snippet not preserved; only the serial UUID 6b5ff6f4-9f8c-4fe0-84d6-9d795967c7dd remains]
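For illustration, a minimal attach-device sketch might look like the following (domain,
pool, image and monitor names are placeholders, not the actual file). If I remember
correctly, with QEMU 1.2 and later the cache attribute on the driver line also has to
allow caching (e.g. writeback) for rbd_cache to take effect:
$ cat > vdb.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk-1'>
    <host name='mon1' port='6789'/>
  </source>
  <!-- cephx auth/secret elements omitted for brevity -->
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
$ virsh attach-device myvm vdb.xml --persistent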
I do not know this i
Hi,
Well, basically, the frontend is composed of web servers.
They mostly do reads on the NFS mount.
I believe that the biggest frontend has around 60 virtual machines accessing
the share and serving it.
Unfortunately, I don't have any figures anymore, but performance was really
poor in gen
On 11/25/2013 07:21 AM, Shu, Xinxin wrote:
Recently , I want to enable rbd cache to identify performance benefit. I
add rbd_cache=true option in my ceph configure file, I use ’virsh
attach-device’ to attach rbd to vm, below is my vdb xml file.
Ceph configuration files are a bit confusing because
Hi Narendra
RBD for Cinder and Glance is configured according to the Ceph documentation here:
http://ceph.com/docs/master/rbd/rbd-openstack/
RBD for VM images is configured like so: https://review.openstack.org/#/c/36042/
config sample (nova.conf):
--- cut ---
volume_driver=nova.volume.driver.RBDDriver
rb
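The remainder of such a sample usually carries the pool and auth settings. The option
names below are a rough, from-memory sketch of the Havana-era settings added around that
review, with hypothetical values, not the poster's actual file:
--- cut ---
libvirt_images_type=rbd
libvirt_images_rbd_pool=vms
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=<libvirt secret uuid>
--- cut ---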
Hi Steffen
the virsh secret is defined on all compute hosts. Booting from a volume works
(it's the "boot from image (create volume)" part that doesn't work).
cheers
jc
--
SWITCH
Jens-Christian Fischer, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, dire
Is there a vector graphics file (or a higher-resolution file of some type)
of the state diagram on the page below? I can't read the text.
Thanks,
Nate
http://ceph.com/docs/master/dev/peering/
Yes, I would like to see this graph.
Thanks
2013/11/25 Regola, Nathan (Contractor)
> Is there a vector graphics file (or a higher resolution file of some type)
> of the state diagram on the page below, as I can't read the text.
>
> Thanks,
> Nate
>
>
> http://ceph.com/docs/master/dev/peering/
Hi,
Any ideas on troubleshooting a "requests are blocked" warning when all of the
nodes appear to be running OK?
Nothing gets reported in /var/log/ceph/ceph.log as everything is
active+clean throughout the event. All of the nodes can be accessed and
all report the warning while they are blocking.
r
ceph health detail
2013/11/25 Michael
> Hi,
>
> Any ideas on troubleshooting a "requests are blocked" when all of the
> nodes appear to be running OK?
> Nothing gets reported in /var/log/ceph/ceph.log as everything is
> active+clean throughout the event. All of the nodes can be accessed and al
OK, I waited for it to happen again, and the detail started with:
HEALTH_WARN 2 requests are blocked > 32 sec; 1 osds have slow requests
2 ops are blocked > 32.768 sec
2 ops are blocked > 32.768 sec on osd.3
1 osds have slow requests
and slowly moving on to:
HEALTH_WARN 154 requests are blocked > 32 se
Hello,
Since yesterday, scrub has detected an inconsistent pg :( :
# ceph health detail   (ceph version 0.61.9)
HEALTH_ERR 1 pgs inconsistent; 9 scrub errors
pg 3.136 is active+clean+inconsistent, acting [9,1]
9 scrub errors
# ceph pg map 3.136
osdmap e4363 pg 3.136 (3.136) -> up [9,1] acting
On Mon, Nov 25, 2013 at 5:58 AM, Mark Nelson wrote:
> On 11/25/2013 07:21 AM, Shu, Xinxin wrote:
>>
>> Recently , I want to enable rbd cache to identify performance benefit. I
>> add rbd_cache=true option in my ceph configure file, I use ’virsh
>> attach-device’ to attach rbd to vm, below is my vd
Greg is right, you need to enable RBD admin sockets. This can be a bit
tricky though, so here are a few tips:
1) In ceph.conf on the compute node, explicitly set a location for the
admin socket:
[client.volumes]
admin socket = /var/run/ceph/rbd-$pid.asok
In this example, libvirt/qemu is
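Once the guest is up, a quick sanity check (the socket path and PID here are placeholders
for whatever actually appears under /var/run/ceph) is to ask the socket whether the cache
settings took effect:
$ ceph --admin-daemon /var/run/ceph/rbd-<pid>.asok config show | grep rbd_cache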
On Mon, Nov 25, 2013 at 8:10 AM, Laurent Barbe wrote:
> Hello,
>
> Since yesterday, scrub has detected an inconsistent pg :( :
>
> # ceph health detail(ceph version 0.61.9)
> HEALTH_ERR 1 pgs inconsistent; 9 scrub errors
> pg 3.136 is active+clean+inconsistent, acting [9,1]
> 9 scrub errors
>
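In case it helps, the usual next steps on a cluster of that era look roughly like this;
osd.9 is taken from the pg map above, and since repair copies from the primary, it is
worth checking which replica is actually bad first:
$ grep '3\.136' /var/log/ceph/ceph-osd.9.log     # details of the scrub errors
$ ceph pg repair 3.136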
It's generated from a .dot file which you can render as you like. :)
Please be aware that the diagram is intended for developers and will be
meaningless without that background.
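If you want to render it yourself, Graphviz does it in one line (the filename here is just
a placeholder for wherever the generated .dot ends up):
$ dot -Tsvg peering_graph.dot -o peering_graph.svg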
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Nov 25, 2013 at 6:42 AM, Regola, Nathan (Contractor)
You have found a bug in the underlying ceph command. One can see the
same thing using "ceph -f json-pretty osd dump": we get a dict with
the same "pool_snap_info" key used more than once, like this:
"pool_snaps": { "pool_snap_info": { "snapid": 1,
"stamp": "2013-11-25
We need to install the OS on the 3TB hard disks that come with our Dell
servers. (After many attempts, I've discovered that Dell servers won't
allow attaching an external hard disk via the PCIe slot; I've tried
everything.)
But must I therefore sacrifice two hard disks (RAID-1) for the OS? I
do
>
> We need to install the OS on the 3TB harddisks that come with our Dell
> servers. (After many attempts, I've discovered that Dell servers won't allow
> attaching an external harddisk via the PCIe slot. (I've tried everything). )
>
> But, must I therefore sacrifice two hard disks (RAID-1) for
>
> Writes seem to be happing during the block but this is now getting more
> frequent and seems to be for longer periods.
> Looking at the osd logs for 3 and 8 there's nothing of relevance in there.
>
> Any ideas on the next step?
>
Look for iowait and other disk metrics:
iostat -x 1
high iowait
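If the disks look healthy, another option (default admin socket paths assumed) is to ask
the flagged OSD directly what the blocked requests are waiting on:
$ ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_ops_in_flight
$ ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_historic_ops   # on recent releases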
Our first day of the online Ceph Developer Summit is about to begin.
Connection info is as follows:
IRC: irc.oftc.net #ceph-summit
YouTube Stream: https://www.youtube.com/watch?v=DWK5RrNRhHU
G+ Event Page:
https://plus.google.com/b/100228383599142686318/events/ca4mb81hi3j57nvs9lrcm988m4s
If you
Several people have reported issues with combining OS and OSD journals
on the same SSD drives/RAID due to contention. If you do something
like this I would definitely test to make sure it meets your
expectations. Ceph logs are going to compose the majority of the
writes to the OS storage devices.
For those of you wishing to tune in to the second half of CDS day one,
please join us at:
https://www.youtube.com/watch?v=_kjjCAib_4E
The associated discussion is on irc.oftc.net on channel #ceph-summit
If you have questions please feel free to contact me.
Best Regards,
Patrick McGarry
Direct
That's rather cool (very easy to change). However, given that the currently
generated size is basically a big thumbnail, too small to actually be
read meaningfully, would it not make sense to generate a higher-resolution
version by default and make the current one a link to it?
Cheers
Mark
On 26
Yes
On 11/25/2013 04:25 PM, Mark Kirkwood wrote:
That's rather cool (very easy to change). However given that the current
generated size is kinda a big thumbnail and too small to be actually
read meaningfully, would it not make sense to generate a larger
resolution version by default and make t
After talking with Sage, Ross, Patrick and Loic, I am thinking of building up
a Ceph user group in China, for Ceph developers and users to talk, learn and
have fun together, and to promote Ceph in China. Is anybody on the list
interested in this? Please drop me a mail for further discussion.
I can arra
That's great! I will join you in spirit here in cold Minnesota. :)
Mark
On 11/25/2013 08:59 PM, jiangang duan wrote:
After talking with Sage, Ross, Patrick and Loic, I am thinking to build
up some Ceph user group in China - for Ceph developer/user to talk,
learn and have fun together - and pro
Hi, after talking with Sage, Ross, Patrick and Loic, I am thinking of building
up a Ceph user group in China, for Ceph developers and users to talk, learn
and have fun together, and to promote Ceph in China. Is anybody on the list
interested in this? Please drop me a mail to work on this together.
I ca
Hi Mike, I enabled RBD admin sockets according to your suggestions and added the admin
socket option to my ceph.conf, but there is no .asok file in the /var/run/ceph directory.
I used nova to boot the instances. Below are my steps to enable the RBD admin
socket; if there is something wrong, please let me know: