On 04/14/2015 08:01 PM, shiva rkreddy wrote:
The clusters are in a test environment, so it's a new deployment of 0.80.9.
The OS on the cluster nodes was reinstalled as well, so there shouldn't be
any fs aging unless the disks are slowing down.
The perf measurement is done by initiating multiple cinder crea
Can't open it at the moment, neither the website nor apt.
Trying from Brisbane, Australia.
--
Lindsay
On 04/15/2015 09:30 AM, Lindsay Mathieson wrote:
> Can't open it at the moment, neither the website nor apt.
>
Yes, it's down here as well. You can try eu.ceph.com if you need the
packages.
Or this one: http://ceph.mirror.digitalpacific.com.au/ (working on
au.ceph.com)
> Trying from Brisbane, Australia.
Hi all,
why is ceph.com so slow?
It is impossible to download the files for installing Ceph.
Regards
Ignazio
Thanks Mark
Loic also gave me this link.
It would be a good start for sure.
Best regards
-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Tuesday, 14 April 2015 14:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] how t
On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
> Hi all,
> why is ceph.com so slow?
Not known right now, but you can try eu.ceph.com for your packages and
downloads.
> It is impossible to download the files for installing Ceph.
> Regards
> Ignazio
>
>
>
Many thanks
2015-04-15 10:44 GMT+02:00 Wido den Hollander :
> On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
> > Hi all,
> > why is ceph.com so slow?
>
> Not known right now, but you can try eu.ceph.com for your packages and
> downloads.
>
> > It is impossible to download the files for installing Ceph.
Thanks a lot
That helps.
From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, 13 April 2015 18:32
To: CHEVALIER Ghislain IMT/OLPS
Cc: ceph-users
Subject: Re: [ceph-users] Rados Gateway and keystone
I haven't really used the S3 stuff much, but the credentials should be in
keystone
Hi all,
I have a cluster of 3 nodes, 18 OSDs. I used the pgcalc to give a
suggested number of PGs - here was my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4: 2 rep, 18 OSDs, 5% data, 256 PGs
Group5: 2
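For anyone curious where numbers like those come from, below is a rough sketch
of the pgcalc arithmetic. The ~200 PGs-per-OSD target and the plain round-up
to the next power of two are my assumptions; the real calculator has extra
rules, so treat this as illustrative only:

  # Sketch of the pgcalc arithmetic, not the tool itself.
  # Assumptions: a target of ~200 PGs per OSD, round up to a power of two.
  osds=18 target=200 replicas=3 data_pct=30
  raw=$(( osds * target * data_pct / (replicas * 100) ))
  pgs=1
  while [ "$pgs" -lt "$raw" ]; do pgs=$(( pgs * 2 )); done
  echo "suggested pg_num: $pgs"   # 18*200*0.30/3 = 360 -> 512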
Has anyone compiled ceph (either osd or client) on a Solaris based OS?
The thread on ZFS support for osd got me thinking about using solaris as an
osd server. It would have much better ZFS performance and I wonder if the
osd performance without a journal would be 2x better.
A second thought I had
It would be interesting to see if your script produces the same weights
that my script does! Would you be willing to try running the script I
posted earlier on your cluster? It does not modify anything; it just
analyses the content of ceph pg dump. I want to see if it thinks your
weights are optimal.
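If you want a quick look yourself in the meantime, something like this tallies
how many PGs land on each OSD from the ceph pg dump output. It is read-only
too, but note the column holding the up set is an assumption and may differ
between releases:

  # Count PGs per OSD from "ceph pg dump pgs_brief" (read-only).
  # Assumption: column 3 is the up set, e.g. [3,12,7]; adjust if your
  # release prints it elsewhere.
  ceph pg dump pgs_brief 2>/dev/null | awk '
    $1 ~ /^[0-9]+\.[0-9a-f]+$/ {
      gsub(/[][]/, "", $3)              # strip the brackets around 3,12,7
      n = split($3, o, ",")
      for (i = 1; i <= n; i++) cnt[o[i]]++
    }
    END { for (id in cnt) printf "osd.%s\t%d PGs\n", id, cnt[id] }'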
On 04/15/2015 08:10 AM, Tony Harris wrote:
Hi all,
I have a cluster of 3 nodes, 18 OSDs. I used the pgcalc to give a
suggested number of PGs - here was my list:
Group1: 3 rep, 18 OSDs, 30% data, 512 PGs
Group2: 3 rep, 18 OSDs, 30% data, 512 PGs
Group3: 3 rep, 18 OSDs, 30% data, 512 PGs
Group4
On 04/15/2015 08:16 AM, Jake Young wrote:
Has anyone compiled ceph (either osd or client) on a Solaris based OS?
The thread on ZFS support for osd got me thinking about using solaris as
an osd server. It would have much better ZFS performance and I wonder if
the osd performance without a journ
The LX branded zones might be a way to run OSDs on Illumos:
https://wiki.smartos.org/display/DOC/LX+Branded+Zones
For fun, I tried it a month or so ago and managed to get a quorum. The OSDs
wouldn't start; I didn't look any further as far as debugging goes. I'll give
it another go when I have more time.
On Wed, Apr 15,
On Wednesday, April 15, 2015, Alexandre Marangone
wrote:
> The LX branded zones might be a way to run OSDs on Illumos:
> https://wiki.smartos.org/display/DOC/LX+Branded+Zones
>
> For fun, I tried it a month or so ago and managed to get a quorum. The OSDs
> wouldn't start; I didn't look any further as far as d
So it was a PG problem. I added a couple of OSDs per host, reconfigured the
CRUSH map, and the cluster began to work properly.
Thanks
Giuseppe
2015-04-14 19:02 GMT+02:00 Saverio Proto :
> No error message. You just exhaust the RAM and you blow up the
> cluster because of too many PGs.
>
> Sa
On Wednesday, April 15, 2015, Mark Nelson wrote:
>
>
> On 04/15/2015 08:16 AM, Jake Young wrote:
>
>> Has anyone compiled ceph (either osd or client) on a Solaris based OS?
>>
>> The thread on ZFS support for osd got me thinking about using solaris as
>> an osd server. It would have much better Z
People are working on it but I understand there was/is a DoS attack going
on. :/
-Greg
On Wed, Apr 15, 2015 at 1:50 AM Ignazio Cassano
wrote:
> Many thanks
>
> 2015-04-15 10:44 GMT+02:00 Wido den Hollander :
>
>> On 04/15/2015 10:20 AM, Ignazio Cassano wrote:
>> > Hi all,
>> > why ceph.com is ver
Sorry for starting a new thread, I've only just subscribed to the list
and the archive on the mail listserv is far from complete at the moment.
On 8th March David Moreau Simard said
http://www.spinics.net/lists/ceph-users/msg16334.html
that there was an rsync'able mirror of the ceph repo at
http
On 04/15/2015 10:36 AM, Jake Young wrote:
On Wednesday, April 15, 2015, Mark Nelson <mnel...@redhat.com> wrote:
On 04/15/2015 08:16 AM, Jake Young wrote:
Has anyone compiled ceph (either osd or client) on a Solaris
based OS?
The thread on ZFS support for
Hi All,
Earlier, Ceph on Debian Jessie was working. Jessie is running 3.16.7.
Now when I modprobe rbd, no /dev/rbd appears.
# dmesg | grep -e rbd -e ceph
[ 15.814423] Key type ceph registered
[ 15.814461] libceph: loaded (mon/osd proto 15/24)
[ 15.831092] rbd: loaded
[ 22.084573] rbd: no
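One thing worth checking while you look at that last error: loading the module
by itself never creates /dev/rbd nodes; they only appear once an image is
actually mapped. A minimal sketch (the pool and image names are placeholders):

  # /dev/rbdX only shows up after "rbd map", not after modprobe alone.
  # "rbd/test" is a placeholder image.
  modprobe rbd
  rbd create rbd/test --size 1024     # 1 GB test image
  rbd map rbd/test                    # prints the device, e.g. /dev/rbd0
  ls -l /dev/rbd*                     # device node plus /dev/rbd/<pool>/<image> symlinks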
Hi,
Despite the creation of EC2 credentials, which provide an access key and a
secret key for a user, it is always impossible to connect using S3
(Forbidden/Access denied).
Everything works fine using Swift (create container, list containers, get
object, put object, delete object).
I use the CloudBerry client to
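One quick way to rule the client out is to test the EC2 keys with s3cmd
directly against the gateway; the host name and keys below are placeholders.
It may also be worth double-checking that the gateway has Keystone-backed S3
auth enabled at all, otherwise S3 requests are only matched against radosgw's
own users (my guess at a likely cause, not a confirmed diagnosis):

  # Placeholders: radosgw.example.com and the <...> keys. Just lists buckets
  # with the EC2 access/secret pair (assumes s3cmd is installed).
  s3cmd --access_key='<ec2 access key>' --secret_key='<ec2 secret key>' \
        --host=radosgw.example.com \
        --host-bucket='%(bucket)s.radosgw.example.com' ls

  # On the gateway side (section name depends on your setup):
  #   [client.radosgw.gateway]
  #   rgw s3 auth use keystone = true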
http://eu.ceph.com/ has rsync and Hammer.
On Wed, Apr 15, 2015 at 10:17 AM, Paul Mansfield <
paul.mansfi...@alcatel-lucent.com> wrote:
>
> Sorry for starting a new thread, I've only just subscribed to the list
> and the archive on the mail listserv is far from complete at the moment.
>
> on 8th M
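For what it's worth, you can ask the mirror directly what it exports over
rsync; the module name in the second command is only an example, use whatever
the listing shows:

  # List the rsync modules the mirror advertises, then pull one of them.
  rsync eu.ceph.com::
  rsync -avz eu.ceph.com::ceph /local/mirror/ceph/   # "ceph" is an example module name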
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
pretty well.
Then, about noon today, we had an mds crash. And then the failover mds
crashed. And this cascaded through all 4 mds servers we have.
If I try to start it ('service ceph start mds' on CentOS 7.1), it appears
to be
Hey, you're right.
Thanks for bringing that to my attention, it's syncing now :)
Should be available soon.
David Moreau Simard
On 2015-04-15 12:17 PM, Paul Mansfield wrote:
> Sorry for starting a new thread, I've only just subscribed to the list
> and the archive on the mail listserv is far fro
I'm curious what people managing larger ceph clusters are doing with
configuration management and orchestration to simplify their lives?
We've been using ceph-deploy to manage our ceph clusters so far, but
feel that moving the management of our clusters to standard tools would
provide a little mor
On 15/04/2015 20:02, Kyle Hutson wrote:
I upgraded to 0.94.1 from 0.94 on Monday, and everything had been
going pretty well.
Then, about noon today, we had an mds crash. And then the failover mds
crashed. And this cascaded through all 4 mds servers we have.
If I try to start it ('service cep
Thank you, John!
That was exactly the bug we were hitting. My Google-fu didn't lead me to
this one.
On Wed, Apr 15, 2015 at 4:16 PM, John Spray wrote:
> On 15/04/2015 20:02, Kyle Hutson wrote:
>
>> I upgraded to 0.94.1 from 0.94 on Monday, and everything had been going
>> pretty well.
>>
>> The
Hi,
For a few days we have noticed many slow requests on our cluster.
Cluster:
ceph version 0.67.11
3 x mon
36 hosts, each with 10 OSDs (4 TB) + 2 SSDs (journals)
Scrubbing and deep scrubbing are disabled, but the count of slow requests is
still increasing.
Disk utilisation has been very low since we disabled scrubbing.
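It may help to pin down which OSDs the slow requests are sitting on and what
they are waiting for, along these lines (the admin-socket path and OSD id are
examples; dump_historic_ops may behave slightly differently on 0.67):

  # Which OSDs are currently reporting slow requests?
  ceph health detail | grep -i 'slow request'

  # On the affected host, inspect the slowest recent ops and the ones in
  # flight via the admin socket (osd.10 is just an example id).
  ceph --admin-daemon /var/run/ceph/ceph-osd.10.asok dump_historic_ops
  ceph --admin-daemon /var/run/ceph/ceph-osd.10.asok dump_ops_in_flight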
Hi,
Successfully upgraded a small development 4-node Giant 0.87-1 cluster to Hammer
0.94-1, each node with 6x OSDs (146 GB), 19 pools, mainly 2 in use.
The only minor thing now is ceph -s complaining of too many PGs; previously
Giant had complained of too few, so various pools were bumped up till health
Also, our Calamari web UI won't authenticate anymore. I can't see any issues in
any log under /var/log/calamari; any hints on what to look for are appreciated.
TIA!
# dpkg -l | egrep -i calamari\|ceph
ii  calamari-clients    1.2.3.1-2-gc1f14b2    all    Inktank Calamar
On Thu, Apr 16, 2015 at 5:29 AM, Kyle Hutson wrote:
> Thank you, John!
>
> That was exactly the bug we were hitting. My Google-fu didn't lead me to
> this one.
Here is the bug report: http://tracker.ceph.com/issues/10449. It's a
kernel client bug which causes the session map size to increase
infinitely.
We are using 3.18.6-gentoo. Based on that, I was hoping that the
kernel bug referred to in the bug report would have been fixed.
--
Adam
On Wed, Apr 15, 2015 at 8:02 PM, Yan, Zheng wrote:
> On Thu, Apr 16, 2015 at 5:29 AM, Kyle Hutson wrote:
>> Thank you, John!
>>
>> That was exactly the bug we
On Thu, Apr 16, 2015 at 9:07 AM, Adam Tygart wrote:
> We are using 3.18.6-gentoo. Based on that, I was hoping that the
> kernel bug referred to in the bug report would have been fixed.
>
The bug was supposed to be fixed, but you hit it again. Could you
check if the kernel client has any hang
What is significantly smaller? We have 67 requests in the 16,400,000
range and 250 in the 18,900,000 range.
Thanks,
Adam
On Wed, Apr 15, 2015 at 8:38 PM, Yan, Zheng wrote:
> On Thu, Apr 16, 2015 at 9:07 AM, Adam Tygart wrote:
>> We are using 3.18.6-gentoo. Based on that, I was hoping that the
On Thu, Apr 16, 2015 at 9:48 AM, Adam Tygart wrote:
> What is significantly smaller? We have 67 requests in the 16,400,000
> range and 250 in the 18,900,000 range.
>
That explains the crash. Could you help me debug this issue?
Send /sys/kernel/debug/ceph/*/mdsc to me.
Run "echo module ceph
Hello,
On Thu, 16 Apr 2015 00:41:29 +0200 Steffen W Sørensen wrote:
> Hi,
>
> Successfully upgraded a small development 4-node Giant 0.87-1 cluster to
> Hammer 0.94-1, each node with 6x OSDs (146 GB), 19 pools, mainly 2 in
> use. The only minor thing now is ceph -s complaining of too many PGs;
> prev
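If shrinking the pools isn't practical on a test cluster, another option (my
suggestion, not necessarily where the quoted reply was going) is simply to
raise the warning threshold, which defaults to 300 PGs per OSD in Hammer:

  # Raise the "too many PGs per OSD" warning threshold at runtime...
  ceph tell mon.* injectargs '--mon_pg_warn_max_per_osd 400'

  # ...and persist it in ceph.conf so it survives monitor restarts:
  #   [mon]
  #   mon pg warn max per osd = 400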
The issue is reproducible in svl-3 with rbd cache set to false.
On the 5th ping-pong, the instance experienced ping drops and did not
recover for 20+ minutes:
(os-clients)[root@fedora21 nimbus-env]# nova live-migration lmtest1
(os-clients)[root@fedora21 nimbus-env]# nova show lmtest1 |grep -E
'hy
Hello,
Can't really help you with nova, but using plain libvirt-1.1.1
and qemu-1.5.3, live migration of rbd-backed VMs is (almost*) instant on
the client side. We have the rbd write-back cache enabled everywhere and
have no problem at all.
-K.
*There is about a 1-2 second hitch at worst
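In case it's useful to compare, "write-back cache enabled everywhere" just
means something like the following on the hypervisor side (a sketch of a
typical setup, not our exact config; option names as in Giant/Hammer, and the
second option keeps the cache write-through until the guest sends its first
flush, which protects guests with old virtio drivers):

  # Client-side ceph.conf snippet for rbd write-back caching.
  cat >> /etc/ceph/ceph.conf <<'EOF'
  [client]
      rbd cache = true
      rbd cache writethrough until flush = true
  EOF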