In the reference architecture PDF, downloadable from your website, there is
a reference to a multi-rack architecture described in another document.
Is this paper available?
My monitors are suddenly not starting up properly, or at all. I'm using the latest
Debian release from ceph.com/debian-cuttlefish on wheezy.
One (mon.7, IP ending in .190) starts but says things like this in the logs:
1 mon.7@0(probing) e3 discarding message mon_subscribe({monmap=0+,osdmap=796})
and sending
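For a monitor stuck in "probing" like this, its own view of the quorum can be
inspected through the admin socket (the socket path below is the default
location and is an assumption, not taken from this report):
  ceph --admin-daemon /var/run/ceph/ceph-mon.7.asok mon_status
The output shows the monmap the daemon has, its rank, and which peers it is
still trying to reach.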
Hi,
I have done a bit of work on the Wireshark plugin so that it will compile for
WIN32, really as a by-product of trying to investigate a problem as a
learning exercise and finding that the plugin was not decoding the area I was
interested in. I haven't tried to improve the plugin, but thought I would
me
Hello everyone,
I am from China. When I installed ceph-deploy on my server I ran into a
problem: when I run ./bootstrap I cannot get argparse. I found that the URL
it uses is an http address; when I enter the same address in my web browser
with https:// in front of it, it downloads fine,
but when I
Hi,
sorry for the late answer: trying to fix that, I tried to delete the
image (rbd rm XXX); the "rbd rm" completed without errors, but "rbd ls"
still displays this image.
What should I do?
Here are the files for the PG 3.6b:
# find /var/lib/ceph/osd/ceph-28/current/3.6b_head/ -name
'rb.0.15c26.
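For reference, a sketch of the kind of check involved here, reusing the pool
name, OSD path, and the block-name prefix rb.0.15c26.238e1f29 that appear
elsewhere in this thread (treat the exact pattern as an assumption):
  # list any remaining RADOS objects for that image prefix
  rados -p hdd3copies ls | grep '^rb.0.15c26.238e1f29'
  # look for leftover object files in the PG directory on the OSD
  find /var/lib/ceph/osd/ceph-28/current/3.6b_head/ -name 'rb.0.15c26.*'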
Hi,
Could someone update
http://ceph.com/docs/next/install/build-prerequisites/
and add a note for Debian squeeze on how to get libleveldb-dev and
libsnappy-dev?
--
Kind regards,
Florian Wiessner
Smart Weblications GmbH
Martinsberger Str. 1
D-95119 Naila
fon.: +49 9282 9638 2
Note that I still have scrub errors, but rados doesn't see those
objects:
root@brontes:~# rados -p hdd3copies ls | grep '^rb.0.15c26.238e1f29'
root@brontes:~#
On Friday, 31 May 2013 at 15:36 +0200, Olivier Bonvalet wrote:
> Hi,
>
> sorry for the late answer : trying to fix that, I trie
Hi Martin,
I notice you have got everything working. Just wanted to point out that we use the
following in our nova.conf and it has been working without issue.
cinder_catalog_info=volume:cinder:internalURL
--weiguo
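For context, a minimal nova.conf sketch around that option (the [DEFAULT]
section placement and the comment are assumptions; only the
cinder_catalog_info line comes from this thread):
  [DEFAULT]
  # resolve Cinder volume endpoints via the internalURL entry in the Keystone catalog
  cinder_catalog_info=volume:cinder:internalURL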
> Date: Thu, 30 May 2013 22:50:12 +0200
> From: mar...@tuxadero.com
> To: josh
Ok, so:
- after a second "rbd rm XXX", the image was gone
- and "rados ls" doesn't see any objects from that image
- so I tried to move those files
=> scrub is now OK!
So for me it's fixed. Thanks.
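For anyone following along, re-scrubbing a single PG such as 3.6b can be
requested like this (a sketch using the pg id from this thread; deep-scrub
re-checks object contents rather than just metadata):
  ceph pg scrub 3.6b
  ceph pg deep-scrub 3.6b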
On Friday, 31 May 2013 at 16:34 +0200, Olivier Bonvalet wrote:
> Note that I still have scrub
Hi Florian -
libleveldb-dev and libsnappy-dev backports can be found in ceph.com/debian-leveldb
for natty, oneiric, and squeeze. They are also included in the Ceph release
repositories. I'll update the documentation.
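For squeeze, using that repository would look roughly like this (the suite and
component names are assumptions; check the repository index for the exact line):
  echo "deb http://ceph.com/debian-leveldb/ squeeze main" >> /etc/apt/sources.list.d/ceph-leveldb.list
  apt-get update
  apt-get install libleveldb-dev libsnappy-dev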
Cheers,
Gary
On May 31, 2013, at 7:27 AM, Smart Weblications GmbH - Florian W
Cool. I did the same thing with Cuttlefish at one point. I scrubbed my
install and started the whole thing, even the storage cluster, from
scratch after doing an update. There might have been a bug in the mix
that got fixed, because I was scratching my head too, and after I did
the whole re-install
argparse is a standard Python module and should be available with your
Python installation, or at least optionally downloadable (on Ubuntu,
it's part of the python2.7 package). It's strange that you don't already
have it, but try checking your OS install facilities for it first.
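A quick way to check whether argparse is already importable (the interpreter
is just whatever "python" resolves to on your system):
  python -c "import argparse; print(argparse.__file__)"
If that prints a path, the module is there and the failure is more likely the
download step itself.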
On 05/31/2013
Possibly related:
http://tracker.ceph.com/issues/5084
I'm seeing the same big delays with peering, and when I marked an OSD "out" then
"in" today, after a minute or two it was unexpectedly marked "down". I restarted
it, and 8 or so minutes later things were fine again. In the meantime
our RBD KVM
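For reference, the out/in cycle described above is just the following (osd id 12
is a placeholder, not one from this report):
  ceph osd out 12
  # wait for peering/recovery to settle, watching "ceph -s", then:
  ceph osd in 12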
This also sounds a bit like my 2nd problem here:
http://tracker.ceph.com/issues/5216
On 31.05.2013 20:36, John Nielsen wrote:
> Possibly related:
> http://tracker.ceph.com/issues/5084
> I'm seeing the same big delays with peering, and when I marked an OSD "out" then
> "in" today, after a minute or two
Ah. I was using the S3 interface. Yes, that's what I did and
Cuttlefish worked for me. If you're working on Bobtail, I'd stick with
that for evaluation. We'll have a new update to Cuttlefish shortly.
On Fri, May 31, 2013 at 1:02 PM, Daniel Curran wrote:
> Do you mean that Cuttlefish worked for yo
Hello,
First of all, I would like to thank everyone for their input. You have all been
incredibly helpful as I work through my ignorance.
What exactly does it mean when you say CephFS is not "production ready"?
To me, this typically indicates a product that still has business-crippling
It might be an Internet connection problem.
Try using the PyPI mirror from Google or Tsinghua University.
Put the content below into your ~/.pip/pip.conf and try again:
[global]
index-url = http://b.pypi.python.org/simple
[install]
use-mirrors = true
mirrors = http://b.pypi.python.org
b.pypi is pr
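With that configuration in place, retrying the failing dependency from the
earlier report would just be (package name taken from that report):
  pip install argparse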