Is there an issue ID associated with this? For those of us who made the
long jump and want to avoid any unseen problems.
Thanks,
Jeff
On Tue, Aug 20, 2013 at 7:57 PM, Sage Weil wrote:
> We've identified a problem when upgrading directly from bobtail to
> dumpling; please wait until 0.67.2 before upgrading.
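For anyone else making the jump, a quick way to confirm what each daemon is
actually running (a sketch; the tell-based queries may not exist on very old
releases):

    ceph --version            # version of the locally installed binaries
    ceph tell mon.* version   # ask the running mons (newer releases)
    ceph tell osd.* version   # ask the running osds (newer releases)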
Previous experience with OCFS2 was that its actual performance was pretty
lackluster/awful. The bits Oracle threw on top of (I think) ext3 to make it
work as a multi-writer filesystem with all of the signalling that implies
brought the overall performance down.
Jeff
On Wed, Sep 11, 2013 at 9:58
I just wanted to get a quick sanity check (and ammunition for updating
from Grizzly to Havana).
Per
https://blueprints.launchpad.net/nova/+spec/bring-rbd-support-libvirt-images-type
it seems that explicit support for rbd image types has been brought into
OpenStack/Havana. Does this correspond
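For reference, the blueprint corresponds to nova.conf settings along these
lines (a sketch from memory; on Havana the options lived in [DEFAULT] with a
libvirt_ prefix and moved to a [libvirt] section in later releases, and the
pool name here is an assumption):

    [DEFAULT]
    # keep ephemeral instance disks in RBD instead of local qcow2 files
    libvirt_images_type=rbd
    libvirt_images_rbd_pool=vms
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf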
I've got a cluster with 3 mons, all of which are binding solely to a
cluster network IP, and neither to 0.0.0.0:6789 nor a public IP. I
hadn't noticed the problem until now because it makes little difference
in how I normally use Ceph (rbd and radosgw), but now that I'm trying to
use cephfs it's become a problem.
...here that this IP
address belongs to the cluster network.
On Mon, Jan 13, 2014 at 11:29 AM, Jeff Bachtel
<jbach...@bericotechnologies.com> wrote:
I've got a cluster with 3 mons, all of which are binding solely to
a cluster network IP, and neither to 0.0.0.0:6789
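For reference, a minimal ceph.conf sketch of what the mon binding should look
like (addresses are made up, and note that changing mon addr on an existing
cluster means rebuilding or re-adding the monitors, so treat with care):

    [global]
    public network = 192.168.1.0/24   # clients and cephfs reach the mons here
    cluster network = 10.0.0.0/24     # OSD replication/backfill traffic only

    [mon.a]
    host = mon1
    mon addr = 192.168.1.11:6789      # must be a public-network IP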
Per http://tracker.ceph.com/issues/6022 leveldb-1.12 was pulled out of
the ceph-extras repo due to patches applied by a leveldb fork (Basho
patch). It's back in ceph-extras (since the 28th at least), and on
CentOS 6 it causes an abort on mon start when run with the Firefly
release candidate.
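Until that's fixed, one workaround sketch is to pin yum away from the Basho
build (the repo file name and section are assumptions; check which leveldb
your release actually needs):

    # in /etc/yum.repos.d/ceph-extras.repo, under the repo's [section]:
    exclude=leveldb*

    # then fall back to the distro build
    yum downgrade leveldb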
This is all on firefly rc1 on CentOS 6
I had an osd getting overfull, and misinterpreting directions I downed
it then manually removed pg directories from the osd mount. On restart
and after a good deal of rebalancing (setting osd weights as I should've
originally), I'm now at
cluster de
...recover from the existing osd.1
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Sat, May 3, 2014 at 9:17 AM, Jeff Bachtel
wrote:
This is all on firefly rc1 on CentOS 6
I had an osd getting overfull, and misinterpreting directions I downed it
then manually removed pg directories from the osd mount.
Of course, you have to copy all of the pieces of the rbd image onto one
file system somewhere (thank goodness for thin provisioning!) for the
tool to work.
There really should be a better way.
Jake
On Monday, May 5, 2014, Jeff Bachtel
<jbach...@bericotechnologies.com> wrote:
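For the archives, the reassembly step amounts to something like this (a
sketch only: it assumes a format-1 image, 4 MiB objects, that the block name
prefix came from 'rbd info', and that the chunk files have already been
gathered into one directory and renamed to their plain object names):

    prefix=rb.0.1234.5678            # assumption: from 'rbd info <image>'
    for f in ${prefix}.*; do
        idx=$((16#${f##*.}))         # hex object suffix -> chunk number
        dd if="$f" of=image.raw bs=4M seek="$idx" conv=notrunc
    done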
...use the various "lost" commands, but I'm
not sure what the right approach is here. It's possible you're just
out of luck after manually adjusting the store improperly.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, May 5, 2014 at 4:39 PM, Jeff
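For reference, the commands Greg is alluding to are along these lines (all
destructive; a sketch only, for when the data really is gone):

    ceph pg <pgid> query                            # which OSDs the PG wants
    ceph osd lost <osd-id> --yes-i-really-mean-it   # declare a dead OSD lost
    ceph pg <pgid> mark_unfound_lost revert         # give up unfound objects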
I'm working on http://tracker.ceph.com/issues/8310 , basically by
bringing osds down and up I've come to a state where on-disk I have pgs
and the osds seem to scan the directories on boot, but the crush map isn't
mapping the objects properly.
In addition to that ticket, I've got a decompile of my crush map.
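For anyone following along, the decompile/edit/recompile loop is the standard
crushtool round trip (filenames are arbitrary):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt   # decompile to editable text
    vi crush.txt
    crushtool -c crush.txt -o crush.new   # recompile
    ceph osd setcrushmap -i crush.new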
Wow I'm an idiot for getting the wrong reweight command.
Thanks so much,
Jeff
On May 9, 2014 11:06 AM, "Sage Weil" wrote:
> On Fri, 9 May 2014, Jeff Bachtel wrote:
> > I'm working on http://tracker.ceph.com/issues/8310 , basically by
> > bringing osds
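For anyone hitting the same confusion, the two reweight commands do different
things (weights shown are examples):

    # temporary 0.0-1.0 override, the one the full-OSD machinery uses
    ceph osd reweight 1 0.8

    # permanent CRUSH weight, conventionally the disk size in TiB
    ceph osd crush reweight osd.1 0.5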
I see the EL6 build on http://ceph.com/rpm-firefly/el6/x86_64/ but not
on gitbuilder (last build 07MAY). Is 0.80.1 considered a different
branch ref for purposes of gitbuilder?
Jeff
On 05/12/2014 05:31 PM, Sage Weil wrote:
This first Firefly point release fixes a few bugs, the most visible
be
Overnight, I tried to use ceph_filestore_dump to export, from one osd, a pg
that is missing from the other osds, with the intent of manually copying
the export to the osds in the pg map and importing it there.
Unfortunately, what is 59 GB of data on disk had filled 1 TB by the time
I got in this morning, and still had
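For reference, the invocation was along these lines (a sketch: the OSD must
be stopped first, paths and pgid are examples, and the flag names changed
when the tool was later renamed ceph-objectstore-tool):

    service ceph stop osd.2
    ceph_filestore_dump --filestore-path /var/lib/ceph/osd/ceph-2 \
        --journal-path /var/lib/ceph/osd/ceph-2/journal \
        --pgid 3.47 --type export --file /tmp/pg.3.47.export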
Environment is CentOS 6.4, Apache, mod_fastcgi (from repoforge, so probably
without the 100-continue patches). I'm attempting to install radosgw on the
2nd mon host.
My setup consistently fails when running s3test.py from
http://wiki.debian.org/OpenStackCephHowto (with appropriate values filled
in).
On Mon, May 13, 2013 at 7:01 PM, Jeff Bachtel
> wrote:
> > Environment is CentOS 6.4, Apache, mod_fastcgi (from repoforge, so
> > probably without the 100-continue patches). I'm attempting to install
> > radosgw on the 2nd mon host.
> >
> > My setup con
...next branch, things seem to be working (s3test.py is successful).
Thanks for the help,
Jeff
On Tue, May 14, 2013 at 6:35 AM, Jeff Bachtel
<jbach...@bericotechnologies.com> wrote:
> That configuration option is set, the results are the same. To clarify: do
> I need to start radosgw
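For the archives: the option being discussed is presumably rgw print continue
(an assumption, since the thread doesn't name it), which needs to be off when
mod_fastcgi lacks the 100-continue patches:

    # ceph.conf on the radosgw host; section name is an assumption
    [client.radosgw.gateway]
    rgw print continue = false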
Hijacking (because it's related): a couple of weeks ago on IRC it was
indicated that a repo with these (or updated) qemu builds for CentOS would
be coming soon from Ceph/Inktank. Did that ever happen?
Thanks,
Jeff
On Mon, Jun 3, 2013 at 10:25 PM, YIP Wai Peng wrote:
> Hi Andrei,
>
> Have you tried th