> > Bit 14 set, Bit 18 set, Bit 23 set, Bit 25 set, Bit 27 set, Bit 30 set,
> > Bit 35 set, Bit 36 set, Bit 37 set, Bit 39 set, Bit 41 set, Bit 42 set,
> > Bit 48 set, Bit 57 set, Bit 58 set, Bit 59 set
> >
> > So all it's done is *add* Bit 4, which is DEFINE_CEPH_FEATURE(
step chooseleaf firstn 0 type host
step emit
}
# end crush map
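For reference, the decompiled CRUSH map excerpt above (and the "did you dump out the crushmap?" exchange below) is normally produced by fetching the binary map from the monitors and decompiling it. A minimal sketch of that round trip in Python, assuming the ceph and crushtool binaries are on PATH and an admin keyring is available; the temporary-file handling here is purely illustrative:

import subprocess
import tempfile

def dump_crushmap_text() -> str:
    """Return the decompiled (plain-text) CRUSH map of the running cluster."""
    with tempfile.NamedTemporaryFile(suffix=".bin") as compiled, \
         tempfile.NamedTemporaryFile(suffix=".txt") as decompiled:
        # Fetch the compiled CRUSH map from the monitors.
        subprocess.run(["ceph", "osd", "getcrushmap", "-o", compiled.name], check=True)
        # Decompile it into the human-readable form quoted above.
        subprocess.run(["crushtool", "-d", compiled.name, "-o", decompiled.name], check=True)
        with open(decompiled.name) as f:
            return f.read()

if __name__ == "__main__":
    print(dump_crushmap_text())

Going the other way, an edited text map is compiled with "crushtool -c" and loaded back with "ceph osd setcrushmap -i", which is how a change like the one discussed here ends up reflected in the running map.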
On Thu, Feb 23, 2017 at 7:37 PM, Brad Hubbard wrote:
> Did you dump out the crushmap and look?
>
> On Fri, Feb 24, 2017 at 1:36 PM, Schlacta, Christ wrote:
>> Insofar as I can tell, yes. Everything indicates that the
Insofar as I can tell, yes. Everything indicates that they are in effect.
On Thu, Feb 23, 2017 at 7:14 PM, Brad Hubbard wrote:
> Is your change reflected in the current crushmap?
>
> On Fri, Feb 24, 2017 at 12:07 PM, Schlacta, Christ
> wrote:
>> -- Forwarded message
---------- Forwarded message ----------
From: Schlacta, Christ
Date: Thu, Feb 23, 2017 at 6:06 PM
Subject: Re: [ceph-users] Upgrade Woes on suse leap with OBS ceph.
To: Brad Hubbard
So setting the above to 0 by sheer brute force didn't work, so it's not a
crush or OSD problem. Also,
---------- Forwarded message ----------
From: Schlacta, Christ
Date: Thu, Feb 23, 2017 at 5:56 PM
Subject: Re: [ceph-users] Upgrade Woes on suse leap with OBS ceph.
To: Brad Hubbard
They're from the SUSE Leap Ceph team. They maintain Ceph and build
up-to-date versions for SUSE Leap. W
    "require_feature_tunables": 1,
    "require_feature_tunables2": 1,
    "has_v2_rules": 0,
    "require_feature_tunables3": 1,
    "has_v3_rules": 0,
    "has_v4_buckets": 0,
    "require_feature_tunables5": 1,
    "has_v5_rules": 0
}
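For context, the flags quoted above are CRUSH tunables. A minimal sketch of pulling and inspecting them programmatically, assuming the ceph CLI is on PATH and that this release accepts "ceph osd crush show-tunables -f json" (the same data should also appear under the "tunables" key of "ceph osd crush dump"):

import json
import subprocess

def crush_tunables() -> dict:
    """Return the cluster's CRUSH tunables as a dict."""
    out = subprocess.run(
        ["ceph", "osd", "crush", "show-tunables", "-f", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    for key, value in sorted(crush_tunables().items()):
        # Flags named require_feature_tunables* mean connecting clients must
        # understand the corresponding tunables profile.
        if key.startswith("require_feature_tunables") and value:
            print(f"{key} is set")

Each require_feature_tunables* flag that is set raises the bar for what connecting clients (including the kernel client that logs the libceph error quoted below) must support.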
On Th
So I updated SUSE Leap, and now I'm getting the following error from
Ceph. I know I need to disable some features, but I'm not sure what
they are. It looks like bits 14, 57, and 59, but I can't figure out what
they correspond to, nor, therefore, how to turn them off.
libceph: mon0 10.0.0.67:6789 feature
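That truncated libceph line normally ends with something like "feature set mismatch, my X < server's Y, missing <hex>". Decoding which bits are set in the missing mask is plain bit arithmetic; a minimal sketch, where the hex value is a made-up placeholder with bits 14, 57, and 59 set (the real mask is cut off in the quote):

def set_bits(mask: int):
    """Return the bit positions that are set in mask."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

if __name__ == "__main__":
    missing = 0x0A00000000004000  # placeholder mask, not the poster's actual value
    print(set_bits(missing))      # -> [14, 57, 59]

Mapping the bit numbers to feature names is then a matter of looking them up in the DEFINE_CEPH_FEATURE list in Ceph's ceph_features.h (the same macros referenced earlier in the thread).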
>>
>> -Paul
>>
>> *Ceph on native Infiniband may be available some day, but it seems
>> impractical with the current releases. IP-over-IB is also known to work.
>>
>>
>> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ wrote:
>>
>> Is it po
the current releases. IP-over-IB is also known to work.
>
>
> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ wrote:
>
> Is it possible? Can I use fibre channel to interconnect my ceph OSDs?
> Intuition tells me it should be possible, yet experience (Mostly with
> fibre channe
Is it possible? Can I use fibre channel to interconnect my ceph OSDs?
Intuition tells me it should be possible, yet experience (Mostly with
fibre channel) tells me no. I don't know enough about how ceph works
to know for sure. All my googling returns results about using ceph as
a BACKEND for ex
What do you use as an interconnect between your osds, and your clients?
On Mar 20, 2016 11:39 AM, "Mike Almateia" wrote:
> 18-Mar-16 21:15, Schlacta, Christ wrote:
>
>> Insofar as I've been able to tell, both BTRFS and ZFS provide similar
>> capabilities back t
On Mar 18, 2016 4:31 PM, "Lionel Bouton"
>
> Will bluestore provide the same protection against bitrot as BTRFS?
> I.e., with BTRFS the deep-scrubs detect inconsistencies *and* the OSD(s)
> with invalid data get IO errors when trying to read corrupted data and
> as such can't be used as the source
I posted about this a while ago, and someone else has since inquired,
but I am seriously wanting to know if anybody has figured out how to
boot from an RBD device yet using iPXE or similar. Last I read,
loading the kernel and initrd from object storage would be
theoretically easy, and would only re
Insofar as I've been able to tell, both BTRFS and ZFS provide similar
capabilities back to CEPH, and both are sufficiently stable for the
basic CEPH use case (Single disk -> single mount point), so the
question becomes this: Which actually provides better performance?
Which is the more highly opti
If you can swing 2U chassis and 2.5" drives instead, you can trivially get
between 15 and 24 drives across the front and rear of a beautiful hot-swap
chassis. There are numerous makes and models available, from custom builds
down to used units on eBay. Worth a peek.
On Thu, Feb 11, 2016 at 2:33
In just the last week I've seen at least two failures as a result of
replication factor two. I would highly suggest that for any critical data
you choose an rf of at least three.
With your stated capacity, you're looking at a mere 16TB with rf3. You'll
need to look into slightly more capacity or w
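The arithmetic behind that number, as a quick sketch (the 48 TB raw figure is inferred from "16TB with rf3", not stated in the excerpt): with N-way replication, usable capacity is roughly raw capacity divided by the replication factor.

def usable_tb(raw_tb: float, replication_factor: int) -> float:
    # Ignores journal/filesystem overhead and the need to keep headroom free.
    return raw_tb / replication_factor

if __name__ == "__main__":
    print(usable_tb(48, 3))  # -> 16.0 TB usable at rf3
    print(usable_tb(48, 2))  # -> 24.0 TB usable at rf2, with weaker redundancy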
ceph and rbd. I'll be posting a blog post
about it later, but for now, I just thought I'd share the facts in case
anyone here cares besides me.
On Wed, Jan 29, 2014 at 1:52 AM, zorg wrote:
> Hello
> we use libvirt from wheezy-backports
>
>
>
> Le 29/01/201
I can only comment on the log. I would recommend using three logs (6 disks
as mirror pairs) per system, and adding a crush map hierarchy level for cache
drives so that any given PG will never mirror twice to the same log. That'll
also reduce your failure domain.
On Jan 29, 2014 4:26 PM, "Geraint Jones"
On Jan 29, 2014 10:44 AM, "Dimitri Maziuk" wrote:
>
> On 01/29/2014 12:40 PM, Schlacta, Christ wrote:
> > Why can't you compile it yourself using rhel's equivalent of dkms?
>
> Because of
>
> >>> fully supported RedHat
> ^^^
of
libvirt to manage your virtual environments?
On Tue, Jan 28, 2014 at 1:30 AM, zorg wrote:
> Hello
> we have a public repository with qemu-kvm wheezy-backports build with rbd
>
> deb http://deb.probesys.com/debian/ wheezy-backports main
>
> hope it can help
>
>
> Le
I'm pasting this in here piecemeal, due to a misconfiguration of the list.
I'm posting this back to the original thread in the hopes of the
conversation being continued. I apologize in advance for the poor
formatting below.
On Mon, Jan 27, 2014 at 12:50 PM, Schlacta, Christ wrote:
Is the list misconfigured? Clicking "Reply" in my mail client on nearly
EVERY list sends a reply to the list, but for some reason this is one of
the exceedingly few lists where that doesn't work as expected. Anyway, if
someone could fix this, it
I'll have to look at the iscsi and zfs initramfs hooks, and see if I can
model it most concisely on what they currently do. Between the two, I
should be able to hack something up.
On Mon, Jan 27, 2014 at 9:46 PM, Stuart Longland wrote:
> On 28/01/14 15:29, Schlacta, Christ wrote:
Has anyone done the work to boot a machine (physical or virtual) from a
CEPH filesystem or RBD?
I'm very interested in this, as I have several systems that don't need a
LOT of disk throughput and have PLENTY of network bandwidth unused, making
them primary candidates for such a setup. I thought a
So on Debian wheezy, qemu is built without ceph/rbd support. I don't know
about everyone else, but I use backported qemu. Does anyone provide a
trusted, or official, build of qemu from Debian backports that supports
ceph/rbd?
So I just have a few more questions that are coming to mind. Firstly, I
have OSDs whose underlying filesystems can be... dun dun dun... resized!
If I choose to expand my allocation to Ceph, I can in theory do so by
expanding the quota on the OSDs. (I'm using ZFS.) Similarly, if the OSD is
und
What guarantees does Ceph place on data integrity? ZFS uses a Merkle tree
to guarantee the integrity of all data and metadata on disk and will
ultimately refuse to return "duff" data to an end-user consumer.
I know ceph provides some integrity mechanisms and has a scrub feature.
Does it provide fu
There are some seldom-used files (namely install ISOs) that I want to throw
in ceph to keep them widely available, but throughput and response times
aren't critical for them, nor is redundancy. Is it possible to throw them
into OSDs on cheap, bulk offline storage, and more importantly, will idle
O
Can Ceph handle a configuration where a cluster node is not "always on", but
rather gets booted periodically to sync to the cluster, and is also
sometimes up full-time as demand requires? I ask because I want to put an
OSD on each of my cluster nodes, but some cluster nodes only come up as
demand d