the same block device. Because no server is aware of what the other servers
are doing, it’s essentially guaranteed that you’ll have one server partially
overwriting things another server just wrote, resulting in lost data and/or a
broken filesystem.
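A minimal sketch of one way to guard against that accidental double use, assuming the rbd/rados Python bindings are available; the pool name, image name, and lock cookie below are made up, and the advisory lock only helps if every client runs the same check before mapping and mounting:

# Hypothetical mount guard: take an advisory exclusive lock on the image so a
# second host running the same script fails loudly instead of silently
# scribbling over the filesystem.
import sys
import rados
import rbd

POOL = "rbd"            # assumed pool name
IMAGE = "shared-image"  # assumed image name
COOKIE = "mount-guard"  # arbitrary lock cookie

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    with rbd.Image(ioctx, IMAGE) as image:
        try:
            image.lock_exclusive(COOKIE)  # advisory; cooperating clients only
        except (rbd.ImageBusy, rbd.ImageExists):
            print("Image is already locked by another client:")
            print(image.list_lockers())
            sys.exit(1)
        print("Lock held; safe to 'rbd map' and mount on this host only.")
finally:
    cluster.shutdown()

The real fix for shared access is a cluster-aware filesystem (CephFS, or something like OCFS2 on the RBD) rather than ext4/XFS mounted from several hosts.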
-
Edward Huyer
School of Interactive Games and Media
On Apr 29, 2016 11:46 PM, Gregory Farnum wrote:
>
> On Friday, April 29, 2016, Edward Huyer <erh...@rit.edu> wrote:
This is more of a "why" than a "can I/should I" question.
The Ceph block device quickstart says (if I interpret it correctly) not to use
a physical machine as both a Ceph RBD client and a node for hosting OSDs or
other Ceph services.
Is this interpretation correct? If so, what is the reasoning?
Can anyone explain what's going on here? I have a pretty strong notion, but
I'm hoping someone can give a definite answer.
This behavior appears to be normal, so I'm not actually worried about it. It
just makes me and some coworkers go "huh, I wonder what causes that".
-
> >> Running ' ceph osd reweight-by-utilization' clears the issue up
> >> temporarily, but additional data inevitably causes certain OSDs to be
> >> overloaded again.
> >>
> > The only time I've ever seen this kind of uneven distribution is when
> > using too little (and using the default formula w
> > Ceph has a default pool size of 3. Is it a bad idea to run a pool of
> > size 2? What about size 2 min_size 1?
> >
> min_size 1 is sensible, but size 2 obviously won't protect you against dual
> disk failures.
> Which happen and happen with near certainty once your cluster gets big
> enough.
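For reference (not an endorsement of size 2), this is roughly how those settings are changed from the Python rados bindings; the pool name is assumed, and it is the same thing as "ceph osd pool set <pool> size 2" / "... min_size 1" from the CLI:

# Rough sketch: set replication size and min_size on a pool via the monitor
# command interface.
import json
import rados

POOL = "rbd"  # assumed pool name

def pool_set(cluster, pool, var, val):
    cmd = json.dumps({"prefix": "osd pool set", "pool": pool,
                      "var": var, "val": str(val)})
    ret, out, status = cluster.mon_command(cmd, b"")
    if ret != 0:
        raise RuntimeError(status)
    return status

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    pool_set(cluster, POOL, "size", 2)      # two replicas: overlapping 2-disk failures can lose data
    pool_set(cluster, POOL, "min_size", 1)  # keep serving I/O with a single surviving replica
finally:
    cluster.shutdown()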
I though
Ceph has a default pool size of 3. Is it a bad idea to run a pool of size 2?
What about size 2 min_size 1?
I have a cluster I'm moving data into (on RBDs) that is full enough with size 3
that I'm bumping into nearfull warnings. Part of that is because of the amount
of data, part is probably bec
quick-and-dirty
get-something-going-to-play-with tool and manual configuration is preferred for
"real" clusters? I've seen documentation suggesting it's not intended for use
in real clusters, but a lot of other documentation seems to assume it's the
default deploy tool.
-
> -----Original Message-----
> From: John Nielsen [mailto:li...@jnielsen.net]
> Sent: Monday, June 24, 2013 1:24 PM
> To: Edward Huyer
> Cc: ceph-us...@ceph.com
> Subject: Re: [ceph-users] Resizing filesystem on RBD without
> unmount/mount cycle
>
> On Jun 24, 201
Is there a way to clue the filesystem tools into recognizing that the RBD has
changed sizes without unmounting the filesystem?
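In case it helps anyone searching the archives later, a rough sketch of the online-grow path, assuming an ext4 filesystem on a kernel-mapped image (the pool, image, device path, and target size below are made up):

# Grow the RBD image, then grow the mounted filesystem in place; resize2fs can
# enlarge a mounted ext4 online, and xfs_growfs does the same for XFS.
import subprocess
import rados
import rbd

POOL, IMAGE = "rbd", "myimage"        # assumed names
DEVICE = "/dev/rbd/rbd/myimage"       # assumed path from 'rbd map'
NEW_SIZE = 200 * 1024 ** 3            # grow to 200 GiB

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    with rbd.Image(ioctx, IMAGE) as image:
        image.resize(NEW_SIZE)        # enlarge the image itself
finally:
    cluster.shutdown()

# Depending on kernel version, the mapped device may not pick up the new size
# until it is refreshed or re-mapped; check with 'blockdev --getsize64' first.
subprocess.run(["resize2fs", DEVICE], check=True)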
-
Edward Huyer
School of Interactive Games and Media
Golisano 70-2373
152 Lomb Memorial Drive
Rochester, NY 14623
585-475-6651
erh...@rit.edu
> Hi,
>
> I am thinking how to make ceph with 2 pools - fast and slow.
> Plan is to use SSDs and SATA (or SAS) disks in the same hosts and define
> pools that use fast and slow disks accordingly. Later it would be easy to
> grow either pool as needed.
>
> I found example for CRUSH map that does simila
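A loose sketch of the same idea on newer releases that have CRUSH device classes; this thread predates that feature (the older approach needed a hand-edited CRUSH map with separate SSD and HDD roots). The rule and pool names below are invented and the PG counts are placeholders:

# Two CRUSH rules over the same hosts, split by device class, plus one pool
# per rule. Growing either tier later is just adding disks of that class.
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "crush", "rule", "create-replicated", "fast-rule", "default", "host", "ssd")
ceph("osd", "crush", "rule", "create-replicated", "slow-rule", "default", "host", "hdd")

ceph("osd", "pool", "create", "fast", "128")
ceph("osd", "pool", "set", "fast", "crush_rule", "fast-rule")
ceph("osd", "pool", "create", "slow", "128")
ceph("osd", "pool", "set", "slow", "crush_rule", "slow-rule")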
> [ Please stay on the list. :) ]
Doh. Was trying to get Outlook to quote properly, and forgot to hit Reply-all.
:)
> >> The specifics of what data will migrate where will depend on how
> >> you've set up your CRUSH map, when you're updating the CRUSH
> >> locations, etc, but if you move an OS
r data from that back-end network as well; I
realize this is probably not ideal, but I'm hoping/thinking it will be good
enough.
-
Edward Huyer
School of Interactive Games and Media
Golisano 70-2373
152 Lomb Memorial Drive
Rochester, NY 14623
585-475-6651
erh...@rit.edu