": 0,
"num_scrub_errors": 0,
"num_objects_recovered": 0,
"num_bytes_recovered": 0,
"num_keys_recovered": 0},
"stat_cat_sum": {},
"up": [
53,
mount of
data for the pg, remained up during the primary's downtime and should
have the state to become the primary for the acting set.
Thanks for listening.
~jpr
On 03/25/2016 11:57 AM, John-Paul Robinson wrote:
> Hi Folks,
>
> One last dip into my old bobtail cluster. (new h
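For reference, the stats block and the "up" list above are pieces of pg query output; a minimal way to pull the same information, with the pg id 3.ea borrowed from later in the thread:

$ ceph pg 3.ea query | less        # peering state, up/acting sets, and the per-pg stat_sum
$ ceph pg dump_stuck inactive      # stuck pgs with their up/acting osds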
Hi Folks,
One last dip into my old bobtail cluster. (new hardware is on order)
I have three pg in an incomplete state. The cluster was previously
stable but with a health warn state due to a few near full osds. I
started resizing drives on one host to expand space after taking the
osds that se
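A quick way to gauge how close the osds are to full before and after such a resize (general CLI usage, not specific to this cluster):

$ rados df                              # per-pool object and space usage
$ ceph health detail | grep -i full     # lists which osds are near full, and how full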
Hi,
When upgrading to the next release, is it necessary to first upgrade to
the most recent point release of the prior release or can one upgrade
from the initial release of the named version? The release notes don't
appear to indicate it is necessary
(http://docs.ceph.com/docs/master/release-not
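A low-risk sanity check before any upgrade, as a general sketch, is to confirm what each daemon is actually running and then upgrade monitors before osds:

$ ceph --version              # version of the locally installed binaries
$ ceph tell osd.0 version     # version a running daemon reports (repeat per daemon)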
eads and writes. Any access to
any backing RBD store from the NFS client hangs.
~jpr
On 10/22/2015 06:42 PM, Ryan Tokarek wrote:
>> On Oct 22, 2015, at 3:57 PM, John-Paul Robinson wrote:
>>
>> Hi,
>>
>> Has anyone else experienced a problem with RBD-to-NFS gatewa
On 10/22/2015 04:03 PM, Wido den Hollander wrote:
> On 10/22/2015 10:57 PM, John-Paul Robinson wrote:
>> Hi,
>>
>> Has anyone else experienced a problem with RBD-to-NFS gateways blocking
>> nfsd server requests when their ceph cluster has a placement group that
>>
Hi,
Has anyone else experienced a problem with RBD-to-NFS gateways blocking
nfsd server requests when their ceph cluster has a placement group that
is not servicing I/O for some reason, eg. too few replicas or an osd
with slow request warnings?
We have an RBD-NFS gateway that stops responding to
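A first step when a gateway hangs like this is to confirm whether the cluster itself is refusing or delaying I/O; the grep pattern below is only a rough filter:

$ ceph -s                                                      # overall health, degraded/blocked pgs, recovery activity
$ ceph health detail | egrep -i 'slow|stuck|incomplete|down'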
kfilling state. (At least, assuming the number of replicas is the
> only problem.). Based on your description of the problem I think this
> is the state you're in, and decreasing min_size is the solution.
> *shrug*
> You could also try and do something like extracting the PG from
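The min_size change referred to here is a per-pool setting; a minimal sketch, assuming the affected pool is the default rbd pool:

$ ceph osd pool get rbd min_size
$ ceph osd pool set rbd min_size 1    # allow I/O with a single complete replica while recovery runs
$ ceph osd pool set rbd min_size 2    # restore the old value once the pg is healthy again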
Yes. That's the intention. I was fixing the osd size to ensure the
cluster was in health ok for the upgrades (instead of multiple osds in
near full).
Thanks again for all the insight. Very helpful.
~jpr
On 10/21/2015 03:01 PM, Gregory Farnum wrote:
> (which it sounds like you're on — inciden
heavy handed though, given that only this
one pg is affected.
Thanks for any follow up.
~jpr
On 10/21/2015 01:21 PM, Gregory Farnum wrote:
> On Tue, Oct 20, 2015 at 7:22 AM, John-Paul Robinson wrote:
>> Hi folks
>>
>> I've been rebuilding drives in my cluster to add s
Hi folks
I've been rebuilding drives in my cluster to add space. This has gone
well so far.
After the last batch of rebuilds, I'm left with one placement group in
an incomplete state.
[sudo] password for jpr:
HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean
pg 3.ea is stu
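Two commands that usually show why a pg such as 3.ea is stuck (general usage):

$ ceph pg map 3.ea                                   # current up and acting osd sets
$ ceph pg 3.ea query | grep -A 20 recovery_state     # peering history and what the pg is waiting for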
et of placement groups to be mapped onto it to achieve the
rebalance?
Thanks,
~jpr
On 09/16/2015 08:37 AM, Christian Balzer wrote:
> Hello,
>
> On Wed, 16 Sep 2015 07:21:26 -0500 John-Paul Robinson wrote:
>
>> > The move journal, partition resize, grow file system approac
n the size of the
disk, save for the journal.
Sorry for any confusion.
~jpr
> On Sep 15, 2015, at 6:21 PM, John-Paul Robinson wrote:
>
> I'm working to correct a partitioning error from when our cluster was
> first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2T
ent any undue stress to the cluster?
I'd prefer to use the second option if I can because I'm likely to
repeat this in the near future in order to add encryption to these disks.
~jpr
On 09/15/2015 06:44 PM, Lionel Bouton wrote:
> Le 16/09/2015 01:21, John-Paul Robinson a écrit :
>>
Hi,
I'm working to correct a partitioning error from when our cluster was
first installed (ceph 0.56.4, ubuntu 12.04). This left us with 2TB
partitions for our OSDs, instead of the 2.8TB actually available on
disk, a 29% space hit. (The error was due to a gdisk bug that
mis-computed the end of t
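A sketch of the in-place fix discussed in this thread (flush the journal, resize the partition, grow the file system), with the osd id, device, and mount point all hypothetical:

$ ceph osd set noout                          # avoid rebalancing while the osd is briefly down
$ sudo service ceph stop osd.12
$ sudo ceph-osd -i 12 --flush-journal
  (repartition the disk with gdisk/sgdisk so the data partition spans the full 2.8TB)
$ sudo xfs_growfs /var/lib/ceph/osd/ceph-12
$ sudo ceph-osd -i 12 --mkjournal             # only if the journal was moved or recreated
$ sudo service ceph start osd.12
$ ceph osd unset noout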
On 05/28/2015 03:18 PM, John-Paul Robinson wrote:
> To follow up on the original post,
>
> Further digging indicates this is a problem with RBD image access and
> is not related to NFS-RBD interaction as initially suspected. The
> nfsd is simply hanging as a result of a hun
To follow up on the original post,
Further digging indicates this is a problem with RBD image access and is
not related to NFS-RBD interaction as initially suspected. The nfsd is
simply hanging as a result of a hung request to the XFS file system
mounted on our RBD-NFS gateway. This hung XFS c
We've had an NFS gateway serving up RBD images successfully for over a year.
Ubuntu 12.04 and ceph .73 iirc.
In the past couple of weeks we have developed a problem where the nfs clients
hang while accessing exported rbd containers.
We see errors on the server about nfsd hanging for 120sec
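On the gateway itself, the usual way to confirm it is a wedged kernel client rather than nfsd misbehaving on its own:

$ dmesg | grep -i 'blocked for more than 120 seconds'
$ ps axo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'     # tasks stuck in uninterruptible sleep (D state)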
We have an NFS to RBD gateway with a large number of smaller RBDs. In
our use case we are allowing users to request their own RBD containers
that are then served up via NFS into a mixed cluster of clients. Our
gateway is quite beefy, probably more than it needs to be, 2x8 core
cpus and 96GB ra
So in the meantime, are there any common work-arounds?
I'm assuming that monitoring the image-used/image-size ratio and, if it's greater
than some tolerance, creating a new image and moving the file system content over
is an effective, if crude, approach. I'm not clear on how to measure the
amount of storage an image
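One commonly used way to measure how much of an image is actually allocated (pool and image names hypothetical) is to sum the extents reported by rbd diff:

$ rbd info rbd/myimage                                            # provisioned size
$ rbd diff rbd/myimage | awk '{ used += $2 } END { print used/1024/1024 " MB used" }'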
n the ceph pool stay allocated to this application (the file
system) in that case?
Thanks for any additional insights.
~jpr
On 04/15/2014 04:16 PM, John-Paul Robinson wrote:
> Thanks for the insight.
>
> Based on that I found the fstrim command for xfs file systems.
>
> http:/
Thanks for the insight.
Based on that I found the fstrim command for xfs file systems.
http://xfs.org/index.php/FITRIM/discard
Anyone had experience using this command with RBD image backends?
~jpr
On 04/15/2014 02:00 PM, Kyle Bader wrote:
>> I'm assuming Ceph/RBD doesn't have any direct
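A hedged example of running it, assuming the xfs-on-rbd file system is mounted at /srv/rbd0; note that discard has to be supported all the way down the stack (the kernel rbd client only gained discard support in relatively recent kernels) for the freed space to return to the pool:

$ sudo fstrim -v /srv/rbd0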
Hi,
If I have a 1GB RBD image and format it with, say, xfs or ext4, then I
basically have a thin-provisioned disk. It takes up only as much space
from the Ceph pool as is needed to hold the data structure of the empty
file system.
If I add files to my file systems and then remove them, how does Cep
I've seen this "fast everything except sequential reads" asymmetry in my
own simple dd tests on RBD images but haven't really understood the cause.
Could you clarify what's going on that would cause that kind of
asymmetry? I've been assuming that once I get around to turning
on/tuning read caching
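One knob frequently suggested for slow sequential reads is the block device read-ahead on the client; a quick test sketch, with /dev/rbd0 as a stand-in device:

$ sudo blockdev --getra /dev/rbd0        # current read-ahead, in 512-byte sectors
$ sudo blockdev --setra 4096 /dev/rbd0   # try a larger read-ahead for buffered sequential reads
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=/dev/rbd0 of=/dev/null bs=4M count=256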
e:
>
>> On Thu, 19 Dec 2013, John-Paul Robinson wrote:
>> What impact does rebooting nodes in a ceph cluster have on the health of
>> the ceph cluster? Can it trigger rebalancing activities that then have
>> to be undone once the node comes back up?
>>
>> I
What impact does rebooting nodes in a ceph cluster have on the health of
the ceph cluster? Can it trigger rebalancing activities that then have
to be undone once the node comes back up?
I have a 4-node ceph cluster; each node has 11 osds. There is a single
pool with redundant storage.
If it take
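The usual way to reboot a node without triggering rebalancing is to set the noout flag first, so the node's osds are marked down but never marked out:

$ ceph osd set noout
  (reboot the node)
$ ceph osd unset noout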
wrote:
> On Thu, Nov 21, 2013 at 10:13 AM, John-Paul Robinson wrote:
>> Is this statement accurate?
>>
>> As I understand DRBD, you can replicate online block devices reliably,
>> but with Ceph the replication for RBD images requires that the file
>> system b
Is this statement accurate?
As I understand DRBD, you can replicate online block devices reliably,
but with Ceph the replication for RBD images requires that the file
system be offline.
Thanks for the clarification,
~jpr
On 11/08/2013 03:46 PM, Gregory Farnum wrote:
>> Does Ceph provides the d
We're actually pursuing a similar configuration where it's easily
conceivable that we would have 230+ block devices that we want to mount
on a server.
We are moving to a configuration where each user in our cluster has a
distinct ceph block device for their storage. We're mapping them on our
nas
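As a sketch of what mapping that many images can look like on the gateway, with a hypothetical pool name of userhomes:

$ for img in $(rbd ls userhomes); do sudo rbd map userhomes/$img; done
$ sudo rbd showmapped        # image-to-/dev/rbdN mappings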
What is the take on such a configuration?
Is it worth the effort of tracking "rebalancing" at two layers, the RAID
mirror and possibly Ceph if the pool has a redundancy policy? Or is it
better to just let ceph rebalance itself when you lose a non-mirrored disk?
If following the "raid mirror" approa
Hi,
We've been working with Ceph 0.56 on Ubuntu 12.04 and are able to
create, map, and mount ceph block devices via the RBD kernel module. We
have a CentOS 6.4 box on which we would like to do the same.
http://ceph.com/docs/next/install/os-recommendations/
OS recommendations state that we should
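A quick check that illustrates the el6 issue; the stock CentOS 6.4 kernel (2.6.32) does not ship rbd.ko, so a newer kernel (e.g. from elrepo) is needed for the kernel client:

$ uname -r
$ sudo modprobe rbd && lsmod | grep rbd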
Thanks.
After fixing the issue with the types entry in lvm.conf, I discovered
the -vvv option which helped me detect the second cause for the
"ignored" error: pvcreate saw a partition signature and skipped the device.
The -vvv is a good flag. :)
~jpr
On 09/25/2013 01:52 AM, Wido den Hollander
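For that second cause, the usual fix is to clear the stale signature before retrying (device name hypothetical):

$ sudo pvcreate -vvv /dev/rbd1     # the verbose output shows why the device was skipped
$ sudo wipefs /dev/rbd1            # list existing partition/filesystem signatures
$ sudo wipefs -a /dev/rbd1         # erase them (or zero the first MB with dd)
$ sudo pvcreate /dev/rbd1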
ote:
> You need to add a line to /etc/lvm/lvm.conf:
>
> types = [ "rbd", 1024 ]
>
> It should be in the "devices" section of the file.
>
> On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson wrote:
>> Hi,
>>
>> I'm exploring a config
Hi,
I'm exploring a configuration with multiple Ceph block devices used with
LVM. The goal is to provide a way to grow and shrink my file systems
while they are on line.
I've created three block devices:
$ sudo ./ceph-ls | grep home
jpr-home-lvm-p01: 102400 MB
jpr-home-lvm-p02: 102400 MB
jpr-h
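A sketch of the rest of that setup, with hypothetical device and volume names and assuming the three images map to /dev/rbd1-3 (note that xfs can only grow, so shrinking means moving data to a smaller LV instead):

$ sudo pvcreate /dev/rbd1 /dev/rbd2 /dev/rbd3
$ sudo vgcreate home_vg /dev/rbd1 /dev/rbd2 /dev/rbd3
$ sudo lvcreate -n home -l 100%FREE home_vg
$ sudo mkfs.xfs /dev/home_vg/home
  (to grow later: create and map another rbd image, then)
$ sudo pvcreate /dev/rbd4
$ sudo vgextend home_vg /dev/rbd4
$ sudo lvextend -l +100%FREE /dev/home_vg/home
$ sudo xfs_growfs /srv/home        # xfs_growfs takes the mount point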