> On 12 Nov 2015, at 20:49, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
> 
> Hello Jan!
> 
> First of all, thank you for your advice!
> 
> The filesystem was created using mkfs.xfs, after creating the RBD block 
> device and mapping it on the Ceph client. I didn't specify any parameters 
> when I created the filesystem; I just ran mkfs.xfs on the image name.
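
A sketch of that creation sequence, with placeholder pool/image names (on the 
hammer-era rbd CLI, --size is given in megabytes):

  rbd create --size 2048 rbd/myimage   # 2 GB image
  rbd map rbd/myimage                  # maps to e.g. /dev/rbd5
  mkfs.xfs /dev/rbd5                   # defaults, no extra parameters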
> 
> Regarding your point about the filesystem thinking the block device should 
> be larger than it is: I initially created that image as a 2 GB image, and 
> then resized it to be much bigger. Could this be the issue?

Sounds more than likely :-) How exactly did you grow it?

Jan
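
For reference, the usual way to grow a mapped RBD image and the XFS filesystem 
on it is roughly the following (a sketch; the pool/image name and mount point 
are placeholders, and on some older kernels the mapped device may not pick up 
the new size until it is unmapped and remapped):

  rbd resize --size 10240 rbd/myimage   # grow the image to 10 GB (size in MB)
  blockdev --getsize64 /dev/rbd5        # check the kernel sees the new size
  xfs_growfs /mnt/myimage               # grow XFS to fill the device (must be mounted)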

> 
> There are several RBD images mounted on one Ceph client, but only one of them 
> had issues. I have made a clone, and I will try running fsck on it.
> 
> Fortunately it's not important data, just testing data. If I don't succeed 
> in repairing it, I will trash and re-create it, of course.
> 
> Thank you, once again!
> 
> 
> 
> On Thu, Nov 12, 2015 at 9:28 PM, Jan Schermer <j...@schermer.cz> wrote:
> How did you create filesystems and/or partitions on this RBD block device?
> The obvious causes would be
> 1) you partitioned it, and the partition on which you ran mkfs points (or 
> pointed, at mkfs time) outside the block device (this happens if you, for 
> example, automate this and confuse sectors with cylinders, or if you copied 
> the partition table with dd or from some image)
> or
> 2) mkfs created the filesystem with pointers outside of the block device for 
> some other reason (bug?)
> or
> 3) this RBD device is a snapshot that got corrupted (or wasn't snapshotted 
> in a crash-consistent state and you got "lucky"), and some reference points 
> to a nonsensical block number (fsck could fix this, but I wouldn't trust the 
> data integrity anymore)
> 
> Basically the filesystem thinks the block device should be larger than it is 
> and tries to reach beyond its end.
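
A quick way to confirm such a mismatch is to compare the size the kernel 
reports for the device with the size the filesystem believes it has (a sketch; 
/dev/rbd5 comes from the logs below, the mount point is a placeholder):

  blockdev --getsize64 /dev/rbd5      # device size in bytes, per the kernel
  xfs_info /mnt/rbd5 | grep '^data'   # filesystem size = blocks * bsize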
> 
> Is this happening on just one machine or RBD image, or are there more?
> 
> I'd first create a snapshot and then try running fsck on it; it should 
> hopefully tell you whether there's a problem in the setup or a corruption.
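
A sketch of that procedure with placeholder names (rbd requires a snapshot to 
be protected before it can be cloned; xfs_repair -n only reports problems and 
changes nothing):

  rbd snap create rbd/myimage@before-fsck
  rbd snap protect rbd/myimage@before-fsck
  rbd clone rbd/myimage@before-fsck rbd/myimage-fsck
  rbd map rbd/myimage-fsck     # maps to e.g. /dev/rbd6
  xfs_repair -n /dev/rbd6      # read-only check of the (unmounted) clone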
> 
> If it's not important data and it's just one instance of this problem, then 
> I'd just trash and recreate it.
> 
> Jan
> 
>> On 12 Nov 2015, at 20:14, Bogdan SOLGA <bogdan.so...@gmail.com> wrote:
>> 
>> Hello everyone!
>> 
>> We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and 
>> today I noticed a lot of 'attempt to access beyond end of device' messages 
>> in the /var/log/syslog file. They are related to a mounted RBD image, and 
>> have the following format:
>> 
>> Nov 12 21:06:44 ceph-client-01 kernel: [438507.952532] attempt to access 
>> beyond end of device
>> Nov 12 21:06:44 ceph-client-01 kernel: [438507.952534] rbd5: rw=33, 
>> want=6193176, limit=4194304
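
Those numbers fit a 2 GB device exactly, assuming want and limit are expressed 
in 512-byte sectors:

  echo $((4194304 * 512))   # 2147483648 bytes = 2 GiB, the device limit
  echo $((6193176 * 512))   # 3170906112 bytes, ~2.95 GiB, past the device end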
>> 
>> After restarting that Ceph client, I see a lot of 'metadata I/O error' 
>> messages in the boot log:
>> 
>> XFS (rbd5): metadata I/O error: block 0x46e001 ("xfs_buf_iodone_callbacks") 
>> error 5 numblks 1
>> 
>> Any idea why these messages are shown? The health of the cluster shows as 
>> OK, and I can access that block device without (apparent) issues...
>> 
>> Thank you!
>> 
>> Regards,
>> Bogdan
