Hi,

I have found that the problem is somewhere within the pool itself. I created
another pool, created an RBD within the new pool, and it worked fine.

Can anyone point me in the right direction on how to find the problem with the
pool, and why any RBD assigned to it fails to be formatted?
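
In case it helps, the kind of checks I have in mind are roughly these (a
sketch only; "pool" below stands for the suspect pool's name, as in the device
path, and "test-obj" is just a throwaway object name):

    # overall cluster state and any stuck placement groups
    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean

    # pool parameters: replica count, min_size, pg_num
    ceph osd pool get pool size
    ceph osd pool get pool min_size
    ceph osd pool get pool pg_num

    # bypass RBD entirely and write raw objects into the pool
    rados -p pool put test-obj /etc/hosts
    rados -p pool bench 10 write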

Thank you.


On 3 April 2014 13:51, Thorvald Hallvardsson <thorvald.hallvards...@gmail.com> wrote:

> Hi guys,
>
> I have got a problem. I created a new 1TB RBD device and mapped it on the
> box. I tried to create a file system on that device, but it failed:
>
> root@export01:~# mkfs.ext4 /dev/rbd/pool/server1
> mke2fs 1.42 (29-Nov-2011)
> Filesystem label=
> OS type: Linux
> Block size=4096 (log=2)
> Fragment size=4096 (log=2)
> Stride=1024 blocks, Stripe width=1024 blocks
> 64004096 inodes, 256000000 blocks
> 12800000 blocks (5.00%) reserved for the super user
> First data block=0
> Maximum filesystem blocks=4294967296
> 7813 block groups
> 32768 blocks per group, 32768 fragments per group
> 8192 inodes per group
> Superblock backups stored on blocks:
>         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
>         2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
>         78675968, 102400000, 214990848
>
> Allocating group tables: done
> Writing inode tables: done
> Creating journal (32768 blocks): done
> Writing superblocks and filesystem accounting information:    3/7813
> Warning, had trouble writing out superblocks.
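>
> To rule out the filesystem tools themselves, a direct write to the block
> device should show the same behaviour (a quick check; this overwrites data
> on the device):
>
> root@export01:~# dd if=/dev/zero of=/dev/rbd/pool/server1 bs=4M count=1 oflag=direct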
>
> I tried XFS and it also failed:
> root@export01:~# mkfs.xfs /dev/rbd/pool/server1
>
> log stripe unit (4194304 bytes) is too large (maximum is 256KiB)
> log stripe unit adjusted to 32KiB
> meta-data=/dev/rbd/pool/server1 isize=256    agcount=17, agsize=1599488 blks
>          =                       sectsz=512   attr=2, projid32bit=0
> data     =                       bsize=4096   blocks=25600000, imaxpct=25
>          =                       sunit=1024   swidth=1024 blks
> naming   =version 2              bsize=4096   ascii-ci=0
> log      =internal log           bsize=4096   blocks=12504, version=2
>          =                       sectsz=512   sunit=8 blks, lazy-count=1
> realtime =none                   extsz=4096   blocks=0, rtextents=0
> mkfs.xfs: pwrite64 failed: Input/output error
>
> No errors in any logs. Dmesg is shouting:
> [514937.022686] rbd: rbd22:   result -1 xferred 1000
> [514937.022742] rbd: rbd22: write 1000 at e600000000 (0)
> [514937.022744] rbd: rbd22:   result -1 xferred 1000
> [514937.034529] rbd: rbd22: write 1000 at f200000000 (0)
> [514937.034533] rbd: rbd22:   result -1 xferred 1000
> [514937.417367] rbd: rbd22: write 1000 at ca80000000 (0)
> [514937.417373] rbd: rbd22:   result -1 xferred 1000
> [514937.417460] rbd: rbd22: write 1000 at db00000000 (0)
> [514937.417463] rbd: rbd22:   result -1 xferred 1000
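>
> One way to narrow this down (a sketch; <block_name_prefix> is a placeholder
> for the prefix reported by rbd info) is to map one of the image's objects to
> its placement group and OSDs:
>
> root@export01:~# rbd info pool/server1
> root@export01:~# ceph osd map pool <block_name_prefix>.0000000000000000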
>
> The funny thing is that I tried changing the RBD size to something like 100GB
> and the result was the same. However, when I mapped that RBD on another box,
> file system creation was successful. When I mapped it back to the export01 box
> (with the file system already created) I got:
>
> root@export01:~# fdisk -l /dev/rbd/pool/server1
> root@export01:~# dmesg |tail
> [517031.610202] rbd: rbd22: read 4000 at 0 (0)
> [517031.610206] rbd: rbd22:   result -1 xferred 4000
> [517031.610208] end_request: I/O error, dev rbd22, sector 0
> [517031.610554] rbd: rbd22: read 1000 at 0 (0)
> [517031.610556] rbd: rbd22:   result -1 xferred 1000
> [517031.610559] end_request: I/O error, dev rbd22, sector 0
>
> But on the other box it is working absolutely fine.
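>
> Since the same image behaves differently on the two boxes, it might also be
> worth comparing the CephX caps of the key each box uses for rbd map, and the
> kernel versions (client.admin is only an example; use whatever identity
> export01 actually maps with):
>
> root@export01:~# ceph auth get client.admin
> root@export01:~# uname -r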
>
> Any ideas?
>
> Thank you.
> Regards.
>