2017-11-09 17:02 GMT+01:00 Alwin Antreich <a.antre...@proxmox.com>:

> Hi Rudi,
> On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> > Hi,
> >
> > Can someone please tell me what the correct procedure is to upgrade a
> CEPH
> > journal?
> >
> > I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
> >
> > For a journal I have a 400GB Intel SSD drive and it seems CEPH created a
> > 1GB journal:
> >
> > Disk /dev/sdf: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> > /dev/sdf1     2048 2099199 2097152   1G unknown
> > /dev/sdf2  2099200 4196351 2097152   1G unknown
> >
> > root@virt2:~# fdisk -l | grep sde
> > Disk /dev/sde: 372.6 GiB, 400088457216 bytes, 781422768 sectors
> > /dev/sde1   2048 2099199 2097152   1G unknown
> >
> >
> > /dev/sda :
> >  /dev/sda1 ceph data, active, cluster ceph, osd.3, block /dev/sda2,
> > block.db /dev/sde1
> >  /dev/sda2 ceph block, for /dev/sda1
> > /dev/sdb :
> >  /dev/sdb1 ceph data, active, cluster ceph, osd.4, block /dev/sdb2,
> > block.db /dev/sdf1
> >  /dev/sdb2 ceph block, for /dev/sdb1
> > /dev/sdc :
> >  /dev/sdc1 ceph data, active, cluster ceph, osd.5, block /dev/sdc2,
> > block.db /dev/sdf2
> >  /dev/sdc2 ceph block, for /dev/sdc1
> > /dev/sdd :
> >  /dev/sdd1 other, xfs, mounted on /data/brick1
> >  /dev/sdd2 other, xfs, mounted on /data/brick2
> > /dev/sde :
> >  /dev/sde1 ceph block.db, for /dev/sda1
> > /dev/sdf :
> >  /dev/sdf1 ceph block.db, for /dev/sdb1
> >  /dev/sdf2 ceph block.db, for /dev/sdc1
> > /dev/sdg :
> >
> >
> > Resizing the partition through fdisk didn't work. What is the correct
> > procedure, please?
> >
> > Kind Regards
> > Rudi Ahlers
> > Website: http://www.rudiahlers.co.za
>
> For Bluestore OSDs you need to set bluestore_block_size to get a bigger
> partition for the DB and bluestore_block_wal_size for the WAL.
>
>
I think you mean the bluestore_block_db_size parameter instead of
bluestore_block_size.
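For example, a minimal ceph.conf snippet (the sizes below are only
illustrative and given in bytes; choose values that fit your 400GB SSD):

[global]
bluestore_block_db_size = 32212254720    # 30 GiB for the DB partition
bluestore_block_wal_size = 2147483648    # 2 GiB for the WAL, if split out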



> ceph-disk prepare --bluestore \
> --block.db /dev/sde --block.wal /dev/sde /dev/sdX
>
>
Furthermore, using the same drive for db and wal is not necessary: when no
separate block.wal device is given, the WAL is placed on the block.db device
automatically. In this case, specify only a block.db device and the WAL will
go there too.
Only if you have an even faster device than the Intel SSD (such as an NVMe
drive) is it worth specifying it separately as block.wal, as in the example
below.
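For instance (the NVMe device name here is only an example):

ceph-disk prepare --bluestore --block.db /dev/sde \
--block.wal /dev/nvme0n1 /dev/sdX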

So, after setting bluestore_block_db_size in ceph.conf, issue:

ceph-disk prepare --bluestore --block.db /dev/sde /dev/sdX
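Afterwards you can check that the DB partition got the requested size, e.g.:

ceph-disk list /dev/sde
lsblk -o NAME,SIZE,PARTLABEL /dev/sde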

Kind regards,
Caspar

> This gives you in total four partitions on two different disks.
>
> I think it will be less hassle to remove the OSD and prepare it again.
>
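Agreed, that is probably the cleanest route. A rough sketch only (the OSD id
and device names are taken from the listing above; double-check them first,
and the old 1G block.db partition on the SSD has to be deleted by hand too):

ceph osd out 3                            # wait for rebalancing to finish
systemctl stop ceph-osd@3
umount /var/lib/ceph/osd/ceph-3
ceph osd purge 3 --yes-i-really-mean-it   # Luminous and later
ceph-disk zap /dev/sda                    # wipes the data disk
ceph-disk prepare --bluestore --block.db /dev/sde /dev/sda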
> --
> Cheers,
> Alwin
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
