On 9-6-2017 16:20, Miroslav Lachman wrote:
> Willem Jan Withagen wrote on 2017/06/09 15:48:
>> On 9-6-2017 11:23, Steven Hartland wrote:
>>> You could do effectively this by using dedicated zfs filesystems per
>>> jail
>>
>> Hi Steven,
>>
>> That is how I'm going to do it, when nothing else works.
>> But then I don't get to test the part of building the ceph-cluster from
>> raw disk...
>>
>> I was more thinking along the lines of tinkering with the devd.conf or
>> something. And would appreciate opinions on how to (not) do it.
>
> I totally skipped devd.conf in my mind in previous reply. So maybe you
> can really use devd.conf to allow access to /dev/adaX devices or you can
> use ZFS zvol if you have big pool and need some smaller devices to test
> with.
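For the "allow access to /dev/adaX" route I assume that really means a devfs ruleset rather than devd.conf. A minimal sketch of what I think that would look like; the ruleset number, jail name and device glob are just examples:

    # /etc/devfs.rules
    [devfsrules_jail_ceph=100]
    add include $devfsrules_hide_all
    add include $devfsrules_unhide_basic
    add include $devfsrules_unhide_login
    # expose the SATA disks inside the jail
    add path 'ada*' unhide

    # /etc/jail.conf, in the ceph-1 section (assuming mount.devfs is set)
    devfs_ruleset = 100;

And the zvol route would be something like 'zfs create -V 10G tank/ceph-osd0' (pool name invented), which gives a /dev/zvol/tank/ceph-osd0 node that can be unhidden the same way.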
But what I really want is for the jail to look as much like a normal system as possible and then run the ceph tools on it, and those tools expect to see /dev/{disk}....

Now I have found /sbin/devfs, which allows adding/removing devices in an already mounted devfs. So with 'rule add type disk unhide' I can see the disks (exact commands in the P.S. below), and gpart can then list the partitions. But any other command is met with an unwilling system:

root@ceph-1:/ # gpart delete -i 1 ada0
gpart: No such file or directory

So there is still some protection in place in the jail.... However, dd-ing to the device does overwrite things: after 'dd if=/dev/zero of=/dev/ada0', gpart reports the partition table as corrupt. But I don't see any sysctl to toggle that protection on or off.

--WjW
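PS: For reference, the devfs commands I'm playing with look roughly like this, run on the host against the jail's already mounted devfs (the jail path is just an example):

    # add an unhide rule for disk devices to the ruleset this devfs mount
    # currently uses, then apply the whole ruleset to the existing nodes
    devfs -m /jails/ceph-1/dev rule add type disk unhide
    devfs -m /jails/ceph-1/dev rule applyset

    # or as a one-shot, without storing a rule:
    devfs -m /jails/ceph-1/dev rule apply type disk unhide

Note that 'rule add' modifies whatever ruleset that mount is currently associated with, so it can affect other jails sharing the same ruleset.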