On Fri, Jan 24, 2025 at 02:53:06PM -0500, James Boyle wrote:
Hello,
I was hoping to get a little help with bioctl and the 1C raid mode after a
drive failure. The most recent error message I'm getting when trying to
start the array in a degraded mode is:
# bioctl -c 1C -l /dev/sd0a softraid0
softraid0: RAID 1C requires two or more chunks
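As far as I can tell (my reading, not stated in the thread): `bioctl -c 1C` with a single chunk fails because *creating* a 1C volume needs at least two chunks; a volume that has lost a drive is normally attached from the surviving chunk's metadata, and the replacement disk is brought in with `bioctl -R`. A sketch, with hypothetical device names:

```shell
# If the degraded volume attached (say as sd5), check volume and chunk status:
bioctl sd5

# After giving the replacement disk a RAID-type 'a' partition with disklabel,
# rebuild the mirror onto the new chunk (sd1a and sd5 are examples):
bioctl -R /dev/sd1a sd5
```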
Previously, beecdadd...@danwin1210.de wrote:
> But the manual says this:
> "If it is a DUID, it will be automatically mapped to the appropriate entry
> in /dev"
> I assumed the opposite would also be true: if I did mount sd3i, that mount
> would check its DUID and look it up in fstab. It does not do that?
No way
On Sun, March 3, 2024 11:50 am, Otto Moerbeek wrote:
On Sun, Mar 03, 2024 at 10:47:31AM -, beecdadd...@danwin1210.de wrote:
> hi list
> I want to know how many rounds my computer defaults to for bioctl -r, so I
> can change it and know how much stronger it is. Can you help me?
hi list
I want to know how many rounds my computer defaults to for bioctl -r, so I
can change it and know how much stronger it is. Can you help me?
after reading the mount manual about DUID I realized that it is not working
for me as expected
in /etc/fstab I have the same DUID I got from disklabel of that disk
# fdisk -gy -b 532480 sd2
>> # fdisk -gy -b 532480 sd3
>> # fdisk -gy -b 532480 sd4
>
> For all of them I did:
> # disklabel -E sd1
> sd1> a a
> offset: [64]
> size: [39825135] *
> FS type: [4.2BSD] RAID
> sd1*> w
> sd1> q
>
> # bioctl -c 1 -l sd1a,sd2a softraid0
To be clear: this creates sd5 ...
> # bioctl -c 1 -l sd3a,sd4a softraid0
... and this one creates sd6.
# bioctl -c 1 -l sd1a,sd2a softraid0
# bioctl -c 1 -l sd3a,sd4a softraid0
# dd if=/dev/zero of=/dev/rsd5c bs=1m count=1
# dd if=/dev/zero of=/dev/rsd6c bs=1m count=1
After that newfs.
One thing I forgot:
root@epyc1:~ # sysctl hw | grep drive
hw.sensors.softrai
Hi All,
I created two softraid0 drives, following the FAQ.
All seems to be working without problems; however, bioctl isn’t able to
“see” the softraid0 drives, sd5 and sd6.
root@epyc1:~ # dmesg | egrep 'sd([0-6])'
sd0 at scsibus1 targ 0 lun 0:
t10.ATA_DELLBOSS_VD_37b61d1b1f56
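Not from the thread, but a sketch of how I'd expect these volumes to be queried: `bioctl` takes either an individual softraid volume or the softraid controller as its argument (the `-vhi softraid0` form appears later in this digest):

```shell
# Per-volume status (sd5 and sd6 as in the post above):
bioctl sd5
bioctl sd6

# Verbose inventory of all volumes on the softraid controller:
bioctl -vi softraid0
```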
On Sun, Jan 07, 2024 at 12:40:18PM +0100, Stefan Kreutz wrote:
> You can indeed create multiple 1M RAID disklabel partitions per device
Yes, you can. And that may be the most appropriate solution in this case,
and in cases where you have several machines each with one softraid crypto
partition an
.html#softraidFDE
On Sun, Jan 07, 2024 at 11:15:25AM +0300, 4 wrote:
how to use one key for multiple disks? i naively believed that since bioctl
does not have any flag for this, the key on the specified keydisk partition
will be used, and if it is not there, a new one will be created, and deleting
the key is the responsibility of the user, but in pra
On Fri, Jan 5, 2024 at 12:50, Stuart Henderson wrote:
> > # bioctl -v -P wd0e
> > bioctl: BIOCDISCIPLINE: inappropriate ioctl for device
> wd0e is not a softraid volume. Use the softraid volume,
> e.g. sd1 or sd0 or similar.
Thanks a lot. After doing
bioctl -c C -l /
On Fri, Jan 05, 2024 at 12:36:41PM +, Roderick wrote:
> # bioctl -v -P wd0e
> bioctl: BIOCDISCIPLINE: inappropriate ioctl for device
Because wd0e is not a softraid volume.
You have not provided enough information in your message to know for certain
what the correct device is on your
On 2024-01-05, Roderick wrote:
I get
# bioctl -v -P wd0e
bioctl: BIOCDISCIPLINE: inappropriate ioctl for device
Is it not possible to change the passphrase?
What was I supposed to do under
https://www.openbsd.org/faq/upgrade74.html#ConfigChanges
?
Thanks for any hint!
Rod
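The advice upthread is to point -P at the softraid volume itself rather than at the chunk (wd0e). A sketch; sd1 below is hypothetical - use whatever device the crypto volume actually attached as:

```shell
# Find the volume softraid0 attached (look for the sd device on the softraid scsibus):
dmesg | grep softraid

# Change the passphrase on the volume, not on the underlying chunk:
bioctl -v -P sd1
```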
On Thu, Jan 06, 2022 at 12:23:43PM -0500, fo...@dnmx.org wrote:
> So, instead of using a password or a keyfile, I'd like to use a passfile.
> How do I create one? I tried searching on the internet but couldn't find
> a guide.
>
> Do I just put the password itself in the file and chmod it to the
>
Hi misc,
I'd like to have two encrypted 1TB disks in RAID 1 mirror mode (no hardware
RAID installed). Is it possible to use bioctl for that purpose, or do I need to
use HW RAID and encrypt the mirrored disks with bioctl -cC -l /dev/sd1a softraid0?
Please advise.
Martin
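No hardware RAID should be needed for this. On newer releases (6.8 onward, if I recall correctly - treat the version as an assumption) the 1C discipline, which appears elsewhere in this digest, mirrors and encrypts in a single volume; device names below are examples:

```shell
# One volume that is both RAID 1 and crypto (sd1a/sd2a are example RAID partitions):
bioctl -c 1C -l sd1a,sd2a softraid0
```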
On Mon, Oct 19, 2020 at 06:28:50PM +, Martin wrote:
> # dmesg | grep sd
> OpenBSD (RAMDISK_CD) #177: Thu May 7 11:19:02 MDT 2020
>     dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
> wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
> sd0 at scsibus1 targ 0 lun 0: t10.ATA_QEMU_HARDDISK_QM5_
> sd0: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
> sd1 at scsibus1 targ 1 lun 0: t10.ATA_QEMU_HARDDISK_QM7_
> sd1: 1907729MB, 512 bytes/sector, 3907029168 sectors, thin
Hello,
wo...@intermezzo.net (Wolly), 2019.06.18 (Tue) 13:58 (CEST):
> 3 years ago I tried to build a "bioctl -c C -l ... " over a "bioctl -c 1
> -l ..." on a hetzner server and I failed.
> Is it possible to do so, and when, what are the requirements?
it is possible
Hello misc,
3 years ago I tried to build a "bioctl -c C -l ... " over a "bioctl -c 1
-l ..." on a Hetzner server and failed.
Is it possible to do so, and if so, what are the requirements?
Thank you in advance.
-Heiko
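What Heiko describes - a crypto volume layered over a RAID 1 volume - can be sketched like this (device names are examples; the sd number the first bioctl produces will vary):

```shell
# 1. Mirror two RAID partitions; suppose the new volume attaches as sd4:
bioctl -c 1 -l sd0a,sd1a softraid0

# 2. Inside sd4, create a RAID-type 'a' partition (disklabel -E sd4).

# 3. Crypto volume on top of the mirror:
bioctl -c C -l sd4a softraid0
```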
> my own stupidity.
it happens.
And best that it happen before production than after.
> So I've been trying to go through the steps again. However nothing
> I do can eliminate the "softraid0 sd0a chunk already in use" message
> at the "bioctl -c 1 -l sd0a,sd1a soft
However, nothing I do can
eliminate the "softraid0 sd0a chunk already in use" message at the "bioctl -c 1
-l sd0a,sd1a softraid0" step.
I've tried everything! Rebooting the server, /dev/zero to the first 500MB of
sd0 and sd1, changing the uuid in disklabel, erasing and re
On Tue, 26 Feb 2019, cho...@jtan.com wrote:
[...] What is anyone afraid might happen after that(*)?
You are right, there should be nothing to fear; that is why I
answered Stefan.
I thought, as obviously Stefan did too, that it would be good to do "bioctl -d".
[*] RAID and other hard
Roderick writes:
>
> I suspect, umount (that always syncs) is enough and umount
> happens always at shutdown.
How do people cope with "I suspect"? "I suspect" would scare the crap
out of me. Did it never occur that it's possible to _know_?
Not unmounting is dangerous because there are in-memory
I suspect, umount (that always syncs) is enough and umount
happens always at shutdown.
Rodrigo
On Mon, 25 Feb 2019, Kapfhammer, Stefan wrote:
Hi,
I have the umount and bioctl -d
commands in /etc/rc.shutdown,
in case I forget to do it manually.
If you don't do that properly, you will need
to fsck the device, next time you attach it.
-Stefan
Original message
From: Roderick
Sent: Sunday, February 24, 2019 21:53
To:
Excuse me that I ask instead of inspecting rc files. :)
I do manually
bioctl -c C -l /dev/XXX softraid0
and mount the resulting device.
Should I manually unmount and do "bioctl -d" before shutdown?
Or just shut down? The umount will surely be done, but will the bioctl -d happen too?
Thanks
Rodrigo
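Stefan's rc.shutdown approach, mentioned just above, can be sketched as follows (mount point and device name are examples, not from his mail; see rc.shutdown(8)):

```shell
# /etc/rc.shutdown - commands run at system shutdown
umount /mnt/crypt     # example mount point of the crypto volume
bioctl -d sd3         # example softraid volume to detach
```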
etienne.m...@magickarpet.org (Etienne), 2018.05.04 (Fri) 19:06 (CEST):
> On 04/05/18 17:40, Marcus MERIGHI wrote:
>
> > I'm currently reading https://marc.info/?l=openbsd-misc&m=141435482820277
> > "crypto softraid and keydisk on same harddrive", 2014-10-26.
> >
> > jsing@ had this patch, which w
On 04/05/18 17:40, Marcus MERIGHI wrote:
I'm currently reading https://marc.info/?l=openbsd-misc&m=141435482820277
"crypto softraid and keydisk on same harddrive", 2014-10-26.
jsing@ had this patch, which was tested and worked for the OP - but was
not committed: https://marc.info/?l=openbsd-misc
ion of the same disk as a keydisk. (take
all the time you want to laugh, then carry on reading).
So I'm creating two RAID partitions "a" and "p", then run:
bioctl -c C -l sd0a -k sd0p softraid0
and this succeeds. I'm then proceeding to a normal installation on sd1,
then reboot, and I'm greeted with the message `ERR M`.
I have tried this with the
  c:   3125818080           0  unused
  m:    291611880    20964825  RAID
Most of the time, everything is fine:
# bioctl -i sd2
Volume      Status           Size Device
softraid0 0 Online   149305012224 sd2     RAID1
          0 Online   149305012224 0:0.0   noencl
smartctl -i /dev/sd0c
works for me as well. I would like to thank all of you who helped on and
off the list.
Predrag
On Tue, Oct 17, 2017 at 11:30:16PM -0400, Predrag Punosevac wrote:
Quoting Predrag Punosevac :
Hi Misc,
I am using
# bioctl sd4
Volume      Status           Size Device
softraid0 0 Online  2000396018176 sd4     RAID1
          0 Online  2000396018176 0:0.0   noencl
          1 Online  2000396018176 0:1.0   noencl
for my desktop
# uname -a
OpenBSD oko.bagdala2.net
On Sunday 25 June 2017 22:28:17 Kevin Chadwick wrote:
> Doh... Yeah, starting from scratch with -r works. I guess quickly finding
> how long rounds take is not quite as easy as bioctl -d and try again.
The number of rounds can also be changed when you change the passphrase on an
existing volume.
Doh... Yeah, starting from scratch with -r works. I guess quickly finding
how long rounds take is not quite as easy as bioctl -d and try again.
I guess the rounds it chooses is equal to a second's worth, but I'm surprised
that it would be exactly 256. Struck me as a maxed byte or something. Sorry
for
Kevin Chadwick wrote:
> On Fri, 23 Jun 2017 18:13:20 +0200
>
>
> > > I started by trying very high values with a simple password and
> > > expected to have to wait a long time but it was always around 7
> > > seconds?
> > very high as in -r 2000 ?
>
> Yeah, 2048? Is there a MAX?
i do not reco
nds?
> > very high as in -r 2000 ?
>
> Yeah, 2048? Is there a MAX?
Not really.
Oh, it's been only 9 months since bioctl(8) switched over to bcrypt PBKDF.
You might be running an older version (a dmesg would help), in which case you
want to go much higher... 16000?
# bioctl -v -c C -l /dev/vnd0a softraid0
shows you what KDF you are using.
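The timing experiment in this thread can be done safely against a vnd image instead of a real disk, building on the `/dev/vnd0a` example above (file name, sizes, and the round count are arbitrary):

```shell
# Scratch image and vnd device for experimenting:
dd if=/dev/zero of=/tmp/test.img bs=1m count=64
vnconfig vnd0 /tmp/test.img
# (add a RAID-type 'a' partition with: disklabel -E vnd0)

# Time key derivation at a chosen round count; -v prints the KDF and rounds used:
time bioctl -v -r 8192 -c C -l /dev/vnd0a softraid0

# Tear down (sd5 stands for whatever volume it attached as):
bioctl -d sd5
vnconfig -u vnd0
```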
On Fri, 23 Jun 2017 18:13:20 +0200
> > I started by trying very high values with a simple password and
> > expected to have to wait a long time but it was always around 7
> > seconds?
> very high as in -r 2000 ?
Yeah, 2048? Is there a MAX?
On Fri, 23 Jun 2017 17:02:18 +0100
Kevin Chadwick wrote:
On 6.1 i386 with syspatch 004 I am running:
time /sbin/bioctl -c C -l /dev/vnd0a -r31 softraid0
I guess I am simply seeing my passphrase input time and the rounds have
a marginal effect? Perhaps more on memory usage?
Is 31 the highest number of rounds?
I started by trying very high values with a
when the crypto metadata was
redesigned, which is still yet to happen (and fixing it also means there is
another bug that has to be addressed first...)
Something is obviously still checking/hitting this limit though and is
triggering the failure. There are probably a couple of things to fix here - the
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [64]
size: [70319603585] 16T
Rounding size to cylinder (16065 sectors): 34359741611
FS type: [4.2BSD] RAID
> w
> q
No label changes.
# bioctl -v -c C -l sd0a softraid0
New passphrase:
Re-type passphrase:
Deriving key using bcrypt PBKDF with 16 rounds...
bioctl: unknown error
# dd if=/dev/random of=/dev/rsd0c bs=1m
^C2465+0 records in
2464+0 records out
an EFI
| boot partition. The softraid is now 2.7TiB... Grumbl! conclusion:
| bioctl needs a mandatory bootable partition to act correctly even on
| disks not aimed to be bootable.
https://marc.info/?l=openbsd-misc&m=148854591221493&w=2
using the "-b 960" doesn't help:
# d
# d
sd0 (I left off the "-b 960" because this is not a
bootable partiton)
disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [64]
size: [70319603585]
FS type: [4.2BSD] RAID
> w
> q
# bioctl -v -c C -l sd0a softraid0
New passphrase:
841G    4.3T    16%    /store
[weerd@pom] $ dmesg | grep sd16
sd16 at scsibus12 targ 2 lun 0: SCSI2 0/direct fixed
sd16: 5723166MB, 512 bytes/sector, 11721044513 sectors
It is backed by this physical disk:
[weerd@pom] $ doas bioctl -vhi softraid0
Volume Status Size D
--
Christian "naddy" Weisgerber na...@mips.inka.de
sharon s. wrote:
> >
> >> softraid0: invalid metadata format
> > You filled the disk with random data, which is not a valid metadata
> > format...
> I followed the FAQ, http://www.openbsd.org/faq/faq14.html#softraidCrypto .
Sorry, I was hasty. You can also try creating smaller partitions. 16TB,
# dd if=/dev/random of=/dev/rsd0c bs=1m (took over a week)
# fdisk -iy -g sd0 (I left off the "-b 960" because this is not a
bootable partition)
# disklabel -E sd0
Label editor (enter '?' for help at any prompt)
> a a
offset: [64]
size: [70319603585]
FS type: [4.2BSD] RAID
> w
/sector, 976773168 sectors
uhub2 at uhub1 port 1 "Intel Rate Matching Hub" rev 2.00/0.04 addr 2
vscsi0 at root
scsibus3 at vscsi0: 256 targets
softraid0 at root
scsibus4 at softraid0: 256 targets
sd3 at scsibus4 targ 1 lun 0: SCSI2 0/direct fixed
sd3: 476937MB, 512 bytes/sector, 9767
I have a file server running -current on amd64. It has a three-drive RAID1
softraid array. Up until yesterday, I'd been running a snap from February 18
and everything was behaving as expected.
After updating to a fresh snapshot yesterday, I noticed that the output of
bioctl is different and a bit odd. It now shows "0% done", but the array
and all three member drives are showing as online:
$ sudo bioctl sd4
Volume Status Size Device
sof
On 2016-06-13, Chris Cappuccio wrote:
> c. You must start the first partition past block 0, block 64
> is standard for various reasons.
I think we should consider changing this.
Most mechanical drives these days have 4KB sectors (though many hide
it with synthetic 512 byte sectors) which work OK
Hello misc@,
Phones suck.
# dd if=/dev/random of=/dev/rsd0c bs=1m
### Zero out random garbage. ###
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
# fdisk -iy sd0
# disklabel -E sd0 (create an "a" partition, see above for more info)
# bioctl -c C -l sd0a softraid0
New passphras
> # dd if=/dev/zero of=/dev/sd1c bs=1m count=1
The first one is an ok alternative to what's done in the FAQ, but I
don't understand your comment on not using the raw disk for the second
command. Using the raw device as it is written *is* correct, see also
the example section in
> 1: 00 0 0 0 - 0 0 0 [ 0: 0 ] unused
> 2: 00 0 0 0 - 0 0 0 [ 0: 0 ] unused
> 3: 00 0 0 0 - 0 0 0 [ 0: 0 ] unused
>
> # disklabel wd1
> 16 partitions:
> # size offset fstype [fsize bsize cpg]
> c: 234441648 0 unused
My guess is that you didn't run fdisk, THEN disklabel
Hello misc@,
The added or modified lines have comments.
# dd if=/dev/random of=/dev/rsd0c bs=1m
### Zero out random garbage. ###
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
# fdisk -iy sd0
# disklabel -E sd0 (create an "a" partition, see above for more info)
# bioctl -c
'Encrypting external disks'
http://www.openbsd.org/faq/faq14.html#softraidCrypto
Followed the FAQ instructions EXACTLY to encrypt an external drive, then copied
data to it; after restarting the computer I cannot access the drive. In
fact, it doesn't look like anything is even on it. Thi
Hey,
On 05/15/16 09:23, Maurice McCarthy wrote:
I believe the installation ramdisk has limited space so you likely used it
all up with "MAKEDEV all". It is limited to install on very old systems.
thanks for the answer. That actually would explain my problem! Maybe the
bioctl err
On Sun, May 15, 2016 at 12:21:48AM +0200 or thereabouts, Leo Unglaub wrote:
>
> But i think i found out what caused the problem. Every time i did a cd
> /dev && sh MAKEDEV all it did not work and bioctl could not read my
> passphrase anymore. When i just created the d
i am deeply sorry about that. The problem happens only on the installer
from the 5.9 release. I used the AMD64 image of the release.
But i think i found out what caused the problem. Every time i did a cd
/dev && sh MAKEDEV all it did not work and bioctl could not read my
passphra
On 2016-05-14, Leo Unglaub wrote:
> Hey,
>
> On 05/13/16 21:08, Ted Unangst wrote:
>> you might try ktrace, since bioctl is not being very helpful here.
>
> the problem is that i dont have ktrace available on the install iso. I
> tryed to reproduce it on my OpenBSD deskt
Hey,
On 05/13/16 21:08, Ted Unangst wrote:
you might try ktrace, since bioctl is not being very helpful here.
the problem is that i don't have ktrace available on the install iso. I
tried to reproduce it on my OpenBSD desktop but there i don't have that
problem.
I looked up the part in the
Theo Buehler wrote:
> On Fri, May 13, 2016 at 07:28:51PM +0200, Leo Unglaub wrote:
On Fri, May 13, 2016 at 07:28:51PM +0200, Leo Unglaub wrote:
Leo Unglaub wrote:
> > bioctl -c C -l sd3a softraid0
>
> But i get the following error message: bioctl: unable to read passphrase.
>
> Do you have any ideas why this is happening?
you might try ktrace, since bioctl is not being very helpful here.
Hey friends,
i have two identical ssd drives in my laptop, sd0 and sd1. I created a
RAID 1 (mirroring) on them, resulting in sd3. I used the following command:
bioctl -c 1 -l sd0a,sd1a softraid0
On the resulting disk i created sd3b with 2 GB Swap and sd3a with 100GB
with a type RAID.
Now
On Sat, 9 Apr 2016 20:18:11 -0400
Matt Schwartz wrote:
> I really like the bioctl full disk encryption feature. I would love
> to see it extended to support multiple users/passkeys. I once worked
> with a commercial full disk encryption product that allowed this ...
You could sto
On Sun, 10 Apr 2016, Matt Schwartz wrote:
Okay, I wasn't screaming - cheering on a great operating system, most
definitely. I'll dig into the source code a bit to see what I can learn.
On Apr 9, 2016 9:12 PM, "Jiri B" wrote:
>
> On Sat, Apr 09, 2016 at 08:18:11PM -0400, Matt Schwartz wrote:
> > I
I really like the bioctl full disk encryption feature. I would love to see
it extended to support multiple users/passkeys. I once worked with a
commercial full disk encryption product that allowed this and could even be
managed over a network. Coming up with a solution to manage encryption keys
Hello
I have a question about RAID 1 volumes set up with bioctl.
When I originally set up the softraid, I created a RAID partition that
(essentially) took up the entire drive. However, the disklabel INSIDE the
softraid does NOT use all the space available (e.g. the chunks making up
the
Aha.
*Is* the keydisk the master key, and hence can't be changed?
Very low priority topic:
What about implementing some routine for regenerating the master key,
even if that would imply reprocessing *all* of the disk's contents?
That could be beneficial in a place where you don't have the s
I think it would make sense to be able to do this. I have a scenario where I
would like to install OpenBSD on a remote machine with a customized bsd.rd in
order to automatically set it all up, feeding a password into the stdin of
bioctl..
Now, bioctl doesn't allow hashed password to b
Tinker wrote:
> Aha.
>
> *Is* the keydisk the master key, and hence can't be changed?
The keydisk is the mask for the master key. It can (in theory) be changed like
changing a password. Really, the key disk is just a prehashed password.
Tinker wrote:
> Ah, and maybe equally importantly, what are the security ramifications
> of changing password/keydisk vs. wiping and installing from scratch with
> a new password/keydisk?
The master key, which the data on disk is encrypted with, is masked with your
password. The master key never
I was wondering the exact same thing. Looking forward to finding out.
Original Message
Subject: Re: "bioctl -P" is to change passphrase without wiping the encrypted
partition's contents. How do you generate a new keydisk without wiping the same?
Local Time: Nov