Re: Restore pf tables metadata after a reboot
> Even then it seems that some of them turn up again pretty much
> instantly after expiry.

You could update the expire time on each new connection/port scan
attempt. This way you could set, say, a 4-day expire time, block these
IPs on all ports on all your systems, and have new connection attempts
refresh the expiry for all the systems.

4 days because 5 days is a typical timeout for a temporary SMTP error.
It may happen that someone used a cloud instance for 24 hours and then
got banned by the cloud provider; the IP used for spam/scans/attacks
could be reused by another client for legitimate activity. So if that
new client on the old IP sends your client some important mail, it's
not lost and doesn't generate an undeliverable-mail report, it just
takes some days to reach the destination (with retries by the origin
server).

4 weeks looks excessive for shared cloud IPs.

On 30/5/20 07:25, Peter Nicolai Mathias Hansteen wrote:
>> 30. mai 2020 kl. 11:54 skrev Walter Alejandro Iglesias :
>>
>> The problem is most system administrators out there do very little. If
>> you were getting spam or attacks from some IP, even if you report the
>> issue to the respective whois abuse@ address, chances are attacks from
>> that IP won't stop next week, nor even next month.
>>
>> So, in general terms, I would refrain as much as possible from hurry to
>> expiring addresses. Just my opinion.
>
> Yes, there are a lot of systems out there that seem to be not really
> maintained at all. After years of advocating 24 hour expiry, some time back I
> went to four weeks on the ssh brutes blacklist. Even then it seems that some
> of them turn up again pretty much instantly after expiry.
>
> All the best,
>
> —
> Peter N. M. Hansteen, member of the first RFC 1149 implementation team
> http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
> "Remember to set the evil bit on all malicious network traffic"
> delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.
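One way the refresh-on-attempt idea could be scripted with stock pfctl. This is only a sketch: the table name "bruteforce" and the address are made up, the trigger (e.g. a pflog watcher) is left out, and deleting-then-re-adding is assumed here as the way to reset an entry's "Cleared" timestamp, which `-T expire` compares against:

```
# On each new attempt, refresh the offender's entry:
pfctl -t bruteforce -T delete 203.0.113.5
pfctl -t bruteforce -T add 203.0.113.5

# From cron, drop entries untouched for 4 days (345600 seconds):
pfctl -t bruteforce -T expire 345600
```

With a `block in quick from <bruteforce>` rule in pf.conf, every system sharing the table would then keep an address blocked for as long as it keeps knocking, plus four days.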
Re: I unveil()ed ftp(1)!
No. I'm guessing you don't understand symbolic links.

Look, this is a waste of time.

Luke Small wrote:
> In the case of 1 URLs couldn’t you at least merely unveil “./“ as “cw”;
> make any specified cafile/capath including shortcut resolution as “r”
> (perhaps with the shell “x”) so that at worst, current directory files
> could be overwritten, but not read?
>
> On Wed, Jun 3, 2020 at 10:39 AM Theo de Raadt wrote:
>
> > You really don't get it.
> >
> > +            unveil_list = calloc(2 * argc, sizeof(char*));
> >
> > Imagine argc is 1.
> >
> > +            for (i = 2 * argc - 2; i >= 0; i -= 2) {
> > +                if (unveil_list[i]) {
> > +                    if (unveil(unveil_list[i], "r") == -1)
> > ...
> > +                if (unveil_list[i | 1]) {
> > +                    if (unveil(unveil_list[i | 1], "cw") == -1)
> > +                        err(1, "unveil");
> > ...
> >
> >      E2BIG       The addition of path would exceed the per-process
> >                  limit for unveiled paths.
> >
> > Great, under fairly normal usage ftp aborts with an error.
> >
> > Since you start with up to 8 others, it looks like this limit is easily
> > hit at around 120 filenames.
> >
> > So ftp simply fails to perform the task it is designed for.
> >
> > Your proposal is to break the command.
>
> --
> -Luke
Re: I unveil()ed ftp(1)!
In the case of 1 URLs couldn’t you at least merely unveil “./“ as “cw”;
make any specified cafile/capath including shortcut resolution as “r”
(perhaps with the shell “x”) so that at worst, current directory files
could be overwritten, but not read?

On Wed, Jun 3, 2020 at 10:39 AM Theo de Raadt wrote:

> You really don't get it.
>
> +            unveil_list = calloc(2 * argc, sizeof(char*));
>
> Imagine argc is 1.
>
> +            for (i = 2 * argc - 2; i >= 0; i -= 2) {
> +                if (unveil_list[i]) {
> +                    if (unveil(unveil_list[i], "r") == -1)
> ...
> +                if (unveil_list[i | 1]) {
> +                    if (unveil(unveil_list[i | 1], "cw") == -1)
> +                        err(1, "unveil");
> ...
>
>      E2BIG       The addition of path would exceed the per-process
>                  limit for unveiled paths.
>
> Great, under fairly normal usage ftp aborts with an error.
>
> Since you start with up to 8 others, it looks like this limit is easily
> hit at around 120 filenames.
>
> So ftp simply fails to perform the task it is designed for.
>
> Your proposal is to break the command.

--
-Luke
Re: I unveil()ed ftp(1)!
I made symbolic links with “ln -s /etc/ssl/cert.pem ”. I used the
realpath command and it worked in the software I submitted.

On Thu, Jun 4, 2020 at 11:06 AM Theo de Raadt wrote:

> No.
>
> I'm guessing you don't understand symbolic links.
>
> Look, this is a waste of time.
>
> Luke Small wrote:
>
> > In the case of 1 URLs couldn’t you at least merely unveil “./“ as “cw”;
> > make any specified cafile/capath including shortcut resolution as “r”
> > (perhaps with the shell “x”) so that at worst, current directory files
> > could be overwritten, but not read?
> >
> > On Wed, Jun 3, 2020 at 10:39 AM Theo de Raadt wrote:
> >
> > > You really don't get it.
> > >
> > > +            unveil_list = calloc(2 * argc, sizeof(char*));
> > >
> > > Imagine argc is 1.
> > >
> > > +            for (i = 2 * argc - 2; i >= 0; i -= 2) {
> > > +                if (unveil_list[i]) {
> > > +                    if (unveil(unveil_list[i], "r") == -1)
> > > ...
> > > +                if (unveil_list[i | 1]) {
> > > +                    if (unveil(unveil_list[i | 1], "cw") == -1)
> > > +                        err(1, "unveil");
> > > ...
> > >
> > >      E2BIG       The addition of path would exceed the per-process
> > >                  limit for unveiled paths.
> > >
> > > Great, under fairly normal usage ftp aborts with an error.
> > >
> > > Since you start with up to 8 others, it looks like this limit is easily
> > > hit at around 120 filenames.
> > >
> > > So ftp simply fails to perform the task it is designed for.
> > >
> > > Your proposal is to break the command.
> >
> > --
> > -Luke

--
-Luke
Problems with clementine music player with 6.7
Hi All,

My preferred music player application is (was) clementine. But with a 6.7
snapshot (GENERIC.MP#213 amd64) and clementine-1.4.0rc1p0 the application
seems to have problems opening files.

For example, the file open dialog opens a blank dialog box and a series
of assertion failures/errors are reported (verbose mode) such as:

> (clementine:67859): Gtk-CRITICAL **: 18:23:29.544: Error building template
> class 'GtkDialog' for an instance of type 'GtkDialog': .:2:367 Invalid object
> type 'GtkHeaderBar'
> (clementine:67859): Gtk-CRITICAL **: 18:23:29.546: Error building template
> class 'GtkFileChooserDialog' for an instance of type 'GtkFileChooserDialog':
> Unknown internal child: vbox
> (clementine:67859): Gtk-CRITICAL **: 18:23:31.896: Error building template
> class 'GtkTooltipWindow' for an instance of type 'GtkTooltipWindow': .:2:524
> Invalid object type 'GtkImage'

Similarly, clementine cannot build up its music database. When I try to
set the location of the music files (Tools -> Preferences -> Music
Library) a similar set of errors is reported.

Dragging and dropping a music file from Thunar works fine, so there
doesn't seem to be a problem with filesystem access ...

I know next to nothing about GTK :-( ... any suggestions? I had no
issues using clementine with 6.6.

Are there alternative players that might work better?

Cheers,
Robb.
Re: Restore pf tables metadata after a reboot
No reason to expire ssh brute-force entries. They will never stop.
Flush manually if someone accidentally locked themselves out. Just my
two cents :)

> On Jun 4, 2020, at 12:48 AM, Anatoli wrote:
>
>> Even then it seems that some of them turn up again pretty much
>> instantly after expiry.
>
> You could update the expire time on each new connection/port scan
> attempt. This way you could put say 4 days expire time and block these
> IPs on all ports on all your systems and new connection attempts would
> update the expire for all the systems.
>
> 4 days is because 5 days is a typical timeout for a temporary error for
> SMTP. It may happen that someone used for 24hs a cloud instance and
> then got banned by the cloud provider, the IP used for
> spam/scans/attacks could be reused for another client for a legit
> activity. So if that new client for the old IP sends to your client some
> important mail, it's not lost and doesn't generate an undeliverable mail
> report, it just takes some days to reach the destination (with retries
> by the origin server).
>
> 4 weeks looks excessive for cloud shared IPs.
>
>> On 30/5/20 07:25, Peter Nicolai Mathias Hansteen wrote:
>>
>>> 30. mai 2020 kl. 11:54 skrev Walter Alejandro Iglesias :
>>>
>>> The problem is most system administrators out there do very little. If
>>> you were getting spam or attacks from some IP, even if you report the
>>> issue to the respective whois abuse@ address, chances are attacks from
>>> that IP won't stop next week, nor even next month.
>>>
>>> So, in general terms, I would refrain as much as possible from hurry to
>>> expiring addresses. Just my opinion.
>>
>> Yes, there are a lot of systems out there that seem to be not really
>> maintained at all. After years of advocating 24 hour expiry some time back I
>> went to four weeks on the ssh brutes blacklist. Even then it seems that some
>> of them turn up again pretty much instantly after expiry.
>>
>> All the best,
>>
>> —
>> Peter N. M. Hansteen, member of the first RFC 1149 implementation team
>> http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
>> "Remember to set the evil bit on all malicious network traffic"
>> delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.
realpath(3) to unveil() symbolic links!
You can use unveil() on both a symbolic link and the value recovered by
passing it to realpath(3)! I used it in what I submitted for unveiling
ftp(1).

--
-Luke
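The resolution step can be illustrated outside ftp(1) with a throwaway symlink. The paths below are made up for the example, and realpath(1) is used here as a stand-in for the realpath(3) call the patch would make:

```shell
# Sketch: a symlink and the canonical path realpath resolves it to.
# On OpenBSD one would then unveil(2) both paths, the link and the target.
tmp=$(mktemp -d)
touch "$tmp/cert.pem"
ln -s "$tmp/cert.pem" "$tmp/link.pem"
realpath "$tmp/link.pem"    # prints the resolved .../cert.pem path
rm -r "$tmp"
```

Unveiling only the link's literal path would leave the target unreachable once the kernel resolves the link, which appears to be why both values matter above.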
Re: Problems with clementine music player with 6.7
I like clementine as well but it disappeared from the compiled ports
going from 6.5 to 6.6. I went to musique, which is much simpler but does
the job. I haven't migrated to 6.7 yet.

Dave Raymond

On 6/4/20, Why 42? The lists account. wrote:
>
> Hi All,
>
> My preferred music player application is (was) clementine. But with a 6.7
> snapshot (GENERIC.MP#213 amd64) and clementine-1.4.0rc1p0 the application
> seems to have problems opening files.
>
> For example the file open dialog opens a blank dialog box and a series
> of assertion failures/errors are reported (verbose mode) such as:
>> (clementine:67859): Gtk-CRITICAL **: 18:23:29.544: Error building template
>> class 'GtkDialog' for an instance of type 'GtkDialog': .:2:367 Invalid
>> object type 'GtkHeaderBar'
>> (clementine:67859): Gtk-CRITICAL **: 18:23:29.546: Error building template
>> class 'GtkFileChooserDialog' for an instance of type
>> 'GtkFileChooserDialog': Unknown internal child: vbox
>> (clementine:67859): Gtk-CRITICAL **: 18:23:31.896: Error building template
>> class 'GtkTooltipWindow' for an instance of type 'GtkTooltipWindow':
>> .:2:524 Invalid object type 'GtkImage'
>
> Similarly, clementine cannot build up its music database. When I try to
> set the location of the music files (Tools -> Preferences -> Music
> Library) a similar set of errors is reported.
>
> Dragging and dropping a music file from Thunar works fine, so there
> doesn't seem to be a problem with filesystem access ...
>
> I know next to nothing about GTK :-( ... any suggestions? I had no
> issues using clementine with 6.6.
>
> Are there alternative players that might work better?
>
> Cheers,
> Robb.

--
David J. Raymond
david.raym...@nmt.edu
http://physics.nmt.edu/~raymond
state replication bug in pfsync?
I've been trying to diagnose a mysterious issue where a UDP state
disappears before it's supposed to expire. I finally tracked it down to
pfsync. On the primary server, the state entries look like:

all udp 198.148.6.55:9430 <- 10.128.110.73:9430       MULTIPLE:MULTIPLE
   age 00:02:21, expires in 00:04:59, 34:34 pkts, 17887:20606 bytes, rule 64
all udp 96.251.22.157:58308 (10.128.110.73:9430) -> 198.148.6.55:9430       MULTIPLE:MULTIPLE
   age 00:02:21, expires in 00:04:59, 34:34 pkts, 17887:20606 bytes, rule 49

They shouldn't expire for five minutes. However, the same states, at the
same time, on the backup server:

Thu Jun  4 18:17:27 PDT 2020
all udp 198.148.6.55:9430 <- 10.128.110.73:9430       MULTIPLE:MULTIPLE
   age 00:02:22, expires in 00:00:00, 0:0 pkts, 0:0 bytes
all udp 96.251.22.157:58308 (10.128.110.73:9430) -> 198.148.6.55:9430       MULTIPLE:MULTIPLE

expire. And then the synchronization from the backup to the primary
removes them.

These two systems share a carp vip, and other than the macro defining
the local IP address of each individual system, pf.conf is exactly the
same on both.

How come, when the state is transferred to the backup after initially
being created on the primary, the state on the backup has the default
timeout for udp.multiple rather than the custom one defined in my rules:

match out on $ext_if from 10.128.0.0/16 nat-to $ext_vip
pass out quick on $ext_if proto udp tagged VOIP_UDP keep state (udp.multiple 360)
pass in quick on vlan110 proto udp from any to port = 9430 tag VOIP_UDP keep state (udp.multiple 360)

That doesn't seem right. Am I missing something?

Thanks much.
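When chasing this kind of mismatch it can help to dump what each box actually applies, side by side. These are standard pfctl queries; the port in the grep is just this thread's example:

```
pfctl -s timeouts                          # effective global timeouts, incl. udp.multiple
pfctl -vv -s states | grep -B1 -A2 9430    # per-state age/expiry/rule detail
```

Running both on the primary and the backup would show whether the backup ever saw the 360-second value or fell back to the global udp.multiple default for the synced state.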
Re: Filling a 4TB Disk with Random Data
Thank you, @misc.

Using dd with a large block size will likely be the course of action.

I really need to refresh my memory on this stuff. This is not something
we do, or need to do, every day.

Paul, your example shows:

bs=1048576

How did you choose that number? Could you have gone even bigger?
Obviously it is a multiple of 512.

The disks in point are 4TB Western Digital Blues. They have 4096-byte
sectors.

I used a 16G USB stick as a sacrificial lamb to experiment with dd.
Interestingly, there is no difference in time between 1m, 1k, and 1g.
How is that possible? Obviously this will not be an accurate comparison
with the WD disks, but it was still a good practice exercise.

Also Paul, to clarify a point you made, did you mean forget the random
data step, and just encrypt the disks with softraid(4) crypto? I think I
like that idea because this is actually a traditional pre-encryption
step. I don't agree with it, but I respect the decision. For our
purposes, encryption only helps if the disks are off the machine and
someone is trying to access them. This automatically implies that they
were stolen. The chances of disk theft around here are slim to none. We
have no reason to worry about forensics either - we're not storing
nuclear secrets.

Thanks for your time

On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd wrote:

> On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
> | Hi Misc,
> |
> | Has anyone ever filled a 4TB disk with random data and/or zeros with
> | OpenBSD?
>
> I do this before disposing of old disks. Have written random data to
> several sizes of disk, not sure if I ever wiped a 4TB disk.
>
> | How long did it take? What did you use (dd, openssl)? Can you share the
> | command that you used?
>
> It takes quite some time, but OpenBSD (at least on modern hardware)
> can generate random numbers faster than you can write them to spinning
> disks (may be different with those fast nvme(4) disks).
>
> I simply used dd, with a large block size:
>
>       dd if=/dev/random of=/dev/sdXc bs=1048576
>
> And then you wait. The time it takes really depends on two factors:
> the size of the disk and the speed at which you write (whatever the
> bottleneck). Once started, you can send dd the 'INFO' signal (`pkill
> -INFO dd`, or press Ctrl-T if your shell is set up for it with `stty
> status ^T`). This will give you output a bit like:
>
> 30111+0 records in
> 30111+0 records out
> 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
>
> Now take the size of the disk in bytes, divide it by that last number
> and subtract the second number. This is a reasonable ball-park
> indication of time remaining.
>
> Note that if you're doing this because you want to prevent others from
> reading back even small parts of your data, you are better off never
> writing your data in plain text (e.g. using softraid(4)'s CRYPTO
> discipline), or (if it's too late for that), to physically destroy the
> storage medium. Due to smart disks remapping your data in case of
> 'broken' sectors, some old data can never be properly overwritten.
>
> Cheers,
>
> Paul 'WEiRD' de Weerd
>
> --
> >[<++>-]<+++.>+++[<-->-]<.>+++[<+
> +++>-]<.>++[<>-]<+.--.[-]
> http://www.weirdnet.nl/
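Paul's ball-park formula — disk size in bytes divided by the rate, minus the elapsed seconds — can be turned into a quick bit of shell arithmetic. The 4 TB byte count below is an assumed nominal size for a 4TB drive; the other two numbers come from the sample SIGINFO line:

```shell
# Time remaining ≈ size / rate − elapsed.
size=4000787030016   # assumed 4 TB disk, in bytes
rate=177074202       # bytes/sec, the last number in the SIGINFO line
elapsed=178          # seconds so far, also from the SIGINFO line
echo "$(( size / rate - elapsed )) seconds remaining"   # about 22415 s, ~6.2 h
```

At that rate a full pass over a 4 TB spinning disk works out to roughly six and a quarter hours.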
Problem booting OpenBSD on IGEL M320c (amd64)
I am unable to boot OpenBSD 6.7 on an IGEL M320c machine I have.
Booting begins, but after the message `0:1:0: mem address conflict
0x3800/0x800` appears, the screen appears to be cleared and the cursor
continues blinking on the bottom left of the screen. After a few
seconds, it moves a little past halfway down the screen and continues
blinking. That is as far as it gets. Here's a video:
https://www.youtube.com/watch?v=UX0Tsel6hzA

I obviously wasn't able to get OpenBSD to boot, but here's a dmesg from
FreeBSD booted on the same machine (alternatively http://0x0.st/iOIZ.txt):

---<>---
Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.1-RELEASE r354233 GENERIC amd64
FreeBSD clang version 8.0.1 (tags/RELEASE_801/final 366581) (based on LLVM 8.0.1)
VT(vga): resolution 640x480
CPU: VIA Eden X2 U4200 @ 1.0+ GHz (1000.06-MHz K8-class CPU)
  Origin="CentaurHauls"  Id=0x6fd  Family=0x6  Model=0xf  Stepping=13
  Features=0xbfc9fbff
  Features2=0x8863a9
  AMD Features=0x20100800
  AMD Features2=0x1
  VIA Padlock Features=0x1ec33dcc
  VT-x: HLT,PAUSE
  TSC: P-state invariant
real memory  = 1073741824 (1024 MB)
avail memory = 863457280 (823 MB)
Event timer "LAPIC" quality 100
ACPI APIC Table:
FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs
FreeBSD/SMP: 2 package(s) x 1 core(s)
random: unblocking device.
ioapic0: Changing APIC ID to 7
ioapic1: Changing APIC ID to 8
ioapic0 irqs 0-23 on motherboard
ioapic1 irqs 24-47 on motherboard
Launching APs: 1
Timecounter "TSC" frequency 162285 Hz quality 800
random: entropy device external interface
kbd1 at kbdmux0
000.23 [4335] netmap_init               netmap: loaded module
[ath_hal] loaded
module_register_init: MOD_LOAD (vesa, 0x8112e050, 0) error 19
random: registering fast source VIA Nehemiah Padlock RNG
random: fast provider: "VIA Nehemiah Padlock RNG"
nexus0
vtvga0: on motherboard
cryptosoft0: on motherboard
acpi0: on motherboard
acpi0: Power Button (fixed)
cpu0: on acpi0
ACPI Error: Needed [Buffer/String/Package], found [Integer] 0xf8000337d280 (20181213/exresop-723)
ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20181213/dswexec-606)
ACPI Error: Method parse/execution failed \_PR.CPU0._OSC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
ACPI Error: Needed [Buffer/String/Package], found [Integer] 0xf80003384b00 (20181213/exresop-723)
ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20181213/dswexec-606)
ACPI Error: Method parse/execution failed \_PR.CPU0._OSC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
ACPI Error: Method parse/execution failed \_PR.CPU0._PDC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
ACPI Error: Needed [Buffer/String/Package], found [Integer] 0xf8000337d100 (20181213/exresop-723)
ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20181213/dswexec-606)
ACPI Error: Method parse/execution failed \_PR.CPU1._OSC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
ACPI Error: Needed [Buffer/String/Package], found [Integer] 0xf80003384c80 (20181213/exresop-723)
ACPI Error: AE_AML_OPERAND_TYPE, While resolving operands for [OpcodeName unavailable] (20181213/dswexec-606)
ACPI Error: Method parse/execution failed \_PR.CPU1._OSC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
ACPI Error: Method parse/execution failed \_PR.CPU1._PDC, AE_AML_OPERAND_TYPE (20181213/psparse-689)
atrtc0: port 0x70-0x71,0x74-0x75 on acpi0
atrtc0: registered as a time-of-day clock, resolution 1.00s
Event timer "RTC" frequency 32768 Hz quality 0
hpet0: iomem 0xfed0-0xfed003ff irq 0,8 on acpi0
Timecounter "HPET" frequency 14318180 Hz quality 950
Event timer "HPET" frequency 14318180 Hz quality 450
Event timer "HPET1" frequency 14318180 Hz quality 450
Event timer "HPET2" frequency 14318180 Hz quality 450
attimer0: port 0x40-0x43 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x408-0x40b on acpi0
acpi_button0: on acpi0
acpi_button1: on acpi0
pcib0: port 0xcf8-0xcff on acpi0
pcib0: Length mismatch for 3 range: a801 vs a800
pci0: on pcib0
vgapci0: mem 0xf900-0xf9ff,0xf800-0xf8ff,0x3800-0x3fff irq 40 at device 1.0 on pci0
vgapci0: Boot video device
hdac0: mem 0xfb204000-0xfb207fff irq 41 at device 1.1 on pci0
pcib1: irq 27 at device 3.0 on pci0
pci1: on pcib1
pcib2: irq 31 at device 3.1 on pci0
pci2: on pcib2
xhci0: mem 0xfb10-0xfb107fff irq 28 at device 0.0 on pci2
xhci0: 32 bytes context size, 32-bit DMA
xhci0: Unable to map MSI-X table
usbus0: waiting for BIO
Re: Filling a 4TB Disk with Random Data
Hi Justin,

On Thu, Jun 04, 2020 at 08:39:24PM -0700, Justin Noor wrote:
| Thanks you @misc.
|
| Using dd with a large block size will likely be the course of action.
|
| I really need to refresh my memory on this stuff. This is not something we
| do, or need to do, everyday.
|
| Paul your example shows:
|
| bs=1048576
|
| How did you choose that number? Could you have gone even bigger? Obviously
| it is a multiple of 512.

It's just 1m. Yes, I could've gone bigger, but that wouldn't add much.
1m is just my default so I can more easily tell how much has been done
upon SIGINFO, as the records are then 1m large. So in my sample output,
30111 MB had been written.

| The disks in point are 4TB Western Digital Blues. They have 4096 sector
| sizes.

1m is of course a multiple of 4k :)

| I used a 16G USB stick as a sacrificial lamb to experiment with dd.
| Interestingly, there is no difference in time between 1m, 1k, and 1g. How
| is that possible? Obviously this will not be an accurate comparison of the
| WD disks, but it was still a good practice exercise.
|
| Also Paul, to clarify a point you made, did you mean forget the random data
| step, and just encrypt the disks with softraid0 crypto? I think I like that
| idea because this is actually a traditional pre-encryption step. I don't
| agree with it, but I respect the decision. For our purposes, encryption
| only helps if the disks are off the machine, and someone is trying to
| access them. This automatically implies that they were stolen. The chances
| of disk theft around here are slim to none. We have no reason to worry
| about forensics either - we're not storing nuclear secrets.

Well, you didn't mention the why: what are you trying to accomplish by
overwriting your 4TB disk with random data? If it is to prevent others
from accessing the data after you dispose of the disk, then you should
be aware of the caveat I mentioned.

I get rid of old computers by overwriting the disk(s) and installing
the latest snapshot. That's why I do this .. but it's not clear why you
want to do it.

Cheers,

Paul

| Thanks for your time
|
| On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd wrote:
|
| > On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
| > | Hi Misc,
| > |
| > | Has anyone ever filled a 4TB disk with random data and/or zeros with
| > | OpenBSD?
| >
| > I do this before disposing of old disks. Have written random data to
| > several sizes of disk, not sure if I ever wiped a 4TB disk.
| >
| > | How long did it take? What did you use (dd, openssl)? Can you share the
| > | command that you used?
| >
| > It takes quite some time, but OpenBSD (at least on modern hardware)
| > can generate random numbers faster than you can write them to spinning
| > disks (may be different with those fast nvme(4) disks).
| >
| > I simply used dd, with a large block size:
| >
| >       dd if=/dev/random of=/dev/sdXc bs=1048576
| >
| > And then you wait. The time it takes really depends on two factors:
| > the size of the disk and the speed at which you write (whatever the
| > bottleneck). If you start, you can send dd the 'INFO' signal (`pkill
| > -INFO dd` (or press Ctrl-T if your shell is set up for it with `stty
| > status ^T`)) This will give you output a bit like:
| >
| > 30111+0 records in
| > 30111+0 records out
| > 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
| >
| > Now take the size of the disk in bytes, divide it by that last number
| > and subtract the second number. This is a reasonable ball-park
| > indication of time remaining.
| >
| > Note that if you're doing this because you want to prevent others from
| > reading back even small parts of your data, you are better of never
| > writing your data in plain text (e.g. using softraid(4)'s CRYPTO
| > discipline), or (if it's too late for that), to physically destroy the
| > storage medium. Due to smart disks remapping your data in case of
| > 'broken' sectors, some old data can never be properly overwritten.
| >
| > Cheers,
| >
| > Paul 'WEiRD' de Weerd
| >
| > --
| > http://www.weirdnet.nl/

--
>[<++>-]<+++.>+++[<-->-]<.>+++[<+
+++>-]<.>++[<>-]<+.--.[-]
http://www.weirdnet.nl/
Re: Filling a 4TB Disk with Random Data
On Thu, Jun 04, 2020 at 08:39:24PM -0700, Justin Noor wrote:
> Thanks you @misc.
>
> Using dd with a large block size will likely be the course of action.
>
> I really need to refresh my memory on this stuff. This is not something we
> do, or need to do, everyday.
>
> Paul your example shows:
>
> bs=1048576
>
> How did you choose that number? Could you have gone even bigger? Obviously
> it is a multiple of 512.
>
> The disks in point are 4TB Western Digital Blues. They have 4096 sector
> sizes.
>
> I used a 16G USB stick as a sacrificial lamb to experiment with dd.
> Interestingly, there is no difference in time between 1m, 1k, and 1g. How
> is that possible? Obviously this will not be an accurate comparison of the
> WD disks, but it was still a good practice exercise.

Did you write to the raw device? That makes a big difference. At some
point increasing the buffer size will not help, since you are already
hitting some other (hw or sw) limit on the bandwidth.

	-Otto

> Also Paul, to clarify a point you made, did you mean forget the random data
> step, and just encrypt the disks with softraid0 crypto? I think I like that
> idea because this is actually a traditional pre-encryption step. I don't
> agree with it, but I respect the decision. For our purposes, encryption
> only helps if the disks are off the machine, and someone is trying to
> access them. This automatically implies that they were stolen. The chances
> of disk theft around here are slim to none. We have no reason to worry
> about forensics either - we're not storing nuclear secrets.
>
> Thanks for your time
>
> On Mon, Jun 1, 2020 at 7:28 AM Paul de Weerd wrote:
>
> > On Mon, Jun 01, 2020 at 06:58:01AM -0700, Justin Noor wrote:
> > | Hi Misc,
> > |
> > | Has anyone ever filled a 4TB disk with random data and/or zeros with
> > | OpenBSD?
> >
> > I do this before disposing of old disks. Have written random data to
> > several sizes of disk, not sure if I ever wiped a 4TB disk.
> >
> > | How long did it take? What did you use (dd, openssl)? Can you share the
> > | command that you used?
> >
> > It takes quite some time, but OpenBSD (at least on modern hardware)
> > can generate random numbers faster than you can write them to spinning
> > disks (may be different with those fast nvme(4) disks).
> >
> > I simply used dd, with a large block size:
> >
> >       dd if=/dev/random of=/dev/sdXc bs=1048576
> >
> > And then you wait. The time it takes really depends on two factors:
> > the size of the disk and the speed at which you write (whatever the
> > bottleneck). If you start, you can send dd the 'INFO' signal (`pkill
> > -INFO dd` (or press Ctrl-T if your shell is set up for it with `stty
> > status ^T`)) This will give you output a bit like:
> >
> > 30111+0 records in
> > 30111+0 records out
> > 31573671936 bytes transferred in 178.307 secs (177074202 bytes/sec)
> >
> > Now take the size of the disk in bytes, divide it by that last number
> > and subtract the second number. This is a reasonable ball-park
> > indication of time remaining.
> >
> > Note that if you're doing this because you want to prevent others from
> > reading back even small parts of your data, you are better of never
> > writing your data in plain text (e.g. using softraid(4)'s CRYPTO
> > discipline), or (if it's too late for that), to physically destroy the
> > storage medium. Due to smart disks remapping your data in case of
> > 'broken' sectors, some old data can never be properly overwritten.
> >
> > Cheers,
> >
> > Paul 'WEiRD' de Weerd
> >
> > --
> > http://www.weirdnet.nl/
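The raw-versus-block distinction Otto points at is between the two device nodes OpenBSD exposes for the same disk. A sketch, with an example unit number — these commands destroy whatever is on the disk:

```
# 'r' prefix selects the raw (character) device: each write goes to the
# disk in bs-sized chunks, so the chosen block size actually matters.
dd if=/dev/random of=/dev/rsd2c bs=1m

# The block device goes through the buffer cache, which can mask any
# difference between bs=1k, 1m and 1g (as seen with the USB stick).
dd if=/dev/random of=/dev/sd2c bs=1m
```

Which disk is sd2 on a given machine can be checked with `sysctl hw.disknames` before pointing dd at anything.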