On 26/11/24 18:29, Doug Moore wrote:
I think @kib has found the source of the problem. I've attached an
attempt to fix it.
Thanks for your work!
I noticed this is already in base and upgraded successfully; the issue
is now solved for me.
Sorry for the delay in reporting this.
--
Guido Fa
tools/test/stress2/misc/tmpfs26.sh | 179 +++++++
tools/test/stress2/misc/tmpfs27.sh |  49 ++
tools/test/stress2/misc/tmpfs28.sh |  61 +++
3 files changed, 289 insertions(+)
> $ ./all.sh -o tmpfs24.sh
> 20241128 22:33:38 all: tmpfs24.sh
> Min hole s
ay to more consistently trigger it, it will be much easier.
I ran all of the tmpfs*.sh tests from HEAD; all pass except for
tmpfs24.sh.
$ ./all.sh -o tmpfs24.sh
20241128 22:33:38 all: tmpfs24.sh
Min hole size is 4096, file size is 524288000.
data #1 @ 0, size=4096
hole #2 @ 4096, size=40
Dennis Clarke wrote on Fri, 29 Nov 2024 03:15:54 UTC:
> On 11/28/24 21:25, Dennis Clarke wrote:
> >
> > On a machine here I see top reports this with " top -CSITa -s 10"
> >
> >
> > last pid: 6680; load averages: 0.29, 0.12, 0 up 0+11:40:46 02:23:01
> > 51 processes:
On 11/28/24 21:25, Dennis Clarke wrote:
On a machine here I see top reports this with " top -CSITa -s 10"
last pid: 6680; load averages: 0.29, 0.12, 0 up 0+11:40:46 02:23:01
51 processes: 2 running, 47 sleeping, 2 waiting
CPU: 0.6% user, 0.0% nice, 0.2% system, 0.0% interrupt
On a machine here I see top reports this with " top -CSITa -s 10"
last pid: 6680; load averages: 0.29, 0.12, 0 up 0+11:40:46 02:23:01
51 processes: 2 running, 47 sleeping, 2 waiting
CPU: 0.6% user, 0.0% nice, 0.2% system, 0.0% interrupt, 99.2% idle
Mem: 587M Active, 480G Inac
On 11/28/24 10:55, Alan Somers wrote:
On Thu, Nov 28, 2024, 9:47 AM Dennis Clarke wrote:
On 11/28/24 09:52, Alan Somers wrote:
On Thu, Nov 28, 2024, 8:45 AM Dennis Clarke
wrote:
...
For "zpool import", the "-c" argument instructs zfs which cachefile to
search for importable pools. "-O"
Sean C. Farley wrote on Thu, 28 Nov 2024 18:16:16 UTC:
> On Mon, 25 Nov 2024, Mark Millard wrote:
>
> > On Nov 25, 2024, at 18:05, Mark Millard wrote:
> >
> >> Top posting going in a different direction that
> >> established a way to control the behavior in my
> >> context . . .
> >
> >
On Thu, Nov 28, 2024 at 7:49 AM wrote:
>
>
>
> > On 28 Nov 2024, at 15:04, Rick Macklem wrote:
> >
> > On Thu, Nov 28, 2024 at 4:36 AM Bob Bishop wrote:
> >>
> >> Hi,
> >>
> >>> On 27 Nov 2024, at 21:56, Rick Macklem wrote:
> >>>
> >>> Hi,
> >>>
> >>> PR#282995 reports that the "-alldirs" expor
On Mon, 25 Nov 2024, Mark Millard wrote:
On Nov 25, 2024, at 18:05, Mark Millard wrote:
Top posting going in a different direction that
established a way to control the behavior in my
context . . .
For folks new to the discoveries: the context here
is poudriere bulk builds, for USE_TMPFS=al
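For readers new to poudriere: USE_TMPFS is set in poudriere.conf. A
hypothetical fragment follows (the exact value used in the thread is
truncated above, so the value shown here is just one of the documented
settings, not necessarily the one in use):

```shell
# /usr/local/etc/poudriere.conf (sketch): USE_TMPFS controls which parts
# of a bulk build land on tmpfs. Documented values include "yes", "no",
# "all", or a list such as "data wrkdir". The value below is
# illustrative only.
USE_TMPFS=all
```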
On Nov 28, 2024, at 04:19, Andriy Gapon wrote:
> On 28/11/2024 13:42, Dag-Erling Smørgrav wrote:
>> Andriy Gapon writes:
>>> FWIW, I am not sure if it's relevant but I am seeing a similar pattern
>>> of corruption on tmpfs although in a different context, on FreeBSD
>>> 13.3.
>> Not relevant at
On Thu, Nov 28, 2024, 9:47 AM Dennis Clarke wrote:
> On 11/28/24 09:52, Alan Somers wrote:
> > On Thu, Nov 28, 2024, 8:45 AM Dennis Clarke
> wrote:
> >
> ...
> >
> > For "zpool import", the "-c" argument instructs zfs which cachefile to
> > search for importable pools. "-O", on the other hand, s
> On 28 Nov 2024, at 15:04, Rick Macklem wrote:
>
> On Thu, Nov 28, 2024 at 4:36 AM Bob Bishop wrote:
>>
>> Hi,
>>
>>> On 27 Nov 2024, at 21:56, Rick Macklem wrote:
>>>
>>> Hi,
>>>
>>> PR#282995 reports that the "-alldirs" export option is broken,
>>> since it allows an export where the
On 11/28/24 09:52, Alan Somers wrote:
On Thu, Nov 28, 2024, 8:45 AM Dennis Clarke wrote:
...
For "zpool import", the "-c" argument instructs zfs which cachefile to
search for importable pools. "-O", on the other hand, specifies how the
cachefile property should be set after the pool is impor
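As a hedged sketch of the -c behavior described above (the pool name
"mypool" and the cachefile path are placeholders, not taken from the
thread):

```shell
# -c tells "zpool import" which cachefile to search for importable
# pools; -a imports every pool found there, -N skips mounting.
zpool import -c /etc/zfs/zpool.cache -a -N

# The pool's own cachefile property can then be inspected:
zpool get cachefile mypool
```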
On 11/28/24 10:02, Dimitry Andric wrote:
On 28 Nov 2024, at 14:05, Dennis Clarke wrote:
This is a baffling problem wherein two zpools no longer exist after
boot. This is :
...
titan# camcontrol devlist
<…> at scbus0 target 0 lun 0 (pass0,ada0)
<…> at scbus1 target 0 lun 0 (pass1,ada
On Thu, Nov 28, 2024 at 4:36 AM Bob Bishop wrote:
>
> Hi,
>
> > On 27 Nov 2024, at 21:56, Rick Macklem wrote:
> >
> > Hi,
> >
> > PR#282995 reports that the "-alldirs" export option is broken,
> > since it allows an export where the directory path is not a mount point.
> >
> > I'll admit I did no
On 28 Nov 2024, at 14:05, Dennis Clarke wrote:
>
> This is a baffling problem wherein two zpools no longer exist after
> boot. This is :
>
> titan# uname -apKU
> FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1
> main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
> root@titan:/usr
On Thu, Nov 28, 2024, 8:45 AM Dennis Clarke wrote:
> On 11/28/24 08:52, Alan Somers wrote:
> > On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke
> wrote:
> >
> >>
> >> This is a baffling problem wherein two zpools no longer exist after
> >> boot. This is :
> .
> .
> .
> > Do you have zfs_enable="YES"
On 11/28/24 08:52, Alan Somers wrote:
On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke wrote:
This is a baffling problem wherein two zpools no longer exist after
boot. This is :
.
.
.
Do you have zfs_enable="YES" set in /etc/rc.conf? If not then nothing will
get imported.
Regarding the cachefil
On 11/28/24 09:10, Juraj Lutter wrote:
Are there any differences in each pool’s properties? (zpool get all …)
Well, they are all different. There is a pool called leaf which is a
mirror of two disks on two SATA/SAS backplanes. There is proteus which
*was* working great over iSCSI and then t
On 11/28/24 08:58, Ronald Klop wrote:
Btw:
The /etc/rc.d/zpool script looks into these cachefiles:
for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
I didn’t check where the cachefile pool property is used.
Hope this helps resolving the issue. Or maybe helps you to provide more
> On 28 Nov 2024, at 15:06, Dennis Clarke wrote:
>
> On 11/28/24 08:52, Alan Somers wrote:
>> On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke wrote:
>>>
>
> See the FREEBSD CTLDISK 001? That is over iSCSI.
>
> However, as I say, the devices exist but the pools vanish
> unless I import them
On 11/28/24 08:52, Alan Somers wrote:
On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke wrote:
This is a baffling problem wherein two zpools no longer exist after
boot. This is :
...
Do you have zfs_enable="YES" set in /etc/rc.conf? If not then nothing will
get imported.
Regarding the cachefile
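A minimal /etc/rc.conf fragment for the setting mentioned above (only
the zfs_enable knob itself comes from the thread):

```shell
# /etc/rc.conf: without this, the ZFS rc scripts do not run at boot,
# so no pools are imported automatically.
zfs_enable="YES"
```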
Btw:
The /etc/rc.d/zpool script looks into these cachefiles:
for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
I didn’t check where the cachefile pool property is used.
Hope this helps resolving the issue. Or maybe helps you to provide more information about your setup.
Regar
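The cachefile probe described above can be sketched roughly as follows
(a paraphrase of the loop as described, not the actual /etc/rc.d/zpool
script):

```shell
# Try each candidate cachefile in order and import pools from the
# first readable one (hypothetical sketch, not the real rc.d script).
for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
    if [ -r "$cachefile" ]; then
        zpool import -c "$cachefile" -a -N
        break
    fi
done
```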
On Thu, Nov 28, 2024, 7:06 AM Dennis Clarke wrote:
>
> This is a baffling problem wherein two zpools no longer exist after
> boot. This is :
>
> titan# uname -apKU
> FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1
> main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
> root@titan:/us
Are the other disks available at the moment the boot process does zpool import?
Regards,
Ronald
From: Dennis Clarke
Date: 28 November 2024 14:06
To: Current FreeBSD
Subject: zpools no longer exist after boot
This is a baffling problem wherein two zpools no longer exist after
boot. This
This is a baffling problem wherein two zpools no longer exist after
boot. This is :
titan# uname -apKU
FreeBSD titan 15.0-CURRENT FreeBSD 15.0-CURRENT #1
main-n273749-4b65481ac68a-dirty: Wed Nov 20 15:08:52 GMT 2024
root@titan:/usr/obj/usr/src/amd64.amd64/sys/GENERIC-NODEBUG amd64 amd64
1500
Hi,
> On 27 Nov 2024, at 21:56, Rick Macklem wrote:
>
> Hi,
>
> PR#282995 reports that the "-alldirs" export option is broken,
> since it allows an export where the directory path is not a mount point.
>
> I'll admit I did not recall this semantic for -alldirs and I now see it is
> only
> doc
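For readers unfamiliar with the option under discussion, here is a
hypothetical /etc/exports line using -alldirs (the path and host are
placeholders; the PR's point is that the exported path is supposed to
be a mount point):

```shell
# /etc/exports sketch: -alldirs lets clients mount any directory below
# the export, which is only meaningful when /data is a mount point.
/data -alldirs -maproot=root client.example.com
```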
On 28/11/2024 13:42, Dag-Erling Smørgrav wrote:
Andriy Gapon writes:
FWIW, I am not sure if it's relevant but I am seeing a similar pattern
of corruption on tmpfs although in a different context, on FreeBSD
13.3.
Not relevant at all. In this case the file is not actually corrupted
but `insta
Andriy Gapon writes:
> FWIW, I am not sure if it's relevant but I am seeing a similar pattern
> of corruption on tmpfs although in a different context, on FreeBSD
> 13.3.
Not relevant at all. In this case the file is not actually corrupted
but `install(1)` skips over some of it when copying beca
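The copying behavior under discussion involves files with holes. As a
generic illustration only (the filename and size are made up, and this
does not reproduce the install(1) issue itself), a hole can be created
by seeking past end-of-file before writing:

```shell
# Write one byte at offset 1048575: the first ~1 MiB is then a hole.
# The apparent size is 1048576 bytes, but few blocks are allocated.
dd if=/dev/zero of=/tmp/sparse.bin bs=1 count=1 seek=1048575 status=none
ls -l /tmp/sparse.bin   # apparent size
du -k /tmp/sparse.bin   # allocated space, typically much smaller
```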
On 26/11/2024 17:52, Mark Millard wrote:
libsass.so.1.0.0 still has .got.plt starting with (this time):
2bed60
2bed70
2bed80
2