Until I upgraded my 9front, I was using a fossil server (from 9legacy) on my
9front FreeBSD Bhyve system, probably 9front release Emailschaden.
Now it doesn't work anymore. Remaking the programs did not help either. The
fossil server now hangs immediately after start. Neither file nor console
serv
Thank you for looking into my problem.
As Ori asked, here is a stack trace from the probably hanging thread.
/proc/504/text:amd64 plan 9 executable
/sys/lib/acid/port
/sys/lib/acid/amd64
acid: lstk()
pread(a0=0x4)+0xe /sys/src/libc/9syscall/pread.s:6
read(buf=0x44f158,n=0x4)+0x27 /sys/src/lib
I have put a file containing the lstk output of the 4 threads at
http://pkeus.de/~wb/threads.lstk
In the meantime, I tried to use an older 9front 9pc64 kernel, but this didn't
help either.
Unfortunately, I did not make a snapshot before the last sysupdate.
The next thing I try is setting up a fresh c
The venti server is in active use, e.g. for providing my backups via vacfs and
vnfs on FreeBSD.
I even checked that a 2nd fossil (this time on a Raspberry Pi4) is working.
--
9fans: 9fans
Permalink:
https://9fans.topicbox.com/groups/9fans/Tac8d983292c826
Solved.
Noam was right. Restarting venti made things work again.
Some replies seem to consider openat() superfluous. In fact, IMHO the
traditional syscall open() should be deprecated.
On FreeBSD at least, the libc open() does in fact use the real thing,
__sys_openat() (in /usr/src/lib/libc/sys/open.c), via the interposing table.
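On the openat() point, a small sketch of the semantics it adds, using Python's dir_fd support as a stand-in for openat(2) (the temporary filenames here are made up for the demo):

```python
import os
import tempfile

def openat_demo():
    # openat(2) semantics: resolve a *relative* path against a directory
    # file descriptor instead of the current working directory.  Plain
    # open(path, flags) is then just openat(AT_FDCWD, path, flags).
    d = tempfile.mkdtemp()
    with open(os.path.join(d, "f"), "w") as fp:
        fp.write("hello")
    dfd = os.open(d, os.O_RDONLY)                 # directory fd
    fd = os.open("f", os.O_RDONLY, dir_fd=dfd)    # ~ openat(dfd, "f", O_RDONLY)
    try:
        return os.read(fd, 16)
    finally:
        os.close(fd)
        os.close(dfd)
```

Because the directory fd pins the resolution point, the lookup is immune to concurrent chdir() and to races on the directory path, which is the main argument for making openat() the primitive.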
Some days ago, I experimented w
I am experiencing a strange problem. When I try to boot my 9front system using
a file server over tcp, I get no /net/tls.
The same kernel booting on a local hjfs filesystem has it. I think the
files in both systems are also the same.
I can drawterm only to the latter configuration.
Config
On Saturday, 27 April 2024, at 11:49 PM, cinap_lenrek wrote:
> i suppose the following is missing in your /lib/namespace:
> bind -a #a /net
This binding has to be very early to be effective. It is done in
/sys/src/9/boot/bootrc. Why does it disappear when the filesystem is not local?
---
I just inserted the line
On Saturday, 27 April 2024, at 11:49 PM, cinap_lenrek wrote:
> bind -a #a /net
into termrc after the ip init.
Now I have /net/tls, but
aux/listen1 'tcp!*!rcpu' /rc/bin/service/tcp17019
still does not allow drawterm access.
On Sunday, 28 April 2024, at 5:32 PM, cinap_lenrek wrote:
> because the namespace from bootrc is not inherited.
> init creates a completely new namespace using
> /lib/namespace from your root file-system:
Thank you. I copied /lib/namespace to the file server. Now it works as it
should.
When I use drawterm to access 9front from FreeBSD with a running factotum, no
additional user identification is needed.
This is fine, as long as I do not try to use another identity. Factotum seems
to override any -u user option in the drawterm command. It always logs me in as
myself, even if I
IMHO, we would like to get some more info about what happened.
1) Was it the fossil write buffer?
2) Was it the venti index?
We can safely exclude the venti arenas, can't we?
As you mentioned a 2nd set of SSDs, how long did the last one hold?
Why is the whole set affected? RAID-x?
I am using fossil on plan9port (which should be similar to 9legacy) from
9front. The only thing which I needed was to enable p9sk1 for the hostowner on
9front (the auth server) and a factotum entry for this in the file server,
IIRC.
Installing fossil on 9front is not really difficult. Fossil is just a userland
server, which probably can even be copied as a binary, as long as the CPU is
the same.
Here is the hostowner factotum/ctl readout from the auth server:
key proto=p9sk1 user=bootes dom=fritz.box !password?
key proto=
Just a simple note. When I compared the fossil version posted by Moody in the
original discussion thread to the one I am using (and IIRC it is the one in
the 9legacy git repository), I found that they differed in 2 points. One was
the increase of a msg buffer, which is probably no big issue, bu
I would like to refresh my questions from May 4th.
Can it be the case that the venti index file exhibits a disastrous write
pattern for SSDs?
I presume that each new block written to venti causes a random block to be
rewritten in the index file, until the bucket is full (after 215 writes).
Gi
For the napkin calculation: on disk, the IEntry is 38 bytes. Alas, writes
always occur in (the SSD-internal) block size. So, essentially (assuming a
4096-byte block size, which is quite optimistic), we have a write efficiency of
less than 1 percent.
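The napkin calculation can be checked in a few lines; the 38-byte IEntry and the 8192-byte bucket match the numbers in this thread, and the 4096-byte flash block is the stated optimistic assumption:

```python
# Numbers from the post: a 38-byte on-disk IEntry, an 8192-byte index
# bucket, and an (optimistic) 4096-byte SSD write block.
IENTRY_SIZE = 38
BUCKET_SIZE = 8192
SSD_BLOCK = 4096

# A bucket fills after roughly this many single-entry writes...
entries_per_bucket = BUCKET_SIZE // IENTRY_SIZE   # -> 215

# ...and each such write rewrites at least one whole flash block to land
# 38 bytes of payload.
write_efficiency = IENTRY_SIZE / SSD_BLOCK        # < 1 percent
```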
A good firmware in the ssd could avoid needing a n
> i'm curious what straightforward storage structure wouldn't be. trying to
> second-guess ssd firmware seems a tricky design criterion.
>
Designing for minimal disk stress: Never rewrite data already written to disk.
Now we have big and quite cheap main memory.
I don't criticize the historica
After studying Steve Stallion's SSD venti disaster, I decided to make my own
attempt to fix the issues of venti.
Despite my reservations on the lasting wisdom of some of the design choices, I
try to use the traditional arena disk layout.
Only the on-disk index is replaced with a trie-based in-memory
So far, I have failed to push my changes to GitHub.
If anybody is interested in the files, they are available in
http://pkeus.de/~wb/mventi
Copy the files into $PLAN9/src/cmd/venti/srv
and try "mk o.buildtrie" .
I also inserted my code into index.c to check that index lookup gives the same
result
On Thursday, 13 June 2024, at 6:08 AM, ori wrote:
> Sounds fairly interesting, though I'm curious how it compares;
> my guess was that because of the lack of locality due to using
> hashes for the score, a trie wouldn't be that different from a
> hash table.
You are right. Lack of locality is a main issu
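A rough sketch of why, assuming uniformly random scores and a 16-way trie: distinguishing n keys takes about log2(n) bits of score, so a lookup walks about log2(n)/4 levels, each a dependent, cache-unfriendly pointer chase, much like hash-table probing:

```python
import math

def trie_depth_estimate(n, bits_per_level=4):
    # With uniformly random scores (they are cryptographic hashes), about
    # log2(n) bits are needed to tell n keys apart, so a 16-way trie
    # (4 bits per level) resolves a lookup in roughly log2(n)/4 levels.
    # No two successive lookups share a path, so there is no locality
    # to exploit -- the same problem a hash table has.
    return math.log2(n) / bits_per_level

# e.g. a million blocks resolve in about 5 levels
```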
I updated http://pkeus.de/~wb/mventi, adding a file mventi.c which is venti.c +
parts of index.c (to avoid linking the real index.o).
I have some performance data now.
I had to add a 4th arena partition, which sealed the old partitions, so
readonly access is sufficient for them.
I served the par
Some news on my effort.
This morning I used my venti to do real work, serving the fossil filesystem to
boot a 386 vm.
So far, it looks good. I did not try to write yet.
I changed my trie.c for some optimisations:
I ditched my union trienode for separate struct trieleaf and struct trienode
with
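A hypothetical sketch of such a split (names and details assumed, not the actual mventi code): interior nodes carry only a 16-slot child array, leaves carry only score and address, and a leaf is pushed down a level whenever two scores collide on a prefix:

```python
class TrieNode:                      # interior node: children only
    __slots__ = ("children",)
    def __init__(self):
        self.children = [None] * 16  # one slot per 4-bit nibble

class TrieLeaf:                      # leaf: score and arena address only
    __slots__ = ("score", "addr")
    def __init__(self, score, addr):
        self.score = score           # score as bytes
        self.addr = addr             # clump address in the arenas

def nibble(score, i):
    # i-th 4-bit digit of the score, high nibble first
    b = score[i // 2]
    return (b >> 4) if i % 2 == 0 else (b & 0xF)

def insert(root, score, addr):
    node, depth = root, 0
    while True:
        slot = nibble(score, depth)
        child = node.children[slot]
        if child is None:
            node.children[slot] = TrieLeaf(score, addr)
            return
        if isinstance(child, TrieNode):
            node, depth = child, depth + 1
            continue
        if child.score == score:
            child.addr = addr        # same score: update in place
            return
        # two distinct scores share this prefix: split with an interior node
        inner = TrieNode()
        inner.children[nibble(child.score, depth + 1)] = child
        node.children[slot] = inner
        node, depth = inner, depth + 1

def lookup(root, score):
    node, depth = root, 0
    while True:
        child = node.children[nibble(score, depth)]
        if child is None:
            return None
        if isinstance(child, TrieLeaf):
            return child.addr if child.score == score else None
        node, depth = child, depth + 1
```

The payoff of the split is that leaves need no 16-slot child array, and with random scores leaves vastly outnumber interior nodes.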
I am actively using plan9port on FreeBSD 14. So far, I found some programs
which do not work: these are some servers where threadmaybackground() should be
included, vacfs and vnfs (as I reported under issues on the plan9port GitHub).
9pfuse also is not expected to be usable, as the fusekernel and the su
After I managed to avoid storing the full scores in main memory, I am confident
that the files in http://pkeus.de/~wb/mventi are worth a try for the public.
So far, I don't see many restrictions on use. The arena on-disk format is
unchanged.
I now see chances to further shorten the trienodes t
*** 9pserve.c Fri Mar 1 15:46:35 2024
--- /home/wb/plan9port/src/cmd/9pserve.c	Fri Jun 21 09:43:38 2024
***************
*** 86,92 ****
Queue *inq;
int verbose = 0;
int logging = 0;
! int msize = 8192+24;
u32int xafid = NOFID;
int attached;
int versioned;
--- 86,92 ----
Queue
On Friday, 21 June 2024, at 11:47 AM, hruodr wrote:
> Thanks! There is in FreeBSD a plan9port port and package:
>
The port may be ok, but the package is deficient. If you try to use the rc from
the package you will find a very inconvenient place for rcmain compiled in. So,
this rc is not usable
Working on my data set, I found that there are several duplicate entries in the
arenas.
I am confident that the data are consistent, though.
What may be the circumstances under which this happens more than once?
The arenas have been written by p9p venti server on my wdmycloud NAS.
Perhaps I should
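One way to look for such duplicates is to scan (score, address) pairs, such as those dumped by printarenas, and collect scores that occur more than once; the simple two-column input format assumed here may differ from the actual printarenas output:

```python
from collections import defaultdict

def find_duplicates(lines):
    # Each line is assumed to hold "score address"; extra columns and
    # malformed lines are ignored.
    seen = defaultdict(list)
    for ln in lines:
        fields = ln.split()
        if len(fields) < 2:
            continue
        seen[fields[0]].append(fields[1])
    # keep only scores stored at more than one address
    return {score: addrs for score, addrs in seen.items() if len(addrs) > 1}
```

Duplicates are harmless for consistency (the same score always names the same data) but waste arena space, which is presumably why dedup at write time is worth investigating.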
For FreeBSD/plan9port users:
I cobbled a geom module to serve a ffs filesystem backup created by p9p vbackup
as a readonly disk device on FreeBSD.
The source and a Read.me file are found in http://pkeus.de/~wb/vgate
I now have a vague idea of what may have happened.
The really strange thing is that I did not write to this venti at all (at least
not intentionally).
Now I tried to repair the config by deleting the last arena partition,
formatting a new one, and running buildindex from scratch.
After some time, when I use
The symptoms look like a disk-full error.
The main reason for the remaining mventi issues was that I ignored the message
about clump miscalculation in one arena. Why it is not really destructive in
the original venti, I don't know.
So, if this message occurs, mventi will not serve the venti protocol.
Clump miscalculation indicates th
What about swap memory?
I wonder why the process works if slowed down by manual scrolling.
On Sunday, 7 July 2024, at 8:23 PM, Marco Feichtinger wrote:
> No swap.
Are you sure that this is not your problem?
On Tuesday, 30 July 2024, at 7:29 AM, Marco Feichtinger wrote:
> So I am curious how does it work,
> how does one set it up, so the arenas get mirrored automatically,
> and why do you use it instead of fs(3) mirror?
Adding to this ill-fated thread. Mirroring venti arenas was just a stillborn
idea
Noam is right in most of his text.
But I have to add that the following sentence should be taken with a grain of
salt.
> If the index is on a separate drive, though - e.g. index on SSD, data on HDDs
> - mirrorarenas can be used to keep the arenas in sync between multiple (sets
> of) HDDs,
On Sunday, 4 August 2024, at 2:57 PM, noam wrote:
> i'm unsure which thread you're talking about; can you link me to
> more info on mventi? I've been working on a better venti implementation
> as well [1], and it'd be nice to have another reference :)
The other thread is "yet another try to fixup venti
Now I have managed to get my files pushed to github. So the new location of my
plan9port fork is now at
https://github.com/vestein463/plan9port
In addition, there are a couple of FreeBSD-related programs hosted there, e.g.
a program ggatev,
ggatev,
which can be used to serve a vbackup'ed UFS filesystem as a geom
Though I see your point, you will need to show a really important improvement
to justify any change in the 9p protocol.
Best thing you can do: Implement it on your own source, and show the benefits.
I don't think that this demand is a good idea on any OS. The ftp protocol is
designed to transfer files as a whole.
To execute on a demand-paging system you need to access the file randomly.
As far as I can tell, it is not a bug but a feature that every window has its
own idea of the location of dot,
as is documented in the sam man page.
Thanks for this reminder.
To install on FreeBSD, Make.Linux has to be copied to Make.FreeBSD and the
usual /usr/local additions made; you need gmake instead of the standard bmake.
> -I /usr/local/include
> -L /usr/local/lib -lreadline
> and -fcommon in CFLAGS
I just updated my drawterm from https://git.9front.org/plan9front/drawterm .
A minor issue was a missing library -lasound in the Makefile Make.freebsd
The resulting binary now hangs after the cpu: and auth: prompts.
No user: prompt needed because factotum is active.
An older version just works o
On Sunday, 27 October 2024, at 10:57 AM, Ole-Hjalmar Kristensen wrote:
> I have tried it out by feeding it with the output from printarenas, and
> it seems to work reasonably well. Does anyone have any good ideas about
> how to incrementally extract the set of scores that has been added to a
> venti
The version I used to use was that from 23 Apr 24.
So far, I have identified commit 877bce095a192ead0e9b6e0d5ce3071482cf0f6e from
8 Sep 24 as the culprit,
which implements procinterrupt().
Looking at this patch, I don't see anything which may work differently on
FreeBSD than on Linux or whatever the c
Using vac for Unix backup has some shortcomings.
I, too, used vbackup on FreeBSD (UFS2). Whether the different block size (on
most disks: 32kB) versus the 8kB vac default may be detrimental is disputable.
Now, I have moved to ZFS, and a different approach is needed. ZFS's own backup
tools are useful only