On Wed, Nov 17, 2010 at 09:38:46AM +0200, Pavel Zholkover wrote:
> I did a Go runtime port for x86; it is already in the main hg repository.
> Right now it is cross-compiled from Linux, for example (GOOS=plan9 8l -s
> when linking; note the -s, it is required).
>
I have Plan 9 versions of the t
Hi,
I did a Go runtime port for x86; it is already in the main hg repository.
Right now it is cross-compiled from Linux, for example (GOOS=plan9 8l -s
when linking; note the -s, it is required).
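Concretely, the build looks something like this (hello.go is a stand-in
name; 8g and 8l are the 386 compiler and linker of the current toolchain,
and the result lands in 8.out by default):

	$ 8g hello.go
	$ GOOS=plan9 8l -s hello.8	# -s strips the symbol table; the port needs it
	$ # copy 8.out to the Plan 9 machine and run it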
There were a few changes made to the upstream so the following patch
is needed until the fix is comm
> #I0tcpack pc f01ff12a dbgpc ...
and what's at that pc?
- erik
> On Wed, Nov 17, 2010 at 06:33:13AM +0100, cinap_len...@gmx.de wrote:
> > sorry for not being clear. what i meant was that qpc is for the last
> > qlock we succeeded in acquiring. it's *not* the one we are spinning on.
> > also, qpc is not set to nil on unlock.
> >
> Ok, so we set qpctry (qpcdbg?)
On Wed, Nov 17, 2010 at 08:45:00AM +0200, Lucio De Re wrote:
> ... and from whatever the other proc is that also contributes to this
> jam. I don't have the name right in front of me, but I will post it
> separately. As far as I know it's always those two that interfere with
> exportfs and usually
On Wed, Nov 17, 2010 at 06:33:13AM +0100, cinap_len...@gmx.de wrote:
> sorry for not being clear. what i meant was that qpc is for the last
> qlock we succeeded in acquiring. it's *not* the one we are spinning on.
> also, qpc is not set to nil on unlock.
>
Ok, so we set qpctry (qpcdbg?) to qpc befor
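In other words, perhaps just one extra line near the top of qlock()
(qpctry is the invented field from the suggestion above; getcallerpc is
the existing kernel helper):

	q->qpctry = getcallerpc(&q);	/* hypothetical: record the attempt site before the spin */
	lock(&q->use);			/* so a proc stuck here can be attributed to a caller */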
On Wed, Nov 17, 2010 at 06:22:33AM +0100, cinap_len...@gmx.de wrote:
>
> qpc is just the caller of the last successfully *acquired* qlock.
> what we know is that the exportfs proc spins in the q->use taslock
> called by qlock() right? this already seems weird... q->use is held
> just long eno
sorry for not being clear. what i meant was that qpc is for the last
qlock we succeeded in acquiring. it's *not* the one we are spinning on.
also, qpc is not set to nil on unlock.
--
cinap
> > acid: src(0xf0148c8a)
> > /sys/src/9/ip/tcp.c:2096
> > 2091 if(waserro
qpc is just the caller of the last successfully *acquired* qlock.
what we know is that the exportfs proc spins in the q->use taslock
called by qlock() right? this already seems weird... q->use is held
just long enough to test q->locked and manipulate the queue. also
sched() will avoid switch
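for context, a from-memory sketch of the shape of qlock() in port/ (not a
verbatim copy; up, Proc and QLock are the kernel's own):

void
qlock(QLock *q)
{
	Proc *p;

	lock(&q->use);			/* the taslock exportfs is seen spinning on */
	if(!q->locked){
		q->locked = 1;
		unlock(&q->use);	/* held only across this test */
		return;
	}
	p = up;				/* already locked: queue ourselves */
	p->qnext = nil;
	if(q->tail == nil)
		q->head = p;
	else
		q->tail->qnext = p;
	q->tail = p;
	p->state = Queueing;
	unlock(&q->use);		/* released before sleeping, hence "just long enough" */
	sched();			/* switch away until qunlock() readies us */
}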
>
>
> I'll try without the bloom filter.
>
Now it's working... I probably don't need this enhancement anyway, but at
least it appears to be working now. Unvac of a previously generated score
is working fine.
Dave
>
>> On Tuesday, November 16, 2010, David Leimbach wrote:
>> > On Tuesday, Nove
On Tue, Nov 16, 2010 at 8:09 PM, David Leimbach wrote:
> Could sparse files be an issue? Bloom always shows up wrong when I
> restart.
>
Nope... Didn't make a difference it seems.
I recreated my venti setup, and it starts ok. I do a vac and an unvac, then
kill it and restart and get the follo
> Hm, I thought I understood waserror(), but now I'm sure I don't. What
> condition is waserror() attempting to handle here?
waserror() sets up an entry in the error stack.
if there is a call to error() before poperror(),
then that entry is popped and waserror() returns
1. it's just like setjmp.
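in kernel C the idiom is, schematically (f and its argument are stand-ins):

void
f(Queue *q)
{
	qlock(q);
	if(waserror()){		/* like setjmp(): returns nonzero when error() unwinds to here */
		qunlock(q);	/* undo our own state, then keep unwinding */
		nexterror();
	}
	/* ... work that may call error() ... */
	poperror();		/* pop our entry; later errors go to the next handler out */
	qunlock(q);
}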
>> Now, the qunlock(s) should not precede the qlock(s); this is the first
>> case in this procedure:
>
> it doesn't. waserror() can't be executed before the code
> following it. perhaps it could be more carefully written
> as
>
>> > 2095 qlock(s);
>> > 2091 if(waserr
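presumably meaning, reconstructed from the two quoted line numbers:

	qlock(s);
	if(waserror()){
		qunlock(s);
		nexterror();
	}
	qunlock(tcp);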
> > acid: src(0xf0148c8a)
> > /sys/src/9/ip/tcp.c:2096
> > 2091 if(waserror()){
> > 2092 qunlock(s);
> > 2093 nexterror();
> > 2094 }
> > 2095 qlock(s);
> >>2096 qunlock(tcp);
> > 2097
Could sparse files be an issue? Bloom always shows up wrong when I restart.
On Tuesday, November 16, 2010, David Leimbach wrote:
> On Tuesday, November 16, 2010, Russ Cox wrote:
>> On Tue, Nov 16, 2010 at 5:43 PM, David Leimbach wrote:
>>> I'm trying to figure out how to correctly sync a plan9
> Well, here is an acid dump, I'll inspect it in detail, but I'm hoping
> someone will beat me to it (not hard at all, I have to confess):
>
> rumble# acid /sys/src/9/pc/9pccpuf
> /sys/src/9/pc/9pccpuf:386 plan 9 boot image
> /sys/lib/acid/port
> /sys/lib/acid/386
>
[ ... ]
This bit looks suspic
On Tue, Nov 16, 2010 at 10:19 PM, David Leimbach wrote:
> On Tuesday, November 16, 2010, Russ Cox wrote:
>> On Tue, Nov 16, 2010 at 5:43 PM, David Leimbach wrote:
>>> I'm trying to figure out how to correctly sync a plan9port venti instance so
>>> I can start it back up again and have it actuall
On Tuesday, November 16, 2010, Russ Cox wrote:
> On Tue, Nov 16, 2010 at 5:43 PM, David Leimbach wrote:
>> I'm trying to figure out how to correctly sync a plan9port venti instance so
>> I can start it back up again and have it actually function :-).
>> using venti/sync doesn't appear to get the
On Tue, Nov 16, 2010 at 5:43 PM, David Leimbach wrote:
> I'm trying to figure out how to correctly sync a plan9port venti instance so
> I can start it back up again and have it actually function :-).
> using venti/sync doesn't appear to get the job done...
It should. Not using venti/sync should
On Tue, 16 Nov 2010 14:43:20 PST David Leimbach wrote:
>
> I'm trying to figure out how to correctly sync a plan9port venti instance so
> I can start it back up again and have it actually function :-).
>
> using ve
I'm trying to figure out how to correctly sync a plan9port venti instance so
I can start it back up again and have it actually function :-).
using venti/sync doesn't appear to get the job done...
Dave
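For reference, the sync step itself is just this (the address is a
stand-in for my setup; as far as I can tell plan9port's venti/sync takes
the server address from $venti):

	% venti=tcp!127.0.0.1!17034 venti/sync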
On Mon, Nov 15, 2010 at 19:32, wrote:
>> I always had the impression that the object formats
>> used by the various ?l are more for kernels and the
>> various formats expected by loaders than for userland
>> apps. For userland, I would think the intent is for
>> there to be a single consistent o
> cinap is right, the bug is in the kernel. we know
> that because it's a lock loop. that can only happen
> if the kernel screws up. also, the address is a kernel
> address (starts with 0xf).
Well, here is an acid dump, I'll inspect it in detail, but I'm hoping
someone will beat me to it (not h
On 16 November 2010 16:32, Charles Forsyth wrote:
>>unfortunately, there's just not enough bits to easily export
>>(an export)+.
>
> i think that works: it checks for clashes.
only when a file is actually walked to.
of course, that's fine in practise - the only thing
that actually cares about qi
>unfortunately, there's just not enough bits to easily export
>(an export)+.
i think that works: it checks for clashes.
> i'm sure that somewhere it was suggested that high order bits of Qid.path
> should be avoided by file servers to allow for their use to make qids unique
> but i haven't been able to find that.
unfortunately, there's just not enough bits to easily export
(an export)+.
i wonder if there's some wa
>i'd say it's a bug. fossil could easily reserve some number of bits
>of the qid (say 20 bits) to make the files in the dump unique
>while still allowing space for a sufficient number of live files.
that's possibly closest to the intent of the qid discussion in intro(5),
although it's not clear t
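a sketch of the bit-reservation idea (the names and the 20/44 split are
illustrative only; uvlong is Plan 9's unsigned 64-bit integer):

enum {
	Pathbits	= 44,	/* low 44 bits: live file path; high 20 bits: dump/epoch id */
};

uvlong
mkqidpath(uint epoch, uvlong path)
{
	return ((uvlong)epoch << Pathbits) | (path & (((uvlong)1 << Pathbits) - 1));
}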
> I tried acid, but I'm just not familiar enough with it to make it
> work. I tried
>
> rumble% acid 2052 /bin/exportfs
> /bin/exportfs:386 plan 9 executable
> /sys/lib/acid/port
> /sys/lib/acid/386
> acid: src(0xf01e7377)
> no source for ?file?
cinap is right
On Mon, Nov 15, 2010 at 2:00 PM, Dan Adkins wrote:
> That brings up a question of interest to me. How do you effectively
> read ahead with the 9p protocol? Even if you issued many read
> requests in parallel, the server is allowed to return less data than
> was asked for. You'll end up with hol
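The hole in miniature: fire reads at offsets 0, 8192 and 16384, and if the
first returns only 4096 bytes, no outstanding request covers bytes
4096-8191. The usual repair is a readn-style loop that reissues from the
offset actually reached (a sketch over Plan 9's pread; the name preadn is
invented):

long
preadn(int fd, char *buf, long n, vlong off)
{
	long m, tot;

	for(tot = 0; tot < n; tot += m){
		m = pread(fd, buf+tot, n-tot, off+tot);	/* may return less than asked */
		if(m <= 0)
			break;				/* eof or error */
	}
	return tot;
}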
On 16 November 2010 01:18, erik quanstrom wrote:
>> > i claim that a fs with this behavior would be broken. intro(5)
>> > seems to agree with this claim, unless i'm misreading.
>>
>> you're right - fossil is broken in this respect, as is exportfs
>> {cd /mnt/term/dev; ls -lq | sort} for a quick d