Hi George,
George Hartzell wrote on Sat, Jan 25, 2003 at 06:38:07PM -0800:
[..]
> 'call open("/dev/ad0", 1)', and 'call open("/dev/ad0", 2)' made it clear
> that anything that would write to the disk was failing.
[..]
> disklabel: /dev/ad0s2: Operation not permitted
[..]
> So, my questions are:
>
>
Hiya
Is there a tool for creating the .fnt files that syscons uses? They
appear to be uuencoded binary files, but I can't find any info on
the file format.
Cheers,
--Jon
http://www.witchspace.com
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message
In message <[EMAIL PROTECTED]>, Jonathan Belson writes:
>Hiya
>
>
>Is there a tool for creating the .fnt files that syscons uses? They
>appear to be uuencoded binary files but I can't find out any info on
>the file format.
It's a raw bitmap font; this is from iso-8x14:
Hex Binary
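A short sketch of the layout being described, assuming (as the hex/binary
listing above suggests) that each glyph in an 8x14 font is 14 raw bytes,
one byte per scan line, most significant bit leftmost. The glyph bytes
below are invented for illustration, not taken from the real iso-8x14 file.

```python
# Hypothetical letter 'A' as 14 scan-line bytes (MSB = leftmost pixel).
GLYPH_A = [0x00, 0x00, 0x10, 0x38, 0x6C, 0xC6, 0xC6, 0xFE,
           0xC6, 0xC6, 0xC6, 0xC6, 0x00, 0x00]

def render(glyph):
    """Return the glyph as strings of '#' (set bit) and '.' (clear bit)."""
    return [format(b, "08b").replace("0", ".").replace("1", "#")
            for b in glyph]

for row in render(GLYPH_A):
    print(row)
```

Each hex byte maps directly to one 8-pixel row, which is why the original
file pairs a Hex and a Binary column.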
Daniel Lang writes:
> Hi George,
>
> George Hartzell wrote on Sat, Jan 25, 2003 at 06:38:07PM -0800:
> [..]
> > 'call open("/dev/ad0", 1)', and 'call open("/dev/ad0", 2)' made it clear
> > that anything that would write to the disk was failing.
> [..]
> > disklabel: /dev/ad0s2: Operation not permitted
George Hartzell wrote:
> Daniel Lang writes:
> > Hi George,
> >
> > George Hartzell wrote on Sat, Jan 25, 2003 at 06:38:07PM -0800:
> > [..]
> > > 'call open("/dev/ad0", 1)', and 'call open("/dev/ad0", 2)' made it clear
> > > that anything that would write to the disk was failing.
> > [..]
> > >
:Greetings,
:
:I have a situation where I am reading large quantities of data from disk
:sequentially. The problem is that as the data is read, the oldest cached
:blocks are thrown away in favor of new ones. When I start re-reading data
:from the beginning, it has to read the entire file from disk
Well
A friend and I have written such a tool; it is available at
http://fonteditfs.sourceforge.net/
We've also submitted a port using send-pr; unfortunately, its state is
still open (after two months or so)... Let's hope they'll add it soon.
Uri.
I have two freebsd boxes (back to back) and I've
been playing with a simple server on one machine
and client on the other machine (this was simply
an exercise in playing with kqueue). Both the
server and the client are single processes and the
client seems to stop at 32,763 connections.
I've
Hello,
I developed another one two or three years ago. It is also BSD
licensed, ncurses based, and ported to FreeBSD. And I think it has much
more functionality.
Could you please look at it?
http://lrn.ru/~osgene
Cheers,
Eugene
> Me and my friend have written such a tool, which is available u
On Sun, 26 Jan 2003, Sam Tannous wrote:
> I have two freebsd boxes (back to back) and I've
> been playing with a simple server on one machine
> and client on the other machine (this was simply
> an exercise with playing with kqueue). Both the
> server and the client are single processes and the
While experimenting with 'mount', I
stumbled across the following oddity:
mount -t procfs proc /mnt
umount -t /mnt
results in procfs still mounted on /mnt
but no longer mounted on /proc. It
appears that a umount of procfs is
unmounting the most recently mounted
instance rather than the instance
mounted at the path given.
On Sun, 26 Jan 2003, Sam Tannous wrote:
> I have two freebsd boxes (back to back) and I've been playing with a
> simple server on one machine and client on the other machine (this was
> simply an exercise with playing with kqueue). Both the server and the
> client are single processes and the cl
On Sun, 26 Jan 2003, Tim Kientzle wrote:
> Experimenting with 'mount' and stumbled across the following oddity:
>
> mount -t procfs proc /mnt
> umount -t /mnt
You're missing the "proc" after -t here, right?
> results in procfs still mounted on /mnt but no longer mounted on /proc.
> It appears
In message <[EMAIL PROTECTED]>, Peter Wemm writes:
>Yes, this is a not-quite-yet resolved side effect of GEOM that is due to be
>fixed any minute now. Geom is overly protective when partitions are open
>and mounted.
Geom is not overly protective; it only protects what it has to. The
problem is t
:
:> I have two freebsd boxes (back to back) and I've been playing with a
:> simple server on one machine and client on the other machine (this was
:> simply an exercise with playing with kqueue). Both the server and the
:> client are single processes and the client seems to stop at 32,763
:> con
Hi,
I just checked this:
-su-2.05b# mount -t procfs proc /proc/
-su-2.05b# mount -t procfs proc /mnt
-su-2.05b# mount
[...]
procfs on /proc (procfs, local)
procfs on /mnt (procfs, local)
-su-2.05b# umount procfs
-su-2.05b# mount
[...]
procfs on /mnt (procfs, local)
Looks like the wrong one got unmounted. The mountlist should
be traversed in reverse order.
Hi all,
I just checked the code. Umount(8) is fine. Unmount(2)
is buggy.
> Looks like the wrong one got unmounted. The mountlist should
> be traversed in reverse order.
umount(8) works as it should:
umount -v procfs
procfs: unmount from /mnt (but it does unmount /proc)
umount(8) hands over the ri
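The reverse-traversal fix proposed above can be sketched as a toy model.
The list layout and names here are illustrative only, not the actual
kernel mountlist structures:

```python
# Mount list in mount order: oldest first, newest last.
# Entries are (fstype, mountpoint) pairs.
mountlist = [("ufs", "/"), ("procfs", "/proc"), ("procfs", "/mnt")]

def resolve(fstype):
    """Resolve a filesystem name to a mount point, newest mount first,
    so 'umount procfs' targets the most recently mounted instance."""
    for fs, mp in reversed(mountlist):
        if fs == fstype:
            return mp
    return None

print(resolve("procfs"))   # picks /mnt, the newer procfs instance
```

Walking the list oldest-first instead would match /proc, which is
exactly the surprising behavior shown in the transcript above.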
Robert Watson wrote:
> First, could you
> identify the version of FreeBSD you're running?
This is on -CURRENT as of a few days ago.
> Second, can you include
> script output of the shell session in which you mount /proc, /mnt, run
> mount to confirm they are both mounted, then umount one, run mount to
Matthew Dillon wrote:
> Hi Sean. I've wanted to have a random-disk-cache-expiration feature
> for a long time. We do not have one now. We do have mechanisms in
> place to reduce the impact of sequential cycling a large dataset so
> it does not totally destroy unrelated cached data.
Sam Tannous wrote:
> I have two freebsd boxes (back to back) and I've
> been playing with a simple server on one machine
> and client on the other machine (this was simply
> an exercise with playing with kqueue). Both the
> server and the client are single processes and the
> client seems to stop
:I really dislike the idea of random expiration; I don't understand
:the point, unless you are trying to get better numbers on some
:..
Well, the basic scenario is something like this: Let's say you have
512MB of RAM and you are reading a 1GB file sequentially, over and over
again. The
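That scenario is easy to model. Below is a toy simulation (block counts
are illustrative, not real VM-system numbers) showing why strict LRU
degenerates to a 0% hit rate on cyclic sequential reads of a file larger
than the cache, while random eviction keeps part of the file resident:

```python
import random
from collections import OrderedDict

# Cache holds 512 blocks; the file is 1024 blocks, read sequentially
# over and over. Under LRU, every block is evicted just before it is
# needed again; random eviction lets a fraction survive each pass.
FILE_BLOCKS, CACHE_BLOCKS, PASSES = 1024, 512, 4

def run(evict):
    cache, hits, total = OrderedDict(), 0, 0
    for _ in range(PASSES):
        for blk in range(FILE_BLOCKS):
            total += 1
            if blk in cache:
                hits += 1
                cache.move_to_end(blk)          # refresh recency
            else:
                if len(cache) >= CACHE_BLOCKS:
                    evict(cache)
                cache[blk] = True
    return hits / total

lru = run(lambda c: c.popitem(last=False))          # evict oldest
random.seed(1)
rnd = run(lambda c: c.pop(random.choice(list(c))))  # evict at random
print(f"LRU hit rate: {lru:.2f}, random hit rate: {rnd:.2f}")
```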
Robert Watson wrote:
> Some of this has to do with limits on the available ancillary ports for
> out-going connections. Try adding additional IP addresses to the client
> machine, and forcing your client software to use specific IP addresses.
[ ... ]
> Hard-coding local addresses in your
> applic
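The arithmetic behind that advice, with a hypothetical ephemeral port
range (the values are examples, not asserted FreeBSD defaults; check the
net.inet.ip.portrange.* sysctls on a real system): a range about 32768
ports wide is in the same ballpark as the ~32,763-connection ceiling
reported above, and each extra local IP multiplies the available
source 4-tuples.

```python
# Outbound connections to one server address are bounded by
# (usable ephemeral ports) x (local IP addresses).
first, last = 32768, 65535         # hypothetical port range bounds
ports_per_ip = last - first + 1
print(ports_per_ip)                # 32768 local ports per address

local_ips = 4                      # e.g. after adding alias addresses
print(ports_per_ip * local_ips)    # 131072 possible outbound 4-tuples
```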
Matthew Dillon wrote:
> :I really dislike the idea of random expiration; I don't understand
> :the point, unless you are trying to get better numbers on some
> :..
>
>Well, the basic scenario is something like this: Let's say you have
>512MB of RAM and you are reading a 1GB file sequential
Hello, I posted this a few weeks ago on freebsd-questions and reposted
it a few days ago. I didn't get any responses beyond, "Hey, since the
Promise card does RAID, why bother with vinum?" (To which I responded,
in a nutshell, I want to learn vinum and the RAID the Promise card does
isn't super g
Sean Hamilton proposes:
Wouldn't it seem logical to have [randomized disk cache expiration] in
place at all times?
Terry Lambert responds:
:I really dislike the idea of random expiration; I don't understand
:the point, unless you are trying to get better numbers on some
:benchmark.
Matt
In the last episode (Jan 26), Jonathan Belson said:
> Is there a tool for creating the .fnt files that syscons uses? They
> appear to be uuencoded binary files but I can't find out any info on
> the file format.
They're only uuencoded for easy storage in CVS. Vidcontrol can take
regular raw 8xN fonts.
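Given that, generating a loadable raw font is mostly bookkeeping. A
hedged sketch, assuming 256 glyphs of 14 bytes each for an 8x14 font
(a layout inferred from the discussion above; the single non-blank
glyph is invented for illustration):

```python
# Build a raw 8x14 font image in memory: 256 glyphs x 14 bytes
# (one byte per scan line) = 3584 bytes total.
blank = bytes(14)
box = bytes([0xFF] + [0x81] * 12 + [0xFF])   # hollow-box test glyph

glyphs = [blank] * 256
glyphs[0x41] = box                           # drop it into slot 'A'
font = b"".join(glyphs)
print(len(font))                             # 3584
```

Writing `font` to a file would give something vidcontrol could load
directly; uuencode it only if you want to store it in CVS the way the
stock fonts are.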
- Original Message -
From: "Tim Kientzle" <[EMAIL PROTECTED]>
| Cycling through large data sets is not really that uncommon.
| I do something like the following pretty regularly:
| find /usr/src -type f | xargs grep function_name
|
| Even scanning through a large dataset once can really hurt
| competing applications on the same machine.
On Sun, 26 Jan 2003, Sean Hamilton wrote:
>
> In my case I have a webserver serving up a few dozen files of about 10 MB
> each. While yes it is true that I could purchase more memory, and I could
> purchase more drives and stripe them, I am more interested in the fact that
> this server is constantly grinding away because
On Sunday 26 January 2003 11:55 pm, Sean Hamilton wrote:
| - Original Message -
| From: "Tim Kientzle" <[EMAIL PROTECTED]>
|
| | Cycling through large data sets is not really that uncommon.
| | I do something like the following pretty regularly:
| | find /usr/src -type f | xargs grep
Brian T. Schellenberger wrote:
This to me is eminently sensible.
In fact there seem like two rules that have come up in this discussion:
1. For sequential access, you should be very hesitant to throw away
*another* process's blocks, at least once you have used more than, say,
25% of the cache
Tim Kientzle wrote:
> Cycling through large data sets is not really that uncommon.
> I do something like the following pretty regularly:
> find /usr/src -type f | xargs grep function_name
>
> Even scanning through a large dataset once can really hurt
> competing applications on the same machine.
Sean Hamilton wrote:
> In my case I have a webserver serving up a few dozen files of about 10 MB
> each. While yes it is true that I could purchase more memory, and I could
> purchase more drives and stripe them, I am more interested in the fact that
> this server is constantly grinding away because
"Brian T. Schellenberger" wrote:
> 2. For sequential access, you should stop caching before you throw away
> your own blocks. If it's sequential it is, it seems to me, always a
> loss to throw away your *own* process's older blocks on the same
> file.
You cannot have a block in a vm object whi
Thus spake Tim Kientzle <[EMAIL PROTECTED]>:
> Sean Hamilton proposes:
>
> >Wouldn't it seem logical to have [randomized disk cache expiration] in
> >place at all times?
>
> Terry Lambert responds:
>
> >>:I really dislike the idea of random expiration; I don't understand
> >>:the point, unless y
Basically, what it comes down to is that without foreknowledge
of the data locations being accessed, it is not possible for any
cache algorithm to adapt to all the myriad ways data might be accessed.
If you focus the cache on one methodology it will probably perform
terribly for the others.