hard disk is connected to the first SATA port on the motherboard.
Any insight would be greatly appreciated.
slash
starting /bin/rc
sdE0: LLBA 390,721,968 sectors
SAMSUNG SP2004C VM100-32 S07GJ10Y522190 [newdrive]
sdF0: bad led type 0
mouseport is (ps2, ps2intellimouse, 0, 1, 2)[ps2]:
vgasize [640x480x8]: 1600x1200x8
monitor is [xga]:
i8042: 28 returned to the f5 command
usb/kb...
There is no ether device for the onboard NIC.
slash
The wiki says "The Plan 9 updates page contains an ANSI/POSIX port of
aux/vga that is useful only for dumping registers on various systems."
I am having trouble finding this tool. Any pointers?
slash
I am about to upgrade the disk on my cpu/disk server to a bigger one,
and I want to maintain all the data. What is the most elegant way to
do this? The new disk is blank.
> Do you use Fossil with or without Venti?
> There are many ways to do it.
>
> Without Venti, you could use replica(1) to copy your file system
> from the old Fossil to the new Fossil.
Fossil without Venti.
Work has kept me busy but now I finally managed to attach the new
drive to the system. Plan9
> if you (slash) could just grab the atazz binary
> (ftp://ftp.quanstro.net/other/8.atazz)
> and send me the output of
> echo 'identify device' | 8.atazz -r >[2=] /dev/sdE0 > /tmp/somefile
> that would be great. thanks!
Here you go. Again, sdE0 is the
> echo 'identify device' | 8.atazz -r /dev/sdE0 > /tmp/somefile
>
> would be better
Here goes.
sdE0.out
Description: Binary data
sdE1.out
Description: Binary data
> the way to interpret this information is you may use 512
> byte sectors if you really want to suffer terrible performance
> (usually 1/3 the normal performance for reasonably random
> workloads.)
That doesn't sound tempting at all. I am still within Amazon's return
window. Can anyone recommend
On Thu, Oct 6, 2011 at 3:35 AM, Peter A. Cejchan wrote:
> i use WD Caviar Green model WD20EARS (2TB SATA II) without any problems. I
> installed from erik's 9atom.iso
Did you toggle any jumpers on the drive? I finally gave up and returned it.
A new day, a new disk (with 512 byte sectors).
I ins
> i think disk/mkfs is nearly ideal for this, and isn't very dangerous.
> since your new fossil will start empty, you can't overwrite anything
> in the old fs.
How do I generate the proto file? Do I have to go through an archive?
Thank you for your patience.
> that's a good question. if you just want to copy everything, i use the shell
> idiom
> disk/mkfs <{echo +}
I ran this in /usr/bootes/ as bootes:
su# disk/mkfs -a -s / <{echo +} > arch
processing /fd/7
mkfs: /fd/7:1: can't open //dev/consctl: '//dev/consctl' permission denied
mkfs:
> you're not the first person to make this mistake, so i should
> have remembered this problem. sorry.
Please don't apologize. You are the one guiding the blind.
> you need to mount both new and old afresh in /n/ and copy
> using your destination as /n/new and source as /n/old. using
> / as you
snarf-paste error. the command was:
su# disk/mkfs -s /n/old -d /n/new -U -r <{echo +}
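For anyone following along, the whole dance looks roughly like this (a sketch; the /srv names and mount points are placeholders for whatever your fossils are actually posted as, not what I typed verbatim):

```rc
# mount the running (old) fossil and the freshly formatted (new) one;
# /srv/fossil and /srv/fossil.new are placeholder names
mount -c /srv/fossil /n/old
mount -c /srv/fossil.new /n/new
# copy everything; the new fs starts empty, so nothing
# in the old fs can be overwritten
disk/mkfs -s /n/old -d /n/new -U -r <{echo +}
```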
> Now let's see if it boots.
Almost there. I took out the old drive and made the new one sdE0. It
started booting, until:
fossil(#S/sdE0/fossil)... fsOpen: can't find /dev/sdE1/fossil ... panic
Does fossil store the device name somewhere on the disk? (The drive
was sdE1 when I formatted it.) How
> fossil/conf /dev/sdE0/fossil > fossil.conf
> fossil/conf -w /dev/sdE0/fossil fossil.conf
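In other words: dump the config, point the device names at the drive's new name, and write it back. The sed step is my guess at the edit in between (the quoted commands show only the dump and the write-back); inspect the dumped file before rewriting it:

```rc
fossil/conf /dev/sdE0/fossil > fossil.conf
# the config still names the device as sdE1 from formatting time;
# rewrite it (guessing the old name here -- check fossil.conf first)
sed 's,/dev/sdE1,/dev/sdE0,g' fossil.conf > fossil.conf.fixed
fossil/conf -w /dev/sdE0/fossil fossil.conf.fixed
```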
This was exactly what I needed. Thank you! My migration from old to
new drive is now complete.
I have some files on an external ext2 drive that have whitespace and
umlauts (ä, ö) in them. trfs took care of the whitespace. But ext2srv
presents umlauts as a question mark symbol (�) and won't let me access
the file (error: file does not exist).
Where is the problem? These files show correctly
> if you know what the charset on disk is, you could probably hack ext2fs
> into translating names. or (less hacky) you could write a transliterating fs,
> or add this to trfs' duties.
Thank you. So now I know ext2srv is not doing any file name conversion. Good.
Say I wanted to add the followin
> unicode codepoints (runes) are abstract. we need to deal with encodings.
> utf-8 does not encode anything above 0x7f as a single byte.
> so essentially the encoding phase would be name[i] = (uchar)r, and the
> decoding phase would be r = (Rune)name[i].
Thank you. I modified trfs.c and
I began consolidating files with great joy from external ext2 disks to
the new 2 TB fossil drive using trfs.latin1. After only 300 or so
gigabytes I saw this on the console:
fossil: diskWriteRaw failed: /dev/sdE0/fossil: score 0x016d1415: data
Sun Oct 16 09:37:58 EDT 2011
part=data block 23925781:
> i've also had trouble with usb disks.
The errors come from sdE0, which is connected to a SATA port on the
motherboard, even when no usb disks are attached at all.
su# cat /dev/sdE0/ctl
inquiry Hitachi HDS5C3020ALA632
model Hitachi HDS5C3020ALA632
serial ML0220FL0220F313JE2D
firm ML6OA5
> dd -if /dev/sdE0/fossil -of /dev/null -bs 512k # or whatever partition.
No errors, but I noticed it stopped just before 200 gigabytes. The
total size of the plan9 partition is 1.81 terabytes.
A related question: how should I interpret /dev/sdE0/fossil file size?
# ls -l /dev/sdE0/fossil
--rw-
> No errors, but I noticed it stopped just before 200 gigabytes.
I kept reading the disk and got several read errors. Here is the first one:
fossil: diskReadRaw failed: /dev/sdE0/fossil: score 0x016d1389:
part=data block 23925641: eof reading disk
Bad sector or something else?
> to me that looks like a mismatch between fossil's expectations for the
> partition and that actual partition size. i think you're just reading past
> the end of the partition.
Indeed!
su# disk/prep /dev/sdE0/plan9
9fat 0 204800 (204800 sectors, 100.00 MB)
nvram
> How did you use disk/prep?
I ran 'disk/prep -bw -a^(9fat nvram fossil swap) /dev/sdE1/plan9'.
When I ran it, my old disk was sdE0 and the new was sdE1. Now I notice
the layout prep created is identical on both disks!
su# disk/prep /dev/sdE0/plan9 # old
9fat 0 204800 (
I have the following wildcard entry in my /lib/ndb/local:
cname=server.local
dom=*.local
Essentially I want every name to resolve back to my server.
Now, ndb/dnsdebug is able to resolve any host just fine:
cpu% ndb/dnsdebug
> foobar
answer server.local
Yes ping works and I can also make nslookups for hosts that don't match the
wildcard.
$ nslookup server.local
Server: 10.0.0.1
Address: 10.0.0.1#53
Name: server.local
Address: 10.0.0.1
$ nslookup other.local
Server: 10.0.0.1
Address: 10.0.0.1#53
Name: other.local
Address: 10.0.0.2
snoopy confi
>
> where is your soa record?
/lib/ndb/local:
dom=local soa=
refresh=3600 ttl=3600
ns=server.local
mb=em...@abcxyz.com
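So the relevant part of my /lib/ndb/local, putting the soa record and the wildcard together, looks like this (same placeholder names as above):

```
dom=local soa=
	refresh=3600 ttl=3600
	ns=server.local
	mb=em...@abcxyz.com

cname=server.local
	dom=*.local
```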
ndb/dnsquery also fails for wildcard names but works for real ones:
cpu% ndb/dnsquery
> f
!dns: resource does not exist; negrcode 0
> bar
!dns: resource does not exist; negrcode 0
> server
server.local ip 10.0.0.1
> other
other.local ip 10.0.0.2
Why do dnsquery and dnsdebug give different res
Dear 9fans,
I am booting my Raspberry Pi 4B off the 9legacy SD card image
(http://www.9legacy.org/download.html) and it boots fine with the default
config.txt, but there is a 48-pixel wide black border on the screen.
term% echo `{ dd -if /dev/screen -bs 64 -count 1}
0+1 records in
0+1 records
physically powered off after
making changes to the config file. ctrl-t ctrl-t r is not sufficient.
slash
> latin-1 bytes 00-FF turn into unicode runes 00-FF.
Then why doesn't it Just Work? Now I am confused (again).