In case it helps anyone who sees this in the future: I still haven't figured it
out. I ran a dependency checker on the application, and even though >I< can
browse the share it's located on, the application says its DLLs cannot be
found, even though they are in the same directory as the app.
To clarify how odd that is: /zpool1/test/share2 is mounted on a web server at
/mount/point. Going to /mount/point as root and chowning * caused the issue to
happen with /zpool1/test/share1.
This is reproducible, by the way. I can cause this to happen again, right now
if I wanted to...
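The steps, roughly, are just this (using the same placeholder user/group names as the cron job below):

  # on the web server, where /zpool1/test/share2 is NFS-mounted at /mount/point
  cd /mount/point
  chown user1:group1 *    # after this, the CIFS share on share1 starts misbehaving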
Sure, but it's really straightforward:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * chown -R user1:group1 /zpool1/test/share2/* 2> /dev/null ; chmod -R g+w /zpool1/test/share2/* 2> /dev/null
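If it's easier to read, the same job as a small script would look roughly like this (the log path is just an example; in the crontab entry above I simply discard errors):

  #!/bin/sh
  # Same permissions pass as the crontab entry above, with errors
  # written to a log instead of /dev/null so failures are visible.
  SHARE=/zpool1/test/share2          # the NFS-shared dataset
  LOG=/var/tmp/share2-perms.log      # example log location

  chown -R user1:group1 "$SHARE"/* >> "$LOG" 2>&1
  chmod -R g+w "$SHARE"/* >> "$LOG" 2>&1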
Here's the thing: there's no way it was a hard/soft link. I know what those
are, and I haven't linked anything.
I'm using ZFS on OpenSolaris snv_134. I have 2 ZFS filesystems:
/zpool1/test/share1 and /zpool1/test/share2. share1 is shared via CIFS, share2
via NFS.
I've recently put a cron job in place on the test filer that changes the
ownership of share2 to a user and a group every 5 minutes. The cron job
actually runs in ope
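For completeness, the two datasets were set up more or less like this (from memory, with default options otherwise, so treat it as a sketch rather than my exact command history):

  # create the two filesystems (-p also creates the zpool1/test parent)
  zfs create -p zpool1/test/share1
  zfs create zpool1/test/share2

  # share1 over CIFS, share2 over NFS
  zfs set sharesmb=on zpool1/test/share1
  zfs set sharenfs=on zpool1/test/share2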
Thank you, all of you, for the super helpful responses, this is probably one of
the most helpful forums I've been on. I've been working with ZFS on some
SunFires for a little while now, in prod, and the testing environment with oSol
is going really well. I love it. Nothing even comes close.
Yes, and I apologize for the basic nature of these questions. Like I said, I'm
pretty wet behind the ears with ZFS. The MB/sec metric comes from dd, not zpool
iostat; zpool iostat usually gives me units of K. I think I'll try smaller
RAID sets and come back to the thread.
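For reference, the zpool iostat numbers I mentioned come from something like this, run while a test is going (5-second samples):

  # per-vdev bandwidth and IOPS every 5 seconds
  zpool iostat -v zpool1 5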
Thanks, all
40 MB/sec is the best it gets; the average is more like 5. I see 4, 5, 2, and
6 MB/sec almost 10x as often as I see 40, and it only bumps up to 40 very
rarely.
As far as random vs. sequential goes: correct me if I'm wrong, but if I used dd
to make files from /dev/zero, wouldn't that be sequential?
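To be concrete, the dd runs look roughly like this (the output file and sizes here are just examples of the kind of thing I've been doing):

  # sequential write: 4 GB of zeroes in 1 MB records
  dd if=/dev/zero of=/zpool1/test/ddtest.out bs=1024k count=4096

  # in another terminal, watch pool throughput while dd runs
  zpool iostat zpool1 5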
Well, I've searched my brains out and I can't seem to find a reason for this.
I'm getting bad to medium performance with my new test storage device. I've got
24 1.5 TB disks with 2 SSDs configured as a ZIL log device. I'm using an Areca
RAID controller, with the arcmsr driver. Quad core AMD with
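For a sense of the layout, a 24-disk pool with a mirrored SSD log device gets created along these lines; the device names and the exact vdev split below are only an illustration, not my actual configuration:

  # illustrative only: three 8-disk raidz2 vdevs plus a mirrored SSD slog
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
      raidz2 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0 \
      log mirror c2t0d0 c2t1d0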