Grant,
Didn't see a response, so I'll give it a go.
Ripping a disk away and silently inserting a new one is asking for
trouble, IMHO. I am not sure what you were trying to accomplish, but
generally replacing a drive/LUN would entail commands like:
zpool offline tank c1t3d0
cfgadm | grep c1t3d0
sa
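A rough sketch of the rest of that sequence, assuming a SATA attachment
point (the cfgadm attachment-point name below is a placeholder; use
whatever your own cfgadm output shows for the device):
  zpool offline tank c1t3d0
  cfgadm -c unconfigure sata1/3       # placeholder attachment point
  (physically swap the drive)
  cfgadm -c configure sata1/3
  zpool replace tank c1t3d0
  zpool status tank                   # watch the resilver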
> I'm using Solaris 10 (10/08). This feature is exactly what
> I want. Thanks for the response.
Duh. What I meant previously was that this feature
is not available in the Solaris 10 releases.
Cindy
I have a similar problem:
r...@moby1:~# zpool import
pool: bucket
id: 12835839477558970577
state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:
bucket      UNAVAIL  insufficient replicas
  raidz2    UNAVAIL  corrupted data
    c
If you rsync data to ZFS over existing files, you need to take something more
into account:
if you have a snapshot of your files and rsync the same files again, you need
to use the "--inplace" rsync option, otherwise completely new blocks will be
allocated for the new files. That's because rsync writes an entirely new file
and renames it over the old one.
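For example, a hedged sketch (paths are placeholders; --no-whole-file
matters for local copies, where rsync otherwise defaults to whole-file
transfers and skips its delta algorithm):
  rsync -a --inplace --no-whole-file /data/ /tank/backup/data/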
Jeff Bonwick writes:
>> > Yes, I made note of that in my OP on this thread. But is it enough to
>> > end up with 8gb of non-compressed files measuring 8gb on
>> > reiserfs(linux) and the same data showing nearly 9gb when copied to a
>> > zfs filesystem with compression on.
>>
>> whoops.. a he
Hello list,
What would be the best zpool configuration for a cache/proxy server
(probably based on Squid)?
In other words, with which zpool configuration could I expect the best
read performance? (There will be some writes too, but far fewer.)
Thanks.
--
Francois
Francois,
Your best bet is probably a stripe of mirrors, i.e. a zpool made of many
mirrors.
This way you have redundancy and fast reads as well. You'll also enjoy
pretty quick resilvering in the event of a disk failure.
For even faster reads, you can add dedicated L2ARC cache devices.
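A hedged sketch of such a layout (device names are placeholders):
  zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0 cache c1t0d0
The mirrors are striped automatically, and the trailing "cache" vdev adds
an SSD as an L2ARC device.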
OpenSolaris Forums wrote:
> if you rsync data to zfs over existing files, you need to take
> something more into account:
>
> if you have a snapshot of your files and rsync the same files again,
> you need to use "--inplace" rsync option, otherwise completely new
> blocks will be allocated for th
Gary Mills wrote:
I've been watching the ZFS ARC cache on our IMAP server while the
backups are running, and also when user activity is high. The two
seem to conflict. Fast response for users seems to depend on their
data being in the cache when it's needed. Most of the disk I/O seems
to be wr
What is the best write-performance improvement anyone has seen (if any)
on a ZFS stripe over an EMC SAN?
I'd be interested to hear results for both striped and non-striped EMC
configurations.
Hi François,
You should take care with the recordsize in your filesystems. It should
be tuned according to the size of the most frequently accessed files.
Disabling "atime" is probably also a good idea (but that's likely
something you already know ;) ).
We've also noticed some cases where enabling compress
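For example (dataset name and record size are placeholders; pick a
recordsize close to the typical cached-object size):
  zfs set recordsize=8k tank/squid-cache
  zfs set atime=off tank/squid-cache
  zfs set compression=on tank/squid-cache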
Jonathan wrote:
OpenSolaris Forums wrote:
if you have a snapshot of your files and rsync the same files again,
you need to use "--inplace" rsync option, otherwise completely new
blocks will be allocated for the new files. that's because rsync will
write entirely new file and rename it over th
Daniel Rock wrote:
> Jonathan wrote:
>> OpenSolaris Forums wrote:
>>> if you have a snapshot of your files and rsync the same files again,
>>> you need to use "--inplace" rsync option, otherwise completely new
>>> blocks will be allocated for the new files. that's because rsync will
>>> write en
Harry,
ZFS will only compress data if it is able to gain more than 12% of space
by compressing the data (I may be wrong on the exact percentage). If ZFS
can't get at least that 12% compression, it doesn't bother and will
just store the block uncompressed.
Also, the default ZFS compressio
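A quick hedged way to see what compression is actually buying you
(dataset name is a placeholder; "on" maps to the default lzjb algorithm):
  zfs set compression=on tank/data
  zfs get compression,compressratio tank/data
  du -h somefile ; ls -lh somefile    # du shows blocks allocated, ls the logical size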
Greg Mason writes:
> Harry,
>
> ZFS will only compress data if it is able to gain more than 12% of
> space by compressing the data (I may be wrong on the exact
> percentage). If ZFS can't get that 12% compression at least, it
> doesn't bother and will just store the block uncompressed.
>
> Al
OpenSolaris Forums writes:
> if you rsync data to zfs over existing files, you need to take
> something more into account:
>
> if you have a snapshot of your files and rsync the same files again,
> you need to use "--inplace" rsync option, otherwise completely new
> blocks will be allocated for
Jonathan writes:
> It appears I may have misread the initial post. I don't really know how
> I misread it, but I think I missed the snapshot portion of the message
> and got confused. I understand the interaction between snapshots,
> rsync, and --inplace being discussed now.
I don't think you
Hi Francois,
I use ZFS with Squid proxies here at MIT (MIT New Zealand, that is ;)).
My basic setup is like so:
- 2 x Sun SPARC V240s, dual CPUs, with 2 x 36 GB boot disks and 2 x 73
GB cache disks. Each machine has 4 GB RAM.
- Each has a copy of Squid, SquidGuard and an Apache server.
- A
Hi Remco.
Yes, I realize that was asking for trouble. It wasn't supposed to be a test of
yanking a LUN. We needed a LUN for a VxVM/VxFS system and that LUN was
available. I was just surprised at the panic, since the system was quiesced at
the time. But there is coming a time when we will b
Hi,
For anyone interested, I have blogged about raidz on-disk layout at:
http://mbruning.blogspot.com/2009/04/raidz-on-disk-format.html
Comments/corrections are welcome.
thanks,
max
On Apr 7, 2009, at 16:43, OpenSolaris Forums wrote:
if you have a snapshot of your files and rsync the same files again,
you need to use "--inplace" rsync option, otherwise completely new
blocks will be allocated for the new files. that's because rsync
will write entirely new file and rena
Hi folks,
I would appreciate it if someone could help me understand some weird
results I'm seeing while trying to do performance testing with an
SSD-offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS-based WebDAV servers), and naturally I'm looking at im
Patrick,
The ZIL is only used for synchronous requests like O_DSYNC/O_SYNC and
fsync(). Your iozone command must be doing some synchronous writes.
All the other tests (dd, cat, cp, ...) do everything asynchronously;
that is, they do not require the data to be on stable storage on
return from the write.
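If the goal is to exercise the ZIL (and the SSD log device) deliberately,
something like the following should generate synchronous writes (options
from memory; double-check against the iozone and GNU dd man pages):
  iozone -o -e -i 0 -r 8k -s 512m -f /tank/webdav/testfile    # -o = O_SYNC writes, -e = include fsync in timings
  dd if=/dev/zero of=/tank/webdav/testfile bs=8k count=65536 oflag=dsync    # GNU dd only, not stock Solaris dd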
We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs send"
usable.
Exactly how does "build 105" translate to Solaris 10 10/08? My current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the
n
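One hedged way to tell whether the sender itself or the transport is the
bottleneck is to time a send to /dev/null locally (snapshot name is a
placeholder):
  time zfs send tank/fs@snap > /dev/null
If that alone is slow, the limit is on the sending side rather than the
network or the receiving pool.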
Hi All,
I have a corefile where we see a NULL pointer dereference panic, as we
have (deliberately) passed a NULL pointer for the return value.
vdev_disk_io_start()
...
...
        error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
            (uintptr_t)&zio->io_dk_callback,
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
"fault-tolerant" and "drive dropping away at any time" is a rather expected
scenario.
[I've popped disks out live in many cases, both when I was
On Fri, 10 Apr 2009, Rince wrote:
FWIW, I strongly expect live ripping of a SATA device to not panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
"fault-tolerant" and "drive dropping away at any time" is a rather expected
scenario.
Ripping a SATA device
On Fri, Apr 10, 2009 at 12:43 AM, Andre van Eyssen wrote:
> On Fri, 10 Apr 2009, Rince wrote:
>
>> FWIW, I strongly expect live ripping of a SATA device to not panic the disk
>> layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
>> "fault-tolerant" and "drive droppi