On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed write caches
seem to be fastest.
Do you have any
Hi All,
I would like to know whether the ZFS native API for SunOS
(http://www.opensolaris.org/os/community/zfs/source/) is publicly available
now. I see in some old mailing lists (two years old) that it was not publicly
available. Is this still true?
Also, I see there is a Java API available at
I had a pool which was exported, and due to some issues on my SAN I was never
able to import it again. Can anyone tell me how I can destroy the exported pool
to free up the LUN? I tried to create a new pool on the same LUN but it gives
me the following error:
# zpool create emcpool4 emcpower0c
cann
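A minimal sketch of one way to reclaim the LUN, assuming the old pool's data
really is expendable (emcpool4 and emcpower0c are taken from the error above;
emcpool_old is a placeholder):

# zpool import
  (check whether ZFS still lists the old pool as importable at all)
# zpool create -f emcpool4 emcpower0c
  (-f forces creation over the stale labels left behind by the exported pool)

If the old pool does still import, "zpool import emcpool_old && zpool destroy
emcpool_old" is the cleaner route; -f on create is the usual fallback when the
import itself is broken.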
James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed write caches
seem to be fastest.
On 05/07/2009, at 1:57 AM, Ross Walker wrote:
Barriers are disabled by default on ext3 mounts... Google it and
you'll see interesting threads on the LKML. It seems there was some
serious performance degradation when using them. A lot of decisions in
Linux are made in favor of performance over da
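For reference, barriers can be enabled explicitly on an ext3 mount with the
stock barrier option (2.6 kernels; the device and mount point below are
placeholders):

# mount -o remount,barrier=1 /export
  (turn write barriers on for an already-mounted ext3 filesystem)

or persistently via /etc/fstab:

/dev/sda1  /export  ext3  defaults,barrier=1  0  2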
On Jul 5, 2009, at 6:06 AM, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with capacitor-backed wr
Ross Walker wrote:
On Jul 5, 2009, at 6:06 AM, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (and cost) with
so-called "enterprise" SSDs. SSDs with c
On 06/07/2009, at 9:31 AM, Ross Walker wrote:
There are two types of SSD drives on the market: the fast-write SLC
(single-level cell) and the slow-write MLC (multi-level cell). MLC
is usually used in laptops, as SLC drives over 16GB usually go for
$1000+, which isn't cost-effective in a lapt
I did a quick search but couldn't find anything about this little problem.
I have an X4100 production machine (called monster) that has a J4200 full of
500GB drives attached. It's running OpenSolaris 2009.06 and fully up to date.
It takes daily snapshots and sends them to another machine as a ba
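A rough sketch of that kind of job, for anyone following along (tank/data and
backup/data are invented names; chucky is the receiving box named later in
the thread; it also assumes an earlier snapshot already exists on both sides):

#!/bin/sh
# take today's snapshot, then send the increment since the previous one
fs=tank/data
prev=$(zfs list -H -t snapshot -o name -s creation -r $fs | tail -1)
now=$fs@$(date +%Y-%m-%d)
zfs snapshot $now
zfs send -i $prev $now | ssh chucky zfs recv -F backup/data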
On Jul 5, 2009, at 7:47 PM, Richard Elling wrote:
Ross Walker wrote:
On Jul 5, 2009, at 6:06 AM, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation in performance (a
DL Consulting wrote:
I did a quick search but couldn't find anything about this little problem.
I have an X4100 production machine (called monster) that has a J4200 full of
500GB drives attached. It's running OpenSolaris 2009.06 and fully up to date.
It takes daily snapshots and sends them t
Thanks.
I'll fiddle things so it tells me what the return value is and use the last
common snapshot rather than the last received snapshot.
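Something along these lines should cover both, sketched in bash (tank/data,
backup/data, and $new are placeholders; $new stands for the snapshot just
taken, and it leans on the snapshot names sorting chronologically):

last_common=$(comm -12 \
    <(zfs list -H -t snapshot -o name -r tank/data | sed 's/.*@//' | sort) \
    <(ssh chucky zfs list -H -t snapshot -o name -r backup/data | sed 's/.*@//' | sort) \
    | tail -1)
zfs send -i tank/data@$last_common tank/data@$new | ssh chucky zfs recv backup/data
echo "send status ${PIPESTATUS[0]}, recv status $?"

One catch: plain $? after a pipeline is only the exit status of its last
command (the ssh/recv side); in bash the send side's status ends up in
${PIPESTATUS[0]}.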
Ross Walker wrote:
On Jul 5, 2009, at 7:47 PM, Richard Elling wrote:
Ross Walker wrote:
On Jul 5, 2009, at 6:06 AM, James Lever wrote:
On 04/07/2009, at 3:08 AM, Bob Friesenhahn wrote:
It seems like you may have selected the wrong SSD product to use.
There seems to be a huge variation
Just reread your response. If the send/recv fails, the snapshot should NOT
turn up on chucky (the recv machine), right? However, it is turning up, while
the original on the sending machine is being destroyed by something (which
I'm guessing is the time-slider-cleanup cron job below).
Here's the full c
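Whatever the cleanup turns out to be, a belt-and-braces pattern is to confirm
the snapshot really exists on the receiver before anything destroys the
sender's copy; a sketch ($snap is a placeholder for the snapshot name, and
the dataset names are guesses):

if ssh chucky zfs list -H -t snapshot backup/data@$snap >/dev/null 2>&1; then
    zfs destroy tank/data@$snap
else
    echo "tank/data@$snap not yet on chucky, keeping it" >&2
fi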
On Jul 5, 2009, at 9:20 PM, Richard Elling wrote:
Ross Walker wrote:
Thanks for the info. SSD is still very much a moving target.
I worry about SSD drives' long-term reliability. If I mirror two of
the same drives, what do you think the probability of a double
failure will be in 3, 4, 5
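Back-of-the-envelope, for what it's worth (the numbers are invented for
illustration): for a 2-way mirror with independent failures, mean time
between failures M, and mean time to repair/resilver R, the usual model is
MTTDL = M^2 / (2 * R). With M = 500,000 hours and R = 24 hours that works
out to roughly 5.2e9 hours, so on paper the double-failure risk is tiny; the
real worry with two identical SSDs is that wear-out failures are anything
but independent.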
Hi,
How do I discover the disk name to use for zfs commands, such as
c3d0s0? I tried using the format command but it only gave me the first 4
characters: c3d1. Also, why do some commands accept only the 4-character
disk names while others require 6?
Thanks
Hua-Ying
Hi
Hua-Ying Ling wrote:
> How do I discover the disk name to use for zfs commands, such as
> c3d0s0? I tried using the format command but it only gave me the first 4
> characters: c3d1. Also, why do some commands accept only the 4-character
> disk names while others require 6?
Usually I find
cfgadm -a
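To make the naming concrete (the device names below are illustrative): the
short form, c3d1, names the whole disk, while the sN suffix names a slice
within its label, so which form a command wants depends on whether it works
on disks or on slices:

# echo | format
  (lists whole-disk names such as c3d1 without entering the menu)
# prtvtoc /dev/rdsk/c3d1s2
  (shows the slices inside that disk; s2 traditionally covers the whole disk)
# zpool create tank c3d1
  (whole disk: ZFS puts an EFI label on it and uses all of it)
# zpool create tank c3d1s0
  (uses just slice 0)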