Hi,
as Richard Elling wrote earlier:
"For more background, low-cost SSDs intended for the boot market are
perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root
and the rest for an L2ARC. For small form factor machines or machines
with max capacity of 8GB of RAM (a typical home system)
Thanks for the reply.
I didn't get very much further.
Yes, ZFS loves raw devices. If I had two devices I wouldn't be in this mess.
I would simply install opensolaris on the first disk and add the second ssd to
the data pool with a zpool add mpool cache cxtydz. Notice that no slices or
partitions
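A minimal sketch of that two-device layout (the device name c1t1d0 is a placeholder; take the real one from `format` output):

```shell
# Install OpenSolaris onto the first disk as usual, then hand the whole
# second SSD to the data pool as an L2ARC cache device -- no slices or
# fdisk partitions needed when the pool gets the raw device:
zpool add mpool cache c1t1d0

# Verify the cache device shows up under the "cache" heading:
zpool status mpool
```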
Thank you Erik for the reply.
I misunderstood Dan's suggestion about the zvol in the first place. Now you
make the same suggestion. Doesn't zfs prefer raw devices? When following
this route the zvol used as cache device for tank makes use of the ARC of rpool,
which doesn't seem right. Or is
Thank you Darren.
So no zvol's as L2ARC cache device. That leaves partitions and slices.
When I tried to add a second partition (the first contained the slices with the
root pool) as cache device, zpool refused: it reported that the device CxTyDzP2
(note P2) wasn't supported. Perhaps I did something
Hi all,
yes it works with the partitions.
I think that I made a typo during the initial testing of adding a partition as
cache, probably swapped the 0 for an o.
Tested with a b134 gui and text installer on the x86 platform.
So here it goes:
Install opensolaris into a partition and leave some s
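The steps above can be sketched roughly as follows (device name c1t0d0 and the target pool tank are assumptions for illustration; p2 must exist as an fdisk primary partition in the space left free by the installer):

```shell
# Install OpenSolaris into the first fdisk partition (p1) of the SSD,
# leaving free space on the disk. Then create a second primary
# partition in that free space:
fdisk /dev/rdsk/c1t0d0p0

# Add the second fdisk partition -- note p2, not a slice like s2 --
# as an L2ARC cache device to the data pool:
zpool add tank cache c1t0d0p2
zpool status tank
```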
Hi all,
I want to backup a pool called mpool. I want to do this by doing a zfs send of
a mpool snapshot and receive into a different pool called bpool. All this on
the same machine.
I'm sharing various filesystems via zfs sharenfs and sharesmb.
Sending and receiving of the entire pool works as e
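A sketch of that whole-pool backup on one machine (snapshot name @backup is arbitrary; flags as documented in zfs(1M) of that era):

```shell
# Snapshot the whole source pool recursively:
zfs snapshot -r mpool@backup

# -R replicates all descendant filesystems and their properties
# (including sharenfs/sharesmb); -d on the receive side re-creates the
# dataset hierarchy under bpool, -F rolls bpool back if needed.
zfs send -R mpool@backup | zfs receive -d -F bpool
```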
Hi Richard,
thanks for the reply. As you can see I already use that option. But that
doesn't prevent the filesystems in the pool from mounting when I import the
pool after it was exported. I'm specifically looking for a zpool import option
to prevent the filesystems from mounting automatically.
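Two possibilities worth checking (whether `-N` exists depends on the build; consult your zpool(1M) man page):

```shell
# Later builds grew an import flag for exactly this case:
zpool import -N bpool        # import without mounting any filesystems

# A per-dataset workaround available on older builds: keep the
# filesystems from mounting on their own at import time.
zfs set canmount=noauto bpool
```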
OCZ not only introduced these "enterprise" SSDs but also "Maximum
Performance/Enterprise Solid State Drives" a couple of days ago.
An SLC version: Vertex 2 EX
http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-ex-se
I just looked it up again and as far as I can see the supercap is present in
the MLC version as well as the SLC
http://www.ocztechnology.com/products/solid-state-drives/2-5--sata-ii/maximum-performance-enterprise-solid-state-drives/ocz-vertex-2-pro-series-sata-ii-2-5--ssd-.html
From the page:
I fully agree with your post. NFS is much simpler in administration.
Although I don't have any experience with the DDRdrive X1, I've read and heard
from various people actually using them that it's the best "available" SLOG
device. Before everybody starts yelling "ZEUS" or "LOGZILLA". Was anybod
Yes, the sandforce-based SSDs are also interesting. I think both, the 1500
for sure, could be fitted with the necessary supercap to prevent data loss in
case of unexpected power loss. And the 1500-based models will be available with a
SAS interface needed for clustering. Something the DDRdrive cann
I wasn't planning to buy any SSD as a ZIL. I merely acknowledged that a
sandforce with supercap MIGHT be a solution. At least the supercap should take
care of the data loss in case of a power failure. But they are still in the
consumer realm and have not been picked up by the enterprise (yet) for w
Hi,
how can I find out the actual value when the default applies to a zfs
property?
# zfs get checksum mpool
NAME   PROPERTY  VALUE  SOURCE
mpool  checksum  on     default
(In this particular case I know what the value is, either fletcher2 or
fletcher4 depending on the build)
But how can one find ou
Thank you for the reply.
I must admit that upon closer inspection a lot of properties indeed do present
the actual value.
For the checksum property I used zdb - | grep fletcher to determine whether
fletcher2 or fletcher4 was used for checksumming the filesystem. Using
the OS build numb
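Roughly, that zdb approach looks like this (verbosity flags vary between builds; at higher `-d` levels zdb dumps block pointers, which name their checksum algorithm):

```shell
# Dump dataset metadata verbosely and grep for the checksum actually
# written to disk; expect fletcher2 or fletcher4 in the output.
zdb -ddddd mpool | grep -i fletcher
```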
After following this topic the last few days, to which nearly everybody has
contributed, I think it's time to add a new factor.
Vibration.
First some proof of how sensitive modern drives are:
http://blogs.sun.com/brendan/entry/unusual_disk_latency
Most "enterprise" drives also contain circuitry to handle
@Bob, yes you're completely right. This kind of engineering is what you get
when buying a 2540 for example. All parts are nicely matched. When you build
your own whitebox the parts might not match.
But that wasn't my point. Vibration, in the drive and excited by the drive,
increases with the s
Although it's a bit Nexenta-oriented, command-wise, it's a nice
introduction. I did find one thing, on page 28 about the zil. There's no zil
device; the zil can be written to an optional slog device. And the last line
of the first paragraph, "If you can, use memory based SSD devices". At least change
Hi,
can anybody describe the correct procedure to replace a disk (in a working OK
state) with another disk without degrading my pool?
For a mirror I thought of adding the spare, so you'll get a three-device mirror.
Let it resilver. Finally remove the disk I want.
But what would be the correct
Thanks for the replies.
I guess I misunderstood the manual:
     zpool replace [-f] pool old_device [new_device]

         Replaces old_device with new_device. This is equivalent to
         attaching new_device, waiting for it to resilver, and then
         detaching old_device. The size of new_device must be greater
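In practice that gives two routes (device names are placeholders); the manual one never drops the mirror below two live copies:

```shell
# One step, per the man page: attach + resilver + detach combined.
zpool replace mpool c1t2d0 c1t3d0

# Or manually, for a mirror: attach the new disk first, making a
# temporary three-way mirror, then detach the old one after resilver.
zpool attach mpool c1t2d0 c1t3d0
zpool status mpool              # wait until resilvering completes
zpool detach mpool c1t2d0
```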
Also in reply to the previous email by Will.
Can anyone shed more light on the combination of an lsi sas hba, the lsisasx36
expander chip (or its relatives), and sata disks.
I'm investigating a migration from discrete channels (like in the thumper) to a
multiplexed solution via a sas expander.
I'm aw
So to wrap it up. According to Will, a supermicro chassis using a single lsi
expander connected to sata disks can utilize the wide sas port between the hba and
the chassis (like the J4500 Richard mentioned; much as I like these systems
(thumper etc.), they're way out of my budget). Will did see more
Thanks for posting this solution.
But I would like to point out that bug 6574286 "removing a slog doesn't work"
still isn't resolved. A solution is on its way, according to George Wilson.
But in the meantime, IF something happens you might be in a lot of trouble.
Even without some unfortunate
Hi,
I'm using asus m3a78 boards (with the sb700) for opensolaris and m2a* boards
(with the sb600) for linux, some of them with 4*1GB and others with 4*2GB ECC
memory. ECC faults will be detected and reported. I tested it with a small
tungsten light. By moving the light source slowly towards the
I didn't mean using a slog for the root pool. I meant using the slog for a data
pool, where the data pool consists of (rotating) hard disks complemented
with an ssd-based slog. But instead of a dedicated ssd for the slog I want the
root pool to share the ssd with the slog. Both can be mirrored to
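A sketch of that shared layout (hypothetical device names; assumes each SSD carries the root pool on slice 0 and a spare slice 1 for the slog):

```shell
# Give the data pool a mirrored slog built from the leftover slice on
# each of the two SSDs that also host the (mirrored) root pool:
zpool add tank log mirror c5t0d0s1 c5t1d0s1
zpool status tank
```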