t of space in your
case if you're going to have more than 8 drives in this rig. Random
googling: http://www.pc-pitstop.com/sas_cables_adapters/AD8788-2.asp and
http://www.amazon.com/HighPoint-External-Mini-SAS-SFF8088-Ext-MS-1MES/dp/B000JQ51CM
or
something.
>
> Cheers
Hi,
I use GELI with ZFS all the time. Works fine for me so far.
On 31.07.12 21:54, Robert Milkowski wrote:
>> Once something is written deduped, you will always use the memory whenever
>> you read files that were written while dedup was enabled, so
>> you do not save any memory unless you
Next gen spec sheets suggest the X25-E will get a "Power Safe Write
Cache," something it does not have today.
See:
http://www.anandtech.com/Show/Index/3965?cPage=5&all=False&sort=0&page=1&slug=intels-3rd-generation-x25m-ssd-specs-revealed
(Article is about X25-M, scroll down for X25-E info.)
Western Digital Caviar Blue WD10EALS 1TB drives [1]. Does anyone have any
experience with these drives?
If this is the wrong way to go, does anyone have a recommendation for
1TB drives I can get for <= $90?
[1] http://www.wdc.com/en/products/products.asp?driveid=793
Thanks for any help,
--
- Pat
Thanks, that worked! It needed "-Ff".
The pool has been recovered with minimal loss of data.
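For reference, a minimal sketch of the kind of forced import described here, assuming the pool name atomfs from the status output later in the thread (device paths and build are whatever your system uses):

# Attempt a normal import first to see what ZFS suggests.
zpool import
# Force the import and let ZFS roll back to the last consistent transaction group.
zpool import -Ff atomfs
# Verify the pool and check what, if anything, was lost.
zpool status -v atomfs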
I tried booting with b134 to attempt to recover the pool. I attempted with one
disk of the mirror. zpool tells me to use -F for the import; that fails, but then it
tells me to use -f, which also fails and tells me to use -F again. Any thoughts?
j...@opensolaris:~# zpool import
pool: atomfs
id: 13446
Thanks for the info.
I'll try the live CD method when I have access to the system next week.
Also, I tried to run zpool clear, but the system crashes and reboots.
This system is running stock 111b on an Intel Atom D945GCLF2
motherboard. The pool is two mirrored 1TB SATA disks. I noticed the system
was locked up, rebooted, and the pool status shows as follows:
pool: atomfs
state: FAULTED
status: An intent log record could not be read.
I've found that when I build a system, it's worth the initial effort
to install drives one by one to see how they get mapped to names. Then
I put labels on the drives and SATA cables. If there were room to
label the actual SATA ports on the motherboard and cards, I would.
While this isn't foolproof
Thank you very much!
This is exactly what I was searching for!
I've had success with the SIIG SC-SAE012-S2. PCIe and no problems
booting off of it in 2008.11.
On Jun 27, 2009, at 3:02 PM, Simon Breden
wrote:
Hi,
Does anyone know of a reliable 2- or 4-port SATA card with a solid
driver that plugs into a PCIe slot, so that I can benefit from the
h
So there is no way to do this with, or before, the lucreate command?
Hm, well,
thank you anyway then.
Thanks for your quick answer, but that is exactly what I am trying to do: manage this
with the lucreate command or earlier.
The reason is that the volumes are VERY small.
I hope it is possible to create the BE without using much disk space, and swap
is a small chance to get at least some space back. Right now it is on 5
Good morning everybody,
I was migrating my UFS root filesystem to a ZFS one, but was a little upset
to find that it had become bigger (which was expected, because of the swap and dump
size).
Now I am wondering whether it is possible to set the swap and dump size
using the lucreate command
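A sketch of the usual workaround, assuming a ZFS root pool named rpool with the default swap and dump zvols (the 1g sizes are purely illustrative); lucreate sizes them itself, but they can be resized after the migration:

# Check the sizes lucreate chose for the swap and dump zvols.
zfs get volsize rpool/swap rpool/dump

# Shrink swap (it has to be removed from use first) and dump afterwards.
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=1g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap
zfs set volsize=1g rpool/dump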
I'm using ZFS snapshots and send and receive for a proof of concept, and
I'd like to better understand how the incremental feature works.
Consider this example:
1. create a tar file using tar -cvf of 10 image files
2. ZFS snapshot the filesystem that contains this tar file
3. Use ZFS send
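A minimal sketch of the incremental send/receive flow being asked about, with hypothetical dataset and snapshot names:

# Full send of the first snapshot to seed the receiving side.
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | zfs receive backup/data

# Append to the tar file, snapshot again, and send only the blocks that changed.
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | zfs receive backup/data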
I'm fighting with an identical problem here & am very interested in this
thread.
Solaris 10 127112-11 boxes running ZFS on a fibre channel RAID5 device
(hardware RAID).
Randomly, one LUN on a machine will stop writing for about 10-15 minutes
(during a busy time of day), and then all of a sudden
Yes, we are currently running ZFS, just without an L2ARC or an offloaded ZIL.
Mark J Musante wrote:
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could buffer
them even for 60 seconds, it would make everything much smoother.
ZFS already
about short bursts that happen once or twice a
day. The rest of the time everything runs very smoothly.
Thanks.
Eric D. Mudama wrote:
On Fri, Apr 10 at 8:07, Patrick Skerrett wrote:
Thanks for the explanation folks.
So if I cannot get Apache/WebDAV to write synchronously (and it does
n
Thanks.
Neil Perrin wrote:
Patrick,
The ZIL is only used for synchronous requests like O_DSYNC/O_SYNC and
fsync(). Your iozone command must be doing some synchronous writes.
All the other tests (dd, cat, cp, ...) do everything asynchronously.
That is, they do not require the data to be on stable storage
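For what it's worth, a quick shell-level way to see this difference (assuming iozone is available and a hypothetical /tank/test filesystem; iozone's -o flag forces O_SYNC writes, which go through the ZIL, while the default run is asynchronous):

# Asynchronous writes: the ZIL and any separate log device stay mostly idle.
iozone -i 0 -s 512m -r 128k -f /tank/test/async.dat

# Synchronous (O_SYNC) writes: every write is committed through the ZIL.
iozone -i 0 -o -s 512m -r 128k -f /tank/test/sync.dat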
Hi folks,
I would appreciate it if someone can help me understand some weird
results I'm seeing with trying to do performance testing with an SSD
offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS based WebDav servers), and naturally I'm looking at im
I have a customer using ZFS in production, and he's opening up some files with the
O_SYNC flag. This affects subsequent write()s by providing
synchronized I/O file integrity completion. That is, each write(2) will
wait for both the file data and file status to be physically updated.
Because of this, he's
> This seems a bit like black magic. Maybe that's what I need,
> eh?
Feel the magic at
http://www.cuddletech.com/blog/pivot/entry.php?id=729
Greetings,
Patrick
# zpool create foo /test/old
# zpool create bar /test/new
# zfs create -s -V 150m bar/vol
# mkfile 20m /foo/test.file
# zpool attach foo /test/old /dev/zvol/dsk/bar/vol
# zpool detach foo /test/old
Greetings,
Patrick
Thanks. I'll give that a shot. I neglected to notice what forum it was in since
the question morphed into "when will Solaris support port multipliers?"
Thanks again.
At the risk of groveling, I'd like to add one more to the set of people wishing
for this to be completed. Any hint on a timeframe? I see reference to this bug
back in 2006
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6409327), so I was
wondering if there was any progress.
Thanks
staging in order to get the best performance possible.
Could you provide some information regarding this topic?
Thanks in advance for your help
Regards
Patrick
had to move a pool around and attached an
external USB-drive with a disk considerably larger than my zpool.
So I created a zpool on the disk and used a zvol as the
buffer-vdev.
HTH,
Patrick
Thanks all for the feedback! I definitely learned a lot; storage isn't
anywhere near my field of expertise, so it's great to get some real examples to
go with all the buzzwords you hear around the watercooler. ;)
I'll probably give one of the raid-z or mirroring setups suggested a try when I
Hi,
I just set up snv_54 on an old P4 Celeron system, and even though the processor is
crap, it's got 3 7200 RPM HDs: one 80GB and two 40GBs. So I'm wondering if there is
an optimal way to lay out the ZFS pool(s) to make this old girl as fast as
possible.
As it stands now I've got the following dri
I have a machine with a disk that has some sort of defect, and I've found that
if I partition only half of the disk the machine will still work. I tried
to use 'format' to scan the disk and find the bad blocks, but it didn't work.
So, as I don't know where the bad blocks are but I'd still li
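One way this is commonly handled, sketched here with a hypothetical device name: carve the usable half of the disk into a slice with format(1M) and build the pool on that slice rather than on the whole disk.

# Assuming the good half of the disk has been put into slice 0 of c1d0.
zpool create tank c1d0s0
zpool status tank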
Is there a difference? Yep:
'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and
options,
whereas
'none' tells ZFS not to mount the filesystem at all. You would then
need to give it a real mountpoint again with 'zfs set mountpoint=/mountpoint
poolname/fsname' to get it mounted.
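A short sketch of both behaviours, using a hypothetical tank/data dataset and /export/data mountpoint:

# Legacy: ZFS leaves mounting to /etc/vfstab and mount(1M).
zfs set mountpoint=legacy tank/data
mount -F zfs tank/data /export/data

# None: the dataset stays unmounted until a real mountpoint is set again.
zfs set mountpoint=none tank/data
zfs set mountpoint=/export/data tank/data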
I'm replacing the stock HD in my Vaio notebook with two 100GB 7200 RPM Hitachis;
yes, it can hold 2 HDs. ;) I was thinking about doing some sort of striping
setup to get even more performance, but I am hardly a storage expert, so I'm
not sure if it is better to set them up to do software RAID or t
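A sketch of the two obvious ZFS layouts for a pair of equal disks (hypothetical device names; a plain two-vdev pool stripes dynamically, while a mirror trades capacity for redundancy):

# Option A, striped: data is spread across both disks for best raw performance, no redundancy.
zpool create tank c1d0 c2d0

# Option B, mirrored: half the capacity, but either disk can fail.
zpool create tank mirror c1d0 c2d0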
Hey,
You'll need to use one of the OpenSolaris/ZFS community releases
to use the snapshot -r option, starting at build 43.
Bugger,
Anyone have an idea if it'll be patched into 06/06, or would it be a
future release plan/plot/idea/etc...
P
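For later readers, a sketch of what the recursive snapshot looks like once you're on a build that has it (hypothetical pool, dataset, and host names):

# Atomically snapshot every dataset in the pool.
zfs snapshot -r tank@backup

# Each dataset gets its own @backup snapshot, which can then be sent individually.
zfs list -t snapshot | grep @backup
zfs send tank/home@backup | ssh backuphost zfs receive backuppool/home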
Hey,
Would 'zfs snapshot -r poolname' achieve what you want?
I suppose, the idea, would... but alas :
[EMAIL PROTECTED]:/# zfs snapshot -r [EMAIL PROTECTED]
invalid option 'r'
usage:
snapshot <[EMAIL PROTECTED]|[EMAIL PROTECTED]>
[EMAIL PROTECTED]:/#
( Solaris 06/06, with all patches
Hi,
Is it possible to create a snapshot, for ZFS send purposes, of an entire pool?
Patrick
ssing something. (It's a GbE crossover from one
v20z to another.)
I'm getting the figures from the 'zfs list' I'm doing on the destination...
So, is there a faster way? Am I missing something?
Patrick
't think that it
should make that much of a difference.
As far as I remember, ZFS snapshot/send/etc. access the device, not
the filesystem.
P
Hi,
*sigh*, one of the issues we recognized, when we introduced the new
cheap/fast file system creation, was that this new model would stress
the scalability (or lack thereof) of other parts of the operating
system. This is a prime example. I think the notion of an automount
option for zfs dir
s on boot, becomes a pain.
So... how about an automounter? Is this even possible? Does it exist?
Heeeeelll!!
Patrick
ry is, in turn, shared out over NFS. Are there any issues I should
be aware of with this sort of installation?
Thanks for any advice or input!
Patrick Narkinsky
Sr. System Engineer
EDS
John Danielson wrote:
Patrick Petit wrote:
David Edmondson wrote:
On 4 Aug 2006, at 1:22pm, Patrick Petit wrote:
When you're talking to Xen (using three control-A's) you should
hit
Darren J Moffat wrote:
Richard Lowe wrote:
Patrick Petit wrote:
Hi,
Some additional elements. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a
domU is booted from a disk image located on an emulated ZFS volume.
Has this been also
Richard Lowe wrote:
Patrick Petit wrote:
Hi,
Some additional elements. Irrespective of the SCSI error reported
earlier, I have established that Solaris dom0 hangs anyway when a
domU is booted from a disk image located on an emulated ZFS volume.
Has this been also observed by other members
explanation to this problem? What would be the troubleshooting steps?
Thanks
Patrick
James C. McPherson wrote:
Patrick Petit wrote:
Darren Reed wrote:
Patrick Petit wrote:
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console
Darren Reed wrote:
Patrick Petit wrote:
Hi,
Using a ZFS emulated volume, I wasn't expecting to see a system [1]
hang caused by a SCSI error. What do you think? The error is not
systematic. When it happens, the Solaris/Xen dom0 console keeps
displaying the following message and the s
tank/vol2  compression  off  default
tank/vol2  readonly     off  default
3 - Boot a Linux Xen domU kernel on that volume which contains an ext3fs
rootfs partition and a swap partition.
Thanks,
Patrick
---
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:10:44PM +0200, Patrick Petit wrote:
Hi There,
I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS
capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen
domU file systems that are not ZFS. Couldn't
comparison to a physical block device.
Would ZFS perform well enough in this configuration?
Thanks
- Patrick
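Since the question is about backing non-ZFS domU filesystems with ZFS, here is a minimal sketch of the zvol-backed approach being discussed, with hypothetical pool and volume names:

# A sparse 8 GB zvol to act as the domU's virtual disk.
zfs create -s -V 8g tank/domu1-root

# The block device appears under /dev/zvol/ and can be handed to the domU
# configuration as its disk; the guest formats it with ext3 and swap as usual.
ls -l /dev/zvol/dsk/tank/domu1-root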
Hi all,
I recently replaced the drive in my Ferrari 4000 with a 7200 RPM drive and I put
the original drive in a Silverstone USB enclosure. When I plug it in, vold puts the
icon on the desktop and I can see the root UFS filesystem, but I can't import
the zpool that held all my user data. ;(
I found
Hey Frank,
Frank Cusack wrote:
Patrick Bachmann:
IMHO it is sufficient to just document this best-practice.
I disagree. The documentation has to AT LEAST state that more than 9
disks gives poor performance. I did read that raidz should use 3-9 disks
in the docs, but it doesn't say WHY
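To make the usual recommendation concrete, a sketch of keeping each raidz group inside the 3-9 disk range and letting the pool stripe across the groups (hypothetical device names):

# Two 5-disk raidz vdevs instead of one 10-disk vdev; ZFS stripes across them.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0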
Hi,
Note, though, that neither of them will back up the ZFS properties; even
zfs send/recv doesn't do that.
From a previous post, I remember someone saying that was being added,
or at least being suggested.
Patrick
d be a new partition, moving the data over by copying (instead of moving)
the troublesome file, just in case; not sure if ZFS allows
for links that cross ZFS partitions and thus optimizes such moves. Then zfs
destroy data/test, but there might be a better way?)
patrick mauritz
Hi,
sounds like your workload is very similar to mine. Is all public
access via NFS?
Well, it's not 'public' directly (courier-imap/pop3/postfix/etc.), but
the maildirs are accessed directly by some programs for certain
things.
For small file workloads, setting recordsize to a value lower tha
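A minimal sketch of that recordsize tuning, assuming a hypothetical tank/mail dataset (the setting only affects files written after the change):

# Use a smaller recordsize for maildir-style small-file workloads.
zfs set recordsize=8k tank/mail
zfs get recordsize tank/mail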
Hi,
I've just started using ZFS + NFS, and I was wondering if there is
anything I can do to optimise it for use as a mailstore?
(Small files, lots of them, with lots of directories and high
concurrent access.)
So any ideas, guys?
P
Hey,
thanks ;)
although it seems after a reboot things are sorted ;)
Patrick
On 6/15/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
You're missing some of the daemons:
daemon 337 1 0 11:41:03 ? 0:00 /usr/sbin/rpcbind
daemon 469 1 0 11:41:04 ?
m a BSD box
before )
The Solaris box is a default install with generic_limited_net.xml run,
and dtlogin -d.
Ideas?
Patrick
snapshots? Would that be an option, thus
giving you a less redundant, yet still redundant, solution...?
Patrick
Without answering those questions first, you will risk a suboptimal
solution.
Anything I missed?
Patrick
h_the_zfs_external
The page has an idea that seems somewhat fiddly, and I'd rather not
trust it in a production-type environment. Anyone have any more 'info'
for me?
P