the claims are meaningless.
http://mail.opensolaris.org/pipermail/opensolaris-help/2009-November/015824.html
--
James Andrewartha
for gam_server / gamin.
$ nm /usr/lib/gam_server | grep port_create
[458] | 134589544| 0|FUNC |GLOB |0|UNDEF |port_create
The port_create patch has never gone upstream, however, while gvfs uses
glib's gio, which has file-monitoring backends for inotify, Solaris FEN and others.
There's no pricing on the webpage though - does anyone know how it compares
in price to a logzilla?
--
James Andrewartha
which use the Marvell MV88SX chipset and work very well in
Solaris (package SUNWmv88sx).
They're PCI-X SATA cards; the AOC-SASLP-MV8 is a PCIe SAS card and has no
(Open)Solaris driver.
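If you want to confirm the driver has actually attached on a given box,
something like this should show it (marvell88sx is the module name as far as
I remember - check on your own system):

$ modinfo | grep -i 88sx        # is the marvell88sx module loaded?
$ prtconf -D | grep -i marvell  # which driver is bound to the device?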
--
James Andrewartha
> How would you do it in theory?
> And in practice?
>
> Now say we are talking about a virtual hard drive,
> rather than a physical hard drive.
> How would that affect the answer to the above questions?
http://brad.livejournal.com/2116715.html has a utility that can be used to
test if your system (or the disks behind it) lies about flushing its write cache.
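From memory the workflow is roughly this (host and file names are made up,
so check the script's own usage text before trusting my flags):

$ diskchecker.pl -l                             # on a second box that stays up
$ diskchecker.pl -s otherhost create test 500   # on the box under test
  (cut the power mid-run, boot it back up, then...)
$ diskchecker.pl -s otherhost verify test

If verify reports writes the server saw acknowledged that aren't on disk,
something in the write path is lying about cache flushes. The same idea works
for a virtual disk - run it inside the guest and pull the plug on the host.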
zfs mount space/zfscachetest
Doing initial (unmount/mount) 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real    5m45.66s
user    0m5.63s
sys     1m14.66s
Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real    15m29.42s
user    0m5.65s
James Lever wrote:
>
> On 07/07/2009, at 8:20 PM, James Andrewartha wrote:
>
>> Have you tried putting the slog on this controller, either as an SSD or
>> regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
>
> What exactly are you suggesting?
James Lever wrote:
> We also have a PERC 6/E w/512MB BBWC to test with or fall back to if we
> go with a Linux solution.
Have you tried putting the slog on this controller, either as an SSD or
regular disk? It's supported by the mega_sas driver, x86 and amd64 only.
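Concretely, adding a slog is a one-liner - the pool and device names below
are placeholders, not from your setup:

# zpool add tank log c2t1d0
# zpool status tank    # the device shows up under a separate 'logs' section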
--
James
even faster than it is now.
Are you aware of posix_fadvise(2) and madvise(2)?
--
James Andrewartha
There's no Marvell SAS
driver for Solaris at all, so I'd say it's not supported.
http://www.hardforum.com/showthread.php?t=1397855 has a fair few people
testing it out, but mostly under Windows.
--
James Andrewartha
ion-and-the-zero-length-file-problem/
http://lwn.net/Articles/323169/
http://mjg59.livejournal.com/108257.html http://lwn.net/Articles/323464/
http://thunk.org/tytso/blog/2009/03/15/dont-fear-the-fsync/
http://lwn.net/Articles/323752/ *
http://lwn.net/Articles/322823/ *
* are currently subscriber-only,
08-June/048457.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-June/048550.html
--
James Andrewartha
ess
> by calculating the best theoretical correct speed (which should be
> really slow, one write per disc spin)
>
> this has been on my TODO list for ages.. :(
Does the perl script at http://brad.livejournal.com/2116715.html do what you
want?
--
James Andrewartha
do not exist on the sending side are destroyed.
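That's the zfs receive -F behaviour the man page is describing, i.e.
something like (names invented):

# zfs send -R tank/data@today | zfs recv -F backup/data

With -F the receiving side is rolled back before the receive, and snapshots
and filesystems that have gone away on the sender are destroyed on the target.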
--
James Andrewartha
too.
My impression is you should change the recordsize on the first filesystem
before performing the zfs send. It will then be used for all files when you
receive the filesystem. I haven't tested this with recordsize, but I did with
compression, and I imagine recordsize (and other properties) will behave the
same way.
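A minimal sketch of what I mean, with made-up dataset names (zfs send -p,
where your build has it, also carries the property values along in the stream):

# zfs set recordsize=16K space/src   # before the data is written and sent
# zfs snapshot space/src@xfer
# zfs send -p space/src@xfer | zfs recv space/dst
# zfs get recordsize space/dst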
http://www.techreport.com/articles.x/13732
http://www.techreport.com/articles.x/13253
http://www.techreport.com/articles.x/14583
http://www.storagereview.com/ is promising some SSD benchmarks soon.
James Andrewartha
with battery backup
RAM, Areca (who formerly specialised in SATA controllers) now do SAS RAID at
reasonable prices, and have Solaris drivers.
--
James Andrewartha
James Andrewartha wrote:
> On Thu, 2008-01-17 at 09:29 -0800, Richard Elling wrote:
>> You don't say which version of ZFS you are running, but what you
>> want is the -R option for zfs send. See also the example of send
>> usage in the zfs(1m) man page.
>
> Sorry
Dave Lowenstein wrote:
> Couldn't we move fixing "panic the system if it can't find a lun" up to
> the front of the line? that one really sucks.
That's controlled by the failmode property of the zpool, added in PSARC
2007/567 which was integrated
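For reference, the pool name here is a placeholder:

# zpool get failmode tank
# zpool set failmode=continue tank

The property takes wait (the default), continue or panic, so the old
panic-on-missing-LUN behaviour is just the panic setting.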
mention of send -R in the
man page. Ah, it's PSARC/2007/574 and nv77. I'm not convinced it'll
solve my problem (sending the root filesystem of a pool), but I'll
upgrade and give it a shot.
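For the record, what I'm planning to try looks roughly like this (pool and
host names are placeholders):

# zfs snapshot -r tank@migrate
# zfs send -R tank@migrate | ssh otherhost zfs recv -dF tank2

-R should pick up the descendant filesystems, snapshots and properties in one
stream; whether it copes with the pool's root filesystem is the bit I still
need to test.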
Thanks,
James Andrewartha
cannot receive: destination 'space' exists
# zfs send [EMAIL PROTECTED] | ssh musundo "zfs recv -vn [EMAIL PROTECTED]"
cannot receive: destination does not exist
What am I missing here? I can't recv to space, because it exists, but I
can't make it not exist since it's the pool's root filesystem.
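A sketch of one workaround, assuming the snapshot is space@backup (the real
name was eaten by the list's address scrubber): receive into a new child
dataset rather than the pool root, since a full stream can always land in a
dataset that doesn't exist yet:

# zfs send space@backup | ssh musundo "zfs recv -v space/restore"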