On 25.06.2010 14:32, Mika Borner wrote:
>
> It seems we are hitting a boundary with zfs send/receive over a network
> link (10Gb/s). We can see peak values of up to 150MB/s, but on average
> about 40-50MB/s is replicated. This is far from the bandwidth that
> a 10Gb link can offer.
>
> Is i
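A quick way to narrow down where the 40-50MB/s limit comes from (pool and snapshot names below are only placeholders) is to time the send stream locally, with no network involved:

  # time zfs send -R tank@backup > /dev/null

If the local rate is also around 50MB/s, the sending pool itself is the limit; if it is much faster, the bottleneck is the receiver or the TCP path, and putting a large buffer such as mbuffer on both ends of the connection usually smooths out the bursty send stream.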
On 06.06.2010 08:06, devsk wrote:
> I had an unclean shutdown because of a hang and suddenly my pool is degraded
> (I realized something is wrong when python dumped core a couple of times).
>
> This is before I ran scrub:
>
> pool: mypool
> state: DEGRADED
> status: One or more devices has ex
On 13.04.2010 10:12, Ian Collins wrote:
> On 04/13/10 05:47 PM, Daniel wrote:
>> Hi all.
>>
>> I'm pretty new to the whole OpenSolaris thing; I've been doing a bit of
>> research but can't find anything on what I need.
>>
>> I am thinking of making myself a home file server running OpenSolaris
>> wit
On 07.04.2010 18:05, Ron Marshall wrote:
> I finally decided to get rid of my Windows XP partition as I rarely used it
> except to fire it up to install OS updates and virus signatures. I had some
> trouble locating information on how to do this so I thought I'd document it
> here.
>
> My syst
On 04.02.2010 12:12, dick hoogendijk wrote:
>
> Frank Cusack wrote:
>> Is it possible to emulate a unionfs with zfs and zones somehow? My zones
>> are sparse zones and I want to make part of /usr writable within a zone.
>> (/usr/perl5/mumble to be exact)
>
> Why don't you just export that direct
On 28.01.2010 15:55, dick hoogendijk wrote:
>
> Cindy Swearingen wrote:
>
>> On some disks, the default partitioning is not optimal and you have to
>> modify it so that the bulk of the disk space is in slice 0.
>
> Yes, I know, but in this case the second disk indeed is smaller ;-(
> So I wonder
Michael Schuster wrote:
> Thomas Maier-Komor wrote:
>
>>> Script started on Wed Oct 28 09:38:38 2009
>>> # zfs get dedup rpool/export/home
>>> NAME PROPERTY VALUE SOURCE
>>> rpool/export/home dedup on
Chavdar Ivanov wrote:
> Hi,
>
> I successfully BFU'd snv_128 over snv_125:
>
> ---
> # cat /etc/release
> Solaris Express Community Edition snv_125 X86
> Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
> Use is subject to license
Hi everybody,
I am considering moving my data pool from a two disk (10krpm) mirror
layout to a three disk raidz-1. This is just a single user workstation
environment, where I mostly perform compile jobs. From past experiences
with raid5 I am a little bit reluctant to do so, as software raid5 has a
Marcel Gschwandl wrote:
> Hi all!
>
> I'm running a Solaris 10 Update 6 (10/08) system and had to resilver a zpool.
> It's now showing
>
>
> scrub: resilver completed after 9h0m with 21 errors on Wed Nov 4 22:07:49
> 2009
>
>
> but I haven't found an option to see what files were affect
Thomas Maier-Komor wrote:
> Hi,
>
> I have a corrupt pool, which lives on a .vdi file of a VirtualBox. IIRC
> the corruption (i.e. the pool not being importable) was caused when I killed
> VirtualBox, because it was hung.
>
> This pool consists of a single vdev and I would re
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
I have a corrupt pool, which lives on a .vdi file of a VirtualBox. IIRC
the corruption (i.e. the pool not being importable) was caused when I killed
VirtualBox, because it was hung.
This pool consists of a single vdev and I would really like to get
Hi,
I am just having trouble with my OpenSolaris installation in a VirtualBox VM. It
refuses to boot with the following crash dump:
panic[cpu0]/thread=d5a3edc0: assertion failed: 0 ==
dmu_buf_hold_array(os, object, offset, size, FALSE, FTAG, &numbufs,
&dbp), file: ../../common/fs/zfs/dmu.c, line: 614
d5a3eb08
Ben wrote:
> Thomas,
>
> Could you post an example of what you mean (ie commands in the order to use
> them)? I've not played with ZFS that much and I don't want to muck my system
> up (I have data backed up, but am more concerned about getting myself in a
> mess and having to reinstall, th
dick hoogendijk wrote:
> On Wed, 24 Jun 2009 03:14:52 PDT
> Ben wrote:
>
>> If I detach c5d1s0, add a 1TB drive, attach that, wait for it to
>> resilver, then detach c5d0s0 and add another 1TB drive and attach
>> that to the zpool, will that up the storage of the pool?
>
> That will do the tri
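As a sketch of that sequence (the pool name "mypool" and the new device names c6d0/c6d1 are placeholders; keep both halves of the mirror attached until each resilver has finished):

  # zpool attach mypool c5d0s0 c6d0
  # zpool status mypool          (wait for the resilver to complete)
  # zpool detach mypool c5d1s0
  # zpool attach mypool c6d0 c6d1

The extra space only becomes visible once every device in the mirror is one of the larger disks; on older releases an export/import of the pool (or, later, the autoexpand property) is needed before the new size shows up.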
Volker A. Brandt wrote:
>>>> 2) disks that were attached once leave a stale /dev/dsk entry behind
>>>> that takes a full 7 seconds to stat() with the kernel running at 100%.
>>> Such entries should go away with an invocation of "devfsadm -vC".
>>> If they don't, it's a bug IMHO.
>> yes, they go away. B
Volker A. Brandt wrote:
>> 2) disks that were attached once leave a stale /dev/dsk entry behind
>> that takes a full 7 seconds to stat() with the kernel running at 100%.
>
> Such entries should go away with an invocation of "devfsadm -vC".
> If they don't, it's a bug IMHO.
>
>
> Regards -- Volker
y
Andre van Eyssen wrote:
> On Mon, 22 Jun 2009, Jacob Ritorto wrote:
>
>> Is there a card for OpenSolaris 2009.06 SPARC that will do SATA
>> correctly yet? Need it for a super cheapie, low expectations,
>> SunBlade 100 filer, so I think it has to be notched for 5v PCI slot,
>> iirc. I'm OK with
Hi,
I just tried replicating a zfs dataset, which failed because the dataset
has a mountpoint set and zfs receive tried to mount the target dataset
to the same directory.
I.e. I did the following:
$ zfs send -R mypool/h...@20090615 | zfs receive -d backup
cannot mount '/var/hg': directory is not
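A sketch of the usual workaround, assuming the zfs on the receiving side is new enough to know the -u option of zfs receive (which leaves the received datasets unmounted), so the mountpoint can be changed before anything is mounted; the dataset name backup/hg and the new mountpoint are placeholders:

  $ zfs send -R mypool/h...@20090615 | zfs receive -d -u backup
  $ zfs set mountpoint=/backup/hg backup/hg

Setting the mountpoint locally on the copy also keeps later incremental receives from fighting over /var/hg.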
Troy Nancarrow (MEL) wrote:
> Hi,
>
> Please forgive me if my searching-fu has failed me in this case, but
> I've been unable to find any information on how people are going about
> monitoring and alerting regarding memory usage on Solaris hosts using ZFS.
>
> The problem is not that the ZFS
Hi,
there was recently a bug reported against EXT4 that gets triggered by
KDE: https://bugs.edge.launchpad.net/ubuntu/+source/linux/+bug/317781
Now I'd like to verify that my understanding of ZFS behavior and
implementation is correct, and that ZFS is unaffected by this kind of
issue. Maybe somebod
Julius Roberts wrote:
>>> How do I compile mbuffer for our system,
>
> Thanks to Mike Futerko for help with the compile, i now have it installed OK.
>
>>> and what syntax do I use to invoke it within the zfs send recv?
>
> Still looking for answers to this one? Any example syntax, gotchas
> et
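One form that gets posted a lot, as a sketch only (host, dataset, and buffer sizes are placeholders), wraps both ends of an ssh pipe in mbuffer:

  # zfs send mypool/data@snap | mbuffer -s 128k -m 512M | \
        ssh remotehost 'mbuffer -s 128k -m 512M | zfs receive -d backup'

-m is the amount of RAM set aside for buffering and -s the block size; using the same -s on both sides avoids needless re-blocking. The main gotcha is that the buffer only helps if it is large enough to ride out the pauses between zfs send bursts.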
>
> Seems like there's a strong case to have such a program bundled in Solaris.
>
I think the idea of having a separate, configurable buffer program with a rich
feature set fits the UNIX philosophy of having small programs that can be used
as building blocks to solve larger problems.
mbuffe
----- Original Message -----
Subject: Re: [zfs-discuss] 'zfs recv' is very slow
Sent: Fri, 14 Nov 2008
From: Bob Friesenhahn <[EMAIL PROTECTED]>
> On Fri, 14 Nov 2008, Joerg Schilling wrote:
> >
> > On my first Sun at home (a Sun 2/50 with 1 MB of RAM) in 1986, I could
> > set the socket
Jerry K wrote:
> Hello Thomas,
>
> What is mbuffer? Where might I go to read more about it?
>
> Thanks,
>
> Jerry
>
>
>
>>
>> Yesterday I released a new version of mbuffer, which also enlarges
>> the default TCP buffer size. So everybody using mbuffer for network data
>> transfer might
Joerg Schilling wrote:
> Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> That is exactly the issue. When the zfs recv data has been written, zfs
>> recv starts reading the network again, but there's only a tiny amount of
>> data buffered in the TCP/IP stack, so it has to wait for the network to
Roch wrote:
> Thomas, for long latency fat links, it should be quite
> beneficial to set the socket buffer on the receive side
> (instead of having users tune tcp_recv_hiwat).
>
> throughput of a TCP connection is gated by
> "receive socket buffer / round trip time".
>
> Could that be Ross' p
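To put numbers on that (purely illustrative): with a 64 KB receive buffer and a 100 ms round trip, the ceiling is roughly 64 KB / 0.1 s, i.e. about 640 KB/s, no matter how fast the link is. Filling a 1 Gbit/s path (~125 MB/s) over the same 100 ms would need a receive buffer on the order of 125 MB/s x 0.1 s = 12.5 MB, which is why setting the receive socket buffer (or tcp_recv_hiwat) makes such a difference on long fat links.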
Hi,
I'm observing a change in the values returned by zpool_get_prop_int. In Solaris
10 update 5 this function returned the values for ZPOOL_PROP_CAPACITY in bytes,
but in update 6 (i.e. nv88?) it seems to be returning the value in kB.
Both Solaris versions were shipped with libzfs.so.2. So how
Christiaan Willemsen wrote:
>
>> do the disks show up as expected in format?
>>
>> Is your root pool just a single disk or is it a mirror of multiple
>> disks? Did you attach/detach any disks to the root pool before rebooting?
>>
> No, we did nothing at all to the pools. The root pool is a ha
Christiaan Willemsen wrote:
> Since the last reboot, our system won't boot anymore. It hangs at the "Use is
> subject to license terms." line for a few minutes, and then gives an error
> that it can't find the device it needs for making the root pool, and
> eventually reboots.
>
> We did not c
Bob Friesenhahn wrote:
> On Tue, 21 Oct 2008, Håvard Krüger wrote:
>
>> Is it possible to build a RaidZ with 3x 1TB disks and 5x 0.5TB disks,
>> and then swap out the 0.5 TB disks as time goes by? Is there a
>> documentation/wiki on doing this?
>
> Yes, you can build a raidz vdev with all of th
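A sketch of the usual replacement cycle (pool and device names are placeholders); replace one 0.5 TB disk at a time and let each resilver finish before touching the next:

  # zpool replace tank c0t4d0 c0t9d0
  # zpool status tank            (wait until the resilver completes)

Until the last small disk has been replaced, every member of the raidz still only contributes the capacity of the smallest disk; the additional space appears once all members are the larger size (on older releases after an export/import of the pool).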
>> Subject: Re: [zfs-discuss] Improving zfs send performance
>>
>> Thomas Maier-Komor wrote:
>>> BTW: I released a new version of mbuffer today.
>> WARNING!!!
>>
>> Sorry people!!!
>>
>> The latest version of mbuffer has a regression that c
Thomas Maier-Komor wrote:
> BTW: I released a new version of mbuffer today.
WARNING!!!
Sorry people!!!
The latest version of mbuffer has a regression that can CORRUPT output
if stdout is used. Please fall back to the last version. A fix is on the
way...
- Tho
Ross wrote:
> Hi,
>
> I'm just doing my first proper send/receive over the network and I'm getting
> just 9.4MB/s over a gigabit link. Would you be able to provide an example of
> how to use mbuffer / socat with ZFS for a Solaris beginner?
>
> thanks,
>
> Ross
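A minimal sketch using mbuffer's built-in TCP mode (host name, port, and dataset names are placeholders); start the receiving side first:

  receiver# mbuffer -s 128k -m 1G -I 9090 | zfs receive -d tank
  sender#   zfs send mypool/data@snap | mbuffer -s 128k -m 1G -O receiver:9090

-I listens on the given port, -O connects to it, and the large -m buffer is what decouples the bursty zfs send stream from the network, which is usually where figures like 9.4MB/s on a gigabit link come from.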
Carsten Aulbert wrote:
> Hi again,
>
> Thomas Maier-Komor wrote:
>> Carsten Aulbert wrote:
>>> Hi Thomas,
>> I don't know socat or what benefit it gives you, but have you tried
>> using mbuffer to send and receive directly (options -I and -O)?
>
Carsten Aulbert wrote:
> Hi Thomas,
>
> Thomas Maier-Komor wrote:
>
>> Carsten,
>>
>> the summary looks like you are using mbuffer. Can you elaborate on what
>> options you are passing to mbuffer? Maybe changing the blocksize to be
>> consistent with
Carsten Aulbert wrote:
> Hi all,
>
> although I'm running all this in a Sol10u5 X4500, I hope I may ask this
> question here. If not, please let me know where to head to.
>
> We are running several X4500 with only 3 raidz2 zpools since we want
> quite a bit of storage space[*], but the performa
Joseph Mocker wrote:
> Hello,
>
> I haven't seen this discussed before. Any pointers would be appreciated.
>
> I'm curious, if I have a set of disks in a system, is there any benefit
> or disadvantage to breaking the disks into multiple pools instead of a
> single pool?
>
> Does multiple poo
Frank Fischer wrote:
> After having massive problems with a supermicro X7DBE box using AOC-SAT2-MV8
> Marvell controllers and opensolaris snv79 (same as described here:
> http://sunsolve.sun.com/search/document.do?assetkey=1-66-233341-1) we just
> started over using new hardware and opensolaris 20
Tom Buskey wrote:
>> On Fri, Jun 6, 2008 at 16:23, Tom Buskey
>> <[EMAIL PROTECTED]> wrote:
>>> I have an AMD 939 MB w/ Nvidia on the motherboard
>>> and 4 500GB SATA II drives in a RAIDZ.
>> ...
>>> I get 550 MB/s
>> I doubt this number a lot. That's almost 200
>> (550/(N-1) = 183) MB/s per
>> dis
[EMAIL PROTECTED] wrote:
> Uwe,
>
> Please see pages 55-80 of the ZFS Admin Guide, here:
>
> http://opensolaris.org/os/community/zfs/docs/
>
> Basically, the process is to upgrade from nv81 to nv90 by using the
> standard upgrade feature. Then, use lucreate to migrate your UFS root
> file system
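The procedure in the guide boils down to roughly this sketch (BE and pool names are placeholders, and the root pool has to live on a slice with an SMI label rather than a whole EFI-labelled disk):

  # zpool create rpool c1t0d0s0
  # lucreate -n zfsBE -p rpool
  # luactivate zfsBE
  # init 6

lucreate copies the running UFS boot environment into the ZFS pool named with -p, and luactivate switches the next boot over to it.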
Darren J Moffat wrote:
> Joerg Schilling wrote:
>> "Poulos, Joe" <[EMAIL PROTECTED]> wrote:
>>
>>> Is there a ZFS equivalent of ufsdump and ufsrestore?
>>>
>>>
>>>
>>> Will creating a tar file work with ZFS? We are trying to backup a
>>> ZFS file system to a separate disk, and would like
Hi,
I've run into an issue with a test machine that I'm happy to encounter
with this machine, because it is no real trouble. But I'd like to know
the solution for this issue in case I run into it again...
I've installed OpenSolaris 2008.05 on a USB disk on a laptop. After
installing I've modifie
After some fruitful discussions with Jörg, it turned out that my mtwrite
patch prevents tar, star, gtar, and unzip from setting the file times
correctly. I've investigated this issue and updated the patch accordingly.
Unfortunately, I encountered an issue concerning semaphores, which seem
to ha
Bob Friesenhahn wrote:
> On my drive array (capable of 260MB/second single-process writes and
> 450MB/second single-process reads) 'zfs iostat' reports a read rate of
> about 59MB/second and a write rate of about 59MB/second when executing
> 'cp -r' on a directory containing thousands of 8MB f
Richard Elling wrote:
>
> The size of the ARC (cache) is available from kstat in the zfs
> module (kstat -m zfs). Neel wrote a nifty tool to track it over
> time called arcstat. See
> http://www.solarisinternals.com/wiki/index.php/Arcstat
>
> Remember that this is a cache and subject to evi
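If arcstat isn't installed, the raw counters can be read directly with kstat; a couple of illustrative one-liners (the value shown is made up):

  $ kstat -p zfs:0:arcstats:size
  zfs:0:arcstats:size     1073741824
  $ kstat -p zfs:0:arcstats:c_max

size is the current ARC footprint, c its current target and c_max the ceiling; watching size alongside free memory makes the cache eviction behaviour easy to see.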
Kava wrote:
> My 2 cents ... read somewhere that you should not be running LVM on top of
> ZFS ... something about additional overhead.
Thiago Sobral wrote:
> Hi Thomas,
>
> Thomas Maier-Komor wrote:
>> Thiago Sobral wrote:
>>>
>>> I need to manage volumes like LVM does on Linux or AIX, and I think
>>> that ZFS can solve this issue.
>>>
>>> I read the SVM specific
Thiago Sobral wrote:
> Hi folks,
>
> I need to manage volumes like LVM does on Linux or AIX, and I think that
> ZFS can solve this issue.
>
> I read the SVM specification and it certainly won't be the
> solution that I'll adopt. I don't have Veritas here.
>
Why do you think it doesn'
Robert Milkowski wrote:
> Hello Thomas,
>
> Friday, January 18, 2008, 10:31:17 AM, you wrote:
>
> TMK> Hi,
>
> TMK> I'd like to move a disk from one controller to another. This disk is
> TMK> part of a mirror in a zfs pool. How can one do this without having to
> TMK> export/import the pool
Hi,
I'd like to move a disk from one controller to another. This disk is
part of a mirror in a zfs pool. How can one do this without having to
export/import the pool or reboot the system?
I tried taking it offline and online again, but then zpool says the disk
is unavailable. Trying a zpool re
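A sketch of what is usually suggested for this (pool and device names are placeholders): keep the other half of the mirror online, move the disk, let devfsadm pick up the new path, and then tell ZFS the disk's new name:

  # zpool offline mypool c1t2d0
  (physically move the disk to the other controller)
  # devfsadm -C
  # zpool replace mypool c1t2d0 c3t2d0

The pool stays online on the surviving mirror half the whole time; the replace resilvers the moved disk under its new name (it may need -f because the disk already carries the pool's label), which avoids the export/import.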
>
> the ZIL is always there in host memory, even when no synchronous writes
> are being done, since the POSIX fsync() call could be made on an open
> write channel at any time, requiring all to-date writes on that channel
> to be committed to persistent store before it returns to the appl
I observed the following on a machine running Solaris 10 update 4:
I was sending a ZFS filesystem with zfs send to a zpool on the same machine for backup
purposes. After a while (the machine was otherwise idle), the desktop froze and
a couple of seconds later I saw that the page scanner had kicked in and wa
Hi,
Now, as I'm back in Germany, I've got access to my machine at home with ZFS, so
I could test my binary patch for multi-threading with tar on a ZFS filesystems.
Results look like this:
.tar, small files (e.g. gcc source tree), speedup: x8
.tar.gz, small files (gcc source tree), speedup: x4
.ta
> Hello Thomas,
>
> With ZFS as local file system it shouldn't be a problem unless tar
> fdsync's each file but then removing fdsyncs would be easier.
>
> In case of nfs/zfs multi-threaded tar should help but I guess not for
> writes but rather for file/dirs creation and file closes. If y
Hi everybody,
many people, like myself, tested the performance of the ZFS filesystem by doing
a "tar xf something.tar". Unfortunately, ZFS doesn't handle this workload
pretty well as all writes are being executed sequentially. So some people
requested a multi-threaded tar...
Well, here it come
Hi,
I'm not sure if this is the right forum, but I guess this topic will be bounced
in the right direction from here.
With ZFS using as much physical memory as it can get, dumps and livedumps via
'savecore -L' are huge in size. I just tested it on my workstation and got a
1.8G vmcore file,
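One knob that helps bound this (a sketch; the tunable only exists on newer builds and the value is just an example) is capping the ARC in /etc/system so cached file data cannot dominate the crash dump:

  * limit the ARC to 1 GB
  set zfs:zfs_arc_max = 0x40000000

After a reboot the ARC never grows beyond that, at the price of a smaller cache; dumpadm can additionally be used to pick a dump device with enough headroom.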
Hi Tim,
I just tried again to reproduce it, to generate a reliable test case. Unfortunately,
I cannot reproduce the error message. So I really have no idea what might have
caused it
Sorry,
Tom
Is this a known problem/bug?
$ zfs snapshot zpool/[EMAIL PROTECTED]
internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c
this occured on:
$ uname -a
SunOS azalin 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-2500
Hi,
concerning this issue I didn't find anything in the bug database, so I thought
I'd report it here...
When running Live Upgrade on a system with a ZFS filesystem, LU creates directories for
all ZFS filesystems in the ABE. This causes svc:/system/filesystem/local to go
into maintenance state when booting
Hi everybody,
this question has probably been asked before, but I couldn't find an answer to
it anywhere...
What privileges are required to be able to do a snapshot as a regular user? Is
it already possible to pass ownership of a ZFS filesystem to a specific user,
so that he's able to do a snap
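On builds that have the delegated-administration feature (zfs allow), the usual answer looks like this sketch, with user and dataset names as placeholders:

  # zfs allow -u alice snapshot,mount tank/home/alice

after which alice can run 'zfs snapshot tank/home/alice@today' herself; 'zfs allow tank/home/alice' lists the delegations and 'zfs unallow' takes them back. Snapshot creation generally needs the mount permission as well, which is why both are granted.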
I have tested it, and it is _much_ better now. Unfortunately adding "set
txg_time = 60" in /etc/system does not set this value upon system startup. It
only works using mdb at runtime. Do you have an idea, what might be wrong?
Cheers,
Tom
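For what it's worth, tunables that live in a loadable module usually need the module prefix in /etc/system, so a line worth trying (a guess, not a verified fix) would be

  set zfs:txg_time = 60

instead of the bare 'set txg_time = 60'. At runtime the same variable can be poked with mdb, e.g.

  # echo 'txg_time/W 0t60' | mdb -kw

which is the runtime equivalent of the /etc/system setting.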
Thanks Robert,
that's exactly what I was looking for. I will try it when I come back home
tomorrow. Is it possible to set this value in /etc/system, too?
Cheers,
Tom
Hi,
after switching over to zfs from ufs for my ~/ at home, I am a little bit
disturbed by the noise the disks are making. To be more precise, I always have
thunderbird and firefox running on my desktop and either or both seem to be
writing to my ~/ at short intervals and ZFS flushes these tran
Hi,
I just upgraded my machine at home to Solaris 10U2. As I already had a ZFS, I
wanted to migrate my home directories at once to a ZFS from a local UFS
metadisk. Copying and changing the config of the automounter succeeded without
any problems. But when I tried to log in to JDS, login succeeded
Hi,
my colleague is just testing ZFS and created a zpool which had a backing store
file on a TMPFS filesystem. After deleting the file, everything still worked
normally. But destroying the pool caused an assertion failure and a panic. I
know this is neither a real-life scenario nor a good idea. T