Jesus Cea wrote:
> Would ZFS boot be able to boot from a "copies" boot dataset when one of
> the disks is failing? Counting that the ditto blocks are spread across
> both disks, of course.
You cannot boot from a pool with multiple top-level vdevs (e.g., the "copies"
pool you describe). We hope to
Jesus Cea wrote:
> Read performance [when using "zfs set copies=2" vs a mirror] would double,
> and this is very nice
I don't see how that could be the case. Either way, the reads should be able
to fan out over the two disks.
--matt
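For reference, a rough sketch of the two layouts being compared; the pool and
disk names are made up. Either way the data ends up on both disks, so reads
can be serviced from either one:

# Two-disk mirror: every block is written to both disks.
zpool create tank mirror c1t0d0 c1t1d0

# Two-disk stripe with ditto blocks: each block is written twice, and ZFS
# tries to place the two copies on different top-level vdevs.
zpool create tank c1t0d0 c1t1d0
zfs set copies=2 tank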
Do you have compression turned on? If so, dd'ing from /dev/zero isn't very
useful as a benchmark. (I don't recall if all-zero blocks are always detected
if checksumming is turned on, but I seem to recall that they are, even if
compression is off.)
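A quick way to check, and to see why all-zero data is a poor payload when
compression is on; the pool and file names here are hypothetical:

# Is compression enabled on the filesystem being benchmarked?
zfs get compression tank/test

# All-zero blocks compress down to almost nothing...
dd if=/dev/zero of=/tank/test/zeros bs=128k count=8192

# ...so non-compressible data gives a more honest write figure.
dd if=/dev/urandom of=/tank/test/random bs=128k count=8192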
Hi Everybody,
Over the last week many mails have been exchanged on this topic.
I have a similar issue, and I would appreciate it if anyone can help me
with it.
I have an I/O test tool which writes data, reads it back, and then
compares the read data with the written data
Mastan,
Import/Export are pool-level commands whereas mount/unmount are
file-system-level commands; they serve different purposes.
You would typically 'export' a pool when you want to connect the
storage to a different machine and 'import' the pool there for
subsequent use on that machine, which
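A minimal sketch of the distinction, with a hypothetical pool name:

# Pool level: export quiesces the whole pool so another host can take it over;
# import makes it usable again on whichever host runs the command.
zpool export mypool          # on the old host
zpool import mypool          # on the new host ('zpool import' alone lists candidates)

# Filesystem level: mount/unmount only attach or detach one dataset's namespace;
# the pool itself stays imported on this host.
zfs unmount mypool/data
zfs mount mypool/data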
MC wrote:
> Re: http://bugs.opensolaris.org/view_bug.do?bug_id=6602947
>
> Specifically this part:
>
> [i]Create zpool /testpool/. Create zfs file system /testpool/testfs.
> Right click on /testpool/testfs (filesystem) in nautilus and rename to
> testfs2.
> Do zfs list. Note that only /testpoo
> 3) Forget PCI-Express -- if you have a free PCI-X (or
> PCI)-slot. Supermicro AOC-SAT2-MV8 (PCI-X cards are
> (usually) plain-PCI-compatible; and this one is). It
> has 8 ports, is natively plug-and-play-supported and
> does not cost more than twice a si3132, and costs
> only a fraction of other >
Hi All,
Can anyone explain this?
-Mashtan D
dudekula mastan <[EMAIL PROTECTED]> wrote:
Hi All,
What exactly do the import and export commands do?
Are they similar to mount and unmount?
How does import differ from mount, and how does export differ from umount?
Łukasz wrote:
> I have a huge problem with space maps on thumper. Space maps take over 3GB
> and write operations generate massive read operations.
> Before every spa sync phase zfs reads space maps from disk.
>
> I decided to turn on compression for the pool (only for the pool, not the filesystems)
> a
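If I read that right, the idea is something like the following: set
compression=on at the pool's top-level dataset while explicitly keeping it off
for the child filesystems, which would otherwise inherit the new value. A
sketch, with hypothetical pool and filesystem names:

# Compress at the top-level dataset only...
zfs set compression=on thumperpool

# ...and pin the existing filesystems to their old setting.
zfs set compression=off thumperpool/fs1
zfs set compression=off thumperpool/fs2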
Max,
Glad you figured out where your problem was. Compression does complicate
things. Also, make sure you have the most recent (highest txg) uberblock.
Just for the record, using MDB to print out ZFS data structures is totally
sweet! We have actually been wanting to do that for about 5 years
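For anyone who wants to poke at the same structures, two starting points as I
recall them (the pool name is a placeholder, and the exact dcmd options may
vary by build):

# Walk the imported pools' spa_t structures with the ZFS mdb dcmds.
echo "::spa -v" | mdb -k

# Dump the uberblocks from userland; the highest txg is the active one.
zdb -uuu tank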
Solaris wrote:
> Greetings.
>
> I applied the Recommended Patch Cluster including 120012-14 to a U3
> system today. I upgraded my zpool and it seems like we have some very
> strange information coming from zpool list and zfs list...
>
> [EMAIL PROTECTED]:/]# zfs list
> NAME          USED  AVAI
Tim Spriggs wrote:
> I think they are listed in order with "zfs list".
That's correct, they are listed in the order taken, from oldest to newest.
--matt
If you haven't resolved this bug with the storage folks, you can file a bug
at http://bugs.opensolaris.org/
--matt
eric kustarz wrote:
> This actually looks like a sd bug... forwarding it to the storage
> alias to see if anyone has seen this...
>
> eric
>
> On Sep 14, 2007, at 12:42 PM, J Du
Thanks a lot for your help everyone :)
MC wrote:
> With the arrival of ZFS, the "format" command is well on its way to
> deprecation station. But how else do you list the devices that zpool can
> create pools out of?
>
> Would it be reasonable to enhance zpool to list the vdevs that are available
> to it? Perhaps as part of the he
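Until something like that exists, the usual stand-ins are the ordinary Solaris
device-listing commands, for example:

# Non-interactive disk listing from format (just the selection menu):
echo | format

# Attachment-point view of the controllers and disks:
cfgadm -al

# Per-device inventory with vendor/model/serial details:
iostat -En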
Updated to the latest firmware, 1.43-70417 ...
Same problem.
WARNING: arcmsr0: dma map got 'no resources'
WARNING: arcmsr0: dma allocate fail
WARNING: arcmsr0: dma allocate fail free scsi hba pkt
WARNING: arcmsr0: dma map got 'no resources'
WARNING: arcmsr0: dma allocate fail
WARNING: a
Sergiy Kolodka wrote:
> Hello,
>
> We've been hitting this bug for a few days, but SunSolve isn't really
> helpful about how to fix it; it says "Integrated in Build: snv_36". I think
> that's Nevada, but how do I find out when it was fixed in Solaris?
> Should I assume that it was fixed in the -36 kernel patch a
Hello,
We've been hitting this bug for a few days, but SunSolve isn't really helpful
about how to fix it; it says "Integrated in Build: snv_36". I think that's Nevada,
but how do I find out when it was fixed in Solaris?
Should I assume that it was fixed in the -36 kernel patch as well? The box is running
6/06 re
Just as I create a ZFS pool and copy the root partition to it, the
performance seems really good; then suddenly the system hangs all my
sessions and displays on the console:
Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources'
Oct 10 00:23:28 sunrise arcmsr: WAR
On Tue, 2007-10-09 at 23:36 +0100, Adam Lindsay wrote:
> Gary Gendel wrote:
> > Norco usually uses Silicon Image based SATA controllers.
>
> Ah, yes, I remember hearing SI SATA multiplexer horror stories when I
> was researching storage possibilities.
>
> However, I just heard back from Norco:
Gary Gendel wrote:
> Norco usually uses Silicon Image based SATA controllers.
Ah, yes, I remember hearing SI SATA multiplexer horror stories when I
was researching storage possibilities.
However, I just heard back from Norco:
> Thank you for your interest in Norco products.
> Most of part uses by D
Michael bigfoot.com> writes:
>
> Excellent.
>
> Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING:
> /pci 2,0/pci1022,7458 8/pci11ab,11ab 1/disk 2,0 (sd13):
> Oct 9 13:36:01 zeta1 Error for Command: read    Error Level: Retryable
>
> Scrubbing now.
This is on
Hi,
We have implemented a ZFS file system for home directories and have enabled
quotas and snapshots on it. However, the snapshots are causing an issue with
the user quotas. The default snapshot directories go under
~username/.zfs/snapshot, which is part of the user file system. So if the
quota is 10G an
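One thing worth checking: newer ZFS bits add a refquota property that limits
only the space referenced by the filesystem itself, so snapshot space no longer
counts against the user's limit. A sketch, assuming a layout like
tank/home/username and that your build has the property:

# quota counts snapshots and descendents against the 10G limit...
zfs set quota=10G tank/home/username

# ...refquota (where available) counts only the live data.
zfs set refquota=10G tank/home/username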
Will et al
I added a few extra graphs to the original posting today showing the
work that an individual disk was doing
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
and ran the RAID-Z config with fewer disks just to see what happened.
http://blogs.sun.c
For those who are interested in LZO with ZFS, I have made a special version of
the patch taken from the zfs-fuse mailing list:
http://82.141.46.148/tmp/zfs-fuse-lzo.tgz
This file contains the patch in unified diff format and also a broken-out
version (i.e. split into single files).
Maybe this m
[EMAIL PROTECTED] wrote on 10/09/2007 01:11:16 PM:
> I am using an x4500 with a single "4*(raidz2 9 + 2) + 2 spare" pool.
> I have some bad blocks on one of the disks
> Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
> PROTECTED],
> 0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROT
I am using an x4500 with a single "4*(raidz2 9 + 2) + 2 spare" pool. I have some
bad blocks on one of the disks
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL
PROTECTED],0 (sd13):
Oct 9 13:36:01 zeta1 Error f
I wanted to test some simultaneous sequential writes and wrote this little
snippet:
#!/bin/bash
for ((i=1; i<=20; i++))
do
dd if=/dev/zero of=lala$i bs=128k count=32768 &
done
While the script was running I watched zpool iostat and measured the time
between starting and stopping of the writes
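To take the eyeballing out of the timing, the same snippet can wait for all
the dd processes and let the shell time the whole batch, for example:

#!/bin/bash
# Launch 20 concurrent 4 GB sequential writers and time the batch end to end.
time {
for ((i=1; i<=20; i++))
do
dd if=/dev/zero of=lala$i bs=128k count=32768 &
done
wait   # returns once every background dd has finished
}

(As noted elsewhere in the thread, /dev/zero is only a fair payload if
compression is off on the target filesystem.)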
Hi Masthan,
There was a race in the block allocation code which could allocate a
single disk block to two consumers. The system trips when both
consumers try to free the block.
--
Prabahar.
On Oct 9, 2007, at 4:20 AM, dudekula mastan wrote:
> Hi Jeyaram,
>
> Thanks for your reply. Can yo
On Oct 9, 2007, at 4:25 AM, Thomas Liesner wrote:
> Hi,
>
> I checked with $nthreads=20, which will roughly represent the
> expected load, and these are the results:
Note, here is the description of the 'fileserver.f' workload:
"
define process name=filereader,instances=1
{
thread name=filere
hello,
I am having an issue with zpool import.
Only some file systems are mounted; some stay unmounted.
See below:
[EMAIL PROTECTED] # zpool import co-e34-dev
cannot mount
'/co-e34-dev/oracle/E34': directory is not empty
cannot mount
'/co-e34-dev/usr/sap': directory is not empty
cannot mount
'/co-e34-d
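For what it's worth, the usual ways past "directory is not empty" (dataset
names copied from the output above) are either to move aside whatever is
already under the mountpoint and re-run the mounts, or to use an overlay
mount if your zfs supports the -O flag:

# See what is sitting under the mountpoint before the dataset mounts over it:
ls -a /co-e34-dev/oracle/E34

# After moving that data out of the way, mount everything that didn't come up:
zfs mount -a

# Or overlay-mount on top of the non-empty directory:
zfs mount -O co-e34-dev/oracle/E34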
Excellent.
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL
PROTECTED],0 (sd13):
Oct 9 13:36:01 zeta1 Error for Command: read    Error Level: Retryable
Scrubbing now.
Big thanks gg
On Mon, 8 Oct 2007, Kugutsumen wrote:
> I just tried..
> mount -o rw,remount /
> zpool import -f tank
> mount -F zfs tank/rootfs /a
> zpool status
> ls -l /dev/dsk/c1t0d0s0
> # /[EMAIL PROTECTED],0/pci1000,[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a
> csh
> setenv TERM vt100
> vi /a/boot/solaris/boot
Norco usually uses Silicon Image based SATA controllers. The OpenSolaris driver
for these has caused me enough headaches that I replaced it with a Marvell
based board. I would also imagine that they use a 5-to-1 SATA multiplexer,
which is not supported by any OpenSolaris driver that I've tested
Are there any clues in the logs? I have had a similar problem when a disk bad
block was uncovered by zfs. I've also seen this when using the Silicon Image
driver without the recommended patch.
The former became evident when I ran a scrub. I saw the SCSI timeout errors pop
up in the "kern" sys
Claus Guttesen wrote:
> Hi.
>
> I read the zfs getting started guide at
> http://www.opensolaris.org/os/community/zfs/intro/;jsessionid=A64DABB3DF86B8FDBF8A3E281C30B8B2.
>
> I created zpool disk1 and created disk1/home and assigned /export/home
> to disk1/home as mountpoint. Then I create a user
Hi.
I read the zfs getting started guide at
http://www.opensolaris.org/os/community/zfs/intro/;jsessionid=A64DABB3DF86B8FDBF8A3E281C30B8B2.
I created zpool disk1 and created disk1/home and assigned /export/home
to disk1/home as mountpoint. Then I create a user with 'zfs create
disk1/home/usernam
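For reference, the sequence from the guide looks roughly like this; the disk
name is a placeholder:

# Create the pool and the parent home filesystem.
zpool create disk1 c1t1d0
zfs create disk1/home
zfs set mountpoint=/export/home disk1/home

# One filesystem per user; it inherits the mountpoint, e.g. /export/home/username.
zfs create disk1/home/username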
Hi,
I checked with $nthreads=20, which will roughly represent the expected load, and
these are the results:
IO Summary: 7989 ops 7914.2 ops/s, (996/979 r/w) 142.7mb/s, 255us cpu/op, 0.2ms
latency
BTW, smpatch is still running and further tests will be done when the system
is rebooted.
The fig
Hi Jeyaram,
Thanks for your reply. Can you explain more about this bug?
Regards
Masthan D
Prabahar Jeyaram <[EMAIL PROTECTED]> wrote:
Your system seems to have hit a variant of bug:
6458218 - http://bugs.opensolaris.org/view_bug.do?bug_id=6458218
This is fixed in Opensolaris Bui
>[EMAIL PROTECTED] wrote:
>>> If you don't have a 64bit cpu, add more ram(tm).
>>
>>
>> Actually, no; if you have a 32 bit CPU, you must not add too much
>> RAM or the kernel will run out of space to put things.
>
>Hrm. Do you have a working definition of "too much"?
I think it would be someth
[EMAIL PROTECTED] wrote:
>> If you don't have a 64bit cpu, add more ram(tm).
>
>
> Actually, no; if you have a 32 bit CPU, you must not add too much
> RAM or the kernel will run out of space to put things.
Hrm. Do you have a working definition of "too much"?
adam
>If you don't have a 64bit cpu, add more ram(tm).
Actually, no; if you have a 32 bit CPU, you must not add too much
RAM or the kernel will run out of space to put things.
Casper
Adam Lindsay wrote:
> Hello, Robert,
>
> Robert Milkowski wrote:
>
>> Because it offers up to 1GB of memory, 32-bit shouldn't be an issue.
>
> Sorry, could someone expand on this?
> The only received opinion I've seen on 32-bit is from the ZFS best
> practice wiki, which simply says "Run ZFS on a
Hello, Robert,
Robert Milkowski wrote:
> Because it offers up to 1GB of memory, 32-bit shouldn't be an issue.
Sorry, could someone expand on this?
The only received opinion I've seen on 32-bit is from the ZFS best
practice wiki, which simply says "Run ZFS on a system that runs a 64-bit
kernel."
Every day we see pause times of sometimes 60 seconds to read 1K of a file, for
local reads as well as NFS, in a test setup.
We have an x4500 set up as a single 4*(raidz2 9 + 2) + 2 spare pool and have the
file systems mounted over v5 krb5 NFS and accessed directly. The pool is a
20TB pool and is usi
Hello Adam,
Tuesday, October 9, 2007, 10:15:13 AM, you wrote:
AL> Hey all,
AL> Has anyone else noticed Norco's recently-announced DS-520 and thought
AL> ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships
AL> without an OS.
AL> http://www.norcotek.com/item_detail.php?ca
Hello Pawel,
Monday, October 8, 2007, 9:45:01 AM, you wrote:
PJD> On Fri, Oct 05, 2007 at 08:52:17AM +0100, Robert Milkowski wrote:
>> Hello Eric,
>>
>> Thursday, October 4, 2007, 5:54:06 PM, you wrote:
>>
>> ES> On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
>> >> > This bug was re
Hello Richard,
Friday, October 5, 2007, 6:41:10 PM, you wrote:
RE> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>>
>> RE> Kris Kasner wrote:
>>
> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
> because I
Hey all,
Has anyone else noticed Norco's recently-announced DS-520 and thought
ZFS-ish thoughts? It's a five-SATA, Celeron-based desktop NAS that ships
without an OS.
http://www.norcotek.com/item_detail.php?categoryid=8&modelno=ds-520
What practical impact is a 32-bit processor going to hav
Hi Thomas
the point I was making was that you'll see low performance figures
with 100 concurrent threads. If you set nthreads to something closer
to your expected load, you'll get a more accurate figure.
Also, there's a new filebench out now, see
http://blogs.sun.com/erickustarz/entry/filebench
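For example, the thread count in the fileserver workload is just a variable at
the top of the .f file, so a copy can be dropped to the expected load before a
run; the paths below are assumptions, adjust for your filebench install:

# Copy the stock workload with nthreads lowered from 100 to 20.
sed 's/nthreads=100/nthreads=20/' \
    /usr/benchmarks/filebench/workloads/fileserver.f > /tmp/fileserver20.f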
Hi again,
I did not want to compare the filebench test with the single mkfile command.
Still, I was hoping to see similar numbers in the filebench stats.
Any hints on what I could do to further improve the performance?
Would a RAID-1 over two stripes be faster?
TIA,
Tom