Hi all,
Today a new message appeared on my system and another freeze happened.
The message is:
Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf
Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start
do passthru error 16
Mar 9 06:20:01 zfs01 scsi
> First a little background, I'm running b130, I have a
> zpool with two Raidz1(each 4 drives, all WD RE4-GPs)
> "arrays" (vdev?). They're in a Norco-4220 case
> ("home" server), which just consists of SAS
> backplanes (aoc-usas-l8i ->8087->backplane->SATA
> drives). A couple of the drives are sh
tmpfs lacks features like quota and NFSv4 ACL support, so it may not be the
best choice if such features are required.
Olga
On Tue, Mar 9, 2010 at 3:31 AM, Bill Sommerfeld wrote:
> On 03/08/10 17:57, Matt Cowger wrote:
>>
>> Change zfs options to turn off checksumming (don't want it or need it),
>> at
On Mar 8, 2010, at 6:31 PM, Bill Sommerfeld wrote:
>
> if you have an actual need for an in-memory filesystem, will tmpfs fit
> the bill?
>
> - Bill
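For reference, creating a tmpfs mount on Solaris looks roughly like this; the
mount point and the size cap are illustrative, and the exact size syntax should
be checked against mount_tmpfs(1M):
# mkdir /rdtest
# mount -F tmpfs -o size=80g swap /rdtest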
Very good point, Bill - just ran this test and started to get the numbers I was
expecting (1.3 GB
I don't have an answer to this question, but I can say I've seen a similarly
surprising result. I ran iozone on various RAID configurations of spindle
disks, and on a ramdisk. I was surprised to see the ramdisk is only about
50% to 200% faster than the next best competitor in each category. I d
On Mar 8, 2010, at 6:31 PM, Richard Elling wrote:
>> Same deal for UFS, replacing the ZFS stuff with newfs stuff and mounting the
>> UFS forcedirectio (no point in using a buffer cache memory for something
>> that’s already in memory)
>
> Did you also set primarycache=none?
> -- richard
Good
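For reference, primarycache is a per-dataset property, so assuming the test
pool is the one created as "ram" later in the thread, disabling the ARC for it
would be along these lines:
# zfs set primarycache=none ram
# zfs get primarycache ram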
First a little background, I'm running b130, I have a zpool with two
Raidz1 (each 4 drives, all WD RE4-GPs) "arrays" (vdev?). They're in a
Norco-4220 case ("home" server), which just consists of SAS backplanes
(aoc-usas-l8i ->8087->backplane->SATA drives). A couple of the drives are
showing a
Thanks guys,
It's all working perfectly so far, and very easy too.
Given that my boot disks (consumer laptop drives) cost only ~$60AUD each, it's
a cheap way to maintain high availability and backup.
ZFS does not seem to mind having one of the 3 offline, so I'd recommend this to
others loo
On Mar 8, 2010, at 5:11 AM, Edward Ned Harvey wrote:
>> It all depends on how they are connecting to the storage. iSCSI, CIFS,
>> NFS,
>> database, rsync, ...?
>>
>> The reason I say this is because ZFS will coalesce writes, so just
>> looking at
>> iostat data (ops versus size) will not be appr
On Mar 8, 2010, at 5:57 PM, Matt Cowger wrote:
> Hi Everyone,
>
> It looks like I’ve got something weird going with zfs performance on a
> ramdisk... ZFS is performing not even a 3rd of what UFS is doing.
>
> Short version:
>
> Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren’t
It can, but doesn't in the command line shown below.
M
On Mar 8, 2010, at 6:04 PM, "ольга крыжановская" wrote:
> Does iozone use mmap() for IO?
>
> Olga
>
> On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger
> wrote:
>> Hi Everyone,
>>
>> It looks like I’ve got something weird going with zf
On 03/08/10 17:57, Matt Cowger wrote:
Change zfs options to turn off checksumming (don't want it or need it), atime,
compression, 4K block size (this is the application's native blocksize), etc.
even when you disable checksums and compression through the zfs command,
zfs will still compress and
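For completeness, the options Matt described map onto settings roughly like
the following (the dataset name "ram" is taken from his zpool create command;
the exact values are my reading of his description):
# zfs set checksum=off ram
# zfs set compression=off ram
# zfs set atime=off ram
# zfs set recordsize=4k ram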
Does iozone use mmap() for IO?
Olga
On Tue, Mar 9, 2010 at 2:57 AM, Matt Cowger wrote:
> Hi Everyone,
>
> It looks like I've got something weird going with zfs performance on a
> ramdisk... ZFS is performing not even a 3rd of what UFS is doing.
>
> Short version:
>
> Create 80+ GB ramd
Hi Everyone,
It looks like I've got something weird going with zfs performance on a
ramdisk... ZFS is performing not even a 3rd of what UFS is doing.
Short version:
Create 80+ GB ramdisk (ramdiskadm), system has 96GB, so we aren't swapping
Create zpool on it (zpool create ram)
Change zfs op
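Roughly, those first two steps look like this (the ramdisk name and the exact
size are my own illustration, per ramdiskadm(1M)):
# ramdiskadm -a rd0 80g
# zpool create ram /dev/ramdisk/rd0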
On Sun, Mar 7, 2010 at 19:40, Ethan wrote:
> I want to move my pool (consisting of five 1.5TB sata drives in raidz1) to
> a different computer. I am encountering issues with controllers - the
> motherboard (Asus P5BV-C/4L) has 8 sata ports: 4 on a marvell 88se6145,
> which seems not to be support
On Mon, Mar 8, 2010 at 2:00 PM, Chris Dunbar wrote:
> Hello,
>
> I just found this list and am very excited that you all are here! I have a
> homemade ZFS server that serves as our poor man's Thumper (we named it
> thumpthis) and provides primarily NFS shares for our VMware environment. As
> is o
On Mon, Mar 8, 2010 at 5:47 PM, Miles Nordin wrote:
> > "tc" == Tim Cook writes:
>
>tc> I'm betting its more the fact that zfs-discuss is not
>
> Firstly, there's no need for you to respond on anyone's behalf,
> especially not by ``betting.''
>
>
I'm not betting, I know. It's called bei
On Mon, Mar 8, 2010 at 3:33 PM, Tim Cook wrote:
> Is there a way to manually trigger a hot spare to kick in? Mine doesn't
> appear to be doing so. What happened is I exported a pool to reinstall
> solaris on this system. When I went to re-import it, one of the drives
> refused to come back onl
> "tc" == Tim Cook writes:
tc> I'm betting its more the fact that zfs-discuss is not
Firstly, there's no need for you to respond on anyone's behalf,
especially not by ``betting.''
Secondly, fishworks does run ZFS, and I for one am interested in what
works and what doesn't.
tc> I do
On Mar 8, 2010, at 1:00 AM, Thomas W wrote:
> Hi, it's me again.
>
> First of all, technically slicing the drive worked like it should.
>
> I started to experiment and found some issues I don't really understand.
>
> My base playground setup:
> - Intel D945GCLF2, 2GB ram, Opensolaris from EON
>
On 08 March, 2010 - Bill Sommerfeld sent me these 0,4K bytes:
> On 03/08/10 12:43, Tomas Ögren wrote:
>> So we tried adding 2x 4GB USB sticks (Kingston Data
>> Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
>> snapshot times down to about 30 seconds.
>
> Out of curiosity,
On Mon, Mar 8, 2010 at 2:10 PM, Miles Nordin wrote:
> > "al" == Adam Leventhal writes:
>
>al> As always, we welcome feedback (although zfs-discuss is not
>al> the appropriate forum),
>
> ``Please, you criticize our work in private while we compliment it in
> public.''
>
I'm betting
On 03/08/10 12:43, Tomas Ögren wrote:
So we tried adding 2x 4GB USB sticks (Kingston Data
Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
snapshot times down to about 30 seconds.
Out of curiosity, how much physical memory does this system have?
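For anyone wanting to try the same thing, the general shape of that
configuration (pool and device names here are made up) is to add the sticks as
cache vdevs and then limit what they hold to metadata; secondarycache is
inherited, so setting it on the top-level dataset covers the whole pool:
# zpool add tank cache c8t0d0 c9t0d0
# zfs set secondarycache=metadata tank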
On Sat, 6 Mar 2010, Paul B. Henson wrote:
> If you have a Sun support contract, open a support call and ask to be added
> to SR #72456444, which is the case I have open to try and get a better
> solution to chmod/ACL interaction.
CR#6933018 has been created for this issue; for any interested part
On 08 March, 2010 - Miles Nordin sent me these 1,8K bytes:
> > "gm" == Gary Mills writes:
>
> gm> destroys the oldest snapshots and creates new ones, both
> gm> recursively.
>
> I'd be curious if you try taking the same snapshots non-recursively
> instead, does the pause go away?
> "gm" == Gary Mills writes:
gm> destroys the oldest snapshots and creates new ones, both
gm> recursively.
I'd be curious if you try taking the same snapshots non-recursively
instead, does the pause go away?
Because recursive snapshots are special: they're supposed to
atomically s
On Mon, 8 Mar 2010, Svein Skogen wrote:
correct). Backups copy whatever data is on the disks, even if that data
itself is faulty.
Zfs does validate data when it is read from the disk so the only way
that the data itself can be faulty is if it becomes corrupted in RAM
after being read and chec
On 08/03/2010 20:08, Chris Banal wrote:
Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate the
need for a scrub?
No. The reason is that a full backup will go through the ARC cache and won't
read all copies of each block if the pool is redundant. A scrub assures that
all data and all its cop
On 08.03.2010 21:08, Chris Banal wrote:
> Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate the
> need for a scrub?
>
> Thanks,
> Chris
On 08 March, 2010 - Chris Banal sent me these 0,8K bytes:
> Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate the need
> for a scrub?
No, it won't read redundant copies of the data, which a scrub will.
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- St
On Mon, 8 Mar 2010, Chris Banal wrote:
Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate
the need for a scrub?
No. A scrub verifies correctness of all the metadata and file data.
A backup will only read (and verify) as much as is required for
the backup. Besides faili
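In other words, the scrub still has to be run on its own; with a hypothetical
pool name that is just:
# zpool scrub tank
# zpool status tank
The status output reports scrub progress and any errors it has found.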
> "al" == Adam Leventhal writes:
al> As always, we welcome feedback (although zfs-discuss is not
al> the appropriate forum),
``Please, you criticize our work in private while we compliment it in
public.''
Assuming no snapshots. Do full backups (ie. tar or cpio) eliminate the need
for a scrub?
Thanks,
Chris
Tim Cook wrote:
Is there a way to manually trigger a hot spare to kick in? Mine
doesn't appear to be doing so. What happened is I exported a pool to
reinstall solaris on this system. When I went to re-import it, one of
the drives refused to come back online. So, the pool imported
degraded,
Hi Tim,
I'm not sure why your spare isn't kicking in, but you could manually
replace the failed disk with the spare like this:
# zpool replace fserv c7t5d0 c3t6d0
If you want to run with the spare for a while, then you can also detach
the original failed disk like this:
# zpool detach fserv c7t
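To watch the resilver that the replace kicks off before detaching anything,
zpool status on the pool shows its progress:
# zpool status fserv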
Hi,
I am having a problem where I cannot boot my OSOL 2009.06 laptop; it is stuck
in a reboot loop. I tried booting from a live CD, but I am unable to import the
rpool: the laptop reboots every time I issue the force import command. This
happens even though the status of the rpool is ONLINE:
j..
Is there a way to manually trigger a hot spare to kick in? Mine doesn't
appear to be doing so. What happened is I exported a pool to reinstall
solaris on this system. When I went to re-import it, one of the drives
refused to come back online. So, the pool imported degraded, but it doesn't
seem
Cindy,
Thank you very much for your input. What would you recommend as a way
to back up zpools to tape?
Thanks,
Greg
On Mon, Mar 8, 2010 at 8:58 AM, Cindy Swearingen
wrote:
> Greg,
>
> Sure lofiadm should work, but another underlying issue is that, currently,
> building pools on top of other pool
Hi tomwaters!
I think this is a great idea and may be the only reasonable way to back up
terabytes of data with low-cost disks. The idea is quite popular; just google
*split mirror backup* and you'll get lots of results. I also intend to use ZFS
like this.
- 3 way mirror
- 1 disk offsite at all ti
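Once a build with the zpool split command (mentioned elsewhere in this thread)
is available, the offsite step would look roughly like this, with made-up pool
and disk names:
# zpool split tank tank-offsite c0t3d0
The new pool is left un-imported, so the named disk can be pulled and taken
offsite, then imported later on another box with zpool import tank-offsite.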
Hi Tony,
Good questions...
Yes, you can assign a spare disk to multiple pools on the same system,
but not shared across systems.
The problem with sharing a spare disk with a root pool is that if the
spare kicks in, a boot block is not automatically applied. The
difference in the labels is prob
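For the record, applying the boot block by hand on x86 after the spare has
resilvered would be something like the following (disk name hypothetical;
SPARC systems use installboot instead):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c3t6d0s0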
(Subscribing to the zfs-auto-snapshot list to avoid moderation hold.)
On Mon, Mar 8, 2010 at 1:47 PM, Tim Foster wrote:
> I usually add SMF instances using 'svccfg -s add myinstance'
I tried that, but that doesn't create/copy the property groups.
Do I have to manually duplicate the settin
Good catch Eric, I didn't see this problem at first...
The problem here, which Richard described well, is that the ctdp* devices
represent the larger fdisk partition, which might also contain a ctds*
device.
This means that in this configuration, c7t0d0p3 and c7t0d0s0 might
share the same block
Can I assign a disk to multiple pools?
The only problem is that one pool is "rpool" with an SMI label and the other
pool is a standard ZFS pool.
Thanks
Hello,
I just found this list and am very excited that you all are here! I have a
homemade ZFS server that serves as our poor man's Thumper (we named it
thumpthis) and provides primarily NFS shares for our VMware environment. As is
often the case, the server has developed a hardware problem mer
Greg,
Sure lofiadm should work, but another underlying issue is that,
currently, building pools on top of other pools can cause the
system to deadlock or panic.
This kind of configuration is just not supported or recommended
at this time.
Thanks,
Cindy
On 03/05/10 17:38, Gregory Durham w
Paul B. Henson schrieb:
> On Sat, 6 Mar 2010, Ralf Utermann wrote:
>
>> we recently started to look at a ZFS based solution as a possible
>> replacement for our DCE/DFS based campus filesystem (yes, this is still
>> in production here).
>
> Hey, a fellow DFS shop :)... We finally migrated the las
On Sat, 6 Mar 2010, Richard Elling wrote:
On Mar 6, 2010, at 5:38 PM, tomwaters wrote:
My thought is this: I remove the 3rd mirror disk and offsite it as a backup.
To do this either:
1. upgrade to a later version where the "zpool split" command is
available
2. zfs send/receiv
On 08.03.2010 13:55, Erik Trimble wrote:
> Svein Skogen wrote:
>> Let's say for a moment I should go for this solution, with the rpool
>> tucked away on an usb-stick in the same case as the LTO-3 tapes it
>> "matches" timelinewise (I'm using HP C8017A
On Mon, Mar 8, 2010 at 04:03, Carson Gaspar wrote:
> Khyron wrote:
>
>> I believe Richard Elling recommended "cfgadm -v". I'd also suggest
>> "iostat -E", with and without "-n" for good measure.
>>
>> So that's "iostat -E" and "iostat -En". As long as you know the physical
>> drive
>> specifica
On Sun, 7 Mar 2010, Damon Atkins wrote:
The example below shows 28 x 128k writes to the same file before
anything is written to disk and the disks are idle the entire time.
There is no cost to writing to disk if the disk is not doing
anything or is under capacity. (Not a perfect example)
Zfs
I hit the wrong button when moderating the post to zfs-auto-snapshot@
mailing list. I'm forwarding the original mail from Brandon below.
Summarising Brandon's questions:
* how to best create SMF instances on an existing service
* why are there permission errors when running a new instance creat
The recent discussion of backing up ZFS got me thinking about using
the auto snapshot service to do backups.
My current method of doing backups is to send / recv the data pool to
external USB devices, but I haven't been doing backups of the rpool. I
think that doing a send to the data pool, which
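A sketch of that kind of send/receive backup, with invented pool and snapshot
names, would be an initial full replication:
# zfs snapshot -r rpool@backup1
# zfs send -R rpool@backup1 | zfs receive -F tank/rpool-backup
followed by incrementals on later runs:
# zfs snapshot -r rpool@backup2
# zfs send -R -i rpool@backup1 rpool@backup2 | zfs receive -F tank/rpool-backup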
> It all depends on how they are connecting to the storage. iSCSI, CIFS,
> NFS,
> database, rsync, ...?
>
> The reason I say this is because ZFS will coalesce writes, so just
> looking at
> iostat data (ops versus size) will not be appropriate. You need to
> look at the
> data flowing between ZF
Svein Skogen wrote:
Let's say for a moment I should go for this solution, with the rpool tucked away on an
usb-stick in the same case as the LTO-3 tapes it "matches" timelinewise (I'm
using HP C8017A kits) as a zfs send -R to a file on the USB stick. (If, and that's a big
if, I get amanda or
On 8 March 2010, at 11:33, Svein Skogen wrote:
> Let's say for a moment I should go for this solution, with the rpool tucked
> away on an usb-stick in the same case as the LTO-3 tapes it "matches"
> timelinewise (I'm using HP C8017A kits) as a zfs send -R to a file on the
> USB stick. (If, and
Hi Jason, I spent months trying different OSes for my server and finally
settled on OpenSolaris.
The o/s is just as easy to install, learn, and use as any of the Linux
variants...and ZFS beats mdadm hands down.
I had a server up and sharing files in under an hour. Just do it - (you'll
know s
Let's say for a moment I should go for this solution, with the rpool tucked
away on a USB stick in the same case as the LTO-3 tapes it "matches"
timeline-wise (I'm using HP C8017A kits), as a zfs send -R to a file on the USB
stick. (If, and that's a big if, I get amanda or bacula to do a job I'm
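The send-to-a-file part itself is simple enough (names invented):
# zfs snapshot -r rpool@offsite
# zfs send -R rpool@offsite > /media/usbstick/rpool-offsite.zfs
The usual caveat with a stream stored as a flat file is that any corruption in
the file makes the whole stream unreceivable, so keeping more than one copy is
worthwhile.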
Thomas W wrote:
Hi, it's me again.
First of all, technically slicing the drive worked like it should.
I started to experiment and found some issues I don't really understand.
My base playground setup:
- Intel D945GCLF2, 2GB ram, Opensolaris from EON
- 2 Sata Seagates 500GB
A normal zpool of t
Khyron wrote:
I believe Richard Elling recommended "cfgadm -v". I'd also suggest
"iostat -E", with and without "-n" for good measure.
So that's "iostat -E" and "iostat -En". As long as you know the
physical drive
specification for the drive (ctd which appears to be c9t1d0 from
the other e-m
Hi, it's me again.
First of all, technically slicing the drive worked like it should.
I started to experiment and found some issues I don't really understand.
My base playground setup:
- Intel D945GCLF2, 2GB ram, Opensolaris from EON
- 2 Sata Seagates 500GB
A normal zpool of the two drives to g
I'm imagining that OpenSolaris isn't *too* different from Solaris 10 in this
regard.
I believe Richard Elling recommended "cfgadm -v". I'd also suggest
"iostat -E", with and without "-n" for good measure.
So that's "iostat -E" and "iostat -En". As long as you know the physical
drive
specificat
Slack-Moehrle wrote:
Hi Erik,
I wasn't planning on using the 3Ware hardware RAID, as I read that software RAID would be the way to go; I just have the cards because I can plug 8 drives into each. I can't return them; I got them used from a guy who did not know what they were used for, for $50 each