Hello!
I'm generating two snapshots per day on my zfs pool. I've noticed that
after a while, scrubbing gets very slow, e.g. taking 12 hours or more
on a system with about 400 snapshots. The slowdown seems to be progressive.
When I delete most of the snapshots, things get back to normal, i.e.
s
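(A minimal sketch of the snapshot pruning described above, assuming a
hypothetical dataset tank/data with twice-daily snapshots; the names are
placeholders, not anything from the original posting:)

  # list all snapshots under tank/data
  zfs list -r -t snapshot tank/data
  # destroy one old snapshot (the name is illustrative)
  zfs destroy tank/data@2006-12-01_0600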
Hi all,
I'm currently investigating solutions for disaster recovery, and would
like to go with a zfs-based solution. From what I understand, there
are two possible methods of achieving this: an iSCSI mirror over a WAN
link, and remote replication with incremental zfs send/recv. Due to
performan
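(A rough sketch of the incremental send/recv replication mentioned above,
assuming a local dataset tank/data, an earlier snapshot @yesterday, and a DR
host "drhost" with a pool called backup; every name here is a placeholder:)

  # snapshot the primary, then ship only the delta since the last snapshot
  zfs snapshot tank/data@today
  zfs send -i tank/data@yesterday tank/data@today | \
      ssh drhost zfs receive backup/data

(In practice the receiving dataset usually has to be rolled back to its last
snapshot first, e.g. with receive -F where the installed ZFS version supports
it, and the two snapshot names are rotated by a small script.)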
Hello Anton,
Friday, December 22, 2006, 10:55:45 PM, you wrote:
ABR> Do you have more than one snapshot?
ABR> If you have a file system "a", and create two snapshots "[EMAIL PROTECTED]"
ABR> and "[EMAIL PROTECTED]", then any space shared between the two snapshots does
ABR> not get accounted for
More specifically, if you have the controllers in your array configured
active/passive, and they have a failover timeout of 30 seconds,
and the HBAs have a failover timeout of 20 seconds, when it goes to
failover and cannot write to the disks... I'm sure *bad things* will
happen. Again, I
Hello Jason,
Friday, December 22, 2006, 5:55:38 PM, you wrote:
JJWW> Just for what it's worth, when we rebooted a controller in our array
JJWW> (we pre-moved all the LUNs to the other controller), despite using
JJWW> MPxIO, ZFS kernel panicked. Verified that all the LUNs were on the
JJWW> correct c
Robert Milkowski wrote On 12/22/06 13:40,:
Hello Torrey,
Friday, December 22, 2006, 9:17:46 PM, you wrote:
TM> Roch - PAE wrote:
The fact that most FS do not manage the disk write caches
does mean you're at risk of data loss for those FS.
TM> Does ZFS? I thought it just turned it on in
Do you have more than one snapshot?
If you have a file system "a", and create two snapshots "[EMAIL PROTECTED]" and
"[EMAIL PROTECTED]", then any space shared between the two snapshots does not
get accounted for anywhere visible. Only once one of those two is deleted, so
that all the space is
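(A quick way to see what each snapshot is individually charged for, assuming
a dataset named tank/a; the 'used' column only counts space unique to that
one snapshot, so blocks shared by two snapshots show up in neither line until
one of the pair is destroyed:)

  zfs list -r -t snapshot -o name,used,referenced tank/a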
Hello Torrey,
Friday, December 22, 2006, 9:17:46 PM, you wrote:
TM> Roch - PAE wrote:
>>
>> The fact that most FS do not manage the disk write caches
>> does mean you're at risk of data loss for those FS.
TM> Does ZFS? I thought it just turned it on in the places where we had
TM> previously tu
Roch - PAE wrote:
The fact that most FS do not manage the disk write caches
does mean you're at risk of data loss for those FS.
Does ZFS? I thought it just turned it on in the places where we had
previously turned it off.
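(If you want to see what ZFS actually did to a given drive's volatile write
cache, the expert mode of format(1M) can display it; a sketch only, and the
disk name c1t0d0 is purely illustrative:)

  format -e c1t0d0
  # then in the menu: cache -> write_cache -> display
  # reports whether the drive's write cache is currently enabled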
> We just put together a new system for ZFS use at a
> company, and twice
> in one week we've had the system wedge. You can log
> on, but the zpools
> are hosed, and a reboot never occurs if requested
> since it can't
> unmount the zfs volumes. So, only a power cycle
> works.
>
I've tried to repro
On Dec 22, 2006, at 09:50, Anton B. Rang wrote:
Phantom writes and/or misdirected reads/writes:
I haven't seen probabilities published on this; obviously the disk
vendors would claim zero, but we believe they're slightly
wrong. ;-) That said, 1 in 10^8 bits would mean we’d have an
error
> Unfortunately there are some cases, where the disks lose data,
> these cannot be detected by traditional filesystems but with ZFS:
>
> * bit rot: some bits on the disk get flipped (~ 1 in 10^11)
> * phantom writes: a disk 'forgets' to write data (~ 1 in 10^8)
> * misdirected reads/writes: disk
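(To put those rough figures in perspective: a bit-rot rate on the order of
1 in 10^11 bits means that reading a full 500 GB drive, i.e. about 4 x 10^12
bits, would on average turn up a few tens of flipped bits. These are
order-of-magnitude guesses rather than vendor-published rates, but they are
exactly the kind of silent error a traditional filesystem never notices and
a ZFS checksum does.)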
[EMAIL PROTECTED] wrote on 12/22/2006 04:50:25 AM:
> Hello Wade,
>
> Thursday, December 21, 2006, 10:15:56 PM, you wrote:
>
> WSfc> Hola folks,
>
> WSfc> I am new to the list, please redirect me if I am posting to the wrong
> WSfc> location. I am starting to use ZFS in produc
Hi Tim,
One switch environment, two ports going to the host, 4 ports going to
the storage. Switch is a Brocade SilkWorm 3850 and the HBA is a
dual-port QLA2342. Solaris rev is S10 update 3. Array is a StorageTek
FLX210 (Engenio 2884)
The LUNs had moved to the other controller and MPxIO had shown
Always good to hear others' experiences. Maybe I'll try firing up the
Nexan today and downing a controller to see how that affects it vs.
downing a switch port/pulling cable. My first intuition is time-out
values. A cable pull will register differently than a blatant time-out
depending on where
Just for what it's worth, when we rebooted a controller in our array
(we pre-moved all the LUNs to the other controller), despite using
MPxIO, ZFS kernel panicked. Verified that all the LUNs were on the
correct controller when this occurred. It's not clear why ZFS thought
it lost a LUN but it did. We
On Fri, 22 Dec 2006, Lida Horn wrote:
> > And yes, I would feel better if this driver was open
> > sourced but that is Sun's decision to make.
>
> Well, no. That is Marvell's decision to make. Marvell is
> the one who made the determination that the driver
> could not be open sourced
> And yes, I would feel better if this driver was open
> sourced but that is Sun's decision to make.
Well, no. That is Marvell's decision to make. Marvell is
the one who made the determination that the driver
could not be open sourced, not Sun. Since Sun
needed information received unde
No,
I have not played with this, as I do not have access to my customer
site. They have tested this themselves. It is unclear whether they
implemented this on an MPxIO/SSTM device. I will ask this question.
Thanks,
Shawn
Tim Cook wrote:
This may not be the answer you're looking for, but I don't k
This may not be the answer you're looking for, but I don't know if it's
something you've thought of. If you're pulling a LUN from an expensive
array, with multiple HBAs in the system, why not run MPxIO? If you ARE
running MPxIO, there shouldn't be an issue with a path dropping. I have
the setup
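(For anyone who wants to try this on S10: the usual way to turn MPxIO on for
the supported FC HBAs is stmsboot; this is a sketch only, since the exact
steps depend on the HBA driver and may also involve settings in its .conf
file:)

  # enable Solaris multipathing (MPxIO); it prompts for the reboot it needs
  stmsboot -e
  # after the reboot, show how the device names were remapped
  stmsboot -L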
> Hi there!
>
> I want to build el cheapo ZFS NFS/Samba server for
> storing user files and NFS mail
> storage.
>
> I'm planning to have one 0.5 TB SATA2 ZFS RAID10 pool
> with several filesystems:
> 1) 200 Gb filesystem with ~300K user files, shared
> with Samba, about 10 clients, very light loa
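(A minimal sketch of how such a pool and the two file systems might be laid
out, assuming four SATA disks c1t0d0 through c1t3d0; the device names, the
quota and the share option are illustrative only:)

  # RAID10-style pool: two mirrored pairs striped together
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # 200 GB area for the Samba-shared user files
  zfs create tank/users
  zfs set quota=200g tank/users
  # NFS-exported mail store
  zfs create tank/mail
  zfs set sharenfs=on tank/mail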
Robert Milkowski writes:
> Hello przemolicc,
>
> Friday, December 22, 2006, 10:02:44 AM, you wrote:
>
> ppf> On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
> >> Hello Shawn,
> >>
> >> Thursday, December 21, 2006, 4:28:39 PM, you wrote:
> >>
> >> SJ> All,
> >>
>
OK,
But let's get back to the original question:
does ZFS provide you with fewer features than UFS does on one LUN from a SAN
(i.e., is it less stable)?
>ZFS on the contrary checks every block it reads and is able to find the mirror
>or reconstruct the data in a raidz config.
>Therefore ZFS uses o
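(The practical way to see that checksum behaviour in action, assuming a pool
named tank: a scrub reads and verifies every allocated block, and the status
output then shows per-device read/write/checksum error counters plus anything
that could not be repaired from redundancy:)

  zpool scrub tank
  zpool status -v tank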
bash-3.00# lockstat -kgIW sleep 100 | head -30
Profiling interrupt: 38844 events in 100.098 seconds (388 events/sec)
Count genr cuml rcnt nsec Hottest CPU+PIL Caller
---
32081 83% 0.00 2432 cpu[1]
Hi.
The problem is getting worse... now even if I destroy all snapshots in a pool
I get performance problems even with zil_disable set to 1.
Despite having the limit for maximum NFS threads set to 2048, I get only about
1700.
If I want to kill the nfsd server it takes 1-4 minutes until all thread
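(For completeness, the zil_disable mentioned above is the unsupported tunable
that is usually set in /etc/system, roughly as below; it trades away
synchronous-write guarantees, so it is a diagnostic knob rather than a fix:)

  * in /etc/system, followed by a reboot
  set zfs:zil_disable = 1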
Ulrich,
in his e-mail Robert mentioned _two_ things regarding ZFS:
[1] the ability to detect errors (checksums)
[2] using ZFS hasn't caused any data loss so far
I completely agree that [1] is wonderful and a huge advantage. And you
also underlined [1] in your e-mail!
The _only_ thing I mentioned is
Hello Wade,
Thursday, December 21, 2006, 10:15:56 PM, you wrote:
WSfc> Hola folks,
WSfc> I am new to the list, please redirect me if I am posting to the wrong
WSfc> location. I am starting to use ZFS in production (Solaris x86 10U3 --
WSfc> 11/06) and I seem to be seeing unexpected b
Hello przemolicc,
Friday, December 22, 2006, 10:02:44 AM, you wrote:
ppf> On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
>> Hello Shawn,
>>
>> Thursday, December 21, 2006, 4:28:39 PM, you wrote:
>>
>> SJ> All,
>>
>> SJ> I understand that ZFS gives you more error correction w
[EMAIL PROTECTED] wrote:
Robert,
I don't understand why not losing any data is an advantage of ZFS.
No filesystem should lose any data. It is like saying that an advantage
of a football player is that he/she plays football (he/she should do that!)
or an advantage of a chef is that he/she cooks (h
On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
> Hello Shawn,
>
> Thursday, December 21, 2006, 4:28:39 PM, you wrote:
>
> SJ> All,
>
> SJ> I understand that ZFS gives you more error correction when using
> SJ> two LUNs from a SAN. But, does it provide you with fewer features
>