Hi,
after working for 1 month with ZFS on 2 external USB drives, I have found
that the all-new ZFS filesystem is the most unreliable FS I have ever seen.
Since working with ZFS, I have lost data from:
1 80 GB external drive
1 1 terabyte external drive
It is a shame that ZFS has no
On 09 February, 2009 - D. Eckert sent me these 1.5K bytes:
> Hi,
>
> after working for 1 month with ZFS on 2 external USB drives, I have
> found that the all-new ZFS filesystem is the most unreliable FS I have
> ever seen.
>
> Since working with ZFS, I have lost data from:
>
> 1 80
>However, I just want to state a warning that ZFS is far from delivering
>what it promises, and from my experience so far I cannot recommend using
>ZFS on a professional system at all.
Or, perhaps, you've given ZFS disks which are so broken that they are
really unusable; it is US
Hi Caspar,
thanks for your reply.
I completely disagree with your opinion that it is USB. And it seems that I
am not the only one holding this opinion regarding ZFS.
However, the hardware used is:
1 Sun Fire 280R Solaris 10 Generic 10-08 latest updates
1 Lenovo T61 Notebook running Solaris 10
D. Eckert wrote:
> Hi Caspar,
>
> thanks for your reply.
>
> I completely disagree with your opinion that it is USB. And it seems that
> I am not the only one holding this opinion regarding ZFS.
>
> However, the hardware used is:
>
> 1 Sun Fire 280R Solaris 10 Generic 10-08 latest updates
> 1 Len
>However, the hardware used is:
>
>1 Sun Fire 280R Solaris 10 Generic 10-08 latest updates
>1 Lenovo T61 Notebook running Solaris 10 Generic 10-08 latest updates
>1 Sony VGN-NR38Z
>
>Hard drives in use: Trekstore 1 TB, Seagate Momentus 7,200 rpm 2.5" 80 GB.
(Is that the Trekstore with 2x500GB?)
>Th
>
> "Unmount" is not sufficient.
>
Well, umount is not the "right" way to do it, so he'd be simulating a
power loss/system crash. That still doesn't explain why massive data loss
would occur. I would understand the last txg being lost, but 90% according
to the OP?!
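For what it's worth, the procedure that avoids that simulated power loss on a
removable drive is to export the whole pool before pulling the cable, not just
to unmount the filesystem. A minimal sketch, assuming a pool named usbhdd1
like the OP's:

bash-3.00# zpool export usbhdd1   # flushes state and releases every device in the pool
(unplug the USB drive, carry it elsewhere, plug it in again)
bash-3.00# zpool import usbhdd1   # re-attaches the pool on the same or another host

A plain 'zfs umount' leaves the pool imported, so the devices still hold live
pool state when the drive disappears.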
>Well, umount is not the "right" way to do it, so he'd be simulating a
>power loss/system crash. That still doesn't explain why massive data loss
>would occur. I would understand the last txg being lost, but 90% according
>to the OP?!
On USB or? I think he was trying to properly unmount the USB d
OK, so far so good.
But how can I get my pool up and running?
Here is the output:
bash-3.00# zfs get all usbhdd1
NAME     PROPERTY  VALUE                  SOURCE
usbhdd1  type      filesystem             -
usbhdd1  creation  Thu Dec 25 23:36 2008  -
usbhdd1  used      3
On Mon, 09 Feb 2009 03:10:21 -0800 (PST)
"D. Eckert" wrote:
> OK, so far so good.
>
> But how can I get my pool up and running?
I can't help you with this bit
> bash-3.00# zpool status -xv usbhdd1
> pool: usbhdd1
> state: ONLINE
> status: An error has occurred on at least one device
James,
on UFS or ReiserFS such errors could be corrected.
It is grossly negligent to develop a file system without proper repair tools.
It becomes more and more clear that it was just a marketing slogan by Sun to
claim that ZFS needs no repair tools because it heals itself.
In th
> on UFS or ReiserFS such errors could be corrected.
I think some of these people are assuming your hard drive is broken. I'm not
sure what you're assuming, but if the hard drive is broken, I don't think ANY
file system can do anything about that.
At best, if the disk was in a RAID 5 array,
> bash-3.00# zfs mount usbhdd1
> cannot mount 'usbhdd1': I/O error
> bash-3.00#
Why is there an I/O error?
Is there any information logged to /var/adm/messages when this
I/O error is reported? E.g. timeout errors for the USB storage device?
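One way to check, assuming the default syslog configuration:

bash-3.00# egrep -i 'usb|scsi|timeout' /var/adm/messages | tail -20

Timeout or transport retry messages for the USB device around the time of the
mount attempt would point at the transport rather than at ZFS itself.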
>James,
>
>on UFS or ReiserFS such errors could be corrected.
That's not true. That depends on the nature of the error.
I've seen quite a few problems on UFS with corrupted file contents;
such filesystems are always "clean". Yet the filesystems are corrupted.
And no tool can fix those files
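That detection gap is exactly what ZFS checksumming surfaces. A sketch of how
such corruption shows up under ZFS, using the OP's pool name:

bash-3.00# zpool scrub usbhdd1       # re-reads and verifies every allocated block
bash-3.00# zpool status -v usbhdd1   # after the scrub, lists files with permanent errors

UFS fsck only checks metadata consistency, so corrupted file contents pass
as "clean".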
Too many words wasted, but not a single word on how to restore the data.
I have read the man pages carefully. But again: nothing there says that on
USB drives 'zfs umount pool' is not allowed.
So how on earth should a simple user know that, if he knows that filesystems
properly unmounted using t
Hi Dave,
Having read through the whole thread, I think there are several things
that could all be adding to your problems.
At least some of which are not related to ZFS at all.
You mentioned the ZFS docs not warning you about this, and yet I know
the docs explicitly tell you that:
1. While a ZF
Hi everyone,
We are looking at ZFS to use as the back end to a pool of java servers
doing image processing and serving this content over the internet.
Our current solution is working well, but the cost and ability to scale
are becoming a problem.
Currently:
- 20TB NFS servers running Fre
First: It sucks to lose data. That's very uncool... BUT
I don't know how ZFS could recover data with no mirror to copy from. If you
have some kind of RAID level, you can easily recover your data. I have seen
that several times, without any problems and even with nearly no
performa
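For comparison, a mirrored USB pool that ZFS could self-repair from would be
created like this (device names here are hypothetical):

bash-3.00# zpool create usbpool mirror c2t0d0 c3t0d0
bash-3.00# zpool status usbpool

With a single-disk pool there is no second copy to repair user data from,
only the metadata ditto blocks.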
>Too many words wasted, but not a single word on how to restore the data.
>
>I have read the man pages carefully. But again: nothing there says that on
>USB drives 'zfs umount pool' is not allowed.
You cannot unmount a pool.
You can only unmount a filesystem.
That the default name of the pool's
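The difference in commands, as a sketch using the OP's pool name:

bash-3.00# zfs umount usbhdd1     # unmounts the top-level filesystem; the pool stays imported
bash-3.00# zpool export usbhdd1   # closes the pool and releases the underlying device

Only the export makes the USB disk safe to disconnect.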
Full of sympathy, I still feel you might as well relax a bit.
It is the XkbVariant that starts X without any chance to return.
But look at the many "boot stops after the third line" reports, and, from my
side, the non-working network settings, even without nwam.
The worst part was a so-called engineer sta
D. Eckert wrote:
> Too many words wasted, but not a single word on how to restore the data.
>
> I have read the man pages carefully. But again: nothing there says that
> on USB drives 'zfs umount pool' is not allowed.
>
It is allowed. But it's not enough. You need to read both the 'zpool '
and
> Too many words wasted, but not a single word on how to restore the data.
>
> I have read the man pages carefully. But again: nothing there says
> that on USB drives 'zfs umount pool' is not allowed.
You misunderstand. This particular point has nothing to do with USB;
it's the same for any ZFS en
Kyle McDonald wrote:
> D. Eckert wrote:
>
>> Too many words wasted, but not a single word on how to restore the data.
>>
>> I have read the man pages carefully. But again: nothing there says that
>> on USB drives 'zfs umount pool' is not allowed.
>>
>>
> It is allowed. But it's not enoug
All;
There have been some negative posts about ZFS recently.
I've been using ZFS for more than 13 months now; my system has gone
through 3 major upgrades and one critical failure, and the data is still
fully intact.
I am thoroughly impressed with ZFS, in particular its sheer
reliability.
As for flex
On Mon, 9 Feb 2009, D. Eckert wrote:
>
> A good practice would be to care first about proper documentation.
> There's nothing stated in the man pages that, if USB zpools are used,
> zfs mount/unmount is NOT recommended and zpool export
> should be used instead.
I have been using USB mirrore
Hi There,
One of my partners asked a question w.r.t. disk pool overhead for the
7000 series.
Adam Leventhal put it that it was very small (1/64); see below.
Do we have any further info regarding this?
Thanks,
-eric :)
Original Message
Subject: Re: [Fwd: RE: Disk P
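For a rough sense of scale, assuming the 1/64 figure is the fraction of raw
capacity reserved: 1/64 = 1.5625%, so a 10 TB pool would set aside about
156 GB, and a 48 TB configuration roughly 750 GB.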
On Tue, 10 Feb 2009, Steven Sim wrote:
>
> I had almost used up all the available space and sought a way to
> expand the space without attaching any additional drives.
It is good that you succeeded, but the approach you used seems really
risky. If possible, it is far safer to temporarily add t
On Mon, 9 Feb 2009, John Welter wrote:
> A bit about the workload:
>
> - 99.999% large reads, very small write requirement.
> - Reads average from ~1MB to 60MB.
> - Peak read bandwidth we see is ~180MB/s, with average around 20MB/s
> during peak hours.
This is something that ZFS is particularly go
I believe Tim Foster's zfs backup service (very beta atm) has support
for splitting zfs send backups. Might want to check that out and see
about modifying it for your needs.
On Thu, Feb 5, 2009 at 3:15 PM, Michael McKnight
wrote:
> Hi everyone,
>
> I appreciate the discussion on the practicality
Sorry, I wasn't clear that the clients that hit this NFS back end are all
CentOS 5.2. FreeBSD is only used for the current NFS servers (a legacy
deal), but that would go away with the new Solaris/ZFS back end.
Dell will sell their boxes with SAS/5e controllers, which are just an LSI
1068 board - thes
On Mon, February 9, 2009 11:48, Bob Friesenhahn wrote:
> On Tue, 10 Feb 2009, Steven Sim wrote:
>>
>> I had almost used up all the available space and sought a way to
>> expand the space without attaching any additional drives.
>
> It is good that you succeeded, but the approach you used seems rea
Hello Andras,
Sunday, February 8, 2009, 12:55:20 PM, you wrote:
AS> Hi,
AS> I'm aware that if we're talking about DMP on Solaris the preferred
AS> way is to use MPxIO; still, I have a question whether any of you has
AS> any experience with ZFS on top of Veritas DMP?
AS> Does it work? Is it supported? An
Hello Andrew,
Sunday, February 8, 2009, 8:46:24 PM, you wrote:
AG> Neil Perrin wrote:
>> On 02/08/09 11:50, Vincent Fox wrote:
>>
>>> So I have read in the ZFS Wiki:
>>>
>>> # The minimum size of a log device is the same as the minimum size of a
>>> device in the pool, which is 64 Mbytes. The
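For context, attaching a separate log device to an existing pool is a single
command; a sketch with a hypothetical pool and device name:

bash-3.00# zpool add tank log c3t0d0

A device below the 64 MB minimum quoted above should be rejected as too small.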
David Dyer-Bennet wrote:
> On Mon, February 9, 2009 11:48, Bob Friesenhahn wrote:
>> On Tue, 10 Feb 2009, Steven Sim wrote:
>>> I had almost used up all the available space and sought a way to
>>> expand the space without attaching any additional drives.
>> It is go
I hope this thread catches someone's attention. I've reviewed the root pool
recovery guide as posted. It presupposes a certain level of network support,
for backup and restore, that many OpenSolaris users may not have.
For an administrator who is working in the context of a data center or a
On Mon, 9 Feb 2009, David Dyer-Bennet wrote:
> Most people run most of their lives with no redundancy in their data,
> though. If you make sure the backups are up-to-date I don't see any
> serious risk in using the swap-one-disk-at-a-time approach for upgrading a
> home server, where you can have
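The swap-one-disk-at-a-time approach maps onto 'zpool replace'; a sketch with
hypothetical pool and device names:

bash-3.00# zpool replace tank c1t2d0 c1t3d0   # resilvers the old disk's data onto the new one
bash-3.00# zpool status tank                  # watch until the resilver completes

Once every disk in the vdev has been replaced with a larger one, the extra
capacity becomes available (an export/import may be needed on older builds).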
Hi,
I've a somewhat strange configuration here:
[r...@sol9 Mon Feb 09 21:40:26 ~]
$ uname -a
SunOS sol9 5.11 snv_107 sun4u sparc SUNW,Sun-Blade-1000
[r...@sol9 Mon Feb 09 21:30:50 ~]
$ zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
rootpool
Hi Gordon,
We are working toward making the root pool recovery process easier
in the future, for everyone. In the meantime, this is awesome work.
After I run through these steps myself, I would like to add this
procedure to the ZFS t/s wiki.
Thanks,
Cindy
Gordon Johnson wrote:
> I hope this t
Seagate7,
You are not using ZFS correctly. You have misunderstood how it is used. If
you don't follow the manual (which you haven't), then any filesystem will
cause problems and corruption, even ZFS or NTFS or FAT32, etc. You must use
ZFS correctly. Start by reading the manual.
For ZFS to be able
Dear All,
I am receiving DEGRADED from zpool status -v. 3 out of 14 disks are reported
as degraded with 'too many errors'. This is Build 99 running on an x4240 with
a STK SAS RAID controller. The version of the AAC driver is 2.2.5. I am not
sure even where to start. Any advice is very much appreciated. Tryin
* Orvar Korvar (knatte_fnatte_tja...@yahoo.com) wrote:
> Seagate7,
>
> You are not using ZFS correctly. You have misunderstood how it is
> used. If you don't follow the manual (which you haven't), then any
> filesystem will cause problems and corruption, even ZFS or NTFS or
> FAT32, etc. You must use
> "ok" == Orvar Korvar writes:
ok> You are not using ZFS correctly.
ok> You have misunderstood how it is used. If you don't follow the
ok> manual (which you haven't), then any filesystem will cause
ok> problems and corruption, even ZFS or NTFS or FAT32, etc. You
ok> must use
Leonid Roodnitsky wrote:
> Dear All,
>
> I am receiving DEGRADED from zpool status -v. 3 out of 14 disks are
> reported as degraded with 'too many errors'. This is Build 99 running on
> an x4240 with a STK SAS RAID controller. The version of the AAC driver is
> 2.2.5. I am not sure even where to start. Any ad
Have you tried the procedure in the ZFS TS guide?
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Panic.2FReboot.2FPool_Import_Problems
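A few generic first steps in the spirit of that guide, with hypothetical pool
and device names:

bash-3.00# zpool status -xv          # identify the unhealthy pool and devices
bash-3.00# fmdump -eV | tail -40     # inspect the underlying FMA error reports
bash-3.00# zpool clear tank c1t2d0   # retry a device once the root cause is addressed

fmdump usually distinguishes transport/driver faults from genuine media errors.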
On 9-Feb-09, at 6:17 PM, Miles Nordin wrote:
>> "ok" == Orvar Korvar writes:
>
> ok> You are not using ZFS correctly.
> ok> You have misunderstood how it is used. If you don't follow the
> ok> manual (which you haven't), then any filesystem will cause
> ok> problems and corrupti
> There is no substitute for cord-yank tests - many and often. The
> weird part is, the ZFS design team simulated millions of them.
> So the full explanation remains to be uncovered?
We simulated power failure; we did not simulate disks that simply
blow off write ordering. Any disk that you'd e