disks in mirrors and they work well, are far cheaper than STEC or other
enterprise SSDs, and have none of the issues related to TRIM...
Highly recommended... ;-)
http://www.hyperossystems.co.uk/
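(For reference, a minimal sketch of attaching a pair of such devices as a
mirrored separate log, which is where STEC parts were typically deployed;
pool and device names here are hypothetical:)

  zpool add tank log mirror c2t0d0 c2t1d0   # mirrored slog
  zpool status tank                         # devices appear under "logs"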
Kevin
On 29 December 2010 13:40, Edward Ned Harvey <opensolarisisdeadlonglive
the smaller disks?
I would assume this would degrade the pool and require it to resilver?
Any advice would be gratefully received.
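(For anyone searching later: the usual approach is one zpool replace at a
time; each replacement resilvers onto the new disk while the pool stays
online. Pool and device names below are made up:)

  zpool replace tank c1t2d0 c1t6d0   # swap the old disk for the new one
  zpool status tank                  # watch the resilver complete
  # repeat for each remaining disk, waiting for each resilver to finish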
Kind regards
Kevin
ed in FreeBSD?
On 15 August 2010 15:13, David Magda wrote:
> On Aug 14, 2010, at 19:39, Kevin Walker wrote:
>
>> I once watched a video interview with Larry from Oracle, this ass
>> rambled on about how he hates cloud computing and that everyone was
>> getting into
I once watched a video interview with Larry from Oracle; this ass rambled on
about how he hates cloud computing, that everyone was getting into cloud
computing, and that in his opinion no one understood cloud computing apart
from him... :-| From that day on I felt enlightened about Oracle and how the
I attempted the old trick of putting the failed drive in the freezer for
an hour, and it did spin up, but only for a minute and not long enough to be
recognized by the system.
Not sure what to try next.
~kevin
dupe.
The suggestions I have read include "playing with" the metadata, and this is
something I would need help with, as I am just an "informed" user.
I am hoping that, as only one drive failed and this is a dual-parity RAID,
there is some way to recover the pool.
Thanks in advance,
I am trying to recover a RAID set; there are only three drives that are part
of the set. I attached a disk and discovered it was bad. It was never part of
the RAID set. The disk is now gone, and when I try to import the pool I get the
error listed below. Is there a chance to recover? TIA!
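(For anyone hitting the same wall: before touching any metadata by hand, the
usual first steps look something like the following. The pool name is
hypothetical, and the -F rewind option only exists on newer builds:)

  zpool import           # list importable pools and their reported state
  zpool import -f tank   # force the import if the pool looks intact
  zpool import -F tank   # newer builds: roll back to the last good txg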
S
Hi all,
Just subscribed to the list after a debate on our helpdesk led me to the
posting about ZFS corruption and the need for an fsck repair tool of some
kind...
Has there been any update on this?
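(For anyone finding this thread later: ZFS deliberately has no offline fsck;
the closest equivalent is an online scrub, which walks every block, verifies
checksums, and repairs from redundancy where it can. Pool name hypothetical:)

  zpool scrub tank       # kick off the scrub
  zpool status -v tank   # progress, plus names of any damaged files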
Kind regards,
Kevin Walker
Coreix Limited
DDI: (+44) 0207 183 1725 ext 90
Mobile: (+44
ess using significant CPU load. The system
has 8GB of RAM; vmstat shows nothing interesting.
I have another V245, with the same SCSI/RAID/ZFS setup and a similar
(though a bit lighter) load of data and users, where this problem is NOT
apparent.
Suggestions?
Kevin
Thu Jan 29 11:32:29 CE
Thanks Sanjeevb,
By the way, this only seems to fail when I set up a volume instead of a file
system. Should I be setting up a volume in this case, or will a file system
suffice?
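(For reference, the two variants look like this; a plain file system is the
simpler choice unless something specifically needs a raw block device.
Dataset names and the size are made up:)

  zfs create tank/data         # ordinary ZFS file system
  zfs create -V 50g tank/vol   # zvol: fixed-size virtual block device
  # zvols show up under /dev/zvol/dsk/ and /dev/zvol/rdsk/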
If I turn off snapshots for this then it should work. I'll try this.
Regards,
Kevin
--
This message posted
Hey all,
I'm setting up a ZFS-based fileserver to use both as a shared network drive
and, separately, to have an iSCSI target to be used as the "hard disk" of a
Windows-based VM running on another machine.
I've built the machine, installed the OS, created the RAIDZ pool and now have a
couple of
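(A rough sketch of the iSCSI half on OpenSolaris of that era, assuming the
old shareiscsi property rather than COMSTAR; names and size are made up:)

  zfs create -V 100g tank/winvm      # zvol to act as the VM's disk
  zfs set shareiscsi=on tank/winvm   # publish the zvol as an iSCSI target
  iscsitadm list target              # confirm the target was created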
Hi,
Has anyone seen the following problem?
After "lofiadm -d" removes an association, the file is still locked and cannot
be moved or deleted if the file resides in a ZFS mounted with nbmand=on.
There are two ways to remove the lock. (1) remount the zfs by the
unmount+mount; the lock is remov
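(i.e., for way (1), something like the following, with a hypothetical
dataset name; after the remount the stale mandatory lock is gone:)

  zfs unmount tank/fs
  zfs mount tank/fs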
Excuse me, but could you please copy and paste the part about "zfs send -l"?
I couldn't find it in the link you sent me:
http://docs.sun.com/app/docs/doc/819-2240/zfs-1m?a=view
In what release is this "send -l" option available?
The closest thing I can find is:
http://bugs.opensolaris.org/view_bug.do?bug_id=6421958
But just like it says: " Incremental +
recursive will be a bit tricker, because how do you specify the multiple
source and dest snaps? "
Let me clarify this more:
Without "send -r" I need to do something l
Can you explain more about "zfs send -l"? I know "zfs send -i", but didn't
know there was a "-l" option. In which release is this option available?
Thanks!
I wonder if there are any equivalent commands in zfs to dump all of a dataset's
associated snapshots at maximum efficiency (only the changed data blocks among
all snapshots). I know you can just "zfs send" each snapshot, but each one is
like a full dump, and if you use "zfs send -i" it is hard to maintain
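(The usual answer is to chain incremental sends, one per adjacent pair of
snapshots, so only changed blocks are written; the bookkeeping burden the
poster mentions is exactly maintaining this chain. Names are made up:)

  zfs send tank/fs@snap1 > /backup/fs@snap1.full
  zfs send -i tank/fs@snap1 tank/fs@snap2 > /backup/fs@snap2.incr
  zfs send -i tank/fs@snap2 tank/fs@snap3 > /backup/fs@snap3.incr
  # restore: zfs receive the full stream, then each incremental in order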
digg linked to an article related to the Apple port of ZFS
(http://www.dell.com/content/products/productdetails.aspx/print_1125?c=us&cs=19&l=en&s=dhss).
I don't have a Mac but was interested in ZFS.
The article says that ZFS eliminates the need for a RAID card and is faster
because the stripin
that is my thread and I'm still having issues even after applying that patch.
It just came up again this week.
[localhost] uname -a
Linux dv-121-25.centtech.com 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST
2008 x86_64 x86_64 x86_64 GNU/Linux
[localhost] cat /etc/issue
CentOS release 5 (Final)
new problem. We have patched the system and it has fixed the error creating
dirs/files on the ZFS filesystem. Now I am getting permission errors with mv/cp
from one of these ZFS areas to a regular FreeBSD server using UFS. Thoughts?
client: CentOS 5.1, latest kernel
mount options for the zfs filesystem =
rw,nosuid,nodev,remount,noatime,nfsvers=3,udp,intr,bg,hard,rsize=8192,wsize=8192
directory and parent owned by the user and the user's GID, mode 775
on client touch /tmp/dummy
cd to zfs area
mv
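(For reference, those options spelled out as a single mount command; the
server name and paths are hypothetical:)

  mount -t nfs -o rw,nosuid,nodev,noatime,nfsvers=3,udp,intr,bg,hard,rsize=8192,wsize=8192 server:/tank/export /mnt/zfsarea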
What does the REFER column represent in zfs list?
Thanks,
kevin
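(Short answer for the archive: REFER is the amount of data the dataset itself
currently references, i.e. what is visible in that file system right now,
while USED additionally counts space held by snapshots and descendants.
Made-up example:)

  $ zfs list -o name,used,refer tank/home
  NAME       USED   REFER
  tank/home  12.0G  4.0G
  # 4G is live data; the other 8G is pinned by snapshots/children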
.2. Could this be causing ZFS to get confused when the device
is brought online?
We are willing to try zpool replace -f on the disks that need to be brought
online during the weekend to see what happens.
Here is the system info:
ROOT $ uname -a
SunOS x.x.com 5.10 Generic_120012-14 i86pc i386 i8
On Nov 28, 2007, at 5:38 AM, K wrote:
>
> 1/ Anchor vNICs, the equivalent of Linux dummy interfaces; we need more
> flexibility in the way we set up Xen networking. What is sad is that
> the code is already available in the unreleased Crossbow bits... but
> it won't appear in Nevada until Q1 2008:
We'll try running all of the diagnostic tests to rule out any other issues.
But my question is: wouldn't I need to see at least 3 checksum errors on the
individual devices in order for there to be a visible error in the top-level
vdev? There don't appear to be enough raw checksum errors on the
Here's some additional output from the zpool and zfs tools:
$ zpool list
NAME   SIZE    USED    AVAIL   CAP   HEALTH   ALTROOT
tank   10.2T   8.58T   1.64T   83%   ONLINE   -
$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank
After a scrub of a pool with 3 raidz2 vdevs (each with 5 disks), I see the
following status output. Notice that one raidz2 vdev has 2 checksum errors,
but only one disk inside that raidz2 vdev has a checksum error. How is this
possible? I thought that you would have to have 3 errors in the
(end detector)
pool = tank
pool_guid = 0x347b6d721b99340d
pool_context = 1
__ttl = 0x1
__tod = 0x45cc963a 0x4e744c0
Any ideas/suggestions?
Kevin
ience with nfs. Any ideas?
Kevin
Thank you,
Kevin
--
Kevin C. Abbey
System Administrator
Rutgers University - BioMaPS Institute
Email: [EMAIL PROTECTED]
Hill Center - Room 279
110 Frelinghuysen Road
Piscataway, NJ 08854
Phone and Voice mail: 732-445-3288
Wright-Rieman Laboratories Room 201
610 Taylor Rd.
Piscataway, NJ 08854
work if you use "legacy" mounting of the ZFS filesystem (but even then it won't
include any ACLs), or back up via NFS mounts.
(2) IBM's TSM - no current or official support from IBM. Will/won't it work?
(3) Veritas NetBackup - client v5.1+ works out of the box, but without full/any
ACL sup
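(The "legacy" mounting mentioned in (1) looks like this; dataset and
mountpoint are hypothetical:)

  zfs set mountpoint=legacy tank/backup
  mount -F zfs tank/backup /export/backup   # plus an /etc/vfstab entry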