Is it possible to convert a raidz2 array to a raidz1 array?
I have a pool with two raidz2 arrays. I would like to convert them to raidz1. Would
that be possible?
If not, is it OK to remove one disk from a raidz2 array and just let the array
keep running with one disk missing?
Regards,
Lars-Gunnar Persson
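ZFS cannot reshape an existing raidz2 vdev into raidz1 in place; the usual route is to replicate the data to a freshly created pool and retire the old one. A minimal sketch, assuming hypothetical pool and disk names (oldpool, newpool, c1t0d0 ...) that are not from this thread:

```shell
# Take a recursive snapshot so the copy is consistent
zfs snapshot -r oldpool@migrate

# Build the replacement pool as single-parity raidz1 (disk names are placeholders)
zpool create newpool raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Replicate all datasets, volumes, and properties to the new pool
zfs send -R oldpool@migrate | zfs receive -Fd newpool

# Only after verifying the copy:
# zpool destroy oldpool
```

As for pulling a disk out instead: a raidz2 vdev with one disk removed keeps running in a DEGRADED state with single-parity protection remaining, but the missing disk's capacity is not returned to the pool, so this is not a way to convert the array.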
Will this be fixed after the scrub process is finished tomorrow, or is
this volume lost forever?
Hoping for some quick answers as the data is quite important for us.
Regards,
Lars-Gunnar Persson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
That is correct. It's a RAID 6 disk shelf with one volume connected
via fibre.
Lars-Gunnar Persson
On 2 March 2009, at 16:57, Blake wrote:
It looks like you only have one physical device in this pool. Is
that correct?
On Mon, Mar 2, 2009 at 9:01 AM, Lars-Gunnar Persson
wrote:
The Linux host can still see the device. I showed you the log from the
Linux host.
I tried fdisk -l and it listed the iSCSI disks.
Lars-Gunnar Persson
On 2 March 2009, at 17:02, "O'Shea, Damien" wrote:
I could be wrong, but this looks like an issue on the Linux side.
I've turned off iSCSI sharing at the moment.
My first question is: how can ZFS report an available value larger than
the reservation on a ZFS volume? I also know that used should be larger
than 22.5K. Isn't this strange?
Lars-Gunnar Persson
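For what it's worth, AVAIL in `zfs list` is not capped by a volume's reservation: for a zvol it reports how far the dataset could still grow, which includes the unused portion of its (ref)reservation plus the pool's free space, so it can legitimately exceed the reservation. A sketch of how one might inspect the properties involved, assuming the dataset name that appears later in the thread (Data/subversion1):

```shell
# Per-dataset sizing and space accounting
zfs get volsize,reservation,refreservation,used,available Data/subversion1

# Pool-wide free space, which feeds into the dataset's AVAIL figure
zpool list Data
```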
On 3 March 2009, at 00:38, Rich wrote:
Nov 15 2007 10:16:12  ereport.fs.zfs.zpool       0x0533bb1b56400401
Oct 14 09:31:31.6092  ereport.fm.fmd.log_append  0x02eb96a8b6502801
Oct 14 09:31:31.8643  ereport.fm.fmd.mod_init    0x02ec89eadd100401
On 3 March 2009, at 08:10, Lars-Gunnar Persson wrote:
Thank you for your long reply. I don't believe that will help me get
my ZFS volume back, though.
From my last reply to this list I confirm that I do understand what
the AVAIL column is reporting when running the zfs list command.
hmm, still confused ...
Regards,
Lars-Gunnar Persson
Waiting for this process to finish.
On 3 March 2009, at 11:18, Lars-Gunnar Persson wrote:
I thought a ZFS file system wouldn't destroy a ZFS volume? Hmm, I'm
not sure what to do now ...
First of all, this ZFS volume, Data/subversion1, has been working for
a year, and suddenly after
hope I've provided enough information for all you ZFS experts out
there.
Any tips or solutions in sight? Or is this ZFS volume gone completely?
Lars-Gunnar Persson
On 3 March 2009, at 13:58, Lars-Gunnar Persson wrote:
I ran a new command now, zdb. Here is the current output:
-bash-3.00
On 3 March 2009, at 14:51, Sanjeev wrote:
Thank you for your reply.
Lars-Gunnar,
On Tue, Mar 03, 2009 at 11:18:27AM +0100, Lars-Gunnar Persson wrote:
-bash-3.00$ zfs list -o name,type,used,avail,ratio,compression,reserv,volsize Data/subversion1
NAME              TYPE   USED  AVAIL
te. It is recommended that all host activity be stopped 30
seconds before powering the system off.
Any thoughts about this?
Regards,
Lars-Gunnar Persson
df. That's a
difference of 2 TB. Where did they go?
Any explanation would be fine.
Regards,
Lars-Gunnar Persson
09 March 2009 - Lars-Gunnar Persson sent me these 1.1K bytes:
I have an interesting situation. I've created two pools now, one pool
named "Data" and another named "raid5". Check the details here:
bash-3.00# zpool list
NAME   SIZE   USED  AVAIL  C
This was enlightening! Thanks a lot and sorry for the noise.
Lars-Gunnar Persson
On 9 March 2009, at 14:27, Tim wrote:
On Mon, Mar 9, 2009 at 7:07 AM, Lars-Gunnar Persson wrote:
I have an interesting situation. I've created two pools now, one pool
named "Data" a
le" raid 5 or 6 configuration? What about performance?
Regards,
Lars-Gunnar Persson
On 10 March 2009, at 00:26, Kees Nuyt wrote:
On Mon, 9 Mar 2009 12:06:40 +0100, Lars-Gunnar Persson wrote:
1. On the external disk array, I am not able to configure JBOD or RAID 0
or 1 with just one
s raidz2 would be no difference in
performance and a big difference in available disk space.
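The capacity side of that comparison is simple arithmetic: raidz1 gives up one disk's worth of space to parity per vdev, raidz2 gives up two. A rough sketch with made-up numbers (14 disks of 1000 GB, not taken from this thread), ignoring metadata and allocation overhead:

```shell
# Hypothetical shelf: 14 disks of 1000 GB each (assumed numbers)
DISKS=14
SIZE_GB=1000

# raidz1 spends one disk on parity per vdev, raidz2 spends two
RAIDZ1_GB=$(( (DISKS - 1) * SIZE_GB ))
RAIDZ2_GB=$(( (DISKS - 2) * SIZE_GB ))

echo "raidz1 usable: ${RAIDZ1_GB} GB"   # 13000 GB
echo "raidz2 usable: ${RAIDZ2_GB} GB"   # 12000 GB
```

So the one-parity-disk difference is what you trade for surviving a second disk failure.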
On 10 March 2009, at 09:13, Lars-Gunnar Persson wrote:
How about this configuration?
On the Nexsan SATABeast, add all disks to one RAID 5 or 6 group. Then,
on the Nexsan, define several smaller volumes
ordin wrote:
"lp" == Lars-Gunnar Persson
writes:
lp> Ignore force unit access (FUA) bit: [X]
lp> Any thoughts about this?
run three tests:
(1) write cache disabled
(2) write cache enabled, ignore FUA off
(3) write cache enabled, ignore FUA [X]
if all three are the sam
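One way to run those three comparisons is to time the same small synchronous write burst under each cache/FUA setting and see whether the numbers move. A rough sketch, assuming GNU dd (for conv=fsync) and a placeholder test file path:

```shell
# Time a small synchronous write burst; repeat under each of the three
# cache/FUA settings and compare (path and sizes are placeholders)
TESTFILE=/tmp/fua_test
time dd if=/dev/zero of="$TESTFILE" bs=8192 count=100 conv=fsync 2>/dev/null

# Sanity check: 100 blocks of 8192 bytes should have landed
wc -c < "$TESTFILE"
```

If the timings are identical in all three runs, the array is likely ignoring cache-flush/FUA requests.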
or suggestions?
Best regards, Lars-Gunnar Persson
On 11 March 2009, at 02:39, Bob Friesenhahn wrote:
On Tue, 10 Mar 2009, A Darren Dunham wrote:
What part isn't true? ZFS has an independent checksum for the data
block. But if the data block is spread over multiple disks, then each of