Happy you got your stuff back.
On Tuesday, October 09, 2012 11:30 AM, Martin Bochnig wrote:
OK, to be precise:
First via rsync or /usr/sbin/tar cEf or cpio for the most important files.
Then I will try whether piping zfs send still works.
IT ALL DOESN'T MATTER TO ME.
My DATA MAY BE RESCUED in a few hours )))
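For reference, a minimal sketch of that kind of rescue, assuming the pool is
named wonderhome (as in the log further down), that it is mounted under
/wonderhome, and that /backup sits on a separate, healthy disk (the last two
are assumptions):

  # copy the most important files first (source paths are assumptions)
  rsync -aH /wonderhome/important/ /backup/important/
  /usr/sbin/tar cEf /backup/important.tar /wonderhome/important

  # then see whether a full zfs send still gets through
  zfs snapshot -r wonderhome@rescue
  zfs send -R wonderhome@rescue | gzip > /backup/wonderhome.zfs.gz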
On 10/8/12, Martin Bochnig wrote:
> However, this time I have a real problem.
> And it did not happen because of ambiguously chosen command names that
> I misunderstood.
>
> Vbox caused the host to freeze.
> And since then the host's home mirror is no longer mountable.
> And that's just not in lin
On 10/8/12, Richard Elling wrote:
[...]
>> "zpool detach" suggests, that you could still use this disk as a
>> reserve backup copy of the pool you were detaching it from.
>
> No it doesn't -- there is no documentation that suggests this usage.
To non-native speakers of English it sounds like t
On Oct 8, 2012, at 2:07 PM, Roel_D wrote:
> I still think this whole discussion is like renting a 40-meter truck to
> move your garden hose.
>
> We all know that it is possible to rent such a truck, but nobody tries to roll
> up the hose.
>
> SSDs are good for fast reads and occasi
Good point on split vs detach. Unfortunately this particular misinformation
seems widespread :(
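For anyone who wants a usable standalone copy of one half of a mirror, a
minimal sketch of the split route (pool and disk names here are only
placeholders):

  zpool split tank tankbackup c1t1d0   # the listed disk goes to the new pool
  zpool import tankbackup              # the copy is a complete, importable pool

A detached disk, by contrast, is not importable on its own, which is exactly
the point being made in this thread.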
-Original Message-
From: Richard Elling [mailto:richard.ell...@richardelling.com]
Sent: Monday, October 08, 2012 8:39 PM
To: Discussion list for OpenIndiana
Subject: Re: [OpenIndiana-discuss]
On Oct 8, 2012, at 4:07 PM, Martin Bochnig wrote:
> Marilio,
>
>
> first, a reminder: never ever detach a disk before you have a third
> disk that has already completed resilvering.
> The term "detach" is misleading because it detaches the disk from the
> pool. Afterwards you cannot access the d
Thanks for the feedback; they will be either Samsung 830s or, if the timing is
right, 840s. I am leaning toward the ZFS equivalent of RAID 10 at this point.
Do you see any issue with using all of them in one pool/vdev in that
scenario?
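In case it is useful, a minimal sketch of that layout, with placeholder pool
and drive names (c0t0d0 onwards):

  zpool create vmstore \
      mirror c0t0d0 c0t1d0 \
      mirror c0t2d0 c0t3d0 \
      mirror c0t4d0 c0t5d0
  # ...and so on, pairing the remaining drives into two-way mirror vdevs;
  # ZFS stripes across all the mirrors, which is the RAID 10 analogue.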
-Original Message-
From: Roy Sigurd Karlsbakk [m
On 10/8/12, Dan Swartzendruber wrote:
>
> Wow, Martin, that's a shocker. I've been doing exactly this to 'back up' my
> rpool :(
You have my full sympathy.
This naming and the lack of warnings are just brain-dead.
I cannot understand how such smart engineers could name and
implement it in such a stupid way.
It seems Gmail corrupted the previous mail's attachment.
Here it is again, this time as a plain text file:
--
regards
%martin
Oct 7 08:40:35 sun4me zfs: [ID 249136 kern.info] imported version 33 pool wonderhome using 33
Oct 7 08:43:03 sun4me unix: [ID 836849 kern.notice]
Oct 7 08:43:03 s
> Unfortunately, this is not the case.
> Well, you can of course attach it again, like any new or empty disk.
> But only if you still have enough replicas, and that's not what
> one wanted if one fell into this misunderstanding trap.
> And there are no warnings in the zpool/zfs man pages.
>
>
>
Wow, Martin, that's a shocker. I've been doing exactly this to 'back up' my
rpool :(
Marilio,
first, a reminder: never ever detach a disk before you have a third
disk that has already completed resilvering.
The term "detach" is misleading because it detaches the disk from the
pool. Afterwards you cannot access the disk's previous contents
anymore. Your "detached" half of a mirror
I still think this whole discussion is like renting a 40-meter truck to
move your garden hose.
We all know that it is possible to rent such a truck, but nobody tries to roll
up the hose.
SSDs are good for fast reads and occasional writes. So don't use them for
data storage of fast cha
An SLC SSD would probably be substantially slower than an array of MLC SSDs,
and would likely slow down the system for sync writes.
- Original Message -
> Hi,
> from what I understood from negative experience with a 12-drive SSD RAID
> set built with MDRaid on Linux, and from a
True, but I'm talking about the native SVR4 packages for NoMachine NX
on SPARC Solaris, already bundled for Solaris 8-10. It may work with
minimal hacking on OpenIndiana for SPARC.
Jonathan, could you maybe give me some pointers on how you compiled it
from source? I was trying to read the document
> Also, keep in mind the problems with certain (or most?) SATA units
> connected to a SAS expander. I've seen pretty bad things happen with
> WD2001FASS drives in such a configuration (we had to pull about 160
> drives and replace them with Hitachis to solve that problem - not too
> much data wa
Hi,
I have scripts that run "devfsadm -r alt-root", and they cannot run inside a
zone; they fail complaining that devfsadm can be run in the global zone only.
With -r, the command does not operate on the real machine's devices anyway.
Is there any way I can make it work inside a zone?
This is nice to run distro_const i
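For what it is worth, the -r form is normally run from the global zone against
an alternate image root, along these lines (the path is only an example):

  devfsadm -r /export/dc/image   # populate the device tree under the alternate
                                 # root; must be run from the global zone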
> I feel bad asking this question because I generally know what RAID
> type to pick.
>
> I am about to configure 24 256 GB SSD drives in a ZFS/COMSTAR
> deployment. This will serve as the datastore for a VMware deployment.
> Does anyone know what RAID level would be best? I know the workload
> wil
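If it helps, a rough sketch of the COMSTAR side once the pool exists (the pool,
zvol name, and volume size here are all assumptions):

  zfs create -V 2t vmstore/lun0                 # a zvol to export to VMware
  sbdadm create-lu /dev/zvol/rdsk/vmstore/lun0  # prints the GUID of the new LU
  stmfadm add-view <GUID>                       # expose the LU to all initiators
                                                # (use the GUID sbdadm printed)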
Hi,
from what I understood from negative experience with a 12-drive SSD RAID
set built with MDRaid on Linux, and from answers to a related question I
raised recently on this list, it is not so easy to engineer a
configuration using a large number of SSDs anyhow. The budget option,
using SATA SS
Dan Swartzendruber wrote:
> I'm not understanding your problem. If you add a third temporary disk, wait
> for it to resilver, then replace c1t5d0, let the new disk resilver, then
> detach the temporary disk, you will never have less than two up-to-date disks
> in the mirror. What am I missing?
>
Dan
I'm not understanding your problem. If you add a third temporary disk, wait
for it to resilver, then replace c1t5d0, let the new disk resilver, then
detach the temporary disk, you will never have less than two up-to-date disks
in the mirror. What am I missing?
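Spelled out as commands, that sequence looks roughly like this; the pool name
(tank here), the temporary disk (c2t0d0), and the permanent replacement
(c3t0d0) are placeholders, while c1t5d0 is the disk mentioned above:

  zpool attach tank c1t5d0 c2t0d0    # add a temporary third side to the mirror
  zpool status tank                  # wait here until the resilver completes
  zpool replace tank c1t5d0 c3t0d0   # swap in the permanent new disk
  zpool status tank                  # wait for this resilver as well
  zpool detach tank c2t0d0           # only now drop the temporary disk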
-Original Message-
From: Maurili
Hi all,
I have a zpool on an oi_147 host system which is made up of three mirror sets:

  tank
    mirror-0
      c11t5d0
      c11t4d1
    mirror-1
      c11t3d0
      c11t2d0
    mirror-2
      c11t1d0
      c11t0d0

Both c11t5d0 and c11t4d0 (SATA 1 TB disks, ST31000528AS) are developing errors;
both dis
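A minimal sketch of the usual way to swap out a failing mirror member, assuming
a spare disk c11t6d0 is available (that name is a placeholder):

  zpool status -x tank                 # confirm which devices are reporting errors
  zpool replace tank c11t5d0 c11t6d0   # the old disk stays attached until the
                                       # resilver finishes, so redundancy is kept
  zpool status tank                    # watch the resilver, then repeat for the
                                       # other failing disk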