10 GB of memory and 5 days later, the pool was imported.
This file server is a virtual machine. I allocated 2 GB of memory and 2 CPU
cores, assuming this was enough to manage 6 TB (6x 1 TB disks), while the pool I am
trying to recover is only 700 GB and not the 6 TB pool I am trying to migrate.
So I decided t
Are the indicated devices actually under /pseudo or are they really
under /devices/pseudo ?
Also, have you tried a 'devfsadm -C' to re-configure the /dev links?
This might allow the new vpath devices to be recognized...
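For example, something along these lines might help (untested here; the device names will differ on your system):
# devfsadm -C                    # clean up /dev links that point at devices which no longer exist
# devfsadm -c disk               # rebuild /dev links for disk-class devices
# ls /dev/rdsk | grep -i vpath   # check whether the vpath nodes now show up under /dev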
-Erik
On 5/29/2010 4:53 PM, morris hooten wrote:
I have 6 zfs po
On May 28, 2010, at 10:35 AM, Bob Friesenhahn wrote:
> On Fri, 28 May 2010, Gregory J. Benscoter wrote:
>> I’m primarily concerned with the possibility of a bit flip. If this
>> occurs, will the stream be lost? Or will the file in which that bit flip
>> occurred be the only degraded file? Lastly
Also, the zpool.cache may be out of date. To clear its entries,
zpool export poas43m01
and ignore any errors.
Then
zpool import
and see if the pool is shown as importable, perhaps with new device names.
If not, then try the zpool import -d option that Mark described.
-- richar
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at
least, but the zpool import command only looks in /dev. One thing you can try
is doing this:
# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a
And then see if 'zpool import -d /tmpdev' finds the pool.
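If the pool shows up in that listing, it can then be imported by name from the same directory, e.g. (using one of the pool names from your output):
# zpool import -d /tmpdev pjde43m01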
On 2
On 30 May 2010, at 01.53, morris hooten wrote:
> I have 6 zfs pools, and after rebooting with 'init 6' the vpath device path names
> have changed for some unknown reason. But I can't detach, remove and reattach
> to the new device names. ANY HELP please!
>
> pjde43m01 - - - - FA
I have 6 zfs pools, and after rebooting with 'init 6' the vpath device path names have
changed for some unknown reason. But I can't detach, remove and reattach to the
new device names. ANY HELP please!
pjde43m01 - - - - FAULTED -
pjde43m02 - - - - FAULTED -
On Sat, 29 May 2010 20:34:54 +0200, Kees Nuyt wrote:
On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
wrote:
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
As Tomas said, export the pool before shutdown.
Why don't y
On Thu, 20 May 2010 11:53:17 -0700, John Andrunas
wrote:
>Can I make a pool not mount on boot? I seem to recall reading
>somewhere how to do it, but can't seem to find it now.
As Tomas said, export the pool before shutdown.
If you have a pool which causes unexpected trouble at boot
time and you
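For what it's worth, a couple of ways to keep a pool's data from coming up at boot (the pool/dataset name 'tank' below is only a placeholder):
# zpool export tank              # pool stays un-imported across the next boot
# zfs set canmount=noauto tank   # alternative: keep the pool imported but don't mount this dataset at boot
# zfs mount tank                 # mount it by hand when you actually need it
Note that canmount=noauto applies only to the dataset it is set on; it is not inherited by child datasets.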
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Gregory J. Benscoter
>
> After looking through the archives I haven't been able to assess the
> reliability of a backup procedure which employs zfs send and recv.
If there's data corruption in
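For context, the sort of pipeline being discussed is roughly the following (dataset, snapshot and host names below are placeholders):
# zfs snapshot tank/data@backup-2010-05-28
# zfs send tank/data@backup-2010-05-28 | ssh backuphost zfs recv -F backup/data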
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cassandra Pugh
>
> I was wondering if there is a special option to share out a set of nested
> directories? Currently if I share out a directory with /pool/mydir1/mydir2
> on a sy
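For reference, if the nested directories are themselves ZFS datasets, the sharenfs property is inherited and each child dataset is exported as its own NFS share, e.g. (names below are placeholders):
# zfs create pool/mydir1
# zfs create pool/mydir1/mydir2
# zfs set sharenfs=on pool/mydir1   # mydir2 inherits sharenfs=on and gets its own share
An NFS client mounting pool/mydir1 generally needs NFSv4 mirror-mount support to traverse into mydir2; otherwise each share has to be mounted separately.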
On 5/29/2010 12:48 AM, schatten wrote:
Yep, that is correct. The rpool also has stuff like swap and 1-2 other
mountpoints I forgot; just the default installation layout.
I am really not sure if I did something wrong or if there is a bug. But if it
is a bug, why am I the only one seeing it?
Hmm
C
On Sat, May 29, 2010 at 12:54 AM, Matt Connolly
wrote:
> But with one of the drives unplugged, the system hangs at boot. On both
> drives (with the other unplugged) grub loads, and the system starts to boot.
> However, it gets stuck at the "Hostname: Vault" line and never gets to
> "reading ZFS
Hi,
I'm running snv_134 on a 64-bit x86 motherboard, with 2 SATA drives. The zpool
"rpool" uses the whole disk of each drive. I've installed grub on both disks, and
mirroring seems to be working great.
I just started testing what happens when a drive fails. I kicked off some
activities and unplugged
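For reference, grub is usually put on the second half of a mirrored rpool with installgrub; the device name below is only an example and will differ on your system:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0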
Yep, that is correct. The rpool also has stuff like swap and 1-2 other
mountpoints I forgot; just the default installation layout.
I am really not sure if I did something wrong or if there is a bug. But if it
is a bug, why am I the only one seeing it?
> OpenSolaris only recognizes 1 Solaris fdisk partitio
On 5/29/2010 12:22 AM, schatten wrote:
Okay.
I had/have a running snv134 install on one half of my disk. I created a zfs
(zfs create rpool/VB) for my VirtualBox. Then 'zfs set
mountpoint=/export/home/schatten/VirtulBox rpool/VB'. Then a reboot, and it hangs
right before the login should appear.
Oops.
And another note: a shutdown brings the same result. It hangs before the login
screen, no matter whether I do a reboot or a power cycle.
I also can't revert to OSOL 2009.06, as my hardware is not recognized; 2009.06
won't find my two SLI graphics cards.
I should note that all of it works. I have access to the ZFS/zpool while
running OSOL. I can create files and such in the newly created zfs, but the
reboot hangs. It looks like the reboot has a flaw.
Not to mention the reboot is not a real reboot: 2009.06 had a reboot that powered
off the PC. snv134 r
Okay.
I had/have a running snv134 install on one half of my disk. I created a zfs
(zfs create rpool/VB) for my VirtualBox. Then 'zfs set
mountpoint=/export/home/schatten/VirtulBox rpool/VB'. Then a reboot, and it hangs
right before the login should appear.
I removed the zfs with an OSOL livecd and
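For reference, the steps described boil down to the following two commands (dataset name and mount path exactly as given in the post):
# zfs create rpool/VB
# zfs set mountpoint=/export/home/schatten/VirtulBox rpool/VB   # path as written in the original post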
On 5/28/2010 1:24 PM, schatten wrote:
Hi,
whenever I create a new zfs, my PC hangs at boot, basically where the login
screen should appear. After booting from a livecd and removing the zfs, the boot
works again.
This also happened when I created a new zpool for the other half of my HDD.
Any idea w