>hi Jan (and all)
>
>My failure was when running
>
># swap -d /dev/zvol/dsk/rpool/swap
>
>I saw this in my truss output.
>
>uadmin(16, 3, -2748781172232) Err#12 ENOMEM
>
That sounds like "too much memory in use: can't remove swap".
Casper
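For reference, the usual sequence for resizing a swap zvol is roughly the following (a sketch; 2G is only an example size, and the delete step is the one that can return ENOMEM):

# swap -l                                  (note the current device and size)
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap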
> "ab" == Arthur Bundo writes:
ab> I have a "/export/home/x" directory
something like:
umount /export/home/x
rmdir /export/home/x
rmdir /export/home <-- not needed, but it should give no error if
/export/home is really empty, so it's a good
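Spelled out with a check in between, that would be (a sketch):

# umount /export/home/x
# rmdir /export/home/x
# ls -A /export/home        (must print nothing, or the next rmdir will fail)
# rmdir /export/home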
David Bryan wrote:
Sorry if the question has been discussed before...did a pretty extensive
search, but no luck...
Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1
configuration.
My question is -- what happens if one drive dies, and when I replace it, design
ha
On Mon, Jun 8, 2009 at 9:38 PM, Richard Lowe wrote:
> Brent Jones writes:
>
>
> I've had similar issues with similar traces. I think you're waiting on
> a transaction that's never going to come.
>
> I thought at the time that I was hitting:
> CR 6367701 "hang because tx_state_t is inconsistent
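If it happens again, thread stacks would show whether everything is parked waiting on a transaction group (a sketch; run as root on the stuck machine):

# pstack `pgrep -x zfs`                                          (userland side of the send)
# echo "::pgrep zfs | ::walk thread | ::findstack -v" | mdb -k   (matching kernel stacks)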
Brent Jones writes:
>>
>> I haven't figured out a way to identify the problem, still trying to
>> find a 100% way to reproduce this problem.
>> Seemingly the more snapshots I send at a given time, the likelihood of
>> this happening goes up, but, correlation is not causation :)
>>
>> I might try
>
> I haven't figured out a way to identify the problem, still trying to
> find a 100% way to reproduce this problem.
> Seemingly the more snapshots I send at a given time, the likelihood of
> this happening goes up, but, correlation is not causation :)
>
> I might try to open a support case with
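One way to test that correlation is to force the sends to run strictly one at a time, for example (a sketch; tank/fs, backuphost and backuppool are made-up names):

#!/bin/sh
# Replicate the snapshots of tank/fs one at a time: a full send of
# the oldest snapshot, then an incremental for each one after it.
prev=""
for snap in `zfs list -H -t snapshot -o name -s creation -r tank/fs`; do
    if [ -z "$prev" ]; then
        zfs send "$snap" | ssh backuphost zfs receive -d backuppool || exit 1
    else
        zfs send -i "$prev" "$snap" | ssh backuphost zfs receive -d backuppool || exit 1
    fi
    prev="$snap"
done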
On Mon, Jun 8 at 20:28, David Bryan wrote:
Sorry if the question has been discussed before...did a pretty extensive
search, but no luck...
Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1
configuration.
My question is -- what happens if one drive dies, and when
Sorry if the question has been discussed before...did a pretty extensive
search, but no luck...
Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1
configuration.
My question is -- what happens if one drive dies, and when I replace it, design
has changed slightly an
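For what it's worth, the create-and-replace sequence itself is short; the main constraint is that the replacement disk must be at least as large as the one it replaces (a sketch with made-up device names):

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool replace tank c1t2d0 c2t0d0          (swap the failed disk for the new one)
# zpool status tank                         (watch the resilver run to completion)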
hi Jan (and all)
My failure was when running
# swap -d /dev/zvol/dsk/rpool/swap
I saw this in my truss output.
uadmin(16, 3, -2748781172232) Err#12 ENOMEM
Another email recommended that I reboot and try again and that seems to
have worked. I was actually running Solaris 10 u7 wi
I have a "/export/home/x" directory (my home directory) which cannot be
mounted during boot time, and I have a lot of stuff there, too critical for me
to experiment with. I can log in at a failsafe session and open Nautilus as
root, and in the snapshots on / I see no export/home/x, but on export/
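A first check that does not put the data at risk is to ask ZFS why the dataset will not mount (a sketch; rpool/export/home/x is only a guess at the dataset name):

# zfs get mounted,mountpoint,canmount rpool/export/home/x
# zfs mount rpool/export/home/x    (the error text usually names the obstacle)
# ls -A /export/home/x             (files already in the mountpoint directory block the mount)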
On Mon, 8 Jun 2009, Marius van Vuuren wrote:
The j4200 and the x4150 connected to it were powered off
and then moved to another building with the utmost care. When powered on
again 'zpool status' revealed "corrupted data" on 3 of the disks.
This could be as simple an issue as SAS cables conne
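If reseating the cables brings the disks back, a re-import plus a clear should recover things (a sketch, assuming the pool is called pool2 as in the original report):

# zpool export pool2
# cfgadm -al             (confirm the SAS targets are visible again)
# zpool import pool2
# zpool clear pool2      (reset the error counters)
# zpool status -v pool2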
On 08.06.09 15:50, Marius van Vuuren wrote:
A description of the problem
- Description
The j4200 and the x4150 connected to it were powered off
and then moved to another building with the utmost care. When powered on
again 'zpool status' revealed "corrupted data" on 3 of the disks.
Outputs:
On Sun, Jun 7, 2009 at 3:50 AM, Ian Collins wrote:
> Ian Collins wrote:
>>
>> Tim Haley wrote:
>>>
>>> Brent Jones wrote:
On the sending side, I CAN kill the ZFS send process, but the remote
side leaves its processes going, and I CANNOT kill -9 them. I also
cannot reboot the rec
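A process that survives kill -9 is almost always blocked in the kernel, so seeing where it is parked is more informative than rebooting (a sketch; 12345 stands in for the real pid):

# pgrep -fl 'zfs receive'
# pstack 12345
# ps -o pid,s,wchan,args -p 12345    (an S state plus a wchan means it is asleep in the kernel)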
On Sun, Jun 07, 2009 at 10:38:29AM -0700, Leonid Zamdborg wrote:
> Out of curiosity, would destroying the zpool and then importing the
> destroyed pool have the effect of recognizing the size change? Or
> does 'destroying' a pool simply label a pool as 'destroyed' and make
> no other changes...
I
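For what it's worth, destroying a pool only updates its label state; the data is untouched and the pool can be brought straight back (a sketch; tank is a made-up name):

# zpool destroy tank
# zpool import -D          (destroyed-but-intact pools are listed here)
# zpool import -D tank     (imports it again, data intact)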
Leonid Zamdborg wrote:
George,
Is there a reasonably straightforward way of doing this partition table edit
with existing tools that won't clobber my data? I'm very new to ZFS, and
didn't want to start experimenting with a live machine.
Leonid,
What you could do is to write a program whi
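On Solaris a VTOC label can be saved, edited as text, and written back without touching the data blocks (a sketch; c1t0d0 is made up, this assumes an SMI label, and a typo in the table can lose the pool, so keep the saved copy):

# prtvtoc /dev/rdsk/c1t0d0s2 > /tmp/c1t0d0.vtoc     (save the current table)
# vi /tmp/c1t0d0.vtoc                               (adjust the slice geometry)
# fmthard -s /tmp/c1t0d0.vtoc /dev/rdsk/c1t0d0s2    (write it back)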
A description of the problem
- Description
The j4200 and the x4150 connected to it were powered off
and then moved to another building with the utmost care. When powered on
again 'zpool status' revealed "corrupted data" on 3 of the disks.
Outputs:
zpool status
pool: pool2
state: FAULTED
statu
Hi Richard,
I ran into some quirks resizing swap last week.
If you are seeing an out-of-space error when trying to remove a swap area,
then a reboot clears this up. I think the bugs are already filed, but I would
like to see your scenario as well.
Can you restate your steps?
Thanks,
Cindy
Hi Richard,
Richard Robinson wrote:
I should add that I also used truss and saw the same ENOMEM error. I am on a
4Gb system with swap -l reporting
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap  181,1       8  4194296  4194296
and I was trying to follow the directions
On Mon, 2009-06-08 at 05:38 -0700, Maurilio Longo wrote:
> Now, from the error it seems that T1 needs all the snapshots which
> were active at the time it was created, which is not what I would
> expect from a snapshot.
From the man page, -R tries to replicate everything, including any
existing
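In other words, a -R stream carries the snapshots with it, so they all have to still exist on the sender; to move only the latest state of a dataset, send it without -R (a sketch; otherpc and backup are made-up names):

# zfs send -R nas@T1 | ssh otherpc zfs receive -d -F backup    (replicates datasets and snapshots)
# zfs send nas@T1 | ssh otherpc zfs receive backup/nas         (just that one dataset at T1)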
Hi,
I'm trying to send a pool (its filesystems) from a pc to another, so I first
created a recursive snapshot:
# zfs list
NAME         USED   AVAIL  REFER  MOUNTPOINT
nas          840G   301G   3,28G  /nas
nas/drivers  12,6G  301G   12,6G  /nas/driver
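The recursive snapshot step itself would have been something like (a sketch, reusing the names above):

# zfs snapshot -r nas@T1
# zfs list -t snapshot -r nas    (every dataset should now show an @T1 snapshot)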