Ian Collins wrote:
> Andrew Gabriel wrote:
>> Ian Collins wrote:
>>> Andrew Gabriel wrote:
Given that the issue described is slow zfs recv over the network, I suspect
this is:
6729347 Poor zfs receive performance across networks
This is quite easily worked around by putting a buffering program between the network and the zfs receive.
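A minimal sketch of such a pipeline, assuming mbuffer is installed on both
hosts (the tool choice, pool and host names here are illustrative, not from
the bug report):

  # sending side: stream the snapshot through a large in-memory buffer
  zfs send tank/fs@snap | mbuffer -s 128k -m 512M -O recvhost:9090

  # receiving side: buffer again before handing the stream to zfs recv
  mbuffer -s 128k -m 512M -I 9090 | zfs recv -d backup

The buffer keeps the network link busy while zfs recv pauses between
transaction group commits.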
Not too sure if it's much help. I enabled kernel pages and curproc. Let me
know if I need to enable "all" then.
solaria crash # echo "::status" | mdb -k
debugging live kernel (64-bit) on solaria
operating system: 5.11 snv_98 (i86pc)
solaria crash # echo "::stack" | mdb -k
solaria crash # echo ":
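For a saved crash dump, the usual triage is the same dcmds against the dump
files (names assume the default savecore layout under /var/crash/solaria):

  cd /var/crash/solaria
  echo "::status" | mdb unix.0 vmcore.0
  echo "::stack" | mdb unix.0 vmcore.0
  echo "::msgbuf" | mdb unix.0 vmcore.0

::status shows the panic string, ::stack the panicking thread, and ::msgbuf
the console messages leading up to the panic.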
Andrew Gabriel wrote:
> Ian Collins wrote:
>> Andrew Gabriel wrote:
>>> Ian Collins wrote:
>>>
Brent Jones wrote:
> There's been a couple of threads about this now, tracked by some bug
> IDs/tickets:
>
> 6333409
> 6418042
>
I see these are fixed in build 102.
Ian Collins wrote:
> Andrew Gabriel wrote:
>> Ian Collins wrote:
>>
>>> Brent Jones wrote:
>>>
>>>> There's been a couple of threads about this now, tracked by some bug IDs/tickets:
>>>>
>>>> 6333409
>>>> 6418042
>>> I see these are fixed in build 102.
>>>
>>> Are they targeted to get back to Solaris 10 via a patch?
Andrew Gabriel wrote:
> Ian Collins wrote:
>
>> Brent Jones wrote:
>>
>>> There's been a couple of threads about this now, tracked by some bug IDs/tickets:
>>>
>>> 6333409
>>> 6418042
>>>
>> I see these are fixed in build 102.
>>
>> Are they targeted to get back to Solaris 10 via a patch?
Ian Collins wrote:
> Brent Jones wrote:
>> There's been a couple of threads about this now, tracked by some bug IDs/tickets:
>>
>> 6333409
>> 6418042
> I see these are fixed in build 102.
>
> Are they targeted to get back to Solaris 10 via a patch?
>
> If not, is it worth escalating the issue with support to get a patch?
Andrew wrote:
> hey Victor,
>
> Where would I find that? I'm still somewhat getting used to the
> Solaris environment. /var/adm/messages doesn't seem to show any panic
> info. I only have remote access via SSH, so I hope I can do
> something with dtrace to pull it.
Do you have anything in /var/crash?
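If there's nothing there, savecore may never have run; a quick check with the
standard Solaris tools (a sketch):

  dumpadm          # shows dump device, savecore directory, savecore on/off
  dumpadm -y       # enable saving of crash dumps on reboot
  savecore -v      # pull any pending dump off the dump device now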
> "pb" == Peter Bridge <[EMAIL PROTECTED]> writes:
pb> I really need a step-by-step 'how to' to access this box from
pb> my OSX Leopard
What you need for NFS on a laptop is a good automount daemon and a
'umount -f' command that actually does what the man page claims.
The automounter
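For a one-off manual mount from Leopard, something like this usually works
(server and export paths are placeholders; resvport matters because many NFS
servers reject requests from non-reserved ports):

  sudo mkdir -p /Volumes/export
  sudo mount -t nfs -o resvport server:/export/home /Volumes/export
  sudo umount -f /Volumes/export    # the forced unmount mentioned above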
hey Victor,
Where would I find that? I'm still somewhat getting used to the Solaris
environment. /var/adm/messages doesn't seem to show any panic info. I only
have remote access via SSH, so I hope I can do something with dtrace to pull it.
Thanks,
Andrew
> "a" == <[EMAIL PROTECTED]> writes:
> "c" == Miles Nordin <[EMAIL PROTECTED]> writes:
> "n" == none <[EMAIL PROTECTED]> writes:
n> Unfortunately the USED column is of little help since it only
n> shows you the data unique to that snapshot. In my case almost
n> all da
Hi Miles,
Thanks for your reply.
My zfs situation is a little different from your test. I have a few early
snapshots which I believe would still be sharing most of their data with the
current filesystem. Then later I have snapshots which would still be holding
onto data that is now deleted.
So
Just as a follow-up: I went ahead with the original hardware purchase; it was
so much cheaper than the alternatives that it was hard to resist.
Anyway, OpenSolaris 2008.05 installed very nicely. It mentions 32-bit while
booting, though, so I need to investigate that at some point. The actual hardware
seem
The compress on-write behavior is what I expected, but I wanted to validate
that for sure. Thank you.
On the second question, the obvious answer is that I'm doing work where the
total file size tells me how much work has been completed, and I
don't have any other feedback whic
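For what it's worth, both sizes stay visible with compression on, so total
logical file size still works as a progress measure; a sketch (paths are
illustrative):

  ls -l /tank/data/file            # logical length, unaffected by compression
  du -k /tank/data/file            # physical blocks, shrinks as data compresses
  zfs get compressratio tank/data  # overall ratio for the dataset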
Andrew,
Andrew wrote:
> Thanks a lot! Google didn't seem to cooperate as well as I had hoped.
>
>
> Still no dice on the import. I only have shell access on my
> Blackberry Pearl from where I am, so it's kind of hard, but I'm
> managing. I've tried the OP's exact commands, and even trying to
>
Ross Becker wrote:
> I'm about to enable compression on my ZFS filesystem, as most of the data I
> intend to store should be highly compressible.
>
> Before I do so, I'd like to ask a couple of newbie questions
>
> First - if you were running a ZFS without compression, wrote some files to
>
I'm about to enable compression on my ZFS filesystem, as most of the data I
intend to store should be highly compressible.
Before I do so, I'd like to ask a couple of newbie questions
First - if you were running a ZFS without compression, wrote some files to it,
then turned compression on,
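As confirmed elsewhere in the thread, compression applies only to blocks
written after it is enabled; existing files stay uncompressed until rewritten.
A sketch (dataset name is illustrative):

  zfs set compression=on tank/data
  zfs get compression,compressratio tank/data
  # old files keep their uncompressed blocks; rewriting a file compresses it:
  cp bigfile bigfile.tmp && mv bigfile.tmp bigfile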
On 11/07/08 11:24, Kumar, Amit H. wrote:
> Is ZFS already the default file system for Solaris 10?
> If yes, has anyone tested it on Thumper?
Yes. Formal Sun support is for Thumper running s10. For the latest
ZFS bug fixes, it is important to run the most recent s10 update release.
Right now, th
I decided to do some more test situations to try to figure out how
adding/removing snapshots changes the space used reporting.
First I setup a test area, a new zfs file system and created some test
files and then created snapshots removing the files one by one.
> mkfile 1m 0
> mkfile 1m 1
> mkfil
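A self-contained version of that experiment, assuming a pool named tank:

  zfs create tank/test
  cd /tank/test
  mkfile 1m 0 ; mkfile 1m 1 ; mkfile 1m 2
  zfs snapshot tank/test@s0
  rm 0 ; zfs snapshot tank/test@s1
  rm 1 ; zfs snapshot tank/test@s2
  zfs list -r -t snapshot -o name,used,refer tank/test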
Thomas Kloeber wrote:
> This is the 2nd attempt, so my apologies if this mail got to you
> already...
>
> Folkses,
>
> I'm in an absolute state of panic because I lost about 160GB of data
> which were on an external USB disk.
> Here is what happened:
> 1. I added a 500GB USB disk to my Ultra25/So
Brent Jones wrote:
> There's been a couple of threads about this now, tracked by some bug IDs/tickets:
>
> 6333409
> 6418042
I see these are fixed in build 102.
Are they targeted to get back to Solaris 10 via a patch?
If not, is it worth escalating the issue with support to get a patch?
--
Ian.
I really think there is something wrong with how space is being reported
by zfs list in terms of snapshots.
Stealing from the example earlier, where a new file system was created, ten
1MB files were created, and then: snap, remove a file, snap, remove a
file, until they are all gone and you are left
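The catch is that a snapshot's USED column counts only the blocks unique to
that snapshot, so a block shared by two snapshots appears in neither one's
USED, and the column cannot sum to the total space snapshots hold. Summing it
anyway makes the shortfall visible, e.g.:

  zfs list -r -H -t snapshot -o used tank/test
  # compare the sum of those values with the space actually freed
  # when the snapshots are destroyed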
On Fri, Nov 7, 2008 at 9:11 AM, Jacob Ritorto <[EMAIL PROTECTED]> wrote:
> I have a PC server running Solaris 10 5/08 which seems to frequently become
> unable to share zfs filesystems via the shareiscsi and sharenfs options. It
> appears, from the outside, to be hung -- all clients just freeze,
On Fri, 7 Nov 2008, Kumar, Amit H. wrote:
> Is ZFS already the default file System for Solaris 10?
ZFS isn't the default file system for Solaris 10, but it is
selectable as the root file system with the most recent update.
--
Rich Teer, SCSA, SCNA, SCSECA
CEO,
My Online Home Inventory
URLs: h
Do you guys have any more information about this? I've tried the offset
methods, zfs_recover, aok=1, mounting read-only, yada yada, still with no luck.
I have about 3TB of data on my array, and I would REALLY hate to lose it.
Thanks!
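For anyone finding this later: those tunables go in /etc/system and take
effect on the next boot; they relax ZFS's panic-on-error behaviour and are
strictly last-resort salvage settings:

  * /etc/system fragment (last-resort recovery settings)
  set zfs:zfs_recover=1
  set aok=1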
Is ZFS already the default file system for Solaris 10?
If yes, has anyone tested it on Thumper?
Thank you,
Amit
Thanks a lot! Google didn't seem to cooperate as well as I had hoped.
Still no dice on the import. I only have shell access on my Blackberry Pearl
from where I am, so it's kind of hard, but I'm managing. I've tried the OP's
exact commands, and even trying to import array as ro, yet the system s
> "n" == none <[EMAIL PROTECTED]> writes:
n> snapshots referring to old data which has been deleted from
n> the current filesystem and I'd like to find out which
n> snapshots refer to how much data
Imagine you have a filesystem containing ten 1MB files,
zfs create root/expo
Hi ZFS team,
when testing installation with recent OpenSolaris builds,
we have found that in some cases people end up
at the GRUB prompt after the installation - it seems that menu.lst
can't be accessed for some reason. At least the two following bugs
seem to describe the same manifestation
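On an affected system, a first check is to ask bootadm where the active menu
lives (the sample output line is from memory and may vary by build):

  bootadm list-menu
  # the location for the active GRUB menu is: /rpool/boot/grub/menu.lst
  # ... followed by the entries bootadm can see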
I have a PC server running Solaris 10 5/08 which seems to frequently become
unable to share zfs filesystems via the shareiscsi and sharenfs options. It
appears, from the outside, to be hung -- all clients just freeze, and while
they're able to ping the host, they're not able to transfer nfs or
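When it wedges like that, a couple of things are worth capturing before
rebooting (a sketch; exact service names vary by release):

  svcs -xv    # any NFS/iSCSI services in maintenance?
  echo "::threadlist -v" | mdb -k > /var/tmp/threads.txt    # kernel stacks for support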
Andrew,
> I woke up yesterday morning, only to discover my system kept
> rebooting..
>
> It's been running fine for the last while. I upgraded to snv 98 a
> couple weeks back (from 95), and had upgraded my RaidZ Zpool from
> version 11 to 13 for improved scrub performance.
>
> After some res
Off the lists, someone suggested to me that the "Inconsistent
filesystem" may be the boot archive and not the ZFS filesystem (though I
still don't know what's wrong with booting b99).
Regardless, I tried rebuilding the boot_archive with bootadm
update-archive -vf and verified it by mounting it
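For reference, the same rebuild works against a root mounted from failsafe or
install media; a sketch, assuming the root filesystem is mounted at /a:

  bootadm update-archive -f -R /a    # force the rebuild
  bootadm update-archive -n -R /a    # then verify it is up to date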
FYI, here are the links to the 'labelfix' utility.
It is an attachment to one of Jeff Bonwick's posts in this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=229969
or here:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-May/047267.html
http://mail.opensolaris.org/pipermail/zfs
I woke up yesterday morning, only to discover my system kept rebooting..
It's been running fine for the last while. I upgraded to snv 98 a couple weeks
back (from 95), and had upgraded my RaidZ Zpool from version 11 to 13 for
improved scrub performance.
After some research it turned out that, o
I was wondering if this ever made it into zfs as a fix for bad labels?
On Wed, 7 May 2008, Jeff Bonwick wrote:
> Yes, I think that would be useful. Something like 'zpool revive'
> or 'zpool undead'. It would not be completely general-purpose --
> in a pool with multiple mirror devices, it could only
That's exactly what I was looking for. Hopefully Sun will see fit to include
this functionality in the OS soon.
Thanks!
This is the 2nd attempt, so my apologies if this mail got to you already...
Folkses,
I'm in an absolute state of panic because I lost about 160GB of data
which were on an external USB disk.
Here is what happened:
1. I added a 500GB USB disk to my Ultra25/Solaris 10
2. I created a zpool and a zf
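The usual first step for a pool that has vanished along with its disk is to
see whether the labels are still findable; a sketch, with a placeholder pool
name:

  zpool import                 # scan /dev/dsk for importable pools
  zpool import -d /dev/dsk     # same, spelling out the search directory
  zpool import -f mypool       # force the import if it was never exported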
River Tarnell wrote:
> hi,
>
> I have two systems, A (Solaris 10 update 5) and B (Solaris 10 update 6). I'm
> using 'zfs send -i' to replicate changes on A to B. However, the 'zfs recv' on
> B is running extremely slowly.
I'm sorry, I didn'
River Tarnell wrote:
> Ian Collins:
>
>> That's very slow. What's the nature of your data?
>>
>
> Mainly two sets of mid-sized files: one of 200KB-2MB in size and the other
> under 50KB. They are organised into subdirectories, A/B/C/. eac