> The situation now is I have dd'd the drives onto a NAS. These images are
> shared via NFS to a VM running Oracle Solaris 11 11/11 X86.
You should probably also try to use a current OpenIndiana or some
other Illumos distribution.
> root@solaris-01:/mnt# zpool import -d /dev/lofi
> pool: ZP-8T-RZ1-01
> id: 9952605666247778346
> state: FAULTED
> status: One or more devices contains corrupted data.
> action: The pool cannot be imported due to damaged devices or data.
> see: http://www.sun.com/msg/ZFS-8000-5E
> config:
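For reference, before that import can see anything, each dd image normally has to be attached as a lofi device first; a rough sketch, with the image paths purely illustrative:

  lofiadm -a /mnt/images/disk0.img     # prints the device created, e.g. /dev/lofi/1
  lofiadm -a /mnt/images/disk1.img
  # ...one per dd'd drive, then:
  zpool import -d /dev/lofi ZP-8T-RZ1-01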
Indeed they are there, shown at a 1-second interval. So it is the
client's fault after all. I'll have to see whether it is somehow
possible to get the server to write cached data sooner (and hopefully
asynchronously), and the client to issue commits less often. Luckily I
can live with the current
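For reference, the server-side knob that decides whether an NFS commit has to wait on stable storage is the dataset's sync property; a minimal sketch, assuming the dataset is the mainpool/storage mentioned in the fstab entry later in the thread (the disabled setting is shown only to illustrate the trade-off, not as a recommendation):

  zfs get sync mainpool/storage
  # acknowledge commits immediately; risks losing the last few seconds
  # of writes if the server crashes:
  zfs set sync=disabled mainpool/storage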
On 14 Jun 2012, at 23:15, Timothy Coalson wrote:
>> The client is using async writes, that include commits. Sync writes do not
>> need commits.
>
> Are you saying nfs commit operations sent by the client aren't always
> reported by that script?
They are not reported in your case because the com
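One way to double-check, independent of the script, is to count commit operations with the DTrace NFS provider directly; a small sketch, assuming the nfsv3 provider available on Solaris 11/illumos:

  dtrace -n 'nfsv3:::op-commit-start { @commits = count(); }'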
> The client is using async writes, that include commits. Sync writes do not
> need commits.
Are you saying nfs commit operations sent by the client aren't always
reported by that script?
> What happens is that the ZFS transaction group commit occurs at more-or-less
> regular intervals, likely 5 seconds for more modern ZFS systems.
> The client is using async writes, that include commits. Sync writes do
> not need commits.
>
> What happens is that the ZFS transaction group commit occurs at more-
> or-less regular intervals, likely 5 seconds for more modern ZFS
> systems. When the commit occurs, any data that is in the ARC bu
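For background, that transaction group interval is governed on Solaris/illumos by the zfs_txg_timeout kernel tunable; a rough sketch of inspecting and setting it (the value shown is illustrative only):

  echo zfs_txg_timeout/D | mdb -k      # print the current interval in seconds
  # in /etc/system, applied at the next reboot:
  set zfs:zfs_txg_timeout = 5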
Hi Tim,
On Jun 14, 2012, at 12:20 PM, Timothy Coalson wrote:
> Thanks for the script. Here is some sample output from 'sudo
> ./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
> is ashift=9, some benchmarking didn't show much difference with
> ashift=12 other than giving up 8% of available space)
Thanks for the script. Here is some sample output from 'sudo
./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
is ashift=9, some benchmarking didn't show much difference with
ashift=12 other than giving up 8% of available space) during a copy
operation from 37.30 with sync=standard
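As a side note, the ashift a pool is actually using can be read back from the cached vdev configuration; a quick sketch, assuming the pool is named mainpool as the fstab entry elsewhere in the thread suggests:

  zdb -C mainpool | grep ashift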
2012-06-14 19:11, tpc...@mklab.ph.rhul.ac.uk wrote:
In message <201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk>, tpc...@mklab.ph.rhul.ac.uk writes:
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your "zpool history" is hanging due to lack of
RAM.
Interesting. In the problem state the sys
>
> In message <201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk>, tpc...@mklab.ph.rhul.ac.uk writes:
> >Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
>
>
> My WAG is that your "zpool history" is hanging due to lack of
> RAM.
Interesting. In the problem state the sys
In message <201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk>, tpc...@mklab.ph.rhul.ac.uk writes:
>Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your "zpool history" is hanging due to lack of
RAM.
John
groenv...@acm.org
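For what it's worth, whether the ARC has consumed most of that 2 GB is easy to confirm with standard tools; a quick sketch:

  echo ::memstat | mdb -k                         # page breakdown: kernel, ZFS, anon, free
  kstat -p zfs:0:arcstats:size zfs:0:arcstats:c   # current ARC size and its target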
On Jun 13, 2012, at 4:51 PM, Daniel Carosone wrote:
> On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
>> client: ubuntu 11.10
>> /etc/fstab entry: :/mainpool/storage /mnt/myelin nfs
>> bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0 0
>
> Offlist/OT - Sheer guess, straight out of my parts - maybe a cronjob to
> rebuild the locate db or something similar is hammering it once a week?
In the problem condition, there appears to be very little going on on the
system, e.g.:
root@server5:/tmp# /usr/local/bin/top
last pid: 3828; l
On Thu, Jun 14, 2012 at 09:56:43AM +1000, Daniel Carosone wrote:
> On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote:
> > Hi all,
>
> Hi Scott. :-)
>
> > I have a 5 drive RAIDZ volume with data that I'd like to recover.
>
> Yeah, still..
>
> > I tried using Jeff Bonwick's labelfix bi