Hi folks,
I was trying to load a large file in /tmp so that a process that
parses it wouldn't be limited by a disk throughput bottleneck. My rig
here has only 12GB of RAM and the file I copied is about 12GB as well.
Before the copy finished, my system restarted.
I'm pretty up to date, the sy
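[A likely explanation for the reboot above: on Solaris-derived systems /tmp is tmpfs, backed by RAM plus swap rather than disk, so copying a ~12GB file into /tmp on a 12GB machine can exhaust virtual memory. A minimal sketch for checking the headroom first — the file size is the poster's, the vfstab line is a hypothetical example, and the Solaris-only command is guarded:]

```shell
# /tmp is tmpfs on Solaris: its capacity is free RAM + free swap, not disk.
df -h /tmp                                   # tmpfs shows up as "swap"
if command -v swap >/dev/null 2>&1; then
    swap -s                                  # Solaris: swap reserved/available
fi

# A 12GB file needs this much RAM+swap to live in /tmp:
FILE_BYTES=$((12 * 1024 * 1024 * 1024))
echo "need ${FILE_BYTES} bytes of virtual memory"

# One mitigation: cap tmpfs in /etc/vfstab (hypothetical 4g cap):
#   swap  -  /tmp  tmpfs  -  yes  size=4g
```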
> Wait... whoa, hold
> on. If snapshots reside within the confines of the
> pool, are you saying that dedup will also count
> what's contained inside the snapshots? I'm
> not sure why, but that thought is vaguely disturbing
> on some level.
>
> Then again (not sure how gurus feel on this
> point) b
On Sun, Dec 20, 2009 at 16:23, Nick wrote:
>
> IMHO, snapshots are not a replacement for backups. Backups should
> definitely reside outside the system, so that if you lose your entire array,
> SAN, controller, etc., you can recover somewhere else. Snapshots, on the
> other hand, give you the a
Ok, dump uploaded!
Thanks for your upload
Your file has been stored as "/cores/redshirt-vmdump.0" on the Supportfiles
service.
Size of the file (in bytes): 1743978496.
The file has a cksum of: 2878443682.
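[The size/cksum pair reported by the service above is what you'd compare against a local run of POSIX cksum before uploading. A small illustration with a stand-in file, since the real dump path is specific to that machine:]

```shell
# POSIX cksum prints "<crc> <byte-count> <file>"; both numbers should
# match what the upload service reports (for the real dump above:
# 2878443682 and 1743978496).
printf 'hello\n' > /tmp/demo.bin      # stand-in file for illustration
cksum /tmp/demo.bin
```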
The ZFS best practices page (and all the experts in general) talk about
MTTDL and how raidz2 is better than raidz, and so on.
Has anyone here ever actually experienced data loss in a raidz that
has a hot spare? Of course, I mean from disk failure, not from bugs
or admin error, etc.
-frank
There seems to be a rash of posts lately where people are resetting or
rebooting without getting any data, so I thought I'd post a quick
overview on collecting crash dumps. If you think you've got a hang
problem with ZFS and you want to gather data for someone to look at,
then here are a few s
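[The truncated overview above usually boils down to a handful of Solaris commands. A rough sketch — flags as of OpenSolaris-era releases, guarded so it no-ops on other systems; treat it as an outline, not the poster's exact steps:]

```shell
# 1. Make sure a dump device is configured and savecore runs on reboot.
if command -v dumpadm >/dev/null 2>&1; then
    dumpadm                       # show current dump device / savecore dir
    dumpadm -y -c kernel -d swap  # kernel pages to swap device, savecore on boot
fi

# 2. If the box is still responsive, take a live dump without rebooting
#    (requires a dedicated dump device):
if command -v savecore >/dev/null 2>&1; then
    savecore -L
fi

# 3. If it is truly hung, force a panic + dump from the console instead:
#      reboot -d      (or send an NMI, or use kmdb's $<systemdump)
```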
I've been using OpenSolaris for my home file server for a little over a year
now. For most of that time I have used smb to share files out to my other
systems. I also have a Windows Server 2003 DC and all my client systems are
joined to the domain. Most of that time was a permissions nightmar
> I have already run into one little snag that I don't see any way of
> overcoming with my chosen method. I've upgraded to snv_129 with high hopes
> for getting the most out of deduplication. But using iSCSI volumes I'm not
> sure how I can gain any benefit from it. The volumes are a set size
arnaud wrote:
> Hi folks,
> I was trying to load a large file in /tmp so that a process that
> parses it wouldn't be limited by a disk throughput bottleneck. My rig
> here has only 12GB of RAM and the file I copied is about 12GB as well.
> Before the copy finished, my system restarted.
> I'm pretty up
Cool thx, sounds like exactly what I'm looking for.
I did a bit of reading on the subject and to my understanding I should...
Create a volume of a size as large as I could possibly need. So, erring on the
optimistic side, "zfs create -s -V 4000G tank/iscsi1". Then in Windows initialize
and quick
On Sun, Dec 20, 2009 at 9:24 PM, Chris Scerbo
wrote:
> Cool thx, sounds like exactly what I'm looking for.
>
> I did a bit of reading on the subject and to my understanding I should...
> Create a volume of a size as large as I could possibly need. So, erring on
> the optimistic side, "zfs create -s -
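[For reference, the sparse-volume approach being discussed looks roughly like this. The pool/volume names come from the thread; the COMSTAR share step is an assumption about the poster's setup, and everything is guarded so it no-ops off-Solaris:]

```shell
if command -v zfs >/dev/null 2>&1; then
    # -s makes the zvol sparse: volsize is 4000G but no space is reserved
    # up front, so dedup/compression savings stay available to the pool.
    zfs create -s -V 4000G tank/iscsi1
    zfs get volsize,refreservation,used tank/iscsi1
fi

# Export it over iSCSI with COMSTAR (assumes the STMF/target services
# are already configured):
if command -v sbdadm >/dev/null 2>&1; then
    sbdadm create-lu /dev/zvol/rdsk/tank/iscsi1
    # then expose the LU to initiators:  stmfadm add-view <GUID>
fi
```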
I've got an opensolaris snv_118 machine that does nothing except serve
up NFS and iSCSI.
The machine has 8G of RAM, and I've got an 80G SSD as L2ARC.
The ARC on this machine is currently sitting at around 2G, the kernel is
using around 5G, and I've got about 1G free. I've pulled this from a
co
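[The ARC and L2ARC figures quoted above can be pulled from kstat; a quick sketch, with statistic names as in OpenSolaris-era arcstats, guarded for non-Solaris systems:]

```shell
if command -v kstat >/dev/null 2>&1; then
    kstat -p zfs:0:arcstats:size     # current ARC size (the ~2G figure)
    kstat -p zfs:0:arcstats:c        # ARC target size
    kstat -p zfs:0:arcstats:c_max    # ARC ceiling
    kstat -p zfs:0:arcstats:l2_size  # bytes cached on the L2ARC SSD
fi
# Kernel vs. free memory breakdown:  echo ::memstat | mdb -k
```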
> The (one and only) point that I was making was that - like backups -
> snapshots should be kept "elsewhere", whether by using zfs-send, or zipping
> up the whole shebang and ssh'ing it someplace. "Elsewhere" meaning beyond
> the pool. Rolling 15 minute and hourly snapshots? No, they stay local
On Sat, Dec 19, 2009 at 3:56 AM, Steven Sim wrote:
> r...@sunlight:/root# zfs list -r myplace/Docs
> NAME          USED   AVAIL  REFER  MOUNTPOINT
> myplace/Docs  3.37G  1.05T  3.33G  /export/home/admin/Docs/e/Docs  <--- *** Here is the extra "e/Docs"...
I saw a sim
On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:
> I've got an opensolaris snv_118 machine that does nothing except
> serve up NFS and iSCSI.
> The machine has 8G of RAM, and I've got an 80G SSD as L2ARC.
> The ARC on this machine is currently sitting at around 2G, the
> kernel is using around 5G,
On Sun, 20 Dec 2009, Richard Elling wrote:
>> Given that I don't believe there is any other memory pressure on the
>> system, why isn't the ARC using that last 1G of RAM?
> Simon says, "don't do that"? ;-)
Yes, primarily since if there is no more memory immediately available,
performance when sta
Bob Friesenhahn wrote:
> On Sun, 20 Dec 2009, Richard Elling wrote:
>>> Given that I don't believe there is any other memory pressure on the
>>> system, why isn't the ARC using that last 1G of RAM?
>> Simon says, "don't do that"? ;-)
> Yes, primarily since if there is no more memory immediately availa
Oops, should have sent to the list...
Richard Elling wrote:
>
> On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:
>
>> I've got an opensolaris snv_118 machine that does nothing except
>> serve up NFS and ISCSI.
>>
>> The machine has 8G of ram, and I've got an 80G SSD as L2ARC.
>> The ARC on this