Hi,
Anyway, are there other devices out there that you would recommend using as
a slog device, other than this NVRAM card, that would give similar
performance gains?
Thanks
Gilberto
On 7/8/08 9:40 PM, "[EMAIL PROTECTED]"
<[EMAIL PROTECTED]> wrote:
>
> Ross wrote:
>> Hi Gilberto,
>>
>> I
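On the slog question itself: whichever device ends up being used, attaching it
as a separate intent log is the same one-liner. A minimal sketch, assuming a
hypothetical pool named tank and a hypothetical device c3t0d0 (both names are
placeholders, not taken from this thread):

    # attach a dedicated slog (separate intent log) device to the pool
    zpool add tank log c3t0d0

    # the log vdev should now show up in the pool layout
    zpool status tank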
It turns out zfs mount -a will pick up the file system.
The fun question is why the OS can't mount the disk by itself.
gnome-mount is what puts up the "Can't access the disk" error
and whines to stdout (/dev/null in this case) about:
> ** (gnome-mount:1050): WARNING **: Mount failed for
> /org/freedesktop/Hal
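Whatever gnome-mount is unhappy about, you can usually see what ZFS itself
thinks from the command line. A minimal sketch (no particular pool name
assumed):

    # mount any ZFS file systems that are not already mounted
    zfs mount -a

    # list every dataset, whether it is mounted, and where
    zfs list -o name,mounted,mountpoint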
Ross wrote:
> Hi Gilberto,
>
> I bought a Micro Memory card too, so I'm very likely going to end up in the
> same boat.
> I saw Neil Perrin's blog about the MM-5425 card, found that Vmetro don't seem
> to want
> to sell them, but then last week spotted five of those cards on e-bay so
>
Mike Gerdts wrote:
[I agree with the comments in this thread, but... I think we're still being
old fashioned...]
>> Imagine if university students were allowed to use as much space as
>> they wanted but had to pay a per megabyte charge every two weeks or
>> their account is terminated? This wou
OK, this is not an OpenSolaris question, but it is a Solaris and ZFS
question.
I have a pool with three mirrored vdevs. I just got an error message
from FMD that a read failed on one of the disks (c1t6d0), along with
instructions on how to handle the problem and replace the devices; so
far ev
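The usual sequence for this kind of FMD report looks roughly like the
following. The pool name "tank" and the replacement disk c1t7d0 are
placeholders; only c1t6d0 comes from the message above:

    # quick health summary of all pools
    zpool status -x

    # detailed status, including per-device error counters
    zpool status -v tank

    # if the disk really is going bad, resilver onto a replacement
    zpool replace tank c1t6d0 c1t7d0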
Moore, Joe wrote:
>
> On ZFS, sequential files are rarely sequential anyway. The SPA tries to
> keep blocks nearby, but when dealing with snapshotted sequential files
> being rewritten, there is no way to keep everything in order.
>
In some cases, a dedup system could actually speed up data r
Justin Stringfellow wrote:
>
>> Does anyone know a tool that can look over a dataset and give
>> duplication statistics? I'm not looking for something incredibly
>> efficient but I'd like to know how much it would actually benefit our
>>
>
> Check out the following blog..:
>
> http://b
Tim Spriggs wrote:
> Does anyone know a tool that can look over a dataset and give
> duplication statistics? I'm not looking for something incredibly
> efficient but I'd like to know how much it would actually benefit our
> dataset: HiRISE has a large set of spacecraft data (images) that could
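In the meantime, a very crude estimate can be had from the shell. This only
counts whole files that are byte-identical, so it is a lower bound on what
block-level dedup could reclaim; it assumes the dataset is mounted at
/datatank and that Solaris digest(1) is available:

    # hash every file and count how many digests occur more than once
    find /datatank -type f -exec digest -a sha1 {} \; | sort | uniq -c > /tmp/digests
    awk '$1 > 1 { dups += $1 - 1 } END { print dups+0, "files duplicate another file" }' /tmp/digests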
I removed the files that were corrupted, scrubbed the datatank mirror, and then
did status -v datatank, and I got this:
pool: datatank
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected
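Assuming the scrub comes back without finding anything new, the usual
follow-up is to clear the old error counters so the pool stops reporting
errors that have already been dealt with:

    # re-check once the scrub has finished
    zpool status -v datatank

    # reset the error counters (only once you are happy the errors are stale)
    zpool clear datatank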
On Tue, Jul 8, 2008 at 1:26 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> Something else came to mind which is a negative regarding
> deduplication. When zfs writes new sequential files, it should try to
> allocate blocks in a way which minimizes "fragmentation" (disk seeks).
> Disk seeks are t
On Tue, Jul 8, 2008 at 12:25 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Tue, 8 Jul 2008, Richard Elling wrote:
>> [donning my managerial accounting hat]
>> It is not a good idea to design systems based upon someone's managerial
>> accounting whims. These are subject to change in illogical
On Tue, 8 Jul 2008, Moore, Joe wrote:
>
> On ZFS, sequential files are rarely sequential anyway. The SPA tries to
> keep blocks nearby, but when dealing with snapshotted sequential files
> being rewritten, there is no way to keep everything in order.
I think that rewriting files (updating existin
On Tue, Jul 8, 2008 at 2:56 PM, BG <[EMAIL PROTECTED]> wrote:
> Hi everyone,
>
> I did a nice install of OpenSolaris and put two 500 GB SATA disks into a
> zpool mirror.
> Everything went well and I got it so that my mirror, called datatank, is
> shared using CIFS. I can access it from my MacBo
Hi everyone,
I did a nice install of OpenSolaris and put two 500 GB SATA disks into a zpool
mirror.
Everything went well and I got it so that my mirror, called datatank, is shared
using CIFS. I can access it from my MacBook and PC.
So with this nice setup I started putting my files on it, but now
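For anyone wanting to reproduce a setup like this, the sequence is roughly as
follows. The disk names are placeholders, and this assumes the in-kernel CIFS
server rather than Samba; depending on the build you may also need to set up
SMB passwords for the users who will connect:

    # create the mirrored pool (disk names will differ on your system)
    zpool create datatank mirror c1t0d0 c1t1d0

    # enable the SMB/CIFS service and share the file system
    svcadm enable -r smb/server
    zfs set sharesmb=on datatank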
Bob Friesenhahn wrote:
> Something else came to mind which is a negative regarding
> deduplication. When zfs writes new sequential files, it
> should try to
> allocate blocks in a way which minimizes "fragmentation"
> (disk seeks).
It should, but because of its copy-on-write nature, fragment
[EMAIL PROTECTED] wrote on 07/08/2008 01:26:15 PM:
> Something else came to mind which is a negative regarding
> deduplication. When zfs writes new sequential files, it should try to
> allocate blocks in a way which minimizes "fragmentation" (disk seeks).
> Disk seeks are the bane of existing sto
Hmm, you might want to look at Andrew Tridgell's thesis (yes,
Andrew of Samba fame), as he had to solve this very question
in order to select an algorithm to use inside rsync.
--dave
Darren J Moffat wrote:
> [EMAIL PROTECTED] wrote:
>
>>[EMAIL PROTECTED] wrote on 07/08/2008 03:08:26 AM:
>>
Something else came to mind which is a negative regarding
deduplication. When zfs writes new sequential files, it should try to
allocate blocks in a way which minimizes "fragmentation" (disk seeks).
Disk seeks are the bane of existing storage systems since they come
out of the available IOPS b
On Tue, 8 Jul 2008, Richard Elling wrote:
> [donning my managerial accounting hat]
> It is not a good idea to design systems based upon someone's managerial
> accounting whims. These are subject to change in illogical ways at
> unpredictable intervals. This is why managerial accounting can be so
On Jul 8, 2008, at 11:00 AM, Richard Elling wrote:
> much fun for people who want to hide costs. For example, some bright
> manager decided that they should charge $100/month/port for ethernet
> drops. So now, instead of having a centralized, managed network with
> well defined port mappings, e
Justin Stringfellow wrote:
>> Raw storage space is cheap. Managing the data is what is expensive.
>>
>
> Not for my customer. Internal accounting means that the storage team gets
> paid for each allocated GB on a monthly basis. They have
> stacks of IO bandwidth and CPU cycles to spare outs
[EMAIL PROTECTED] wrote:
>
> [EMAIL PROTECTED] wrote on 07/08/2008 03:08:26 AM:
>
>>
>>> Does anyone know a tool that can look over a dataset and give
>>> duplication statistics? I'm not looking for something incredibly
>>> efficient but I'd like to know how much it would actually benefit our
>>
[EMAIL PROTECTED] wrote on 07/08/2008 03:08:26 AM:
>
>
> > Does anyone know a tool that can look over a dataset and give
> > duplication statistics? I'm not looking for something incredibly
> > efficient but I'd like to know how much it would actually benefit our
>
> Check out the following blog
> Even better would be using the ZFS block checksums (assuming we are only
> summing the data, not its position or time :)...
>
> Then we could have two files that have 90% the same blocks, and still
> get some dedup value... ;)
Yes, but you will need to add some sort of highly collision resista
Pete Hartman wrote:
> I'm curious which enclosures you've had problems with?
>
> Mine are both Maxtor One Touch; the 750 is slightly different in that it has
> a FireWire port as well as USB.
I've had VERY bad experiences with the Maxtor One Touch and ZFS. To the
point that we gave up trying t
I'm curious which enclosures you've had problems with?
Mine are both Maxtor One Touch; the 750 is slightly different in that it has a
FireWire port as well as USB.
Just going to make a quick comment here. It's a good point about wanting
backup software to support this; we're a much smaller company, but it's already
more difficult to manage the storage needed for backups than our live storage.
However, we're actively planning that over the next 12 months, Z
Ted Carr wrote:
> Hello All,
>
> Is there a way I can remove my old boot environments? Is it as simple as
> performing a 'zfs destroy' on the older entries, followed by removing the
> entry from the menu.lst?? I have been searching, but have not found
> anything... Any help would be much app
Hello All,
Is there a way I can remove my old boot environments? Is it as simple as
performing a 'zfs destroy' on the older entries, followed by removing the entry
from the menu.lst?? I have been searching, but have not found anything... Any
help would be much appreciated!!
Here is what my
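On releases that ship beadm(1M), it is generally safer to let it do the work
than to run zfs destroy and edit menu.lst by hand, since it knows about both
the datasets and the GRUB entries. A minimal sketch; the BE name is a
placeholder:

    # list boot environments and see which are active
    beadm list

    # destroy an old, inactive BE; this should also drop its menu.lst entry
    beadm destroy opensolaris-1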
Enda O'Connor wrote:
> Hi
> S10 U5 has version 4; the latest in OpenSolaris is version 10
>
> see
>
> http://opensolaris.org/os/community/zfs/version/10/
Actually, as of yesterday, 11 is the latest in the source tree. All going
well, that will be snv_94.
http://opensolaris.org/os/community/zfs/versi
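The supported and current versions are easy to check from the command line;
the pool name below is a placeholder:

    # list every on-disk version this build supports
    zpool upgrade -v

    # see what version an existing pool is at
    zpool get version tank

    # upgrade the pool (one-way: older releases can no longer import it)
    zpool upgrade tank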
James,
May I ask what kind of USB enclosures and hubs you are using? I've had some
very bad experiences over the past month with not-so-cheap enclosures.
Regarding eSATA, I found the following chipsets on the SHCL. Any others you can
recommend?
Silicon Image 3112A
Intel S5400
Intel S5100
Silicon Image
> Does anyone know a tool that can look over a dataset and give
> duplication statistics? I'm not looking for something incredibly
> efficient but I'd like to know how much it would actually benefit our
Check out the following blog:
http://blogs.sun.com/erickustarz/entry/how_dedupalicious_
Hi Gilberto,
I bought a Micro Memory card too, so I'm very likely going to end up in the
same boat. I saw Neil Perrin's blog about the MM-5425 card, found that Vmetro
don't seem to want to sell them, but then last week spotted five of those
cards on e-bay, so I snapped them up.
I'm still wa
> Raw storage space is cheap. Managing the data is what is expensive.
Not for my customer. Internal accounting means that the storage team gets paid
for each allocated GB on a monthly basis. They have
stacks of IO bandwidth and CPU cycles to spare outside of their daily busy
period. I can't t
Matt Harrison <genestate.com> writes:
>
> Aah, excellent, just did an export/import and it's now showing the
> expected capacity increase. Thanks for that, I should've at least tried
> a reboot :)
More recent OpenSolaris builds don't even need the export/import anymore when
expanding a raidz thi
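For builds that still need the manual step, the sequence is just a re-open of
the pool; "tank" is a placeholder name:

    # after the last (larger) disk has finished resilvering
    zpool export tank
    zpool import tank

    # the extra capacity should now be visible
    zpool list tank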