On Apr 3, 2010, at 5:47 PM, Ragnar Sundblad wrote:
> On 2 apr 2010, at 22.47, Neil Perrin wrote:
>
>>> Suppose there is an application which sometimes does sync writes, and
>>> sometimes async writes. In fact, to make it easier, suppose two processes
>>> open two files, one of which always writes
On Apr 1, 2010, at 9:41 PM, Abdullah Al-Dahlawi wrote:
> Hi all
>
> I ran a workload that reads & writes within 10 files, each 256 MB, i.e.
> (10 * 256 MB = 2.5 GB total dataset size).
>
> I have set the ARC max size to 1 GB in the /etc/system file.
>
> In the worst case, let us assume that the
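For anyone reproducing the setup described above: capping the ARC is normally done with a kernel tunable in /etc/system. A minimal sketch, assuming common Solaris/OpenSolaris usage, with the 1 GB value written out in bytes:

  # /etc/system -- cap the ZFS ARC at 1 GB (value in bytes); takes effect after a reboot
  set zfs:zfs_arc_max = 1073741824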
On Apr 3, 2010, at 8:00 PM, Tim Cook wrote:
> On Sat, Apr 3, 2010 at 9:52 PM, Richard Elling
> wrote:
> On Apr 3, 2010, at 5:56 PM, Tim Cook wrote:
> >
> > On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
> >> Your experience is exactly why I suggested ZFS start doing some "right
> >> sizing" if
On Apr 1, 2010, at 9:34 PM, Roy Sigurd Karlsbakk wrote:
>> You can estimate the amount of disk space needed for the deduplication
>> table
>> and the expected deduplication ratio by using "zdb -S poolname" on
>> your existing
>> pool.
>
> This is all good, but it doesn't work too well for planni
On Sat, Apr 3, 2010 at 9:52 PM, Richard Elling wrote:
> On Apr 3, 2010, at 5:56 PM, Tim Cook wrote:
> >
> > On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
> >> Your experience is exactly why I suggested ZFS start doing some "right
> sizing" if you will. Chop off a bit from the end of any disk s
On Apr 2, 2010, at 2:05 PM, Edward Ned Harvey wrote:
> Momentarily, I will begin scouring the omniscient interweb for information,
> but I’d like to know a little bit of what people would say here. The
> question is to slice, or not to slice, disks before using them in a zpool.
>
> One reason
On Apr 3, 2010, at 5:56 PM, Tim Cook wrote:
>
> On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
>> Your experience is exactly why I suggested ZFS start doing some "right
>> sizing" if you will. Chop off a bit from the end of any disk so that we're
>> guaranteed to be able to replace drives fro
Hello,
Maybe this question should be put on another list, but since there
are a lot of people here using all kinds of HBAs, it may fit here
anyway.
I have an X4150 running snv_134. It was shipped with a "STK RAID INT"
Adaptec/Intel/StorageTek/Sun SAS HBA.
When running the card in copyback wr
On Sat, Apr 3, 2010 at 7:50 PM, Tim Cook wrote:
>
>
> On Sat, Apr 3, 2010 at 6:53 PM, Robert Milkowski wrote:
>
>> On 03/04/2010 19:24, Tim Cook wrote:
>>
>>
>>
>> On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey <
>> guacam...@nedharvey.com> wrote:
>>
>>> Momentarily, I will begin scouring t
On Sat, Apr 3, 2010 at 6:53 PM, Robert Milkowski wrote:
> On 03/04/2010 19:24, Tim Cook wrote:
>
>
>
> On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey wrote:
>
>> Momentarily, I will begin scouring the omniscient interweb for
>> information, but I’d like to know a little bit of what peopl
On 2 apr 2010, at 22.47, Neil Perrin wrote:
>> Suppose there is an application which sometimes does sync writes, and
>> sometimes async writes. In fact, to make it easier, suppose two processes
>> open two files, one of which always writes asynchronously, and one of which
>> always writes synchr
On 1 apr 2010, at 06.15, Stuart Anderson wrote:
> Assuming you are also using a PCI LSI HBA from Sun that is managed with
> a utility called /opt/StorMan/arcconf and reports itself as the amazingly
> informative model number "Sun STK RAID INT", what worked for me was to run
> arcconf delete (to d
On 03/04/2010 19:24, Tim Cook wrote:
On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey
<guacam...@nedharvey.com> wrote:
Momentarily, I will begin scouring the omniscient interweb for
information, but I’d like to know a little bit of what people
would say here. The questio
> Well, I did look at it but at that time there was no Solaris support yet.
> Right now it
> seems there is only a beta driver?
Correct, we just completed functional validation of the OpenSolaris driver.
Our focus has now turned to performance tuning and benchmarking. We expect
to formally
Hi Tomas
Thanks for the clarification. If I understood you right, you mean that 6
GB (including my 2.5 GB of files) has been written to the device and still
occupies space on the device!
This is fair enough for this case, since most of my files ended up in the
L2ARC. Great ...
But this brings two
Hi Al,
> Have you tried the DDRdrive from Christopher George?
> Looks to me like a much better fit for your application than the F20?
>
> It would not hurt to check it out. Looks to me like
> you need a product with low *latency* - and a RAM based cache
> would be a much better performer than
On 02 April, 2010 - Abdullah Al-Dahlawi sent me these 128K bytes:
> Hi all
>
> I ran a workload that reads & writes within 10 files, each 256 MB, i.e.
> (10 * 256 MB = 2.5 GB total dataset size).
>
> I have set the ARC max size to 1 GB in the /etc/system file.
>
> In the worst case, let us assume
On Fri, Apr 2, 2010 at 4:05 PM, Edward Ned Harvey
wrote:
> Momentarily, I will begin scouring the omniscient interweb for
> information, but I’d like to know a little bit of what people would say
> here. The question is to slice, or not to slice, disks before using them in
> a zpool.
>
>
>
> One
On Sat, 3 Apr 2010, Edward Ned Harvey wrote:
I would return the drive to get a bigger one before doing something as
drastic as that. There might have been a hiccup in the production line,
and that's not your fault.
Yeah, but I already have 2 of the replacement disks, both doing the same
thing.
> Your original zpool status says that this pool was last accessed on
> another system, which I believe is what caused the pool to fail,
> particularly if it was accessed simultaneously from two systems.
The message "last accessed on another system" is the normal behavior if the
pool is ungrace
> I would return the drive to get a bigger one before doing something as
> drastic as that. There might have been a hiccup in the production line,
> and that's not your fault.
Yeah, but I already have 2 of the replacement disks, both doing the same
thing. One has a firmware newer than my old disk
> On Apr 2, 2010, at 2:29 PM, Edward Ned Harvey wrote:
> > I've also heard that the risk for unexpected failure of your pool is
> higher if/when you reach 100% capacity. I've heard that you should
> always create a small ZFS filesystem within a pool, and give it some
> reserved space, along with t
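The reserved-filesystem trick mentioned above is usually done with a reservation property; a rough sketch, with pool/dataset names and size chosen only for illustration:

  # A small filesystem whose only job is to hold back space so the pool
  # can never actually reach 100% full.
  zfs create tank/reserved
  zfs set reservation=10G tank/reserved
  # In an emergency the space can be released again:
  #   zfs set reservation=none tank/reserved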
> Oh, I managed to find a really good answer to this question. Several
> sources all say to do precisely the same procedure, and when I did it
> on a test system, it worked perfectly. Simple and easy to repeat. So I
> think this is the gospel method to create the slices, if you're going to
>
>> And finally, if anyone has experience doing this, and process
>> recommendations? That is, my next task is to go read documentation
>> again, to refresh my memory from years ago, about the difference
>> between format, partition, label, fdisk, because those terms
>> don't have the same
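For readers unfamiliar with the terminology in the quote above: on Solaris all of these steps live inside the format(1M) utility. A hedged sketch of the usual sequence (prompts abbreviated, x86 assumed):

  format                # pick the disk; then at the format> prompt:
  format> fdisk         # create/adjust the x86 (FDISK) partition table
  format> partition     # define the slices (s0..s7) inside the Solaris partition
  format> label         # write the new VTOC/EFI label back to the disk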
- "Edward Ned Harvey" skrev:
> > What build were you running? The should have been addressed by
> > CR6844090
> > that went into build 117.
>
> I'm running solaris, but that's irrelevant. The storagetek array
> controller
> itself reports the new disk as infinitesimally smaller than the one
> > One reason to slice comes from recent personal experience. One disk
> > of
> > a mirror dies. Replaced under contract with an identical disk. Same
> > model number, same firmware. Yet when it's plugged into the system,
> > for an unknown reason, it appears 0.001 Gb smaller than the old disk,
> >
Momentarily, I will begin scouring the omniscient interweb for information, but
I'd like to know a little bit of what people would say here. The question is
to slice, or not to slice, disks before using them in a zpool.
One reason to slice comes from recent personal experience. One disk of a
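For readers new to the thread, a rough illustration of the two alternatives being weighed, with made-up device names (a sketch, not a recommendation):

  # Whole-disk vdevs: hand ZFS the bare devices and let it label them.
  zpool create tank mirror c1t2d0 c1t3d0

  # Sliced: first create a slice (e.g. s0) slightly smaller than the
  # disk with format/partition, then build the pool on the slices, so a
  # marginally smaller replacement disk can still hold the slice.
  zpool create tank mirror c1t2d0s0 c1t3d0s0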
On 04/02/10 08:24, Edward Ned Harvey wrote:
The purpose of the ZIL is to act like a fast "log" for synchronous
writes. It allows the system to quickly confirm a synchronous write
request with the minimum amount of work.
Bob and Casper and some others clearly know a lot here. But I'm he
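As background to the ZIL discussion above: a common way to speed up those synchronous confirmations is a dedicated log device. A minimal sketch with hypothetical names:

  # Put the ZIL on a fast dedicated "slog" device so sync writes can be
  # acknowledged without waiting on the main pool disks.
  zpool add tank log c3t0d0
  # Observe the effect on pool I/O:
  zpool iostat -v tank 5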
Patrick,
I'm happy that you were able to recover your pool.
Your original zpool status says that this pool was last accessed on
another system, which I believe is what caused the pool to fail,
particularly if it was accessed simultaneously from two systems.
It is important that the cause of
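For reference, the usual sequence when a pool legitimately moves between hosts, plus the forced variant for the failure case being discussed (pool name is hypothetical):

  zpool export tank       # on the old host, before moving the disks
  zpool import tank       # on the new host
  # If the pool was never exported (for example the old host died), the
  # import must be forced -- only after confirming no other host still
  # has the pool imported:
  zpool import -f tank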
Have not the ZFS data corruption researchers been in touch with Jeff Bonwick
and the ZFS team?
On 02/04/2010 05:45, Roy Sigurd Karlsbakk wrote:
Hi all
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide I read
"Avoid creating a RAIDZ, RAIDZ-2, RAIDZ-3, or a mirrored configuration with one
logical device of 40+ devices. See the sections below for examples of r
> The only way to guarantee consistency in the snapshot is to always
> (regardless of ZIL enabled/disabled) give priority for sync writes to get
> into the TXG before async writes.
>
> If the OS does give priority for sync writes going into TXGs before async
> writes (even with ZIL disabled), then af
Hi all
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide I
read
"Avoid creating a RAIDZ, RAIDZ-2, RAIDZ-3, or a mirrored configuration with one
logical device of 40+ devices. See the sections below for examples of redundant
configurations."
What do they mean by th
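The usual reading of that guideline is to keep each raidz vdev fairly narrow and let ZFS stripe across several of them, rather than building one 40+ disk vdev. A rough sketch with hypothetical device names:

  # 24 disks as three 8-disk raidz2 vdevs instead of one wide vdev;
  # ZFS stripes writes across the three vdevs.
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0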
> You can estimate the amount of disk space needed for the deduplication
> table
> and the expected deduplication ratio by using "zdb -S poolname" on
> your existing
> pool.
This is all good, but it doesn't work too well for planning. Is there a rule of
thumb I can use for a general overview? Sa
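For reference, the measurement step being discussed looks roughly like this (pool name hypothetical; the command can take a long time and a lot of memory on large pools):

  # Simulate dedup on an existing pool: prints a histogram of blocks by
  # reference count and an estimated overall dedup ratio.
  zdb -S tank
  # The number of unique (allocated) blocks in that histogram is what
  # determines how large the dedup table (DDT) would be.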
> > I might add some swap I guess. I will have to try it on another
> > machine with more RAM and less pool, and see how the size of the
> > zdb image compares to the calculated size of DDT needed. So long as
> > zdb is the same or a little smaller than the DDT it predicts, the
> > tool's s