On 23.01.2013 00:22, Artem Belevich wrote:
On Mon, Jan 21, 2013 at 1:06 PM, Pawel Jakub Dawidek wrote:
On Fri, Jan 18, 2013 at 08:26:04AM -0800, m...@freebsd.org wrote:
Should it be set to a larger initial value based on min(physical,KVM) space
available?
It needs to be smaller than the
> > never underestimate human stupidity (mine in this case), nor that of the boot code.
> > pmbr will load the whole partition, which was 1M, instead of the size of
> > gptboot :-(
> >
> > reducing the size of the slice/partition fixed the issue.
>
> pmbr doesn't have room to be but so smart. It can't
On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote:
> Hi,
>
> Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm
aware of the ichwd driver, but that depends on a WDT being available in the
hardware. Even when it is available, the BIOS needs to support a mechanism to
trigg
On Wednesday, January 23, 2013 2:25:00 am Mikolaj Golub wrote:
> On Tue, Jan 22, 2013 at 02:17:39PM -0800, Stanislav Sedov wrote:
> >
> > On Jan 22, 2013, at 1:48 PM, John Baldwin wrote:
> > >
> > > Well, you could make procstat open a kvm handle in both cases (open a "live" handle
On 1/23/2013 7:25 AM, John Baldwin wrote:
On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote:
Hi,
Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm
aware of the ichwd driver, but that depends on a WDT being available in the
hardware. Even when it is available, the BIOS
On Wed, 2013-01-23 at 08:47 -0800, Matthew Jacob wrote:
> On 1/23/2013 7:25 AM, John Baldwin wrote:
> > On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote:
> >> Hi,
> >>
> >> Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm
> >> aware of the ichwd driver, but that depe
On 23 January 2013 11:57, Ian Lepore wrote:
>
> But adding a real hardware watchdog that fires on a slightly longer
> timeout than the NMI watchdog gives you the best of everything: you get
> information if it's possible to produce it, and you get a real hardware
> reset shortly thereafter if prod
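For the hardware half of that scheme, the userland side can be armed through FreeBSD's watchdog(4) interface, which is essentially what watchdogd(8) does. The following is a minimal sketch, assuming the WDIOCPATPAT ioctl and the WD_ACTIVE / WD_TO_* constants from <sys/watchdog.h>; the NMI/diagnostic half lives in the kernel or the hardware and is not shown.

/*
 * Arm a hardware watchdog with a generous timeout and keep patting it.
 * Sketch of the watchdog(4) userland interface only; the timeout should
 * be chosen longer than whatever NMI/diagnostic watchdog is in use.
 */
#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/watchdog.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
	u_int u = WD_ACTIVE | WD_TO_16SEC;	/* power-of-two timeout */
	int fd;

	fd = open("/dev/" _PATH_WATCHDOG, O_RDWR);	/* /dev/fido */
	if (fd == -1)
		err(1, "open watchdog");

	for (;;) {
		/*
		 * Re-arm well inside the timeout; if this process ever
		 * stops, the hardware resets the box.
		 */
		if (ioctl(fd, WDIOCPATPAT, &u) == -1)
			err(1, "WDIOCPATPAT");
		sleep(5);
	}
}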
While RAID-Z is already a king of bad performance,
I don't believe RAID-Z is any worse than RAID5. Do you have any actual
measurements to back up your claim?
it is clearly described even in the ZFS papers. On both reads and writes it
gives single-drive random I/O performance.
This is because RAID-Z spreads each block out over all disks, whereas RAID5
(as it is typically configured) puts each block on only one disk. So to
read a block from RAID-Z, all data disks must be involved, vs. for RAID5
only one disk needs to have its head moved.
For other workloads (especially
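A back-of-the-envelope model of the layout difference described above. The per-disk IOPS figure and the group width are assumptions chosen purely for illustration, not measurements.

/*
 * Random-read model for one 5-disk group.  RAID-Z stripes every block
 * across all data disks, so a random read busies the whole vdev; RAID5
 * (as typically configured) keeps a block on one disk, so independent
 * reads can be serviced by different spindles in parallel.
 */
#include <stdio.h>

int
main(void)
{
	const double disk_iops = 150.0;	/* assumed random IOPS per spindle */
	const int disks = 5;		/* 4 data + 1 parity */

	double raidz_reads = disk_iops;		/* ~one disk of random IOPS */
	double raid5_reads = disk_iops * disks;	/* all spindles usable for reads */

	printf("RAID-Z random reads: ~%.0f IOPS\n", raidz_reads);
	printf("RAID5  random reads: ~%.0f IOPS\n", raid5_reads);
	return (0);
}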
On 23 Jan 2013 20:23, "Wojciech Puchar" wrote:
>>>
>>> While RAID-Z is already a king of bad performance,
>>
>>
>> I don't believe RAID-Z is any worse than RAID5. Do you have any actual
>> measurements to back up your claim?
>
>
> it is clearly described even in ZFS papers. Both on reads and writ
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
So we have to take your word for it?
Provide a link if you're going to make assertions, or they're no more than
your own opinion.
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/papers on
On Wed, Jan 23, 2013 at 12:22 PM, Wojciech Puchar wrote:
>>> While RAID-Z is already a king of bad performance,
>>
>>
>> I don't believe RAID-Z is any worse than RAID5. Do you have any actual
>> measurements to back up your claim?
>
>
> it is clearly described even in ZFS papers. Both on reads an
On Wed, Jan 23, 2013 at 1:09 PM, Mark Felder wrote:
> On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
>
>>
>> So we have to take your word for it?
>> Provide a link if you're going to make assertions, or they're no more than
>> your own opinion.
>
>
> I've heard this same thing -- every vde
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/papers on it though.
read original ZFS papers.
gives single drive random I/O performance.
For reads - true. For writes it probably behaves better than RAID5.
Yes, because as with reads it gives single-drive performance. Small writes
on RAID5 give lower than single-disk performance.
If you need higher performance, build your pool out
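The small-write claim is the classic RAID5 read-modify-write penalty: a partial-stripe update costs a read of the old data, a read of the old parity and two writes, i.e. four disk I/Os per logical write. The sketch below puts rough numbers on it; whether a RAID5 group ends up above or below a single disk depends on the group width and on any write-back cache, so treat it as an illustration, not a benchmark.

/*
 * Small random writes, back of the envelope.  RAID5 pays 4 disk I/Os
 * per partial-stripe write; RAID-Z writes each block as one full
 * (variable-width) stripe, busying every disk in the vdev at once.
 * All numbers are assumed for illustration.
 */
#include <stdio.h>

int
main(void)
{
	const double disk_iops = 150.0;	/* assumed random IOPS per spindle */
	const int disks = 5;

	double single = disk_iops;			/* one plain disk */
	double raid5  = disks * disk_iops / 4.0;	/* read-modify-write */
	double raidz  = disk_iops;			/* one full stripe per block */

	printf("single disk:             ~%.0f writes/s\n", single);
	printf("RAID5, %d disks (r-m-w):  ~%.0f writes/s\n", disks, raid5);
	printf("RAID-Z, %d disks:         ~%.0f writes/s\n", disks, raidz);
	return (0);
}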
On 23 January 2013 21:24, Wojciech Puchar wrote:
>>
>> I've heard this same thing -- every vdev == 1 drive in performance. I've
>> never seen any proof/papers on it though.
>
> read original ZFS papers.
No, you are making the assertion, provide a link.
Chris
"1 drive in performance" only applies to number of random i/o
operations vdev can perform. You still get increased throughput. I.e.
5-drive RAIDZ will have 4x bandwidth of individual disks in vdev, but
unless your work is serving movies it doesn't matter.
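That distinction between random IOPS and sequential bandwidth, together with the usual advice of building the pool from several vdevs, can be put into the same kind of rough model. All figures below are assumptions for illustration only.

/*
 * ZFS stripes across top-level vdevs, so random IOPS scale with the
 * number of vdevs (each contributing roughly one disk's worth), while
 * sequential bandwidth scales with the number of data disks.
 */
#include <stdio.h>

int
main(void)
{
	const double disk_iops = 150.0;	/* assumed per-disk random IOPS */
	const double disk_mbs = 120.0;	/* assumed per-disk streaming MB/s */
	const int disks_per_vdev = 5;	/* RAID-Z1: 4 data + 1 parity */

	for (int vdevs = 1; vdevs <= 4; vdevs++) {
		double iops = vdevs * disk_iops;
		double mbs = vdevs * (disks_per_vdev - 1) * disk_mbs;
		printf("%d x %d-disk RAID-Z: ~%4.0f random IOPS, ~%5.0f MB/s sequential\n",
		    vdevs, disks_per_vdev, iops, mbs);
	}
	return (0);
}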
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
>
> So we have to take your word for it?
> Provide a link if you're going to make assertions, or they're no more than
> your own opinion.
I've heard this same thing -- every vdev == 1 drive in performance. I've
never seen any proof/pape
On Wed, Jan 23, 2013 at 1:25 PM, Wojciech Puchar wrote:
>>> gives single drive random I/O performance.
>>
>>
>> For reads - true. For writes it probably behaves better than RAID5
>
>
> yes, because as with reads it gives single drive performance. small writes
> on RAID5 gives lower than single d
On Wed, Jan 23, 2013 at 11:31:43AM -0500, John Baldwin wrote:
> On Wednesday, January 23, 2013 2:25:00 am Mikolaj Golub wrote:
> > IMHO, after adding procstat_getargv and procstat_getenvv, the usage of
> > kvm_getargv() and kvm_getenvv() (at least in the new code) may be
> > deprecated. As this is
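For reference, the libprocstat side of that suggestion looks roughly like the sketch below. It is a minimal example assuming the procstat_open_sysctl() / procstat_getprocs() / procstat_getargv() interfaces under discussion; see libprocstat(3) for the authoritative signatures.

/*
 * Print the argument vector of a pid via libprocstat instead of the
 * older kvm_getargv() route.  Build with -lprocstat.  Sketch only.
 */
#include <sys/param.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/user.h>

#include <err.h>
#include <libprocstat.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	struct procstat *ps;
	struct kinfo_proc *kp;
	unsigned int cnt;
	char **args;

	if (argc != 2)
		errx(1, "usage: %s pid", argv[0]);

	ps = procstat_open_sysctl();		/* "live" handle */
	if (ps == NULL)
		errx(1, "procstat_open_sysctl");

	kp = procstat_getprocs(ps, KERN_PROC_PID, atoi(argv[1]), &cnt);
	if (kp == NULL || cnt == 0)
		errx(1, "no such process");

	args = procstat_getargv(ps, kp, 0);	/* 0: no length limit, as with kvm_getargv() */
	if (args != NULL) {
		for (int i = 0; args[i] != NULL; i++)
			printf("%s%s", i > 0 ? " " : "", args[i]);
		printf("\n");
		procstat_freeargv(ps);
	}

	procstat_freeprocs(ps, kp);
	procstat_close(ps);
	return (0);
}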
On Jan 23, 2013, at 11:09 PM, Mark Felder wrote:
> On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
>
>>
>> So we have to take your word for it?
>> Provide a link if you're going to make assertions, or they're no more than
>> your own opinion.
>
> I've heard this same thing -- every vde
On 23 Jan 2013 21:45, "Michel Talon" wrote:
>
> On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote:
>
> >
> > So we have to take your word for it?
> > Provide a link if you're going to make assertions, or they're no more than
> > your own opinion.
>
> I've heard this same thing -- every vde
associated with mirroring.
Thanks for the link, but I could have done that; I am attempting to
explain to Wojciech that his habit of making bold assertions and
as you can see it is not a bold assertion; you just use something without
even reading its docs.
Not to mention doing any more resea
if you need normal performance, use gmirror and UFS
I've no objection. If it works for you -- go for it.
both "works". For todays trend of solving everything by more hardware ZFS
may even have "enough" performance.
But still it is dangerous for a reasons i explained, as well as it
promot
On 01/23/13 14:27, Wojciech Puchar wrote:
>>
>
> Both "work". For today's trend of solving everything with more hardware,
> ZFS may even have "enough" performance.
>
> But still it is dangerous for the reasons I explained, as well as it
> promotes bad setups and layouts like making a single filesystem out