Re: kmem_map auto-sizing and size dependencies

2013-01-23 Thread Andre Oppermann
On 23.01.2013 00:22, Artem Belevich wrote: On Mon, Jan 21, 2013 at 1:06 PM, Pawel Jakub Dawidek wrote: On Fri, Jan 18, 2013 at 08:26:04AM -0800, m...@freebsd.org wrote: Should it be set to a larger initial value based on min(physical,KVM) space available? It needs to be smaller than the
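A rough way to look at the quantities under discussion from userland is to compare hw.physmem against the vm.kmem_size tunables via sysctl. This is only a sketch of the min(physical, KVM) comparison, not the kernel's own auto-sizing code, and the choice of sysctls printed here is my own assumption:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: print the values a min(physical, KVM)-style heuristic would
 * compare.  Userland illustration only, not the kernel startup logic. */
static unsigned long
get_ulong(const char *name)
{
    unsigned long val;
    size_t len = sizeof(val);

    if (sysctlbyname(name, &val, &len, NULL, 0) == -1) {
        perror(name);
        exit(1);
    }
    return (val);
}

int
main(void)
{
    unsigned long physmem = get_ulong("hw.physmem");
    unsigned long kmem_size = get_ulong("vm.kmem_size");
    unsigned long kmem_max = get_ulong("vm.kmem_size_max");

    printf("hw.physmem:       %lu\n", physmem);
    printf("vm.kmem_size:     %lu\n", kmem_size);
    printf("vm.kmem_size_max: %lu\n", kmem_max);
    printf("min(physmem, kmem_size_max): %lu\n",
        physmem < kmem_max ? physmem : kmem_max);
    return (0);
}

The same numbers are of course visible directly with sysctl(8) on a running system.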

Re: solved: pmbr: Boot loader too large

2013-01-23 Thread Daniel Braniss
> > never underestimate the human stupidity (mine in this case) nor of the boot. > > pmbr will load the whole partition, which was 1M, instead of the size of > > gptboot :-( > > > > reducing the size of the slice/partition fixed the issue. > > pmbr doesn't have room to be but so smart. It can't

Re: NMI watchdog functionality on Freebsd

2013-01-23 Thread John Baldwin
On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote: > Hi, > > Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm aware of the ichwd driver, but that depends on a WDT being available in the hardware. Even when it is available, the BIOS needs to support a mechanism to trigg

Re: libprocstat(3): retrieve process command line args and environment

2013-01-23 Thread John Baldwin
On Wednesday, January 23, 2013 2:25:00 am Mikolaj Golub wrote: > On Tue, Jan 22, 2013 at 02:17:39PM -0800, Stanislav Sedov wrote: > > > > On Jan 22, 2013, at 1:48 PM, John Baldwin wrote: > > > > > > Well, you could make procstat open a kvm handle in both cases (open a > > > "live" > > > handle
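For reference, the "open a kvm handle in both cases" approach comes down to kvm_openfiles(3): passing NULL for the executable and core file paths gives a handle on the running system, while passing a crash dump path gives the post-mortem case. A minimal live-system sketch (error handling trimmed; not necessarily what the patch under review does):

#include <sys/types.h>
#include <fcntl.h>
#include <kvm.h>
#include <limits.h>
#include <stdio.h>

int
main(void)
{
    char errbuf[_POSIX2_LINE_MAX];
    kvm_t *kd;

    /* NULL execfile/corefile/swapfile means "the live system". */
    kd = kvm_openfiles(NULL, NULL, NULL, O_RDONLY, errbuf);
    if (kd == NULL) {
        fprintf(stderr, "kvm_openfiles: %s\n", errbuf);
        return (1);
    }
    /* ... kvm_getprocs()/kvm_getargv()/kvm_getenvv() would go here ... */
    kvm_close(kd);
    return (0);
}

Built with -lkvm.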

Re: NMI watchdog functionality on Freebsd

2013-01-23 Thread Matthew Jacob
On 1/23/2013 7:25 AM, John Baldwin wrote: On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote: Hi, Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm aware of the ichwd driver, but that depends on a WDT being available in the hardware. Even when it is available, BIOS

Re: NMI watchdog functionality on Freebsd

2013-01-23 Thread Ian Lepore
On Wed, 2013-01-23 at 08:47 -0800, Matthew Jacob wrote: > On 1/23/2013 7:25 AM, John Baldwin wrote: > > On Tuesday, January 22, 2013 5:40:55 pm Sushanth Rai wrote: > >> Hi, > >> > >> Does FreeBSD have some functionality similar to Linux's NMI watchdog? I'm > > aware of the ichwd driver, but that depe

Re: NMI watchdog functionality on Freebsd

2013-01-23 Thread Ed Maste
On 23 January 2013 11:57, Ian Lepore wrote: > > But adding a real hardware watchdog that fires on a slightly longer > timeout than the NMI watchdog gives you the best of everything: you get > information if it's possible to produce it, and you get a real hardware > reset shortly thereafter if prod
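The hardware-watchdog side described here is normally driven by watchdogd(8) over the watchdog(4) interface (backed by ichwd(4) or similar). As a rough sketch of that pat loop — not a substitute for watchdogd, and with the 16-second timeout picked arbitrarily:

#include <sys/types.h>
#include <sys/ioctl.h>
#include <sys/watchdog.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    /* Timeout is encoded as log2(nanoseconds); WD_TO_16SEC ~= 16 s. */
    u_int timeout = WD_ACTIVE | WD_TO_16SEC;
    int fd;

    fd = open("/dev/" _PATH_WATCHDOG, O_RDWR);
    if (fd == -1)
        err(1, "open(/dev/%s)", _PATH_WATCHDOG);

    for (;;) {
        /* Re-arm the timer; if this loop ever stalls past the
         * timeout, the hardware resets the machine. */
        if (ioctl(fd, WDIOCPATPAT, &timeout) == -1)
            err(1, "WDIOCPATPAT");
        sleep(5);
    }
}

This complements rather than replaces the NMI-style diagnostics discussed above: the hardware timer guarantees the reset, while any NMI/debugger hook gets a chance to capture state first.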

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
While RAID-Z is already a king of bad performance, I don't believe RAID-Z is any worse than RAID5. Do you have any actual measurements to back up your claim? it is clearly described even in ZFS papers. Both on reads and writes it gives single drive random I/O performance.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
This is because RAID-Z spreads each block out over all disks, whereas RAID5 (as it is typically configured) puts each block on only one disk. So to read a block from RAID-Z, all data disks must be involved, vs. for RAID5 only one disk needs to have its head moved. For other workloads (especially

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 Jan 2013 20:23, "Wojciech Puchar" wrote: >>> >>> While RAID-Z is already a king of bad performance, >> >> >> I don't believe RAID-Z is any worse than RAID5. Do you have any actual >> measurements to back up your claim? > > > it is clearly described even in ZFS papers. Both on reads and writ

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Mark Felder
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: So we have to take your word for it? Provide a link if you're going to make assertions, or they're no more than your own opinion. I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/papers on

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 12:22 PM, Wojciech Puchar wrote: >>> While RAID-Z is already a king of bad performance, >> >> >> I don't believe RAID-Z is any worse than RAID5. Do you have any actual >> measurements to back up your claim? > > > it is clearly described even in ZFS papers. Both on reads an

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 1:09 PM, Mark Felder wrote: > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > >> >> So we have to take your word for it? >> Provide a link if you're going to make assertions, or they're no more than >> your own opinion. > > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/papers on it though. read original ZFS papers.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
gives single drive random I/O performance. For reads - true. For writes it probably behaves better than RAID5 yes, because as with reads it gives single drive performance. Small writes on RAID5 give lower than single disk performance. If you need higher performance, build your pool out
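The small-write claim is the classic RAID5 read-modify-write penalty; counted naively (no controller cache, no full-stripe writes, I being one disk's random IOPS) it works out as:

\[
P_{\text{new}} = P_{\text{old}} \oplus D_{\text{old}} \oplus D_{\text{new}}
\;\Rightarrow\;
\text{one small write} = 2\ \text{reads} + 2\ \text{writes} = 4\ \text{disk operations}
\]
\[
\text{RAID5 small-write IOPS} \approx \frac{N \cdot I}{4}
\qquad (\text{e.g. } N = 3,\ I = 100 \Rightarrow \approx 75 < I)
\]

RAID-Z sidesteps the read-modify-write by always writing full (variable-width) stripes, at the cost of spreading each logical block across all data disks — which is why its random-read IOPS stay close to a single disk's.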

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 January 2013 21:24, Wojciech Puchar wrote: >> >> I've heard this same thing -- every vdev == 1 drive in performance. I've >> never seen any proof/papers on it though. > > read original ZFS papers. No, you are making the assertion, provide a link. Chris

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
"1 drive in performance" only applies to number of random i/o operations vdev can perform. You still get increased throughput. I.e. 5-drive RAIDZ will have 4x bandwidth of individual disks in vdev, but unless your work is serving movies it doesn't matter.

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Michel Talon
On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > > So we have to take your word for it? > Provide a link if you're going to make assertions, or they're no more > than > your own opinion. I've heard this same thing -- every vdev == 1 drive in performance. I've never seen any proof/pape

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Artem Belevich
On Wed, Jan 23, 2013 at 1:25 PM, Wojciech Puchar wrote: >>> gives single drive random I/O performance. >> >> >> For reads - true. For writes it probably behaves better than RAID5 > > > yes, because as with reads it gives single drive performance. small writes > on RAID5 gives lower than single d

Re: libprocstat(3): retrieve process command line args and environment

2013-01-23 Thread Mikolaj Golub
On Wed, Jan 23, 2013 at 11:31:43AM -0500, John Baldwin wrote: > On Wednesday, January 23, 2013 2:25:00 am Mikolaj Golub wrote: > > IMHO, after adding procstat_getargv and procstat_getenvv, the usage of > > kvm_getargv() and kvm_getenvv() (at least in the new code) may be > > deprecated. As this is
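A minimal consumer of the interface being discussed might look like the sketch below. It assumes the live-system case via procstat_open_sysctl() and the procstat_getargv() signature from libprocstat(3); core-dump handling and procstat_getenvv() are left out for brevity:

#include <sys/param.h>
#include <sys/queue.h>
#include <sys/socket.h>
#include <sys/sysctl.h>
#include <sys/user.h>

#include <err.h>
#include <libprocstat.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
    struct procstat *ps;
    struct kinfo_proc *kp;
    char **args, **p;
    unsigned int cnt;

    if (argc != 2)
        errx(1, "usage: %s pid", argv[0]);

    if ((ps = procstat_open_sysctl()) == NULL)
        errx(1, "procstat_open_sysctl");
    kp = procstat_getprocs(ps, KERN_PROC_PID, atoi(argv[1]), &cnt);
    if (kp == NULL || cnt == 0)
        errx(1, "procstat_getprocs");

    /* Third argument limits the number of characters fetched; 0 asks
     * for the library default (see libprocstat(3)). */
    args = procstat_getargv(ps, kp, 0);
    if (args != NULL)
        for (p = args; *p != NULL; p++)
            printf("%s\n", *p);

    procstat_freeargv(ps);
    procstat_freeprocs(ps, kp);
    procstat_close(ps);
    return (0);
}

Built with -lprocstat.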

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Nikolay Denev
On Jan 23, 2013, at 11:09 PM, Mark Felder wrote: > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > >> >> So we have to take your word for it? >> Provide a link if you're going to make assertions, or they're no more than >> your own opinion. > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Chris Rees
On 23 Jan 2013 21:45, "Michel Talon" wrote: > > On Wed, 23 Jan 2013 14:26:43 -0600, Chris Rees wrote: > > > > > So we have to take your word for it? > > Provide a link if you're going to make assertions, or they're no more > > than > > your own opinion. > > I've heard this same thing -- every vde

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
associated with mirroring. Thanks for the link, but I could have done that; I am attempting to explain to Wojciech that his habit of making bold assertions and as you can see it is not a bold assertion, you just use something without even reading its docs. Not to mention doing any more resea

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread Wojciech Puchar
even you need normal performance use gmirror and UFS I've no objection. If it works for you -- go for it. Both "work". For today's trend of solving everything with more hardware ZFS may even have "enough" performance. But still it is dangerous for the reasons I explained, as well as it promot

Re: ZFS regimen: scrub, scrub, scrub and scrub again.

2013-01-23 Thread matt
On 01/23/13 14:27, Wojciech Puchar wrote: >> > > Both "work". For today's trend of solving everything with more hardware > ZFS may even have "enough" performance. > > But still it is dangerous for the reasons I explained, as well as it > promotes bad setups and layouts like making a single filesystem out