Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-05 Thread Artem Belevich
On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot wrote: > Well actually... > > raidz2: > - 7x 1.5 tb = 10.5tb > - 2 parity drives > > raidz1: > - 3x 1.5 tb = 4.5 tb > - 4x 1.5 tb = 6 tb , total 10.5tb > - 2 parity drives in split thus different raidz1 arrays > > So really, in both cases 2 different

Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-07 Thread Artem Belevich
On Fri, Jan 7, 2011 at 3:16 AM, Matthew D. Fuller wrote: > On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of > Daniel Kalchev, and lo! it spake thus: >> >> You should also know that having large L2ARC requires that you also >> have larger ARC, because there are data pointers in the ARC

Re: drives >2TB on mpt device

2011-04-04 Thread Artem Belevich
2011/4/4 Gerrit Kühn : > On Mon, 4 Apr 2011 14:36:25 +0100 Bruce Cran wrote > about Re: drives >2TB on mpt device: > > Hi Bruce, > > BC> It looks like a known issue: > BC> http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/147572 > > Hm, I don't know if this is exactly what I'm seeing here (although t

Re: Kernel memory leak in 8.2-PRERELEASE?

2011-04-04 Thread Artem Belevich
On Mon, Apr 4, 2011 at 1:56 PM, Boris Kochergin wrote: > The problem persists, I'm afraid, and seems to have crept up a lot more > quickly than before: > > # uname -a > FreeBSD exodus.poly.edu 8.2-STABLE FreeBSD 8.2-STABLE #3: Sat Apr  2 > 11:48:43 EDT 2011     sp...@exodus.poly.edu:/usr/obj/usr/s

Re: ZFS vs OSX Time Machine

2011-04-28 Thread Artem Belevich
On Thu, Apr 28, 2011 at 6:08 PM, Jeremy Chadwick wrote: > I will note something, however: your ARC max is set to 3072MB, yet Wired > is around 4143MB.  Do you have something running on this box that takes > up a lot of RAM?  mysqld, etc..?  I'm trying to account for the "extra > gigabyte" in Wired

Re: resilvering takes ages on 8.2 (amd64 from 18.04.2011)

2011-05-06 Thread Artem Belevich
On Fri, May 6, 2011 at 5:43 AM, Holger Kipp wrote: > Resilvering a disk in raidz2 ZFS is taking ages. Any ideas? I had replaced a > different disk this morning (da7) and it took only about 1 hour altogether. > Any ideas? Or did I do something very wrong (tm)? Don't believe everything you see.

Re: PCIe SATA HBA for ZFS on -STABLE

2011-05-31 Thread Artem Belevich
On Tue, May 31, 2011 at 7:31 AM, Freddie Cash wrote: > On Tue, May 31, 2011 at 5:48 AM, Matt Thyer wrote: > >> What do people recommend for 8-STABLE as a PCIe SATA II HBA for someone >> using ZFS ? >> >> Not wanting to break the bank. >> Not interested in SATA III 6GB at this time... though it co

Re: Fileserver panic - FreeBSD 8.1-stable and zfs

2011-06-02 Thread Artem Belevich
On Thu, Jun 2, 2011 at 12:31 PM, Torfinn Ingolfsen wrote: > FYI, in case it is interesting > my zfs fileserver[1] just had a panic: (transcribed from screen) > panic: kmem_malloc(131072): kmem_map too small: 1324613632 total allocated > cpuid = 1 It's probably one of the most frequently reported

Re: recover file from destroyed zfs snapshot - is it possible?

2011-06-09 Thread Artem Belevich
On Thu, Jun 9, 2011 at 1:00 PM, Greg Bonett wrote: > Hi all, > I know this is a long shot, but I figure it's worth asking. Is there > any way to recover a file from a zfs snapshot which was destroyed? I know > the name of the file and a unique string that should be in it. The zfs > pool is on geli

Re: recover file from destroyed zfs snapshot - is it possible?

2011-06-09 Thread Artem Belevich
On Thu, Jun 9, 2011 at 3:43 PM, Greg Bonett wrote: > One question though, you say it's necessary that "appropriate >  disk blocks have not been reused by more recent transactions" > Is it not possible for me to just read all the disk blocks looking for > the filename and string it contained? How b

Re: "log_sysevent: type 19 is not implemented" messages during boot

2011-06-17 Thread Artem Belevich
On Fri, Jun 17, 2011 at 6:06 AM, Bartosz Stec wrote: > On 2011-06-11 18:43, Sergey Kandaurov wrote: >> >> On 11 June 2011 20:01, Rolf Nielsen  wrote: >>> >>> Hi all, >>> >>> After going from 8.2-RELEASE to 8-STABLE (to get ZFS v28), I get >>> >>> log_sysevent: type 19 is not implemented >>> >>

Re: disable 64-bit dma for one PCI slot only?

2011-07-19 Thread Artem Belevich
On Tue, Jul 19, 2011 at 6:31 AM, John Baldwin wrote: > The only reason it might be nice to stick with two fields is due to the line > length (though the first line is over 80 cols even in the current format).   > Here > are two possible suggestions: > > old: > > hostb0@pci0:0:0:0:      class=0x060

Re: can not boot from RAIDZ with 8-STABLE

2011-08-17 Thread Artem Belevich
2011/8/17 Daniel Kalchev : > On 17.08.11 16:35, Miroslav Lachman wrote: >> >> I tried mfsBSD installation on Dell T110 with PERC H200A and 4x 500GB SATA >> disks. If I create zpool with RAIDZ, the boot immediately hangs with >> following error: >> > May be it that the BIOS does not see all drives a

Re: can not boot from RAIDZ with 8-STABLE

2011-08-17 Thread Artem Belevich
On Wed, Aug 17, 2011 at 12:40 PM, Miroslav Lachman <000.f...@quip.cz> wrote: > Thank you guys, you are right. The BIOS provides only 1 disk to the loader! > I checked it from loader prompt by lsdev (booted from USB external HDD). > > So I will try to make a small zpool mirror for root and boot (if

Re: "High" cpu usage when using ZFS cache device

2011-10-11 Thread Artem Belevich
On Tue, Oct 11, 2011 at 2:34 AM, Steven Hartland wrote: > - Original Message - From: "Mickaël Maillot" > > > >> same problem here after ~ 30 days with a production server and 2 SSD Intel >> X25M as L2. >> so we update and reboot the 8-STABLE server every month. > > Old thread but also see

Re: "High" cpu usage when using ZFS cache device

2011-10-11 Thread Artem Belevich
On Tue, Oct 11, 2011 at 10:21 AM, Steven Hartland wrote: > Thanks for the confirmation there Artem, we currently can't use 8-STABLE > due to the serious routing issue, seem like every packet generates a > RTM_MISS routing packet to be sent, which causes high cpu load. > > Thread: "Re: serious pack

Re: "High" cpu usage when using ZFS cache device

2011-10-11 Thread Artem Belevich
On Tue, Oct 11, 2011 at 1:17 PM, Steven Hartland wrote: >> It's a bummer. If you can build your own kernel cherry-picking >> following revisions may help with long-term stability: >> r218429 - fixes original overflow causing CPU hogging by l2arc feeding >> thread. It will keep you up and running f
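
For anyone wanting to try that, the usual way to pull a single revision into a -STABLE source tree is an svn cherry-pick merge (a sketch, assuming the tree was checked out from the FreeBSD svn repository):

  cd /usr/src
  svn merge -c r218429 ^/head .
  # repeat for the other revisions mentioned in the thread, then rebuild the kernel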

Re: 8.1 xl + dual-speed Netgear hub = yoyo

2011-10-23 Thread Artem Belevich
On Sun, Oct 23, 2011 at 8:54 AM, Matthew Seaman wrote: > On the other hand, for anything Gb capable nowadays connected to a > switch autoneg pretty much just works -- em(4), bce(4) are excellent, > and even re(4) gets this stuff right. There are still cases of incompatibility. I've got a cheap D-

Re: ld: kernel.debug: Not enough room for program headers (allocated 5, need 6)

2011-11-17 Thread Artem Belevich
On Thu, Nov 17, 2011 at 6:41 AM, David Wolfskill wrote: > MAKE=/usr/obj/usr/src/make.i386/make sh /usr/src/sys/conf/newvers.sh GENERIC > cc -c -O -pipe  -std=c99 -g -Wall -Wredundant-decls -Wnested-externs > -Wstrict-prototypes  -Wmissing-prototypes -Wpointer-arith -Winline > -Wcast-qual  -Wunde

Re: Using mmap(2) with a hint address

2011-12-20 Thread Artem Belevich
Hi, On Tue, Dec 20, 2011 at 7:03 AM, Andriy Gapon wrote: > on 20/12/2011 16:31 Ganael LAPLANCHE said the following: >> On Tue, 20 Dec 2011 15:02:01 +0100 (CET), Ganael LAPLANCHE wrote >> >>> But there is still something I don't understand : on the Linux >>> machine where I ran my test program, th

Re: Performance problems with pagedaemon

2012-01-02 Thread Artem Belevich
On Mon, Jan 2, 2012 at 5:41 AM, Victor Balada Diaz wrote: ... > System wide totals computed every five seconds: (values in kilobytes) > === > Processes:              (RUNQ: 2 Disk Wait: 0 Page Wait: 0 Sleep: 51) > Virtual Memory:         (Total: 10980171

Re: zfs arc and amount of wired memory

2012-02-08 Thread Artem Belevich
On Wed, Feb 8, 2012 at 4:28 PM, Jeremy Chadwick wrote: > On Thu, Feb 09, 2012 at 01:11:36AM +0100, Miroslav Lachman wrote: ... >> ARC Size: >>          Current Size:             1769 MB (arcsize) >>          Target Size (Adaptive):   512 MB (c) >>          Min Size (Hard Limit):    512 MB (zfs_arc

Re: Can't read a full block, only got 8193 bytes.

2012-02-19 Thread Artem Belevich
On Sat, Feb 18, 2012 at 10:10 PM, Ask Bjørn Hansen wrote: > Hi everyone, > > We're recycling an old database server with room for 16 disks as a backup > server (our old database servers had 12-20 15k disks; the new ones one or two > SSDs and they're faster). > > We have a box running FreeBSD 8.2

Re: zpool - low speed write

2010-08-05 Thread Artem Belevich
On Wed, Aug 4, 2010 at 9:47 PM, Alex V. Petrov wrote: ... >> > vfs.zfs.cache_flush_disable=1 >> > vfs.zfs.zil_disable=1 >> >> I question both of these settings, especially the latter.  Please remove >> them both and re-test your write performance. > > I removed all settings of zfs. > Now it defaul

Re: zpool - low speed write

2010-08-07 Thread Artem Belevich
On Sat, Aug 7, 2010 at 4:51 AM, Ivan Voras wrote: > On 5.8.2010 6:47, Alex V. Petrov wrote: > >> camcontrol identify ada2 >> pass2: ATA-8 SATA 2.x device > > Aren't those 4k sector drives? EADS drives use regular 512-byte sectors AFAIK. It's EA*R*S models that use 4K sectors. --Artem

Re: kernel MCA messages

2010-08-24 Thread Artem Belevich
IMHO the key here is whether hardware is broken or not. The only case where correctable ECC errors are OK is when a bit gets flipped by a high-energy particle. That's a normal but fairly rare event. If you get bit flips often enough that you can recall details of more than one of them on the same h

Re: Still getting kmem exhausted panic

2010-09-28 Thread Artem Belevich
On Tue, Sep 28, 2010 at 3:22 PM, Andriy Gapon wrote: > BTW, have you seen my posts about UMA and ZFS on hackers@ ? > I found it advantageous to use UMA for ZFS I/O buffers, but only after > reducing > size of per-CPU caches for the zones with large-sized items. > I further modified the code in my

Re: zfs send/receive: is this slow?

2010-09-29 Thread Artem Belevich
On Wed, Sep 29, 2010 at 11:04 AM, Dan Langille wrote: > It's taken about 15 hours to copy 800GB.  I'm sure there's some tuning I > can do. > > The system is now running: > > # zfs send storage/bac...@transfer | zfs receive storage/compressed/bacula Try piping zfs data through mbuffer (misc/mbuffe

Re: zfs send/receive: is this slow?

2010-10-01 Thread Artem Belevich
Hmm. It did help me a lot when I was replicating ~2TB worth of data over GigE. Without mbuffer things were roughly in the ballpark of your numbers. With mbuffer I've got around 100MB/s. Assuming that you have two boxes connected via ethernet, it would be good to check that nobody generates PAUSE f

Re: zfs send/receive: is this slow?

2010-10-01 Thread Artem Belevich
fer over network. If you're running send/receive locally just pipe the data through mbuffer -- zfs send|mbuffer|zfs receive --Artem > > -- > Dan Langille > http://langille.org/ > > > On Oct 1, 2010, at 5:56 PM, Artem Belevich wrote: > >> Hmm. It did help me a lo
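
As a hedged sketch of both variants (pool, dataset and snapshot names are placeholders; the mbuffer buffer options are illustrative and worth tuning):

  # local copy: just insert mbuffer into the pipe
  zfs send pool/fs@snap | mbuffer -s 128k -m 1G | zfs receive pool/copy

  # over the network: one mbuffer on each end decouples sender and receiver
  # receiver:  mbuffer -I 9090 -s 128k -m 1G | zfs receive pool/copy
  # sender:    zfs send pool/fs@snap | mbuffer -O receiver-host:9090 -s 128k -m 1G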

Re: zfs send/receive: is this slow?

2010-10-01 Thread Artem Belevich
> As soon as I opened this email I knew what it would say. > > > # time zfs send storage/bac...@transfer | mbuffer | zfs receive > storage/compressed/bacula-mbuffer > in @  197 MB/s, out @  205 MB/s, 1749 MB total, buffer   0% full ... > Big difference.  :) I'm glad it helped. Does anyone know wh

Re: zfs send/receive: is this slow?

2010-10-02 Thread Artem Belevich
I've just tested on my box and loopback interface does not seem to be the bottleneck. I can easily push through ~400MB/s through two instances of mbuffer. --Artem On Fri, Oct 1, 2010 at 7:51 PM, Sean wrote: > > On 02/10/2010, at 11:43 AM, Artem Belevich wrote: > >>> A

Re: zfs send/receive: is this slow?

2010-10-03 Thread Artem Belevich
On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille wrote: > I'm rerunning my test after I had a drive go offline[1].  But I'm not > getting anything like the previous test: > > time zfs send storage/bac...@transfer | mbuffer | zfs receive > storage/compressed/bacula-buffer > > $ zpool iostat 10 10 >    

Re: VirtualBox OpenSolaris guest

2010-10-10 Thread Artem Belevich
On Sun, Oct 10, 2010 at 5:25 PM, Alex Goncharov wrote: > (If only www/opera stopped crashing on File/Exit now...) I think I've accidentally stumbled on a workaround for this crash on exit issue. Once you've started opera and opened a page (any page), turn print preview on and off (Menu->Print

Re: VirtualBox OpenSolaris guest

2010-10-11 Thread Artem Belevich
On Mon, Oct 11, 2010 at 4:32 AM, Jakub Lach wrote: > Remedy for ugly file dialogs is skin with skinned ones. > > e.g. http://my.opera.com/community/customize/skins/info/?id=10071 That may make them look better, but the main issue was that file open/save dialogs turned into a simple text input fie

Re: Degraded zpool cannot detach old/bad drive

2010-10-27 Thread Artem Belevich
Are you interested in what's wrong or in how to fix it? If fixing is the priority, I'd boot from OpenSolaris live CD and would try importing the array there. Just make sure you don't upgrade ZFS to a version that is newer than the one FreeBSD supports. Opensolaris may be able to fix the array. On

Re: Degraded zpool cannot detach old/bad drive

2010-10-28 Thread Artem Belevich
> but only those 3 devices in /dev/gpt and absolutely nothing in /dev/gptid/ > So is there a way to bring all the gpt labeled partitions back into the pool > instead of using the mfidXX devices? Try re-importing the pool with "zpool import -d /dev/gpt". This will tell ZFS to use only devices found
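
A minimal sketch of that sequence (pool name "tank" is a placeholder; the pool has to be exported first if it is currently imported):

  zpool export tank
  zpool import -d /dev/gpt tank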

Re: Degraded zpool cannot detach old/bad drive

2010-10-29 Thread Artem Belevich
On Thu, Oct 28, 2010 at 10:51 PM, Rumen Telbizov wrote: > Hi Artem, everyone, > > Thanks for your quick response. Unfortunately I already did try this > approach. > Applying -d /dev/gpt only limits the pool to the bare three remaining disks > which turns > pool completely unusable (no mfid devices

Re: Degraded zpool cannot detach old/bad drive

2010-10-29 Thread Artem Belevich
On Fri, Oct 29, 2010 at 11:34 AM, Rumen Telbizov wrote: > The problem I think comes down to what I have written in the zpool.cache > file. > It stores the mfid path instead of the gpt/disk one. >       children[0] >              type='disk' >              id=0 >              guid=16413940568249554

Re: Degraded zpool cannot detach old/bad drive

2010-10-29 Thread Artem Belevich
On Fri, Oct 29, 2010 at 2:19 PM, Rumen Telbizov wrote: > You're right. zpool export tank seems to remove the cache file so import has > nothing to consult so doesn't make any difference. > I guess my only chance at this point would be to somehow manually edit > the zpool configuration, via the zpo

Re: Degraded zpool cannot detach old/bad drive

2010-10-29 Thread Artem Belevich
On Fri, Oct 29, 2010 at 4:42 PM, Rumen Telbizov wrote: > FreeBSD 8.1-STABLE #0: Sun Sep  5 00:22:45 PDT 2010 > That's when I csuped and rebuilt world/kernel. There were a lot of ZFS-related MFCs since then. I'd suggest updating to the most recent -stable and try again. I've got another idea that

Re: How to tell whether ECC (memory) is enabled?

2010-11-06 Thread Artem Belevich
On Sat, Nov 6, 2010 at 9:09 AM, Thomas Zander wrote: > This means for now I have to trust the BIOS that ECC is enabled and I > should see MCA reports in the dmesg output once a bit error is > detected? Well, you don't have to take the BIOS' word for it -- you can test whether ECC really works. All you nee

Re: How to tell whether ECC (memory) is enabled?

2010-11-06 Thread Artem Belevich
) worked well enough for me. It didn't get damaged by the slot connector and it didn't leave any residue. YMMV, caveat emptor, beware, use at your own risk, you know the drill.. --Artem On Sat, Nov 6, 2010 at 5:00 PM, wrote: > Artem Belevich wrote: > >> All you need is

Re: DTrace (or other monitor) access to LBA of a block device

2010-12-03 Thread Artem Belevich
On Thu, Dec 2, 2010 at 1:33 PM, Thomas Zander wrote: > Hi, > > do we have any way to monitor which LBAs of which block device are > read/written at a given time? > > I stumbled upon this, > http://southbrain.com/south/2008/02/fun-with-dtrace-and-zfs-mirror.html > > which is pretty intriguing. Unfo

Re: DTrace (or other monitor) access to LBA of a block device

2010-12-05 Thread Artem Belevich
> GEOM sounds like a good candidate for probing of that kind. > > sudo dtrace -n 'fbt:kernel:g_io_deliver:entry { printf("%s %d %d > %d\n",stringof(args[0]->bio_from->geom->name), args[0]->bio_cmd, > args[0]->bio_offset, args[0]->bio_length); }' By the way, in order for this to work one would need
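
For anyone trying the one-liner above: the DTrace kernel modules need to be loaded first and dtrace(1) has to run as root, e.g. via the catch-all module:

  kldload dtraceall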

Re: DTrace (or other monitor) access to LBA of a block device

2010-12-05 Thread Artem Belevich
On Sun, Dec 5, 2010 at 11:31 AM, Andriy Gapon wrote: >> By the way, in order for this to work one would need r207057 applied >> to -8. Any chance that could be MFC'ed? >> >> http://svn.freebsd.org/viewvc/base?view=revision&revision=207057 > > Nice catch. > > Alexander, > can that commit be trivial

Re: New ZFSv28 patchset for 8-STABLE

2011-01-01 Thread Artem Belevich
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote: > What I see: > - increased CPU load > - decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased > hard disk load (IOPS graph) > ... > Any ideas on what could cause these? I haven't upgraded the pool version and > nothing was chang

Re: constant zfs data corruption

2008-10-20 Thread Artem Belevich
> all right and understood but shouldn't something like fsck correct the > error? Seems kind of problematic to me mounting zfs in single user mode, > deleting the file and restarting the OS ? According to Sun's documents, removing the corrupted file seems to be the 'official' way to get rid of the pr

Re: Western Digital hard disks and ATA timeouts

2008-11-07 Thread Artem Belevich
> Note that Western Digital's "RAID edition" drives claim to take up to 7 > seconds to reallocate sectors, using something they call TLER, which > force-limits the amount of time the drive can spend reallocating. TLER > cannot be disabled: TLER can be enabled/disabled on recent WD drives (SE16/RE

Re: Support for SAS/SATA non-RAID adapters

2009-11-17 Thread Artem Belevich
>  LSI SAS 3080X-R    8-port SATA/SATA PCI-X This one uses LSI1068 chip which is supported by mpt driver. I'm using motherboard with an on-board equivalent of this and don't have much to complain about. I did see some CRC errors with SATA drives in 3Gbps mode, but those went away after updating fi

Re: Support for SAS/SATA non-RAID adapters

2009-11-17 Thread Artem Belevich
--Artem On Tue, Nov 17, 2009 at 6:12 PM, Artem Belevich wrote: >>  LSI SAS 3080X-R    8-port SATA/SATA PCI-X > > This one uses LSI1068 chip which is supported by mpt driver. I'm using > motherboard with an on-board equivalent of this and don't have much to > complain

Re: Support for SAS/SATA non-RAID adapters

2009-11-17 Thread Artem Belevich
Hi, > If that one uses the LSI1068 chipset, do you know which one uses the > LSI1078 chipset? Supermicro's AOC-USAS-H8iR uses LSI1078: http://www.supermicro.com/products/accessories/addon/AOC-USAS-H8iR.cfm Dell PERC 6/i is based on LSI1078 as well. http://www.dell.com/content/topics/topic.aspx/g

Re: ZFS performance degradation over time

2010-01-08 Thread Artem Belevich
Keep an eye on ARC size and on active/inactive/cache/free memory lists: sysctl kstat.zfs.misc.arcstats.size sysctl vm.stats.vm.v_inactive_count sysctl vm.stats.vm.v_active_count sysctl vm.stats.vm.v_cache_count sysctl vm.stats.vm.v_free_count ZFS performance does degrade a lot if ARC becomes too

Re: 8.0-RELEASE/amd64 - full ZFS install - low read and write disk performance

2010-01-25 Thread Artem Belevich
aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068 controllers when I tried it with 6 and 8 disks. I think the problem is that MV8 only does 32K per transfer and that does seem to matter when you have 8 drives hooked up to it. I don't have hard numbers, but peak throughput of MV8 with 8-d

Re: ZFS panic on RELENG_7/i386

2010-01-26 Thread Artem Belevich
> will do, thank you. is fletcher4 faster? Not necessarily. But it does work as a checksum much better. See following link for the details. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597 --Artem

Re: ATA_CAM + ZFS gives short 1-2 seconds system freeze on disk load

2010-02-08 Thread Artem Belevich
> I'd like a technical explanation of exactly what this loader.conf > tunable does.  The sysctl -d explanation is useful if one has insight to > what purpose it serves.  I can find mention on Solaris lists about "txg > timeout", but the context is over my head (intended for those very > familiar wi

Re: ZFS ARC being limited below what is defined in /boot/loader.conf

2010-02-12 Thread Artem Belevich
Check your vm.kmem_size. Default setting is way too low. Set it to at least double the desired ARC size. --Artem On Fri, Feb 12, 2010 at 10:31 AM, Steve Polyack wrote: > Has anyone had an issue with the ZFS ARC max being limited below what has > been defined in /boot/loader.conf?  I just upgrad
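
A hedged example of what that looks like in /boot/loader.conf (the sizes are illustrative only -- scale them to the machine's RAM):

  vm.kmem_size="8G"
  vfs.zfs.arc_max="4G"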

Re: ZFS ARC being limited below what is defined in /boot/loader.conf

2010-02-12 Thread Artem Belevich
: > On 02/12/10 13:47, Artem Belevich wrote: >> >> On Fri, Feb 12, 2010 at 10:31 AM, Steve Polyack >>  wrote: >> >> >>> >>> Has anyone had an issue with the ZFS ARC max being limited below what has >>> been defined in /boot/loader.con

Re: More zfs benchmarks

2010-02-14 Thread Artem Belevich
Can you check if kstat.zfs.misc.arcstats.memory_throttle_count sysctl increments during your tests? ZFS self-throttles writes if it thinks system is running low on memory. Unfortunately on FreeBSD the 'free' list is a *very* conservative indication of available memory so ZFS often starts throttlin
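
An illustrative way to watch that counter while the test runs (simple shell loop, poll interval is arbitrary):

  while :; do sysctl kstat.zfs.misc.arcstats.memory_throttle_count; sleep 5; done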

Re: hardware for home use large storage

2010-02-14 Thread Artem Belevich
> your ZFS pool of SATA disks has 120gb worth of L2ARC space Keep in mind that housekeeping of 120G L2ARC may potentially require fair amount of RAM, especially if you're dealing with tons of small files. See this thread: http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg34674.html --Ar

Re: hardware for home use large storage

2010-02-15 Thread Artem Belevich
>> * vm.kmem_size >> * vm.kmem_size_max > > I tried kmem_size_max on -current (this year), and I got a panic during use, > I changed kmem_size to the same value I have for _max and it didn't panic > anymore. It looks (from mails on the lists) that _max is supposed to give a > max value for auto-enh

Re: hardware for home use large storage

2010-02-15 Thread Artem Belevich
> How much ram are you running with? 8GB on amd64. kmem_size=16G, zfs.arc_max=6G > In a latest test with 8.0-R on i386 with 2GB of ram, an install to a ZFS > root *will* panic the kernel with kmem_size too small with default > settings. Even dropping down to Cy Schubert's uber-small config will p

Re: puc(4) timedia baudrate problem

2010-04-27 Thread Artem Belevich
I've got another PCI UART card based on OX16PCI952 that needs its clock multiplied by 8 in order to work correctly. It was some el-cheapo card I've got at Fry's. p...@pci0:1:0:0:class=0x070006 card=0x00011415 chip=0x95211415 rev=0x00 hdr=0x00 vendor = 'Oxford Semiconductor Ltd'

Re: Freebsd 8.0 kmem map too small

2010-05-10 Thread Artem Belevich
> On this specific system, it has 32 GB physical memory and has > vfs.zfs.arc_max="2G" and vm.kmem_size="64G" in /boot/loader.conf.  The > latter was added per earlier suggestions on this list, but appears to be > ignored as "sysctl vm.kmem_size" returns about 2 GB (2172452864) anyway. Set vm.kmem

Re: Freebsd 8.0 kmem map too small

2010-05-10 Thread Artem Belevich
vm.kmem_size limitation has been this way for a pretty long time. What's changed recently is that ZFS ARC now uses UMA for its memory allocations. If I understand it correctly, this would make ARC's memory use more efficient as allocated chunks will end up in a zone tuned for allocations of partic

Re: Freebsd 8.0 kmem map too small

2010-05-10 Thread Artem Belevich
You can try disabling ZIO_USE_UMA in sys/modules/zfs/Makefile Comment out following line in that file: CFLAGS+=-DZIO_USE_UMA This should revert memory allocation method back to its previous mode. Let us know whether it helps or not. --Artem On Mon, May 10, 2010 at 4:14 PM, Richard Perini wro

Re: ZFS corruption due to lack of space?

2012-10-31 Thread Artem Belevich
On Wed, Oct 31, 2012 at 10:55 AM, Steven Hartland wrote: > At that point with the test seemingly successful I went > to delete test files which resulted in:- > rm random* > rm: random1: Unknown error: 122 ZFS is a logging filesystem. Even removing a file apparently requires some space to write a

Re: how to destroy zfs parent filesystem without destroying children - corrupted file causing kernel panic

2012-12-28 Thread Artem Belevich
On Fri, Dec 28, 2012 at 12:46 PM, Greg Bonett wrote: > However, I can't figure out how to destroy the /tank filesystem without > destroying /tank/tempfs (and the other /tank children). Is it possible to > destroy a parent without destroying the children? Or, create a new parent > zfs file system

Re: how to destroy zfs parent filesystem without destroying children - corrupted file causing kernel panic

2012-12-29 Thread Artem Belevich
On Sat, Dec 29, 2012 at 12:35 AM, Greg Bonett wrote: > > > > > Does: > > > > cat /dev/null > bad.file > > > > Cause a kernel panic? > > > > > > > ah, sadly that does cause a kernel panic. I hadn't tried it though, thanks > for the suggestion. It's probably a long shot, but you may try removing ba

Re: FreeBSD history

2013-06-24 Thread Artem Belevich
On Sun, Jun 16, 2013 at 10:00 AM, Andy Farkas wrote: > On 16/06/13 20:30, Jeremy Chadwick wrote: > > * Output from: strings /boot/kernel/kernel | egrep ^option Thanks. > > I stumbled across this one about a week ago: > > strings /boot/kernel/kernel | head -1 > > and was wondering about the histo

Re: stopping amd causes a freeze

2013-07-22 Thread Artem Belevich
On Mon, Jul 22, 2013 at 2:50 AM, Dominic Fandrey wrote: > Occasionally stopping amd freezes my system. It's a rare occurrence, > and I haven't found a reliable way to reproduce it. > > It's also a real freeze, so there's no way to get into the debugger > or grab a core dump. I only can perform the

Re: stopping amd causes a freeze

2013-07-23 Thread Artem Belevich
On Tue, Jul 23, 2013 at 10:43 AM, Dominic Fandrey wrote: > > Don't use KILL or make sure that nobody tries to use amd mountpoints > until > > new instance starts. Manually unmounting them before killing amd may > help. > > Why not let amd do it itself with "/etc/rc.d/amd stop" ? > > That was a typ

Re: stopping amd causes a freeze

2013-07-26 Thread Artem Belevich
On Fri, Jul 26, 2013 at 10:10 AM, Dominic Fandrey wrote: > Amd exhibits several very strange behaviours. > > a) > During the first start it writes the wrong PID into the pidfile, > it however still reacts to SIGTERM. > > b) > After starting it again, it no longer reacts to SIGTERM. > amd does blo

Re: current zfs tuning in RELENG_7 (AMD64) suggestions ?

2009-05-02 Thread Artem Belevich
>> This information is outdated.  The current max in RELENG_7 for amd64 is >> ~3.75GB. Technically, RELENG_7 should allow kmem_size of up to 6G, but the sysctl variables used for tuning are 32-bit and *that* limits kmem_size to ~4G. It's been fixed in -current and can easily be fixed in RELENG_7 (

Re: ZFS MFC heads down

2009-05-27 Thread Artem Belevich
I had the same problem on -current. Try attached patch. It may not apply cleanly on -stable, but should be easy enough to make equivalent changes on -stable. --Artem On Wed, May 27, 2009 at 3:00 AM, Henri Hennebert wrote: > Kip Macy wrote: >> >> On Wed, May 20, 2009 at 2:59 PM, Kip Macy wrote

Re: ZFS MFC heads down

2009-05-27 Thread Artem Belevich
Did you by any chance do that from single-user mode? ZFS seems to rely on hostid being set. Try running "/etc/rc.d/hostid start" and then re-try your zfs commands. --Artem On Wed, May 27, 2009 at 1:06 PM, Henri Hennebert wrote: > Artem Belevich wrote: >> >> I had the

Re: zpool scrub hangs on 7.2-stable

2009-09-20 Thread Artem Belevich
Do you have ZIL disabled? I think I saw the same scrub stall on -7 when I had vfs.zfs.zil_disable=1. After re-enabling ZIL scrub proceeded normally. --Artem On Sun, Sep 20, 2009 at 2:42 PM, Christof Schulze wrote: > Hello, > > currently I am running a 7.2 stable with zfs v13. > Things work nic

Re: 8.0-RC1 ZFS-root installscript

2009-10-06 Thread Artem Belevich
> - untested support for raidz, I can't test this because virtualbox only > provides one BIOS disk to the bootloader and raidz needs at least two > disks for booting :-/ You can try creating raidz from multiple GPT slices on the same disk. --Artem
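
A hedged sketch of such a test layout (device name, sizes and labels are placeholders; this only exercises the boot code and provides no real redundancy, since every member sits on the same disk):

  gpart create -s gpt da0
  gpart add -t freebsd-zfs -s 4G -l rz0 da0
  gpart add -t freebsd-zfs -s 4G -l rz1 da0
  gpart add -t freebsd-zfs -s 4G -l rz2 da0
  zpool create testpool raidz gpt/rz0 gpt/rz1 gpt/rz2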

Re: whats best practice for ZFS on a whole disc these days ?

2009-10-26 Thread Artem Belevich
> Unfortunately it appears ZFS doesn't search for GPT partitions so if you > have them and swap the drives around you need to fix it up manually. When I used raw disk or GPT partitions, if disk order was changed the pool would come up in 'DEGRADED' or UNAVAILABLE state. Even then all that had to b