a bunch of dumb questions about freebsd installing

2011-05-11 Thread Eugene M. Zheganin
Hi. I have an IBM xSeries server, its ip-kvm and different FreeBSD images. The goal is to perform a remote installation of FreeBSD using the server's ip-kvm and the USB devices it emulates. I can perform a non-remote installation in a variety of ways, but this post is about a remote one. 1) Since USB gi

Re: a bunch of dumb questions about freebsd installing

2011-05-11 Thread Eugene M. Zheganin
On 11.05.2011 15:14, Eugene M. Zheganin wrote: Hi. I have an IBM xSeries server, its ip-kvm and different FreeBSD images. The goal is to perform a remote installation of FreeBSD using the server's ip-kvm and the USB devices it emulates. I can perform a non-remote installation in a variety of ways but

VOP_WRITE is not exclusive locked but should be

2012-02-06 Thread Eugene M. Zheganin
Hi. I have a server with an 8.2-RELEASE/amd64. Its primary use is routing/ipsec+gre tunneling. It's also running zfs. Sometimes it locks up: it stops responding to the network, but the console is still alive, though it doesn't log me in - the 'password:' prompt never comes up (I can only

zfs arc and amount of wired memory

2012-02-07 Thread Eugene M. Zheganin
Hi. I have a server with 9.0/amd64 and 4 Gigs of RAM. Today's questions are about the amount of memory in 'wired' state and the ARC size. If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says: ===Cut=== ARC Size: 12.50% 363.14 MiB Ta
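(For readers comparing these two numbers on their own machines: a minimal sketch, assuming only the stock sysctl(8) interface, that prints the ARC size next to the wired total. The sysctl names used - kstat.zfs.misc.arcstats.size, vm.stats.vm.v_wire_count and hw.pagesize - are the standard ones; the script itself is illustrative only.)
===Cut===
#!/bin/sh
# Sketch: compare the ZFS ARC size with the total amount of wired memory.
arc_bytes=$(sysctl -n kstat.zfs.misc.arcstats.size)
wired_pages=$(sysctl -n vm.stats.vm.v_wire_count)
page_size=$(sysctl -n hw.pagesize)
wired_bytes=$((wired_pages * page_size))
echo "ARC:         $((arc_bytes / 1048576)) MiB"
echo "Wired:       $((wired_bytes / 1048576)) MiB"
echo "Wired - ARC: $(((wired_bytes - arc_bytes) / 1048576)) MiB"
===Cut===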

Re: zfs arc and amount of wired memory

2012-02-07 Thread Eugene M. Zheganin
Hi. On 07.02.2012 16:46, Andriy Gapon wrote: on 07/02/2012 10:36 Eugene M. Zheganin said the following: If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says: ===Cut=== ARC Size: 12.50% 363.14 MiB Target Size: (Adaptive

Re: zfs arc and amount of wired memory

2012-02-07 Thread Eugene M. Zheganin
Hi. On 07.02.2012 17:43, Andriy Gapon wrote: on 07/02/2012 13:34 Eugene M. Zheganin said the following: Hi. On 07.02.2012 16:46, Andriy Gapon wrote: on 07/02/2012 10:36 Eugene M. Zheganin said the following: If I use the script from http://wiki.freebsd.org/ZFSTuningGuide , it says: ===Cut

Re: zfs arc and amount of wired memory

2012-02-07 Thread Eugene M. Zheganin
Hi. On 07.02.2012 21:51, Andriy Gapon wrote: I am not sure that these conclusions are correct. Wired is wired, it's not free. BTW, are you reluctant to share the full zfs-stats -a output? You don't have to place it inline, you can upload it somewhere and provide a link. Well... nothing secr

Re: zfs arc and amount of wired memory

2012-02-08 Thread Eugene M. Zheganin
Hi. On 08.02.2012 02:17, Andriy Gapon wrote: [output snipped] Thank you. I don't see anything suspicious/unusual there. Just in case, do you have ZFS dedup enabled by any chance? I think that examination of vmstat -m and vmstat -z outputs may provide some clues as to what got all that memory wired

Re: zfs arc and amount of wired memory

2012-02-08 Thread Eugene M. Zheganin
Hi. On 08.02.2012 18:15, Alexander Leidinger wrote: I can't remember having seen any mention of SWAP on ZFS being safe now. So if nobody can provide a reference to a place which says that the problems with SWAP on ZFS are fixed: 1. do not use SWAP on ZFS 2. see 1. 3. check if you see the

Re: zfs arc and amount of wired memory

2012-02-08 Thread Eugene M. Zheganin
Hi. On 09.02.2012 02:29, Andriy Gapon wrote: on 08/02/2012 12:31 Eugene M. Zheganin said the following: Hi. On 08.02.2012 02:17, Andriy Gapon wrote: [output snipped] Thank you. I don't see anything suspicious/unusual there. Just in case, do you have ZFS dedup enabled by any chance? I

Re: zfs arc and amount of wired memory

2012-02-09 Thread Eugene M. Zheganin
Hi. On 09.02.2012 14:35, Andriy Gapon wrote: And please take the reports after the discrepancy between ARC size and wired size is large enough, e.g. 1GB. That's when they are useful. Okay, I wrote a short script capturing a sequence of top -b/zfs-stats -a/vmstat -m/vmstat -z in timestamped fi
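(A minimal sketch of such a capture loop, assuming zfs-stats from sysutils/zfs-stats is installed; the interval and output directory are arbitrary choices, not what was actually used in the thread.)
===Cut===
#!/bin/sh
# Sketch: periodically dump memory-related reports into timestamped files.
outdir=/var/tmp/memstats
mkdir -p "$outdir"
while :; do
    ts=$(date +%Y%m%d-%H%M%S)
    {
        top -b
        zfs-stats -a
        vmstat -m
        vmstat -z
    } > "$outdir/report-$ts.txt" 2>&1
    sleep 3600
done
===Cut===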

Re: zfs arc and amount of wired memory

2012-02-09 Thread Eugene M. Zheganin
Hi. On 09.02.2012 14:35, Andriy Gapon wrote: And please take the reports after the discrepancy between ARC size and wired size is large enough, e.g. 1GB. That's when they are useful. One more thing - this machine is running a debug/ddb kernel, so just in order to save two weeks - when/if it w

zfs, 1 gig of RAM and periodic weekly

2012-02-26 Thread Eugene M. Zheganin
Hi. I'm haunted by a weird bug. Some of my servers (IBM x3250) hang periodically. And it is always Saturday morning. Different servers in different cities, all with zfs and one gig of RAM. And yeah, it's periodic weekly. I can say more - it's repeatable. 25 minutes ago I typed 'periodic wee

Re: zfs, 1 gig of RAM and periodic weekly

2012-02-27 Thread Eugene M. Zheganin
Hi. On 27.02.2012 13:40, Peter Maloney wrote: 8.2-RELEASE is highly unstable with ZFS in my opinion. For example, my system with 48 GB of RAM would hang or crash for no apparent reason in random intervals. Upgrading in September fixed most of it, except 1 random hang possibly related to NFS, and

Re: zfs, 1 gig of RAM and periodic weekly

2012-02-27 Thread Eugene M. Zheganin
Hi. On 28.02.2012 01:02, Nenhum_de_Nos wrote: regardless of the pool size ? I was planning on making an atom board a file server for my home, and I have two options: soekris net6501 2GB RAM and intel board powered by the 330 atom (says 2GB limited as well). My plans are to use from 4 up to 8

Re: zfs, 1 gig of RAM and periodic weekly

2012-02-27 Thread Eugene M. Zheganin
Hi. On 27.02.2012 20:42, Johannes Totz wrote: You could try to narrow it down to one specific script. My first guess is that 310.locate brings the machine down as it traverses the whole tree. You're absolutely right. Eugene.
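(If 310.locate is confirmed as the trigger, one hedged workaround is to disable that one weekly script via periodic.conf(5) rather than letting the whole weekly run bring the box down; this only skips the locate database rebuild, it does not address the underlying low-memory hang.)
===Cut===
# /etc/periodic.conf -- skip the weekly locate database rebuild
weekly_locate_enable="NO"
===Cut===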

krb5 and clock skew

2010-11-12 Thread Eugene M. Zheganin
Hi. A panic in em(4) in a vlan environment (after an upgrade from 7.2-RELEASE to 8.1-RELEASE) forced me to use 8.1-STABLE (built 2 days ago) on one of my production servers. Almost everything is fine now except for two things, which I decided to split into two letters. This one is about my kerberos setup. I have a

netgraph and interface nodes

2010-11-12 Thread Eugene M. Zheganin
Hi. I'm using 8.1-STABLE on one of my production servers. I wrote about krb5 problem some time ago. Second trouble is netgraph-related. I'm using dot1q on an em(4) card. It's loaded as a module atm (however earlier it was in kernel). I have ng_ether/ng_iface in kernel, if_vlan loaded as a mo

Re: netgraph and interface nodes

2010-11-12 Thread Eugene M. Zheganin
Hi. On 12.11.2010 17:27, Milan Obuch wrote: Slightly related... how about em0.100 type nodes? # ngctl list There are 3 total nodes: Name: em0 Type: ether ID: 0001 Num hooks: 0 Name: em1 Type: ether ID: 0002 Num hooks: 0 Name: ng

8.1 livelock/hangup: possible actions

2010-12-11 Thread Eugene M. Zheganin
Hi. I'm having problems with 8.1-REL/zfs/amd64. It's an IBM x3250 M2 system, 1GB RAM, dualcore Intel E3110, two bge(4) and an LSI1064e disk controller. Suddenly it can stop answering network requests and nothing works except the keyboard, so I guess the system isn't really dead. No trap

GEOM_RAID in GENERIC 9.1

2012-07-29 Thread Eugene M. Zheganin
Hi. I am aware of how this thing works and what it does. However, every time I upgrade a new server I get hit by it again and again, simply forgetting to remove it from the kernel's config. I'm afraid this thing will hit lots of FreeBSD installations after the release; it may be easily remo

Re: GEOM_RAID in GENERIC 9.1

2012-07-29 Thread Eugene M. Zheganin
Hi. On 30.07.2012 11:04, Eugene M. Zheganin wrote: I am aware of how this thing works and what it does. However, every time I upgrade a new server I get hit by it again and again, simply forgetting to remove it from the kernel's config. I'm afraid this thing will hit lots

base kerberos

2012-08-01 Thread Eugene M. Zheganin
Hi. I don't see the kgetcred binary, but it looks like the Heimdal version should have it (or it will lack some functionality), plus I see it in the source tree... Am I missing something? Thanks. Eugene.

console in graphic mode

2012-08-06 Thread Eugene M. Zheganin
Hi. I'm trying to play with a console in graphic mode and with UTF-8 in it. Nowadays device sc and options SC_PIXEL_MODE are in GENERIC, so I added only the options TEKEN_UTF8 and built/installed the new kernel. Now I'm stuck in a pretty stupid situation: my monitor refuses to work

Re: console in graphic mode

2012-08-06 Thread Eugene M. Zheganin
Hi. On 06.08.2012 20:46, nickolas...@gmail.com wrote: You need VESA and X86BIOS support in your kernel (for i386/amd64 archs). options VESA is already included in GENERIC, you need to add options X86BIOS to kernel config file or load modules during boot process (man loader.conf) Didn't help
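(For reference, the combined set of kernel options mentioned in this exchange, written as a config fragment; as the follow-up shows, these alone were not sufficient on this particular hardware, so treat it as a checklist rather than a recipe.)
===Cut===
# Kernel config fragment for a syscons pixel-mode console with UTF-8
device          sc
options         SC_PIXEL_MODE   # allow graphics (pixel) mode for syscons
options         TEKEN_UTF8      # UTF-8 terminal emulation
options         VESA            # VESA BIOS extensions (already in GENERIC)
options         X86BIOS         # real-mode BIOS emulation, needed by VESA
===Cut===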

Re: GEOM_RAID in GENERIC 9.1

2012-08-17 Thread Eugene M. Zheganin
Hi. On 17.08.2012 14:44, Gabor Radnai wrote: Sorry the content of original mail I replied to was missed. For clarity here it is: [...] My reply again then: Unfortunately I am a less experienced user so no clue how to disable GEOM_RAID but i am hit by this issue. My zfs setup is totally messed

NMI button

2012-08-17 Thread Eugene M. Zheganin
Hi. Guys, I have an IBM System x 3560 server, it hangs, and I was told to use the NMI button when it hangs. I found it and pressed it - nothing happened. I reset the server and pressed the NMI button after entering multiuser mode. Nothing happened. FreeBSD 9.1-PRERELEASE Now questions: - what should h

Re: VLAN and ARP table

2012-09-01 Thread Eugene M. Zheganin
Hi. On 01.09.2012 1:04, Brad Plank wrote: VLAN interfaces no longer show up in "arp -an", in FreeBSD 9.x, however, the VLAN appears to be fully functional. Any ideas? They do. arp -an ? (86.109.196.1) at 00:21:1b:d1:14:1b on vlan818 expires in 1192 seconds [vlan] ? (89.250.213.121) at d4:c

Re: GEOM_RAID in GENERIC is harmful

2012-09-13 Thread Eugene M. Zheganin
Hi. On 13.09.2012 15:51, Alexander Motin wrote: Problem of on-disk metadata garbage is not limited to GEOM_RAID. For example, I had case where remainders of old UFS file system were found by GEOM_LABEL and ZFS incorrectly attached to it instead of proper GPT partition, making other partition

zfs v28 solaris compatibility

2013-02-07 Thread Eugene M. Zheganin
Hi. Is the FreeBSD v28 zfs fully compatible with Solaris zfs? I need to switch disks between servers; these disks are SAN disks, and it's about 20T of data. I don't want to lose them. I am aware that our zfs is compatible with Solaris, but I just want to be sure, like really really sure. Of cours

watchdogs

2013-02-20 Thread Eugene M. Zheganin
Hi. I have a bunch of FreeBSDs that hang (and I really want to do something to fight this). Maybe it's zfs or maybe it's pf (I also have a bunch of really stable ones, so it's hard to isolate and tell). Since 9.x hangs more often, I suppose it's pf. I use ichwd.ko and watchdogd to re

Re: watchdogs

2013-02-26 Thread Eugene M. Zheganin
Hi. On 22.02.2013 22:47, Mark Atkinson wrote: I just want to /metoo that I have a 32bit/i386 box running zfs, pf and -current that is hardlocking randomly (usually has an uptime of a few days to a couple of weeks). SW_WATCHDOG won't fire when it locks so it must be locking pretty fast. I just n

Re: kern/165903: mbuf leak

2013-04-10 Thread Eugene M. Zheganin
Hi. On 11.04.2013 01:39, Chris Forgeron wrote: > I do not experience the error if I load up vmware tools and use the vmx3f0 > adapter, it's just with em0. > > I have set the mbufs to very high numbers (322144) to buy more time between > lockups/crashes. Most often the systems stay functional,

zpool on a zvol inside zpool

2013-07-22 Thread Eugene M. Zheganin
Hi. I'm moving some of my geli installation to a new machine. On an old machine it was running UFS. I use ZFS on a new machine, but I don't have an encrypted main pool (and I don't want to), so I'm kinda considering a way where I will make a zpool on a zvol encrypted by geli. Would it be completel
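(The mechanics of the layering being asked about look roughly like the sketch below; the pool, zvol and child-pool names are hypothetical, and whether a pool-on-zvol-on-pool setup is a good idea is exactly the question of this thread, not something the sketch answers.)
===Cut===
# Sketch only: a geli-encrypted pool living on a zvol of the main pool.
zfs create -V 100G tank/encvol
geli init -s 4096 /dev/zvol/tank/encvol       # prompts for a passphrase
geli attach /dev/zvol/tank/encvol
zpool create secret /dev/zvol/tank/encvol.eli
===Cut===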

ipsec broken again

2015-07-14 Thread Eugene M. Zheganin
Hi. As soon as I upgraded one of my ipsec routers to a recent stable (10.2-BETA1 #0 r285524) it stopped working as a security gateway. IPsec traffic goes out and comes in, SAs are in place, but nothing happens upon receipt (I run gre over ipsec, so the gre interface doesn't see any incoming pack

Re: ipsec broken again

2015-07-15 Thread Eugene M. Zheganin
Hi. On 15.07.2015 09:44, Glen Barber wrote: > On Wed, Jul 15, 2015 at 09:42:05AM +0500, Eugene M. Zheganin wrote: >> As soon as I upgraded one of my ipsec routers to recent stable >> (10.2-BETA1 #0 r285524) it stopped working as a security gateway. Ipsec >> traffic is passed

zfs dataset and hanging getdirentries() on it

2015-07-15 Thread Eugene M. Zheganin
Hi. I have a funny zfs dataset on a recent stable whose listing takes several minutes. At the same time: - it runs the last available pool version with all available feature flags enabled (however, the problem manifested itself on a one year old stable) - it's an old /tmp directory - it used to hold

ipsec on recent STABLE

2015-08-19 Thread Eugene M. Zheganin
Hi. Recently I built an i386 nanobsd image from a recent STABLE, r285595M (seems like some patch also wasn't overwritten correctly in my tree), and I cannot get it to work. At the same time, the same revision on amd64 works fine. Symptoms are - nanobsd sends traffic just fine, it's seen on the remote e

Re: [POSSIBLE BUG] 10-STABLE CARP erroneously becomes master on boot

2015-08-21 Thread Eugene M. Zheganin
Hi. On 20.08.2015 14:51, Damien Fleuriot wrote: > > Hello list, > > > > We've managed to find the source of the bug, if it is indeed a bug. > > It all comes down to the order in which the IP addresses are assigned to > the interface from /etc/rc.conf. > > > When using the following syntax, the phy

when the sshd hits the fan

2015-09-23 Thread Eugene M. Zheganin
Hi. I'm trying to understand why sshd still starts after local daemons, out of the box, and what it takes to make this extremely vital service start before the non-system (local) ones. I bet I'm not the first one to ask, so why isn't this already done? It seems quite easy to me. Eugene.
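(A quick, hedged way to see where sshd actually lands relative to the local scripts is to let rcorder(8) compute the order the same way rc(8) does; the grep pattern below is just an example.)
===Cut===
# Show the computed boot order; sshd vs. scripts installed under /usr/local
rcorder /etc/rc.d/* /usr/local/etc/rc.d/* | grep -nE 'sshd|/usr/local/'
===Cut===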

Re: when the sshd hits the fan

2015-09-23 Thread Eugene M. Zheganin
Hi. On 23.09.2015 15:11, Miroslav Lachman wrote: > Eugene M. Zheganin wrote on 09/23/2015 10:44: >> Hi. >> >> I'm trying to understand why the sshd still starts after local daemons, >> out-of-the-box, and what it takes to make this extremely vital service >

Re: when the sshd hits the fan

2015-09-23 Thread Eugene M. Zheganin
Hi. On 23.09.2015 18:32, Dag-Erling Smørgrav wrote: "Eugene M. Zheganin" writes: I'm trying to understand why the sshd still starts after local daemons, out-of-the-box, and what it takes to make this extremely vital service to start before non-system (local) ones. I bet I'

Re: when the sshd hits the fan

2015-09-23 Thread Eugene M. Zheganin
Hi. On 23.09.2015 20:35, Glenn English wrote: Mildly OT from a profound BSD noob: Why is it necessary to have SSH working before the system has finished booting? That 'Welcome' menu times out, so I can't think of a reason, or find one from Google, for needing console access after a power fail

Re: ZFS: Can't find pool by guid

2018-10-24 Thread Eugene M. Zheganin
Hello. On 28.04.2018 17:46, Willem Jan Withagen wrote: Hi, I upgraded a server from 10.4 to 11.1 and now all of a sudden the server complains about: ZFS: Can't find pool by guid And I end up in the boot prompt: lsdev gives disk0 with on p1 the partition that the zroot is/was. This is an a

plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, I have a recent FreeBSD 11-STABLE which is mainly used as an iSCSI target. The system has 64G of RAM but is swapping intensively. Yup, about half of the memory is used as ZFS ARC (it isn't capped in loader.conf), and the other half is eaten by the kernel, but it only uses about half

Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, On 20.11.2018 15:12, Trond Endrestøl wrote: On freebsd-hackers the other day, https://lists.freebsd.org/pipermail/freebsd-hackers/2018-November/053575.html, it was suggested to set vm.pageout_update_period=0. This sysctl is at 600 initially. ZFS' ARC needs to be capped, otherwise it will
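(The two knobs discussed here, written out; the ARC cap value is purely an example, and the vm.pageout_update_period suggestion comes from the freebsd-hackers thread linked above.)
===Cut===
# /boot/loader.conf -- cap the ARC so it cannot crowd out everything else
vfs.zfs.arc_max="32G"

# /etc/sysctl.conf -- the setting suggested on freebsd-hackers
vm.pageout_update_period=0
===Cut===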

Re: plenty of memory, but system is intensively swapping

2018-11-20 Thread Eugene M. Zheganin
Hello, On 20.11.2018 16:22, Trond Endrestøl wrote: I know others have created a daemon that observe the ARC and the amount of wired and free memory, and when these values exceed some threshold, the daemon will allocate a number of gigabytes, writing zero to the first byte or word of every page,

Re: Where is my memory on 'fresh' 11-STABLE? It should be used by ARC, but it is not used for it anymore.

2018-11-20 Thread Eugene M. Zheganin
Hello, 20.11.2018 15:42, Lev Serebryakov wrote: I have a server which is mostly a torrent box. It uses ZFS and is equipped with 16GiB of physical memory. It is running 11-STABLE (r339914 now). I've updated it to r339914 from some 11.1-STABLE revision 3 weeks ago. I used to see 13-14GiB of m

11-STABLE, gstat and swap: uneven mirror disk usage

2018-11-23 Thread Eugene M. Zheganin
Hello, Am I right in concluding that there's something wrong either in how FreeBSD works with the swap partition, or in how gstat reports its activity? Because on a consistently working mirror the situation when only one disk member is used and the other is not, for both reads and writes, just ca

ipsec/gif(4) tunnel not working: traffic not appearing on the gif(4) interface after deciphering

2019-03-26 Thread Eugene M. Zheganin
Hello, I have a FreeBSD 11.1 box with 2 public IPs that has two tunnels to another FreeBSD box with 1 public IP. One of these tunnels is working, the other isn't. Long story short: I have some experience in ipsec tunnel setup, and I supposed that I had configured everything properly, and to

zfs receive -s: transfer got interrupted, but no token on the receiving side.

2020-05-27 Thread Eugene M. Zheganin
Hello, I have a ZFS dataset of about 10T of actual size (maybe more) that I need to send over a very laggy connection. So I'm sending it from a shell script that reattempts the send after a short timeout, retrieving the send token first. Like this: ===Cut=== #!/bin/sh exitstatus=1 token
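(A minimal sketch of such a retry loop, with hypothetical dataset and host names; it assumes the receiving side uses zfs receive -s, so that an interrupted transfer leaves a receive_resume_token behind that zfs send -t can continue from.)
===Cut===
#!/bin/sh
# Sketch: keep retrying a resumable zfs send until it completes.
src="tank/bigdata@snap1"          # hypothetical source snapshot
dsthost="backup"                  # hypothetical receiving host
dstds="backup/bigdata"            # hypothetical destination dataset
exitstatus=1
while [ "$exitstatus" -ne 0 ]; do
    token=$(ssh "$dsthost" zfs get -H -o value receive_resume_token "$dstds")
    if [ -n "$token" ] && [ "$token" != "-" ]; then
        # resume the interrupted transfer from the token
        zfs send -t "$token" | ssh "$dsthost" zfs receive -s "$dstds"
    else
        # no token yet: start from the beginning
        zfs send "$src" | ssh "$dsthost" zfs receive -s "$dstds"
    fi
    exitstatus=$?
    [ "$exitstatus" -ne 0 ] && sleep 60
done
===Cut===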

CARP under Hyper-V: weird things happen

2020-05-31 Thread Eugene M. Zheganin
Hello, I'm running 12.0-REL in a VM under W2016S with CARP enabled and paired to a baremetal FreeBSD server. All of a sudden I realized that this machine is unable to become a CARP MASTER - because it sees its own CARP announces, but instead of seeing them from a CARP synthetic MAC address

running out of ports: every client port is used only once in outgoing connection

2020-08-27 Thread Eugene M. Zheganin
Hello, I have a situation where I'm running out of client ports on a huge reverse-proxy. Say I have an nginx upstream like this: upstream geoplatform {     hash $hashkey consistent;     server 127.0.0.1:4079 fail_timeout=10s;     server 127.0.0.1:4080 fail_timeout=10s;     se
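(Before touching nginx itself, the usual FreeBSD-side knob to check is the ephemeral port range; the sysctl names below are the stock ones and the values are examples only. Widening the range buys more 4-tuples per upstream address, but it does not change the one-port-per-connection behaviour this thread is about.)
===Cut===
# Inspect the ephemeral (client) port range used for outgoing connections
sysctl net.inet.ip.portrange.first net.inet.ip.portrange.last
# Widen it at runtime (example value)
sysctl net.inet.ip.portrange.first=10000
===Cut===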

Re: running out of ports: every client port is used only once in outgoing connection

2020-08-27 Thread Eugene M. Zheganin
Hello, 27.08.2020 23:01, Eugene M. Zheganin wrote: And as soon as I switch to it from DNS RR I start to get "Can't assign outgoing address when connecting to ...". The usual approach would be to assign multiple IP aliases to the destination backen

spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when a large number (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there any tunables that could help mitigate this? Once I took part i

Re: spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
On 09.09.2020 17:29, Eugene M. Zheganin wrote: Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when a large number (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there an

pf and hnX interfaces

2020-10-13 Thread Eugene M. Zheganin
Hello, I'm running a FreeBSD 12.1 server as a VM under Hyper-V. And although this letter will give the impression of another lame post blaming FreeBSD for all of the issues when the author should blame himself, I'm atm out of any other explanation. The thing is: I'm getting loads of sendmail er

Re: pf and hnX interfaces

2020-10-13 Thread Eugene M. Zheganin
Hello, On 13.10.2020 14:19, Kristof Provost wrote: Are these symptoms of a bug ? Perhaps. It can also be a symptom of resource exhaustion. Are there any signs of memory allocation failures, or incrementing error counters (in netstat or in pfctl)? Well, the only signs of resource exhausti
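(The checks suggested here boil down to a few read-only commands; a sketch, with nothing Hyper-V-specific about it.)
===Cut===
# Look for resource exhaustion: pf state/memory counters and mbuf usage
pfctl -si                # state table counters and error counters
pfctl -sm                # current pf memory limits
netstat -m               # mbuf/cluster usage and denied requests
vmstat -z | grep -i pf   # pf UMA zones (requests/failures)
===Cut===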

FreeBSD 12.x, virtio and alicloud (aliyun.com)

2020-11-04 Thread Eugene M. Zheganin
Hello, Guys, does anyone have a VM running at AliCloud, the Chinese provider (one of the biggest, if not the biggest)? They seem to provide stock FreeBSD 11.x images on some RedHat-based Linux with VirtIO which run just fine (at least I took a look at their kernel and it seems to be a stock GEN

Re: FreeBSD 12.x, virtio and alicloud (aliyun.com)

2020-11-05 Thread Eugene M. Zheganin
below: https://enazadev.ru/stub-data/freebsd12-patched-trap.png 05.11.2020 11:06, Cevin wrote: The problem seems to have been fixed, but the code is still in the review. For more details, see https://reviews.freebsd.org/D26915#601420 Eugene M. Zheganin wrote on Thu, Nov 5, 2020 at 12:35 PM: Hello, Guys,

unable to boot a healthy zfs pool: all block copies unavailable

2015-11-05 Thread Eugene M. Zheganin
Hi. Today one of my zfs pool disks died; I was unable to change it on the fly (the video board was blocking it), so I powered off, changed the disk (not in the root pool) and all of a sudden realized that I cannot boot: ZFS: i/o error - all block copies unavailable ZFS: can't read MOS of pool zroot gptzfsbo

Re: unable to boot a healthy zfs pool: all block copies unavailable

2015-11-05 Thread Eugene M. Zheganin
Hi. On 06.11.2015 02:58, Andriy Gapon wrote: > > It could be that your BIOS is not able to read past 1TB (512 * INT_MAX). That > seems to be a rather common problem for consumer motherboards. > Here is an example of how it looked for me: > https://people.freebsd.org/~avg/IMAG1099.jpg > Fortunately

Re: unable to boot a healthy zfs pool: all block copies unavailable

2015-11-08 Thread Eugene M. Zheganin
Hi. On 06.11.2015 21:00, Alan Somers wrote: > I notice that my 10.2-RELEASE VM prints the same message about "all > block copies unavailable" and then continues to boot just fine. So I > wonder if that part is just red herring. There is another possibility > here: I have seen a bug where ZFS att

zfs, mc, mcview and files opening

2015-11-10 Thread Eugene M. Zheganin
Hi. My midnight commander is terribly slow at viewing files with mcview. Opening a file of approximately 10 megabytes takes about 30-40 seconds. This isn't related to the compression setting, %busy wait or pool properties - I tested on various machines, and it's fully reproducible. This lag appea

Re: zfs, mc, mcview and files opening

2015-11-10 Thread Eugene M. Zheganin
Hi, on 10.11.2015 15:05, Trond Endrestøl wrote: > I blame file(1), it's hopelessly slow. mcview uses file(1) to deduce > if it should just display the damn file or run the file through some > filter. Maybe an option in mc/mcview to disable the use of file(1) is > an acceptable compromise. Yeah,

Re: zfs, mc, mcview and files opening

2015-11-10 Thread Eugene M. Zheganin
Hi, on 10.11.2015 15:29, Trond Endrestøl wrote: > > A workaround is to navigate to the file you wish to view. Hit M-! or > ESC ! to activate the filter command. Hit the Home key, type in c a t > followed by a space and hit enter. > My guys just told me this whole file(2)/libmagic issue is fixed i

poudriere and chflags

2015-11-13 Thread Eugene M. Zheganin
Hi. I'm trying to build a tree and ports for a Raspberry Pi; I'm doing it on a FreeBSD 10.2-STABLE with poudriere and qemu. During the jail setup I receive an error: [...] --- configlexer.l --- cp -p /usr/local/poudriere/jails/freebsd-10-2-stable-armv6/usr/src/lib/libunbound/../../contrib/unbound/

high disk %busy, while almost nothing happens

2015-11-26 Thread Eugene M. Zheganin
Hi. I'm using FreeBSD 10.1-STABLE as an application server; last week I noticed that the disks are always busy while gstat shows that the activity measured in iops/reads/writes is low, from my point of view: L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name 8 56 50

Re: high disk %busy, while almost nothing happens

2015-11-26 Thread Eugene M. Zheganin
Hi. On 26.11.2015 14:19, Eugene M. Zheganin wrote: > Hi. > > I'm using FreeBSD 10.1-STABLE as an application server, last week I've > noticed that disks are always busy while gstat shows that the activity > measured in iops/reads/writes is low, form my point of view: &

Re: high disk %busy, while almost nothing happens

2015-11-26 Thread Eugene M. Zheganin
Hi. On 27.11.2015 01:37, Ivan Klymenko wrote: > On Thu, 26 Nov 2015 14:19:18 +0500 > "Eugene M. Zheganin" wrote: > >> Hi. >> >> I'm using FreeBSD 10.1-STABLE as an application server, last week I've >> noticed that disks are always busy wh

FreeBSD and UDF

2016-02-25 Thread Eugene M. Zheganin
Hi, recently I needed to mount the Windows 2012 R2 iso image, which happened to be a UDF image. After mdconfig'ing it and attempting to mount I got: # mount -t udf /dev/md1 cdrom01 mount_udf: /dev/md1: Invalid argument udf is in the kernel. Is the UDF filesystem supported in FreeBSD? I run 10.3-PRERELE

booting from separate zfs pool

2016-04-28 Thread Eugene M. Zheganin
Hi. So, I'm still struggling with my problem where I cannot boot from a big 2T zfs pool (I wrote some messages about a year ago; the whole story is too long and irrelevant to retell, I'll only note that I took the path where I'm about to boot from a separate zfs pool closer to the begin

HAST, zfs and local mirroring

2016-05-31 Thread Eugene M. Zheganin
Hi. I want to start using HAST; I have two nodes and a pair of disks on each node. So I want to use HAST in an environment where each HAST resource would be mirrored. What is the preferred approach if I want to use ZFS on the end-device to avoid excessive fscking, and, at the same time, I want t

Re: HAST, zfs and local mirroring

2016-06-01 Thread Eugene M. Zheganin
Hi. On 01.06.16 02:49, Freddie Cash wrote: > On Tue, May 31, 2016 at 11:18 AM, Eugene M. Zheganin > mailto:e...@norma.perm.ru>>wrote: > > I wat to start using HAST, I have two nodes and a pair of disk on > each node. So I want to use HASt in an environment where eac

Re: HAST, zfs and local mirroring

2016-06-02 Thread Eugene M. Zheganin
Hi. On 01.06.16 18:23, Slawa Olhovchenkov wrote: > > Only FS support changed data bypass FS layer is Files-11 ODS-2 level, > may be hardware support required. > > Can you use ZFS mirror with one vdev local and other vdev by iSCSI? > Every node using separate ZFS pool in this case. If you mean that

vt(4) and HP iLO

2016-06-02 Thread Eugene M. Zheganin
Hi. Recently I've updated some of my HP DL G6 servers to the vt(4) console driver. These servers are equipped with the lo100i management device (basically it mimics IPMI). After some time I connected to its web interface and opened the Virtual KVM tab. Then I saw this: http://static.enaza.ru/userup

Re: HAST, zfs and local mirroring

2016-06-03 Thread Eugene M. Zheganin
Hi. On 02.06.16 19:50, Slawa Olhovchenkov wrote: > I am suggesting next setup: > > node0: > own pool zroot0: mirror-0: local_disk0 >remote-iscsi_disk1/1 > local_disk1: exported by iscsi as remote-iscsi_disk0/1 to node1 > > node1: > own pool zroot1: mirror-

Re: HAST, zfs and local mirroring

2016-06-03 Thread Eugene M. Zheganin
Hi, On 02.06.16 19:50, Slawa Olhovchenkov wrote: > > I am suggesting next setup: > > node0: > own pool zroot0: mirror-0: local_disk0 >remote-iscsi_disk1/1 > local_disk1: exported by iscsi as remote-iscsi_disk0/1 to node1 > > node1: > own pool zroot1: mirro

cannot delete on-interface route in FIB

2016-06-08 Thread Eugene M. Zheganin
Hi. (The first part of the message describes why I need this, so impatient people can proceed to the 'setfib 2 route delete' part directly). I have a FreeBSD router connected to the ISP network, which is organized according to rfc3069 (you know, when all of the clients think they have a /24, but

Re: cannot delete on-interface route in FIB

2016-06-08 Thread Eugene M. Zheganin
Hi. On 08.06.2016 19:37, Alan Somers wrote: What is the value of "sysctl net.add_addr_allfibs"? In your case, it sounds like you want to set it to 0. Thanks a lot, looks like it, will try. Eugene.
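(For the archive: applying it looks like the line below. Depending on the branch, net.add_addr_allfibs may be a boot-time tunable rather than a runtime sysctl, so putting it in /boot/loader.conf is the safer option.)
===Cut===
# /boot/loader.conf -- stop copying new interface routes into every FIB
net.add_addr_allfibs=0
===Cut===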

FreeBSD as iSCSI target and VMWare ESX

2016-06-20 Thread Eugene M. Zheganin
Hi. Guys, does someone have experience with multiple LUNs on a target in ctld? Recently I was installing ESX on a bunch of diskless hosts connected to FreeBSD ctld. I was organizing them inside one target with multiple LUNs. As soon as the _count_ of LUNs went over 9, the whole thing went craz
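(For context, a multi-LUN target in ctl.conf(5) looks roughly like the sketch below; the IQN, pool and zvol names are hypothetical. The alternative layout, which sidesteps the LUN-count question entirely, is one target per diskless host with a single LUN each.)
===Cut===
target iqn.2016-06.example.san:esxboot {
        auth-group no-authentication    # illustration only; use CHAP in production
        portal-group default
        lun 0 {
                path /dev/zvol/tank/esx-host0
        }
        lun 1 {
                path /dev/zvol/tank/esx-host1
        }
}
===Cut===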

freebsd loses cd-drive during installation

2016-07-20 Thread Eugene M. Zheganin
Hi, Is there some hack to install FreeBSD when it loses the cd-drive during installation? I have a couple of old Sun Fire X2270s without a service contract, so I'd like to install FreeBSD onto them, but the thing is that the BIOS does see the CD-drive, and the kernel doesn't. Linux does boot and in

mrsas(4)

2016-07-29 Thread Eugene M. Zheganin
Hi. I'm experiencing some weird trouble with an LSI MegaRAID SAS9341-4i controller: sometimes, all of a sudden, it reports that a disk was reattached. I'm using zfs and a redundant pool, so, besides the fact that it's bad by itself, everything should continue to work, but instead my FreeBSD s

Re: mrsas(4)

2016-08-03 Thread Eugene M. Zheganin
Hi. On 29.07.2016 20:14, Mike Tancsa wrote: On 7/29/2016 9:41 AM, Eugene M. Zheganin wrote: Hi. I'm experiencing some weird troubles with an LSI MegaRAID SAS9341-4i - has anyone seen anything similar on mrsas ? I'm kinda open to the ideas. To be honest, I suspect the controll

FreeBSD doesn't boot automatically from UEFI

2016-08-14 Thread Eugene M. Zheganin
Hi. Recently I've installed FreeBSD 10.3, as an experiment to get a UEFI-enabled system, on one of my Supermicro servers. The installer worked just fine, but after installation I got myself a problem: it cannot boot. If I select to boot the old way, from a disk, the BIOS says there's no valid

cannot destroy '': dataset is busy vs iSCSI

2016-08-18 Thread Eugene M. Zheganin
Hi. I'm using zvol clones with iSCSI. Periodically I renew them and destroy the old ones, but sometimes a clone gets stuck and refuses to be destroyed: (I'm showing the full sequence so it's self-explanatory who is whose parent) [root@san2:/etc]# zfs destroy esx/games-reference1@ver5_6

zfs/raidz and creation pause/blocking

2016-09-22 Thread Eugene M. Zheganin
Hi. Recently I spent a lot of time setting up various zfs installations, and I have a question. Often when creating a raidz on considerably big disks (>~ 1T) I see weird stuff: "zpool create" blocks and waits for several minutes. At the same time the system is fully responsive and I can see in

zvol clone diffs

2016-09-22 Thread Eugene M. Zheganin
Hi. I should mention from the start that this is a question about an engineering task, not a question about FreeBSD issue. I have a set of zvol clones that I redistribute over iSCSI. Several Windows VMs use these clones as disks via their embedded iSCSI initiators (each clone represents a disk wi

zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. FreeBSD 11.0-RC1 r303979, zfs raidz1: ===Cut=== # zpool status gamestop pool: gamestop state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM gamestop ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 da

Re: zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. On 16.10.2016 22:06, Alan Somers wrote: It's raw size, but the discrepancy is between 1000 and 1024. Smartctl is reporting base 10 size, but zpool is reporting base 1024.. 960197124096.0*6/1024**4 = 5.24 TB, which is pretty close to what zpool says. Thanks ! It does explain it. But then aga
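(The same check, spelled out so the two unit systems sit side by side; the per-disk byte count is taken from the quoted message.)
===Cut===
# Per-disk raw size reported by smartctl, times six disks:
echo 'scale=2; 960197124096 * 6 / 1024^4' | bc   # ~5.23 TiB (binary), close to what zpool reports
echo 'scale=2; 960197124096 * 6 / 1000^4' | bc   # ~5.76 TB (decimal), the label/marketing size
===Cut===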

Re: zfs/raidz: seems like I'm failing with math

2016-10-16 Thread Eugene M. Zheganin
Hi. On 16.10.2016 23:42, Gary Palmer wrote: You're confusing disk manufacturer gigabytes with real (power of two) gigabytes. The below turns 960 197 124 096 into real gigabytes Yup, I thought that smartctl is better than that and already displayed the size with base 1024. :) Thanks. Eugene

Re: I'm upset about FreeBSD

2016-10-16 Thread Eugene M. Zheganin
Hi. On 17.10.2016 5:44, Rostislav Krasny wrote: Hi, I've been using FreeBSD for many years. Not as my main operating system, though. But anyway several bugs and patches were contributed and somebody even added my name into the additional contributors list. That's pleasing but today I tried to i

zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. I have FreeBSD 10.2-STABLE r289293 (but I have observed this situation on different releases) and zfs. I also have one directory that used to hold a lot of (tens of thousands of) files. It surely takes a lot of time to get a listing of it. But now I have 2 files and a couple of dozen direc

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 18:54, Nicolas Gilles wrote: Looks like it's not taking up any processing time, so my guess is the lag probably comes from stalled I/O ... bad disk? Well, I cannot rule this out completely, but I first saw this lag on this particular server about two months ago, and I

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 19:03, Miroslav Lachman wrote: What about snapshots? Are there any snapshots on this filesystem? Nope. # zfs list -t all NAME USED AVAIL REFER MOUNTPOINT zroot 245G 201G 1.17G legacy zroot/tmp 10.1M 201G

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi, On 20.10.2016 19:12, Pete French wrote: I have ignored this thread until now, but I observed the same behaviour on my systems over the last week or so. In my case it's an exim spool directory, which was hugely full at some point (thousands of files) and now takes an awfully long time to open an

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 19:18, Dr. Nikolaus Klepp wrote: I have the same issue, but only if the ZFS resides on an LSI MegaRaid with one RAID0 for each disk. Not in my case, both pool disks are attached to the Intel ICH7 SATA300 controller. Thanks. Eugene.

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-20 Thread Eugene M. Zheganin
Hi. On 20.10.2016 21:17, Steven Hartland wrote: Do you have atime enabled for the relevant volume? I do. If so disable it and see if that helps: zfs set atime=off Nah, it doesn't help at all. Thanks. Eugene.

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin
Hi. On 21.10.2016 9:22, Steven Hartland wrote: On 21/10/2016 04:52, Eugene M. Zheganin wrote: Hi. On 20.10.2016 21:17, Steven Hartland wrote: Do you have atime enabled for the relevant volume? I do. If so disable it and see if that helps: zfs set atime=off Nah, it doesn't help a

Re: zfs, a directory that used to hold lot of files and listing pause

2016-10-21 Thread Eugene M. Zheganin
Hi. On 21.10.2016 15:20, Slawa Olhovchenkov wrote: ZFS prefetch affects performance depending on the workload (independent of RAM size): some workloads win, some lose (for my workload prefetch loses and is manually disabled, with 128GB RAM). Anyway, this system has only 24MB in ARC by
