Pkg dependency strangeness

2018-12-26 Thread Walter Parker
I've just upgraded an existing FreeBSD 11.1 system with php56 to FreeBSD
11.2 and php72.

In order to do this, I used a mix of ports and packages to delete php56 and
all of the php56 extensions and replace them with php72 and php72
extensions. Everything is working now, but when I try to install anything
using pkg, it wants to reinstall php56 and several php56 extensions. This
happens for packages that don't have php56 as a dependency.

For example
 pkg install alpine

gives
The following 11 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
alpine: 2.21.
php56-session: 5.6.39
php56: 5.6.39
php56-xmlrpc: 5.6.39
php56-xml: 5.6.39
postgresql95-client: 9.5.15_2
php56-pgsql: 5.6.39_1
php56-mbstring: 5.6.39
php56-json: 5.6.39
php56-pdo_pgsql: 5.6.39_1
php56-pdo: 5.6.39

Number of packages to be installed: 11

How do I tell pkg that I don't want php56 reinstalled?
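One way to see what is dragging php56 back in is to ask pkg directly; a hedged sketch (output and package names will vary from system to system):

```shell
# Show every installed package together with its registered dependencies,
# then filter for anything still pointing at php56.
pkg query -a '%n -> %dn' | grep php56

# Cross-check the local package database for missing dependencies
# (it will offer to install anything it finds missing).
pkg check -d
```

Any package that shows up here with a php56 dependency is a candidate for rebuilding against php72 or removing.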


Thank you,


Walter

-- 
The greatest dangers to liberty lurk in insidious encroachment by men of
zeal, well-meaning but without understanding.   -- Justice Louis D. Brandeis
___
freebsd-stable@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"


Re: Pkg dependency strangeness

2018-12-27 Thread Walter Parker
Hi Matthew,

I ran pkg upgrade -n

New packages to be INSTALLED:
xorgproto: 2018.4
wayland: 1.16.0
libepoll-shim: 0.0.20180530
libXtst: 1.2.3_2
postgresql95-client: 9.5.15_2
mysql56-client: 5.6.42_1
php72-pdo_sqlite: 7.2.13
(lots of p5-s)

Installed packages to be UPGRADED:
(lots of py36-s)
(lots of p5-s)
dspam: 3.10.2_4 -> 3.10.2_5
dovecot-pigeonhole: 0.4.21_1 -> 0.5.4_1
dovecot: 2.2.33.2_4 -> 2.3.4_3

Installed packages to be REINSTALLED:
roundcube-php72-1.3.8_1,1 (options changed)
php72-pgsql-7.2.13_1 (direct dependency changed:
postgresql95-client)
php72-pdo_pgsql-7.2.13_1 (direct dependency changed:
postgresql95-client)
php72-openssl-7.2.13 (direct dependency changed: php72)

Note, the system is currently running postgresql96. I used portmaster to
install the php72-pgsql & php72-pdo_pgsql ports and to install
postgresql96-client and server.

Ran pkg version -vRL=
Found several orphaned packages; most seem to be because I removed the
xproto packages per the UPDATING instructions
ImageMagick-nox11-6.9.9.28,1   ?   orphaned: graphics/ImageMagick-nox11
bigreqsproto-1.1.2 ?   orphaned: x11/bigreqsproto
Also, my X appears to be out of date (the system runs headless, so that is
a lower concern)
libXext-1.3.3_1,1  <   needs updating (remote has 1.3.3_3,1)
libXfixes-5.0.3<   needs updating (remote has 5.0.3_2)
libXfont-1.5.2,2   <   needs updating (remote has 1.5.4_2,2)
libXft-2.3.2_1 <   needs updating (remote has 2.3.2_3)
lha-1.14i_7?   orphaned: archivers/lha
p5-Net-SMTP-SSL-1.04   ?   orphaned: mail/p5-Net-SMTP-SSL
pecl-imagick-3.4.3_2   ?   orphaned: graphics/pecl-imagick
pecl-memcached2-2.2.0_5?   orphaned: databases/pecl-memcached2
roundcube-sieverules-2.1.2,1   ?   orphaned: mail/roundcube-sieverules

When I run pkg install phppgadmin-php72, it tells me that I'm on the most
recent version of the package. So I removed it and tried a reinstall

[root@natasha /usr/ports/mail/postfixadmin]# pkg install phppgadmin-php72
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
The following 7 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
phppgadmin-php72: 5.1_5
php56-session: 5.6.39
php56: 5.6.39
postgresql95-client: 9.5.15_2
php56-json: 5.6.39
php56-pdo_pgsql: 5.6.39_1
php56-pdo: 5.6.39

Number of packages to be installed: 7

I stopped this and did a portmaster -B -D databases/phppgadmin and it
installed:
===>>> All dependencies are up to date

===>  Cleaning for phppgadmin-php72-5.1_5
===>  Cleaning for phppgadmin-php56-5.1_5
===>  Cleaning for phppgadmin-php71-5.1_5
===>  Cleaning for phppgadmin-php73-5.1_5
# you can customize the installation directory
# by setting PGADMDIR in /etc/make.conf
===>  License GPLv2 accepted by the user
===>   phppgadmin-php72-5.1_5 depends on file: /usr/local/sbin/pkg - found
===> Fetching all distfiles required by phppgadmin-php72-5.1_5 for building
===>  Extracting for phppgadmin-php72-5.1_5
=> SHA256 Checksum OK for phpPgAdmin-5.1.tar.bz2.
===>  Patching for phppgadmin-php72-5.1_5


Thank you,


Walter

On Thu, Dec 27, 2018 at 5:08 AM Matthew Seaman  wrote:

> On 27/12/2018 01:45, Walter Parker wrote:
> > I've just upgraded an existing FreeBSD 11.1 system with php56 to FreeBSD
> > 11.2 and php72.
> >
> > In order to do this, I used a mix of ports and packages to delete php56
> and
> > all of the php56 extensions and replace them with php72 and php72
> > extensions. Everything is working now, but when I try to install anything
> > using pkg, it wants to reinstall php56 and serveral php56 extensions.
> This
> > happens for packages that don't have php56 as a dependency.
> >
> > For example
> >  pkg install alpine
> >
>
> As far as I can tell, this is nothing to do with alpine itself, which
> doesn't seem to depend on PHP at all.  My guess is that you have some
> other PHP-based application installed which has an unmet dependency on
> php56.
>
> If you run 'pkg upgrade -n' without trying to install anything new, what
> happens?  Also try 'pkg version -vRL=' Do you have any orphaned packages?
>
> See if you can work out what it is you've installed that wants to pull
> in php56.  Judging by the dependencies in your original post, it could
> be something like databases/phppgadmin or a similar PHP application that
> uses a postgresql back-end. I'm going to use phppgadmin as the example
> package in what follows: you should substitute the actual package or
> p

Re: Pkg dependency strangeness

2018-12-27 Thread Walter Parker
I appear to have found a solution:

I used pkg check -d and it said

pecl-memcached2 has a missing dependency: php56-session
pecl-memcached2 has a missing dependency: php56-json
policyd2 has a missing dependency: php56-pdo_pgsql


php56-session dependency failed to be fixed
php56-json dependency failed to be fixed
php56-pdo_pgsql dependency failed to be fixed

So I used pkg remove to delete pecl-memcached2 and policyd2, and now
pkg appears to work as expected:

[root@natasha /usr/local/etc/postfix]# pkg install alpine

New packages to be INSTALLED:
alpine: 2.21.

Number of packages to be installed: 1

The process will require 8 MiB more space.
2 MiB to be downloaded.





Boot from one drive and load FreeBSD from another

2019-01-11 Thread Walter Parker
Hi,

I'd like to boot FreeBSD 12 on a system where the OS is installed to a ZFS
pool that the BIOS can't boot from.

This is a pre-UEFI machine. It has a pair of SAS drives and 3 PCIe slots.
What I'd like to do is put the boot loader on the SAS drive and then have
FreeBSD load from a ZFS mirror created using 2 nvme SSD drives on PCIe to
M.2 adapter cards. The BIOS is old enough that it will not boot from a PCIe
card.

If I create a FreeBSD-boot partition on the SAS drive and a FreeBSD-zfs
partition on the ZFS mirror, will the boot partition loader automatically
find the ZFS pool? If not, is there anything special I can do to force a
boot?

Second, if I want to do this on a second machine that does have UEFI, can I
do the same thing? This time, I think what I would do is put a UEFI boot
partition on the SAS drive and have it find the FreeBSD-zfs partition on
the ZFS mirror.
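For the legacy-BIOS machine, a hedged sketch of the usual arrangement (the device name is illustrative, not from your system): gptzfsboot installed on the SAS drive scans all disks the firmware exposes for a bootable ZFS pool, so whether it can find the NVMe mirror depends on the BIOS being able to read those disks at all.

```shell
# Put the boot chain on the BIOS-visible SAS drive (da0 is illustrative).
gpart create -s gpt da0
gpart add -t freebsd-boot -s 512k da0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
```

On the UEFI machine the equivalent would be an efi partition on the SAS drive carrying loader.efi, which likewise probes for the ZFS pool; the same firmware-visibility caveat applies to the NVMe adapters.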


Thank you,


Walter



Re: ZFS...

2019-04-30 Thread Walter Parker
On Tue, Apr 30, 2019 at 8:19 PM Michelle Sullivan 
wrote:

>
>
> Michelle Sullivan
> http://www.mhix.org/
> Sent from my iPad
>
> > On 01 May 2019, at 12:37, Karl Denninger  wrote:
> >
> > On 4/30/2019 20:59, Michelle Sullivan wrote
> >>> On 01 May 2019, at 11:33, Karl Denninger  wrote:
> >>>
>  On 4/30/2019 19:14, Michelle Sullivan wrote:
> 
>  Michelle Sullivan
>  http://www.mhix.org/
>  Sent from my iPad
> 
> >>> Nope.  I'd much rather *know* the data is corrupt and be forced to
> >>> restore from backups than to have SILENT corruption occur and perhaps
> >>> screw me 10 years down the road when the odds are my backups have
> >>> long-since been recycled.
> >> Ahh yes the be all and end all of ZFS.. stops the silent corruption of
> data.. but don’t install it on anything unless it’s server grade with
> backups and ECC RAM, but it’s good on laptops because it protects you from
> silent corruption of your data when 10 years later the backups have
> long-since been recycled...  umm is that not a circular argument?
> >>
>
ZFS works fine on systems without ECC. According to one of the original Sun
authors of ZFS, the scrub of death is a myth. A non-ECC system running ZFS
is no more risky than one running UFS.

As far as backups go, you should have backups for your important data. This
is true for any filesystem. Home users have been using backups for decades.
For anybody that has important data, not having a backup is a false
economy. Odds are that most people who don't have backups can afford them
if they plan and allocate money for the task (8TB USB drives are now at
Costco for ~$140, so getting one should not be much of an issue). Tarsnap
is a great and cheap online backup system.

Silent data corruption is a thing. CERN tested for it 10-15 years ago on
brand-new, high-end production hardware: a test run across 3000 new servers
for one week found 147 silent data corruption errors on the server farm
(found due to ZFS error checking).


> >> Don’t get me wrong here.. and I know you (and some others are) zfs in
> the DC with 10s of thousands in redundant servers and/or backups to keep
> your critical data corruption free = good thing.
> >>
> >> ZFS on everything is what some say (because it prevents silent
> corruption) but then you have default policies to install it everywhere ..
> including hardware not equipped to function safely with it (in your own
> arguments) and yet it’s still good because it will still prevent silent
> corruption even though it relies on hardware that you can trust...  umm say
> what?
> >>


I run ZFS on embedded firewalls; it has worked fine for years. ZFS by
default is a good idea on systems that have ZFS built in (such as FreeBSD
and SmartOS). There are great things you can do with boot environments, and
volume management is much nicer. Don't let the blowhards who say ZFS should
only be run on server-grade (i.e. ECC) hardware spook you. It works as well
as UFS would on any system with at least 1GB of RAM (though I'd suggest
getting at least 2GB if you can't get 4GB); you just need to adjust a few
memory parameters.
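As a hedged illustration of the kind of memory parameters meant here (the values are examples for a 1-2 GB box, not tested recommendations), one might cap the ARC in /boot/loader.conf:

```shell
# /boot/loader.conf -- illustrative values for a low-memory machine
vfs.zfs.arc_max="512M"        # cap the ZFS ARC so it can't crowd out applications
vfs.zfs.prefetch_disable="1"  # optional: skip prefetch on memory-starved boxes
```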


> >> Anyhow veered way way off (the original) topic...
> >>
> >> Modest (part consumer grade, part commercial) suffered irreversible
> data loss because of a (very unusual, but not impossible) double power
> outage.. and no tools to recover the data (or part data) unless you have
> some form of backup because the file system deems the corruption to be too
> dangerous to let you access any of it (even the known good bits) ...
> >>
> >> Michelle
> >
> > IMHO you're dead wrong Michelle.  I respect your opinion but disagree
> > vehemently.
>
> I guess we’ll have to agree to disagree then, but I think your attitude to
> pronounce me “dead wrong” is short sighted, because it strikes of “I’m
> right because ZFS is the answer to all problems.” .. I’ve been around in
> the industry long enough to see a variety of issues... some disasters, some
> not so...
>
> I also should know better than to run without backups but financial
> constraints precluded me as will for many non commercial people.
>
> >
> > I run ZFS on both of my laptops under FreeBSD.  Both have
> > non-power-protected SSDs in them.  Neither is mirrored or Raidz-anything.
> >
> > So why run ZFS instead of UFS?
> >
> > Because a scrub will detect data corruption that UFS cannot detect *at
> all.*
>
> I get it, I really do, but that balances out against, if you can’t rebuild
> it make sure you have (tested and working) backups and be prepared for
> downtime when such corruption does occur.
>
> >
> > It is a balance-of-harms test and you choose.  I can make a very clean
> > argument that *greater information always wins*; that is, I prefer in
> > every case to *know* I'm screwed rather than not.  I can defend against
> > being screwed with some amount of diligence but in order for that
> > diligence to be reasonable I have to know about the screwing in a
> > reasonable amount of time after it happen

Re: ZFS...

2019-05-07 Thread Walter Parker
>
>
> Everytime I have seen this issue (and it's been more than once - though
> until now recoverable - even if extremely painful) - its always been
> during a resilver of a failed drive and something happening... panic,
> another drive failure, power etc.. any other time its rock solid...
> which is the yes and no... under normal circumstances zfs is very very
> good and seems as safe as or safer than UFS... but my experience is ZFS
> has one really bad flaw.. if there is a corruption in the metadata -
> even if the stored data is 100% correct - it will fault the pool and
> thats it it's gone barring some luck and painful recovery (backups
> aside) ... this other file systems also suffer but there are tools that
> *majority of the time* will get you out of the s**t with little pain.
> Barring this windows based tool I haven't been able to run yet, zfs
> appears to have nothing.
>
>
This is the difference I see here. You keep saying that all of the file
data on the drive is 100% correct and that only the metadata is
incorrect/corrupted. How do you know this? Especially, how do you know it
before you have recovered the data from the drive? ZFS metadata is stored
redundantly on the drive and never in an inconsistent form (fixing the
inconsistent metadata that most other filesystems leave behind when they
crash or have disk issues is what fsck is for). If the metadata is
corrupted, how would ZFS know what else is correct (computers don't
understand things, they just follow the numbers)? And if the redundant
copies of the metadata are corrupt, what are the odds that the file data is
not also corrupt? In my experience, getting the metadata trashed while none
of the file data is trashed is a rare event on a system with multi-drive
redundancy.

I have a friend/business partner that doesn't want to move to ZFS because
his recovery method is wait for a single drive (no-redundancy, sometimes no
backup) to fail and then use ddrescue to image the broken drive to a new
drive (ignoring any file corruption because you can't really tell without
ZFS). He's been using disk rescue programs for so long that he will not
move to ZFS, because it doesn't have a disk rescue program. He has systems
on Linux with ext3 and no mirroring or backups. I've asked about moving
them to a mirrored ZFS system and he has told me that the customer doesn't
want to pay for a second drive (but will pay for hours of his time to fix
the problem when it happens). You kind of sound like him: ZFS is risky
because there isn't a good drive rescue program. Sun's design was that the
system should be redundant by default and checksum everything. If the
drives fail, replace them. If they fail too much or too fast, restore from
backup. Once the system has too much corruption, you can't recover or check
for all the damage without a second, off-disk copy, and if you have that
off-disk copy, then you have a backup. They didn't build for the standard
PC use case because disk recovery programs rarely get everything back and
therefore can't be relied on to get your data back when your data is
important. Many PC owners have brought PC-mindset ideas to the "UNIX"
world. Sun's history predates Windows and the Mac and comes from a
mini/mainframe mindset (where people tried not to guess about data
integrity).

Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
of a disk recovery program stop you from using ZFS? No. If you think so, I
suggest that you have your data integrity priorities in the wrong order
(focusing on small, rare events rather than the common base case).


Walter



Re: ZFS...

2019-05-08 Thread Walter Parker
>
>
> ZDB (unless I'm misreading it) is able to find all 34m+ files and
> verifies the checksums.  The problem is in the zfs data structures (one
> definitely, two maybe, metaslabs fail checksums preventing the mounting
> (even read-only) of the volumes.)
>
> >   Especially, how to you know
> > before you recovered the data from the drive.
> See above.
>
> > As ZFS meta data is stored
> > redundantly on the drive and never in an inconsistent form (that is what
> > fsck does, it fixes the inconsistent data that most other filesystems
> store
> > when they crash/have disk issues).
> The problem - unless I'm reading zdb incorrectly - is limited to the
> structure rather than the data.  This fits with the fact the drive was
> isolated from user changes when the drive was being resilvered so the
> data itself was not being altered .. that said, I am no expert so I
> could easily be completely wrong.
>
What it sounds like you need is a metadata fixer, not a file recovery
tool. Assuming the metadata can be fixed, that would be the easy route, and
it should not be hard to write if everything else on the disk has no
issues. Didn't you say in another message that the system is now returning
hundreds of drive errors? How does that square with the statement that
everything on the disk is fine except for a little bit of corruption in the
freespace map?
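For what it's worth, zdb can examine a pool's on-disk metadata without importing it; a hedged sketch (device and pool names are illustrative):

```shell
# Dump the vdev labels (pool GUID, txg, vdev tree) straight from a device.
zdb -l /dev/da0p3

# Walk an exported or unimportable pool's metadata read-only; -e reads the
# devices directly rather than the cache file, and -AAA relaxes assertions
# so zdb keeps going past some corruption.
zdb -e -AAA -d tank
```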


>
> >
> > I have a friend/business partner that doesn't want to move to ZFS because
> > his recovery method is wait for a single drive (no-redundancy, sometimes
> no
> > backup) to fail and then use ddrescue to image the broken drive to a new
> > drive (ignoring any file corruption because you can't really tell without
> > ZFS). He's been using disk rescue programs for so long that he will not
> > move to ZFS, because it doesn't have a disk rescue program.
>
> The first part is rather cavilier .. the second part I kinda
> understand... its why I'm now looking at alternatives ... particularly
> being bitten as badly as I have with an unmountable volume.
>
On the system I managed for him, we had a system with ZFS crap out. I
restored it from a backup. I continue to believe that people running
systems without backups are living on borrowed time. The idea of relying on
a disk recovery tool is too risky for my taste.


> >   He has systems
> > on Linux with ext3 and no mirroring or backups. I've asked about moving
> > them to a mirrored ZFS system and he has told me that the customer
> doesn't
> > want to pay for a second drive (but will pay for hours of his time to fix
> > the problem when it happens). You kind of sound like him.
> Yeah..no!  I'd be having that on a second (mirrored) drive... like most
> of my production servers.
>
> > ZFS is risky
> > because there isn't a good drive rescue program.
> ZFS is good for some applications.  ZFS is good to prevent cosmic ray
> issues.  ZFS is not good when things go wrong.  ZFS doesn't usually go
> wrong.  Think that about sums it up.
>
When it does go wrong, I restore from backups; therefore my systems don't
have problems. I'm sorry you had the perfect trifecta that caused you to
lose multiple drives and all your backups at the same time.


> >   Sun's design was that the
> > system should be redundant by default and checksum everything. If the
> > drives fail, replace them. If they fail too much or too fast, restore
> from
> > backup. Once the system had too much corruption, you can't recover/check
> > for all the damage without a second off disk copy. If you have that off
> > disk, then you have backup. They didn't build for the standard use case
> as
> > found in PCs because the disk recover programs rarely get everything
> back,
> > therefore they can't be relied on to get you data back when your data is
> > important. Many PC owners have brought PC mindset ideas to the "UNIX"
> > world. Sun's history predates Windows and Mac and comes from a
> > Mini/Mainframe mindset (were people tried not to guess about data
> > integrity).
> I came from the days of Sun.
>
Good, then you should understand Sun's point of view.


> >
> > Would a disk rescue program for ZFS be a good idea? Sure. Should the lack
> > of a disk recovery program stop you from using ZFS? No. If you think so,
> I
> > suggest that you have your data integrity priorities in the wrong order
> > (focusing on small, rare events rather than the common base case).
> Common case in your assessment in the email would suggest backups are
> not needed unless you have a rare event of a multi-drive failure.  Which
> I know you're not advocating, but it is this same circular argument...
> ZFS is so good it's never wrong we don't need no stinking recovery
> tools, oh but take backups if it does fail, but it won't because it's so
> good and you have to be running consumer hardware or doing something
> wrong or be very unlucky with failures... etc.. round and round we go,
> where ever she'll stop no-one knows.
>
> I advocate 2-3 backups of any important system (at least one different
th

Re: "dhclient: send_packet: No buffer space available"

2019-12-04 Thread Walter Parker
On Tue, Dec 3, 2019 at 8:39 PM Yoshihiro Ota  wrote:
>
> Hi,
>
> I recently switched internet service provider and got much faster connection.
> However, I started seeing some unstable connections between the WiFi router 
> and FreeBSD.
> I was on FreeBSD 12.0-RELEASE and switched to 12.1-RELEASE.
> Both versions have same symptoms.
>
> I get "dhclient: send_packet: No buffer space available" to console and then 
> I frequently lose connection after seeing it.
> After taking the wlan down and re-connect it with wpa_supplicant, I usually 
> get a connection back.
>
> On the side note, I also see these outputs to console and syslog:
> ath0: bb hang detected (0x4), resetting
> ath0: ath_legacy_rx_tasklet: sc_inreset_cnt > 0; skipping
>
> Does anyone have any advice how to fix/work-around or where to start looking: 
> ath driver, dhclinet, kernel, wlan, and/or wpa_supplicant?
>
> Thanks,
> Hiro
> ___
> freebsd-stable@freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"

IIRC, that means that you are running out of mbufs. There are two ways
to fix it: increase the number of mbufs (kern.ipc.nmbufs or
kern.ipc.nmbclusters), or increase the aggressiveness with which old
network connections are closed (I do this from a menu in pfSense, so I
don't remember the command-line setting name off hand).

Searching for the error online suggests that this error can also be
caused if the layer 2 network connection failed, such as a wlan
association failure. Or that there is a network filter somewhere that
is sinking/blocking packets and that this is causing the mbuf queue to
fill up (every network request requires an mbuf). In the past, people
have also fixed this by swapping the network hardware.
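A hedged sketch of how to inspect and raise the mbuf cluster limit on FreeBSD (the value is illustrative; size it to your RAM and traffic):

```shell
# Show current mbuf usage versus the limits.
netstat -m
sysctl kern.ipc.nmbclusters

# Raise the limit at runtime...
sysctl kern.ipc.nmbclusters=262144

# ...and make it persistent across reboots.
echo 'kern.ipc.nmbclusters=262144' >> /etc/sysctl.conf
```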


Walter



Building Kernel and World with -j

2017-01-22 Thread Walter Parker
Hi,

For decades there has always been a warning not to do parallel builds of
the kernel or the world (Linux kernel builds also suggest not to do this).

Every once in a while, I see people post build times of about 5 minutes.
The only way I can see this happening is by doing a parallel build (-j 16
on a Xeon monster box).

Are parallel builds safe? If not, what are the actual risk factors, and can
they be mitigated?
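For reference, a parallel build is just the stock procedure with -j; a hedged sketch (the -j value is illustrative; a common rule of thumb is one job per CPU):

```shell
cd /usr/src
make -j16 buildworld
make -j16 buildkernel KERNCONF=GENERIC
# The install targets are traditionally run without -j.
make installkernel KERNCONF=GENERIC
make installworld
```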


Walter



GPT partition gets erased on boot

2017-11-12 Thread Walter Parker
Hi,

I'm setting up an old Sunfire X4140 system with 4 SAS drives plugged into
the SUN STK controller. Each drive is configured as a separate device.

I ran the FreeBSD 11.1 installer and picked guided ZFS install where I
selected RAID10 (2 2way mirrors) using aacd0, aacd1, aacd2, and aacd3. It
installed the system and rebooted.

When the new system came up, zpool status complained that aacd0p3 was
unavailable. Looking at /dev, I noticed that the only dev file was aacd0
(aacd0p* were all missing). Using gpart on aacd0, it said there was no
geom. I was able to run gpart create to recreate the drive's geom and then
gpart add to re-add all of the partitions. After recreating aacd0p3, I was
even able to run zpool replace -f zroot /dev/aacd0p3, and the pool
resilvered and stopped complaining.

On the next boot, aacd0 was missing all of its partitions.

What would cause the partitions on a ZFS drive to disappear?
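Whatever the cause, the partition table can at least be snapshotted so it is trivial to restore; a hedged sketch (the device name is from the post, the file path is illustrative):

```shell
# Save the GPT layout to a file...
gpart backup aacd0 > /root/aacd0.gpt

# ...and recreate it after it disappears (-F forces over any existing table).
gpart restore -F aacd0 < /root/aacd0.gpt
```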


Thank you,


Walter
