Re: Using an older port to make another port

2017-03-05 Thread db
On 5 Mar 2017, at 01:58, Michael  wrote:
> Ok, how?
> 
> Is it as simple as "copy these files out of the git tree into the /opt tree"? 
> And if so, will that "clean up" automatically the next time I do a selfupdate?
> 
> Is there an environment variable I can set to say "Find the portfiles here, 
> rather than in the default location"? My concern here is that I can easily 
> think of cases where turning back a library requires turning back the 
> programs that use that library.


· installing an older version of a port in the github era -- an answer
  https://lists.macports.org/pipermail/macports-dev/2016-December/035058.html

· set port not to upgrade
  https://lists.macports.org/pipermail/macports-users/2017-February/thread.html#42751

· Local Portfile Repositories
  https://guide.macports.org/#development.local-repositories


I haven't gotten around to doing it with git, nor have I used a local repository 
yet, but you could try duplicating libarchive and cmake locally and making the 
latter depend on a renamed version of the former, e.g. port:libarchive_local, in 
its Portfile. For the caveats, read the threads above.
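
For what it's worth, the mechanics would look roughly like this (paths and
port names are only examples, and I haven't tested this exact sequence
myself):

    # 1. register a local repository: add a line such as
    #      file:///Users/you/ports
    #    to /opt/local/etc/macports/sources.conf, above the default entry
    mkdir -p ~/ports/archivers/libarchive_local

    # 2. start from a copy of the existing libarchive port
    cp -R /opt/local/var/macports/sources/rsync.macports.org/release/tarballs/ports/archivers/libarchive/ \
        ~/ports/archivers/libarchive_local/

    # 3. in ~/ports/archivers/libarchive_local/Portfile, change "name libarchive"
    #    to "name libarchive_local" and pin the version/checksums you want

    # 4. index the local tree so port(1) can see it
    cd ~/ports && portindex

    # 5. in a local copy of the cmake port, change its libarchive dependency
    #    to port:libarchive_local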

A question on dynamic linking / version-changing libraries

2017-03-05 Thread Michael
So here's a real basic question: Why dynamic linking? Why dynamic linking of 
libraries by library name (as opposed to linking by library name + API version)?

I understand that, way back in the dawn of time, computer drives were small, 
computer memory was small, reusing code on disk was critical to making the 
software fit on the machine, and reusing code in memory was important to making 
it fit in RAM.

But now?

And even if you have dynamic linking, why does a change in the API keep the 
same library name and linking?

I mean, it's not like Mac OS permits app-specific versions of frameworks to be 
shipped with the app, so that as the libraries on the system change, the 
known-good frameworks used by the app stay the same. Oh wait, it does exactly 
that.

With frameworks, an app can have specific versions, shipped with it, duplicated 
on disk, and not shared in memory. If those frameworks have bugs and get 
improved, you don't automatically get to use the updated frameworks installed 
in the system.
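
To make that concrete: a bundled framework is referenced relative to the app's
own executable, so the copies installed on the system never even come into
play. A hypothetical example (made-up app and framework names, output trimmed):

    $ otool -L /Applications/SomeApp.app/Contents/MacOS/SomeApp
        @executable_path/../Frameworks/SomeKit.framework/Versions/A/SomeKit (compatibility version 1.0.0, current version 1.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)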

So why do libraries *still* behave differently? 

Why does Macports generate libraries that follow the 1970-era linking strategy?

Is it a limitation of the underlying dynamic library linking system in the OS?
Is it a case of "Apple never updated how their system works, so we just 
duplicate the same design flaw that Apple uses"?
Is it a case of "Fixing this behavior in Darwin would break all Linux 
compatibility"? If so, why not send those fixes back upstream and fix Linux at 
the same time?

This issue was discussed last month, with the key example being webp and 
ImageMagick. This month, it's libarchive and cmake. Next month it will probably 
be something else -- someone mentioned that a relatively simple fix to 
something else (icu, I think) could not be pushed until everything that used it 
was updated as well. Heck, Google points to this very issue as the reason they 
use a single monolithic source tree rather than separate, isolated libraries -- 
and in reading their paper, I realized that the whole argument against single 
monolithic systems is fundamentally, "Right now, in 1980, we don't have the 
tools or ability to maintain such a system," and Google basically had to build 
such tools. (Heck, even modern desktop IDEs like Eclipse do a really good job 
for the 95% case.)

Why should libraries for webp version 5.2 and webp version 6 occupy the same 
filename/location on the disk?
Why should programs that want different versions of webp be unable to be 
installed on the same system?
Why should a program not be built, by default, with the libraries that it needs 
(and shared libraries only when requested)?

Why does it seem like we are 30+ years out of date on linking 
technology/behavior/systems?

---
Entertaining minecraft videos
http://YouTube.com/keybounce



Re: libarchive @3.3.1 fails to build on Snow Leopard (10.6.8)

2017-03-05 Thread Ryan Schmidt

> On Mar 4, 2017, at 10:39, Richard L. Hamilton  wrote:
> 
> libarchive/archive_read_disk_entry_from_file.c:677: error: ‘ACL_SYNCHRONIZE’ 
> undeclared here (not in a function)
> 
> What it boils down to is that the source is now using a symbol that was not 
> defined in Snow Leopard (without checking for its availability).
> 
> Since it looks like it's making no decisions with that, but merely using it 
> as part of a group of attributes to use when constructing a "trivial NFSv4 
> ACL from mode", some #ifdefs could probably work around it being present or 
> absent.
> 
> 

See https://trac.macports.org/ticket/53712




Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Brandon Allbery
On Sun, Mar 5, 2017 at 11:43 AM, Michael  wrote:

> Why does Macports generate libraries that follow the 1970-era linking
> strategy?


Because MacPorts packages programs from other platforms which don't have
frameworks... and do have politics (for example, Debian's strict adherence
to its package guidelines amounts to political interference that has caused
problems for users over the past several years, notably with regard to the
availability of mate-desktop, but there are other examples... like the
monolithic Google software you mentioned, where Google packages its own
software for Debian so it doesn't have to deal with the distribution's
package politics). And we are not in a position to rewrite/reimplement
things on a frameworks-based model if upstream hasn't already done it.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Michael

On 2017-03-05, at 9:27 AM, Brandon Allbery  wrote:

> 
> On Sun, Mar 5, 2017 at 11:43 AM, Michael  wrote:
> Why does Macports generate libraries that follow the 1970-era linking 
> strategy?
> 
> ... And we are not in a position to rewrite/reimplement stuff into a 
> frameworks-based model, if upstream hasn't already done it.

I wouldn't expect re-writing things.

I'm curious more as to: Why do we still generate code that links against a 
fixed-name library? Why does that name not include a version/API reference? 
Why not build statically linked stuff, so that changes in the libraries don't 
break things?

If dynamic linking is good/desired even now, why do we link to "libpng.dylib" 
or "libicu.dylib" or "libarchive.dylib", and not "libpng-0.98.dylib" vs 
"libpng-1.0.dylib", etc.?

If dynamic linking has the inherent, unavoidable problem of "Can't mix two 
different programs that want different versions of a system library on the same 
installation", why not use static linking?

The question of "Why do frameworks behave differently and link so differently?" 
is similar, but not identical. (And I don't understand the whole idea of "ship 
separate, unlinked copies of the frameworks that will not be shared with anyone 
else, but still have all the startup overhead of runtime linking".)


Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Brandon Allbery
On Sun, Mar 5, 2017 at 12:33 PM, Michael  wrote:

> I'm curious more as to: Why do we still generate code that links against a
> fixed-name library? Why does that name not include a version/API reference?
> Why not make static linked stuff, so that changes in the libraries don't
> break things?


Mostly, because of Apple's ecosystem, which is actively hostile to static
linking (a leftover from the PPC days, when the ABI essentially forbade it;
to understand why, you'd want to study the PPC CPU family closely), and
because of Apple-provided toolchain limitations, mainly ld. While we do
replace ld sometimes for bug fixes, we are not in a position to alter its
basic behavior: version information is present in a dylib, but it is only
used for validation, and that has interactions with things like compatibility
of software across OS X versions. Or, more concretely: we already get
complaints when MP-built stuff doesn't play along with Matlab, and it would
get far worse if Matlab's (ab)use of DYLD_LIBRARY_PATH ran headlong into
treating version information as part of a dylib's name.
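
Concretely, this is all the version information a client records about each
dylib it links against (hypothetical binary, version numbers illustrative):

    $ otool -L ./some_macports_binary
    ./some_macports_binary:
        /opt/local/lib/libpng16.16.dylib (compatibility version 54.0.0, current version 54.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1226.10.1)

dyld uses those two numbers only to refuse a library that reports itself as
too old (the familiar "incompatible library version" load error); it never
goes hunting for a differently-named dylib that would match better. The path
does carry libpng's major version (the "16"), but only because upstream bumps
its install name when it breaks the ABI; to ld and dyld the whole path is an
opaque fixed string.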

Also, fixed-*path* library references are part of the Mach-O format, and the
tooling to override this well does not exist... and as of Sierra there are
Mach-O limitations coded into the kernel (the link command table size limit)
that restrict your ability to override it (upstream ghc is already fighting
with this due to the way its dependencies work).
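
The override tooling that does exist is install_name_tool, which is exactly
the kind of blunt, per-binary surgery I mean (made-up paths, just a sketch):

    # see which fixed paths a binary has baked in
    otool -L ./myprog

    # rewrite one dependency path in place; this can fail outright if the new
    # path is longer than the space reserved in the Mach-O load command
    install_name_tool -change /opt/local/lib/libfoo.1.dylib \
        @executable_path/../Frameworks/libfoo.1.dylib ./myprog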

In short: most of this is not our call, and we are not in a position to
push on the people who could do it. MacPorts has to live with what *is*.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


gcc 4.8 problem in update

2017-03-05 Thread Comer Duncan
I have had a failure to complete an update using a script that I have used
for quite a while. I attach the script and the end part of the main.log.
Can someone please take a look and offer some likely cause(s)?

Thanks very much.

Comer
`/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_gcc48/gcc48/work/build'
:info:build Command failed:  cd 
"/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_gcc48/gcc48/work/build"
 && /usr/bin/make -j8 -w bootstrap-lean 
:info:build Exit code: 2
:error:build Failed to build gcc48: command execution failed
:debug:build Error code: CHILDSTATUS 42516 2
:debug:build Backtrace: command execution failed
:debug:build while executing
:debug:build "system {*}$notty {*}$nice $fullcmdstring"
:debug:build invoked from within
:debug:build "command_exec build"
:debug:build (procedure "portbuild::build_main" line 8)
:debug:build invoked from within
:debug:build "$procedure $targetname"
:error:build See 
/opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_gcc48/gcc48/main.log
 for details.


portupdateupgrade
Description: Binary data


Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Michael

On 2017-03-05, at 9:49 AM, Brandon Allbery  wrote:
> Also fixed-*path* libraries are part of the Mach-O format and the tooling 
> does not exist to override this well...

Is this why a program compiled against a brew installation of qt5 in /usr/local 
won't work with a ports installation of qt5 in /opt/local, and vice versa?

Is there really no way around this -- no way to say "This program wants qt5 in 
whatever this system says is the local path of libraries"? No equivalent of 
$PATH for libraries?

... OK, is there any way - at all - to have a program compiled with either brew 
or ports that will run on an arbitrary OSX that might not have either? (i.e. -- 
fully built and contained)?



Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Dominik Reichardt
Oh, you can build stuff statically, but that is the kind of manual work that's 
not for MP to do.

> On 5. Mar 2017, at 19:47, Michael  wrote:
> 
> 
>> On 2017-03-05, at 9:49 AM, Brandon Allbery  wrote:
>> Also fixed-*path* libraries are part of the Mach-O format and the tooling 
>> does not exist to override this well...
> 
> Is this why a program compiled against a brew installation of qt5 in /usr/local 
> won't work with a ports installation of qt5 in /opt/local, and vice versa?
> 
> Is there really no way around this -- no way to say "This program wants qt5 
> in whatever this system says is the local path of libraries"? No equivalent 
> of $PATH for libraries?
> 
> ... OK, is there any way - at all - to have a program compiled with either 
> brew or ports that will run on an arbitrary OSX that might not have either? 
> (i.e. -- fully built and contained)?
> 


Re: A question on dynamic linking / version-changing libraries

2017-03-05 Thread Brandon Allbery
On Sun, Mar 5, 2017 at 1:47 PM, Michael  wrote:
>
> On 2017-03-05, at 9:49 AM, Brandon Allbery  wrote:
> > Also fixed-*path* libraries are part of the Mach-O format and the
> tooling does not exist to override this well...
>
> Is this why a program compiled against a brew installation of qt5 in
> /usr/local won't work with a ports installation of qt5 in /opt/local, and
> vice versa?
>

Partly. Also, different build options can make libraries not binary compatible
even when they have the same version -- which would be an argument for static
linking, if not for Apple deciding to pretend x86 is PPC.


> Is there really no way around this -- no way to say "This program wants
> qt5 in whatever this system says is the local path of libraries"? No
> equivalent of $PATH for libraries?
>

DYLD_LIBRARY_PATH... but this exact situation is also why DYLD_LIBRARY_PATH
can break your whole system and should not be used except as a last resort.
(Yes, I do mean "whole system" --- you can search for DYLD_LIBRARY_PATH in
MacPorts Trac and find instances of people breaking Apple-provided system
binaries... which is part of why Sierra prevents Apple binaries from seeing
that and other dyld envars via SIP, thereby causing other kinds of breakage
for various people. dyld envars are *extremely* blunt tools, and so is SIP
in many ways.)
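
If you ever do resort to it, at least scope it to a single invocation rather
than exporting it from your shell profile, e.g.:

    # affects only this one process tree, not your login shell or GUI apps
    DYLD_LIBRARY_PATH=/opt/local/lib ./some_program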


> ... OK, is there any way - at all - to have a program compiled with either
> brew or ports that will run on an arbitrary OSX that might not have either?
> (i.e. -- fully built and contained)?
>

Use something like Platypus to create an app bundle, and dylibbundler to
stow the needed dylibs in its Resources. Everything has to cart around its
own set of dylibs. And even then you have no guarantees, because something
like KDE uses tightly integrated IPC, which relies on all those dylibs
agreeing on where things live and what format they have, so that a service
daemon autostarted by one of them will play nicely with the others regardless
of what environment they were built in. (Good luck with that! --- a KDE app
built by MacPorts will likely get indigestion from a Homebrew ksycoca or
kdeinit, and vice versa, and *you cannot fix this with envars*; you have to
hack KDE source, or at least synchronize build options.)
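
Roughly, and from memory -- check dylibbundler's own help for the exact flags:

    # copy the non-system dylibs into the bundle and rewrite the binary's
    # load commands to point at them
    dylibbundler -od -b \
        -x MyApp.app/Contents/MacOS/MyApp \
        -d MyApp.app/Contents/libs/ \
        -p @executable_path/../libs/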

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: libarchive @3.3.1 fails to build on Snow Leopard (10.6.8)

2017-03-05 Thread Richard L. Hamilton

> On Mar 5, 2017, at 11:58, Ryan Schmidt  wrote:
> 
> 
>> On Mar 4, 2017, at 10:39, Richard L. Hamilton  wrote:
>> 
>> libarchive/archive_read_disk_entry_from_file.c:677: error: ‘ACL_SYNCHRONIZE’ 
>> undeclared here (not in a function)
>> 
>> What it boils down to is that the source is now using a symbol that was not 
>> defined in Snow Leopard (without checking for its availability).
>> 
>> Since it looks like it's making no decisions with that, but merely using it 
>> as part of a group of attributes to use when constructing a "trivial NFSv4 
>> ACL from mode", some #ifdefs could probably work around it being present or 
>> absent.
>> 
>> 
> 
> See https://trac.macports.org/ticket/53712

Thanks, I see mention of a fix that just hasn't made it into a release yet, so 
I'm hopeful that with the next version bump or two, it will build again.