Report: Authors not in Authorities
SELECT DISTINCT(author) AS heading
FROM biblio
WHERE author NOT IN (
    SELECT ExtractValue(marcxml, '//datafield[@tag="100"]/subfield[@code="a"]') AS heading
    FROM auth_header
    WHERE authtypecode = 'PERSO_NAME'
)
ORDER BY heading
I am not a SQL expert, but I believe this
I believe that the Horowhenua Library Trust, as the non-profit behind
koha-community.org and trustee of the Koha codebase, would qualify to
enroll as an open-source organization.
-Doug-
On Wed, May 25, 2022 at 5:06 PM Tomas Cohen Arazi
wrote:
> Most limitations are about workers quotas. We can s
Good thought, but it's already installed. Any other ideas?
ls -al /etc/apache2/*/*env*
-rw-r--r-- 1 root root 58 Aug 1 2015
/etc/apache2/mods-available/env.load
-rw-r--r-- 1 root root 1280 Aug 7 2015
/etc/apache2/mods-available/setenvif.conf
-rw-r--r-- 1 root root 68 Aug 1 2015
/etc/apac
+1. That is definitely an upgrade I would buy.
On Sun, Jul 3, 2016 at 7:22 PM, Dan Baker wrote:
> I would like to be able to get rid of the AA cell holders inside the KX3,
> and go with a battery pack similar to the KX2. LiFePO4 would be a nice
> choice in a battery. For instance, I am very happ
What is the recommended way to hook 2 headsets to a K3S? Is a stereo "Y"
cable sufficient?
-Doug-
KD7DK
Add me to the list too. They should consider open-sourcing the software
and they would probably get help with porting, features and bug fixes.
-Doug-
On Thu, Apr 7, 2016 at 2:20 PM, John K7JLT wrote:
> Add me to this lengthening list of RPi
> users that would like a download of the utilities f
Package: ddclient
Version: 3.8.2-2
Severity: critical
Tags: patch
Justification: breaks the whole system
Dear Maintainer,
As far as I can tell, this also impacts the unstable version (3.8.2-3)
What led up to the situation?
After installing ddclient on my Google Compute Engine hosted Debian/Jess
n/installer/data/mysql/atomicupdate/oai_sets.sql:
- [Sat Oct 24 23:52:23 2015] updatedatabase.pl: ERROR 1062 (23000) at
line 35: Duplicate entry 'OAI-PMH:AutoUpdateSets' for key 'PRIMARY'
On Sun, Oct 25, 2015 at 11:05 AM, Doug Kingston wrote:
> While doing a upgrade on o
While doing an upgrade on our test Koha instance (based on a copy of
production), I received the following errors:
- [Sat Oct 24 23:52:23 2015] updatedatabase.pl: C4::Installer::load_sql
returned the following errors while attempting to load
/home/koha/koha-test/intranet/cgi-bin/installe
I have access to a signal generator similar to this (
http://www.circuitspecialists.com/sdg830.html), but its minimum output is
4 mV. The calibration calls for 50 µV of signal.
It looks like I could set the output to 40 mV (I don't like working at the
bottom of the range), and then attenuate the output
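For scale (a back-of-the-envelope check added here, not from the original
post): dropping from the 4 mV minimum to 50 µV is a factor of 80, i.e. about
20*log10(80) ≈ 38 dB of attenuation, and starting from 40 mV it is a factor
of 800, roughly 58 dB.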
This patch adds locking to rebuild_zebra.pl to ensure that simultaneous
changes are prevented (as one is likely to overwrite the other).
Incremental updates in daemon mode will be skipped if the lock is busy
and they will be picked up on the next pass. Non-daemon mode
invocations will also exit immed
1. Add code to check if flock is available and ignore locking if
it's missing (from M. de Rooy)
2. Change the default for ad hoc invocations to abort if they cannot
obtain the lock. Added option -wait-for-lock if the user prefers
to wait until the lock is free, and then continue processing
(see the sketch after this list).
3. added m
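A minimal sketch of the non-blocking/blocking flock behaviour described in
items 1 and 2 (not the actual rebuild_zebra.pl code; $lockfile and
$wait_for_lock are illustrative names):

use Fcntl qw(:flock);

my $lockfile      = '/var/lock/koha/instance/rebuild_zebra.LCK';   # illustrative path
my $wait_for_lock = 0;   # set by the -wait-for-lock option

open my $fh, '>', $lockfile or die "Cannot open $lockfile: $!\n";
if ($wait_for_lock) {
    # Block until the other invocation releases the lock, then continue.
    flock( $fh, LOCK_EX ) or die "Cannot lock $lockfile: $!\n";
}
else {
    # Default for ad hoc runs: abort immediately if the lock is busy.
    flock( $fh, LOCK_EX | LOCK_NB )
        or die "Another rebuild_zebra.pl already holds the lock, exiting\n";
}
# ... do the export/indexing work while holding the lock ...
close $fh;   # closing the handle releases the lock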
Mystery solved.
I had the OPAC site in my LastPass password manager with the "auto-login"
option set.
Every time I visited the OPAC site, LastPass would provide the login
credentials in the POST. Koha acted on those credentials even though we
had marked user logins disabled. This is probably a b
Messages like this appear nearly constantly from both the 3.12 and 3.14
versions of Koha. I checked CPAN and it claims I have the latest version of
Memoize::Memcached. Anyone else seeing this?
[Mon Jan 20 17:42:32 2014] [error] [client 127.0.0.1] [Mon Jan 20 17:42:32
2014] Memcached.pm: UNIVERSA
Add missing reference to subdirectory for rebuild_zebra locks
under the zebra lock directory. This is already handled correctly
in the docs, template and makefiles. The code in fact works
either way, but docs and reality should match.
---
misc/migration_tools/rebuild_zebra.pl |1 +
1 file changed
This patch adds locking to rebuild_zebra.pl to ensure that simultaneous
changes are prevented (as one is likely to overwrite the other).
Incremental updates in daemon mode will be skipped if the lock is busy
and they will be picked up on the next pass. Non-daemon mode
invocations will wait for the lo
+1 Mark. His help was invaluable - and it worked! It's great to see this
wiki page for others, much as I enjoyed working with him on this.
Thanks!
-Doug-
On Sun, Nov 17, 2013 at 9:50 PM, Mark Tompsett wrote:
> Greetings,
>
> The QA test tool is handy to have set up. You can check your ow
Address concern on the initialization of $lockdir. If lockdir is
undef, then it will default to /var/lock. If lockdir is specified
in the config, it is expected to be a valid directory. mkpath()
will fail if it's not a valid path.
---
misc/migration_tools/rebuild_zebra.pl | 11 +--
1 f
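For illustration, the defaulting logic described above might look roughly like
this in Perl (the config key name and error handling are assumptions, not the
actual rebuild_zebra.pl code):

use C4::Context;
use File::Path qw(mkpath);

# Fall back to /var/lock when the config does not define a lock directory.
my $lockdir = C4::Context->config('lockdir') || '/var/lock';

# mkpath() croaks if the path cannot be created, so an invalid setting
# surfaces immediately instead of failing later when the lock is taken.
mkpath($lockdir) unless -d $lockdir;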
Based on feedback, make daemon mode imply -z -a -b and abort
on startup if flags incompatible with an incremental update daemon
are used. Update documentation to match.
---
misc/migration_tools/rebuild_zebra.pl | 20
1 file changed, 20 insertions(+)
diff --git a/misc/migra
Based on feedback, make daemon mode imply -z, -a, -b and abort
on startup if flags incompatible with an incremental update daemon
are used. Update documentation to match.
---
misc/migration_tools/rebuild_zebra.pl | 20
1 file changed, 20 insertions(+)
diff --git a/misc/mig
Address coding style issues from koha-qa.pl
---
misc/migration_tools/rebuild_zebra.pl | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/misc/migration_tools/rebuild_zebra.pl
b/misc/migration_tools/rebuild_zebra.pl
index 8b77686..74976c1 100755
--- a/misc/migration
On Wed, Oct 30, 2013 at 5:05 PM, Galen Charlton wrote:
> Hi Elaine,
>
> On Wed, Oct 30, 2013 at 4:31 PM, Elaine Bradtke wrote:
>
>> Here's an example of the imported 008:
>> 130515s2006stka###gr#000#0#eng#d
>>
>
> Just to confirm, for the migrated records, "#" in the 008 represents a
> l
A final tweak to the debian packages template to ensure the lock
file is under /var/lock/koha/INSTANCENAME. It's not ideal, but
this works for all legacy and new installations.
---
debian/templates/koha-conf-site.xml.in |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/tem
In the event that lockdir is not specified in the koha-conf.xml file,
which will occur for old installations, fall back to a sensible
default (/var/lock).
---
misc/migration_tools/rebuild_zebra.pl |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/misc/migration_tools/re
Add lockdir to the debian config template.
---
debian/templates/koha-conf-site.xml.in |1 +
1 file changed, 1 insertion(+)
diff --git a/debian/templates/koha-conf-site.xml.in
b/debian/templates/koha-conf-site.xml.in
index 71de9fb..e15e249 100644
--- a/debian/templates/koha-conf-site.xml.in
+
+1 on the strategy.
On Mon, Oct 28, 2013 at 11:34 AM, Owen Leonard wrote:
> +1 from me on the timeline.
>
> > [4] The RM will assist in getting OPAC template patches in the pipeline
> that
> > were written for prog updated to support Bootstrap as well.
>
> I'm happy to help too.
>
> -- Owen
>
The race condition exists whether you are doing incremental updates with
a periodic cronjob or with the new daemon mode. Suppose you start a full
rebuild at time T0 which will take until T20 to extract the records.
Suppose also at T10, a biblio or auth is updated and processed through the zebra
This change adds code to check the zebraqueue table with a cheap SQL query
and a daemon loop that checks for new entries and processes them incrementally
before sleeping for a controllable number of seconds. The default is 5 seconds,
which provides a near-realtime search index update. This is desi
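A rough sketch of the loop being described (the zebraqueue table and its done
column are from the standard Koha schema; process_zebraqueue() is a
hypothetical helper standing in for the real indexing code):

use C4::Context;

my $sleep_seconds = 5;   # default: near-realtime index updates
my $dbh = C4::Context->dbh;
while (1) {
    # Cheap check: is anything waiting to be indexed?
    my ($pending) = $dbh->selectrow_array(
        'SELECT COUNT(*) FROM zebraqueue WHERE done = 0');
    process_zebraqueue($dbh) if $pending;   # hypothetical helper
    sleep $sleep_seconds;
}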
Based on feedback from KohaCon13, the default of 25 biblios per page
is woefully small. Make it 500. 500 lines of text is very
reasonable for modern browsers.
---
tools/manage-marc-import.pl |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/manage-marc-import.pl b/
My last install was on 3.10.02 and all was well with the set of Perl
modules I had then, which included Text::CSV_XS-0.85. As I prepared to build
3.10.06, I upgraded my Perl modules as some additions were required in any
case. After the cpan upgrades, 'make test' was failing for both my 3.10.02
and
I have a suspicion that the problem may be related to this section of
lib/C4/Koha.pm:
sub getFacets {
    my $facets;
    if ( C4::Context->preference("marcflavour") eq "UNIMARC" ) {
        $facets = [
        ...
    }
    else {
        $facets = [
            {
                idx => 'su-to'
While updating the database on our test system, we received the following
errors. It looks like these values are already in the 3.8.7 version of the
database.
-Doug-
Update errors :
- [Sun Dec 2 23:08:15 2012] updatedatabase.pl: DBD::mysql::db do failed:
Duplicate entry 'SuspendHoldsIntran
Bug 9149 <http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=9149> filed
for tracking this.
On Mon, Nov 26, 2012 at 2:46 PM, Doug Kingston wrote:
> This patch appears to break koha 3.8.7. The routine GetAuthorizedHeading
> does not exist anywhere in the source.
>
This patch appears to break koha 3.8.7. The routine GetAuthorizedHeading
does not exist anywhere in the source.
This breaks link_bibs_to_authorities.pl and possibly other things.
-Doug-
On Wed, Sep 26, 2012 at 2:39 PM, Jared Camins-Esakov <
jcam...@cpbibliography.com> wrote:
> On 3.8.x, it was
Patch 8823 has introduced a bug (tracked in
http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=9149) where
C4/Biblio.pm
calls a non-existent subroutine: GetAuthorizedHeading. This breaks bin/
link_bibs_to_authorities.pl. I have informed the author of the patch.
Biblio number is only relevant for bibs. What should be used for
authorities? That is what I was referring to below but the problem may be
common to both objects.
-Doug-
On Sep 3, 2012 9:49 AM, "Jared Camins-Esakov"
wrote:
> Doug,
>
> I would concur that 001 and 003 need to be taken together
I would concur that 001 and 003 need to be taken together to have any
chance of a unique identifier. Our library has our own unique 003
(UkLoVW). For indexing, I suspect a better key to hand to zebra would be
the system control number (035 $a). Are these kept unique by koha?
While I do believe
bib1.att for reference attached.
-Doug-
On 2 September 2012 19:55, Doug Kingston wrote:
> Could something be amiss with /etc/koha/zebradb/authorities/etc/bib1.att?
> Especially given the error:
> [Sun Sep 02 23:38:10 2012] [error] [client 75.149.175.233] ZOOM error 25
> "Spec
On 2 September 2012 19:37, Doug Kingston wrote:
> We still get the same errors after running
> link_bibs_to_authorities.pl followed by
> rebuild_zebra.pl -b -a -r
> The authorities not showing up was caused by duplicate control numbers.
> We have cleared up that problem.
>
We still get the same errors after running
link_bibs_to_authorities.pl followed by
rebuild_zebra.pl -b -a -r
The authorities not showing up was caused by duplicate control numbers. We
have cleared up that problem.
-Doug-
On 2 September 2012 18:57, Jared Camins-Esakov
wrote:
> Doug,
>
> More rele
Server information
Koha version: 3.08.04.000
OS version ('uname -a'): Linux library.efdss.org 2.6.32.33-kvm-i386-2028-dirty #5 SMP Mon Nov 28 20:23:39 GMT 2011 i686 GNU/Linux
Perl interpreter: /usr/bin/perl
Perl version: 5.010001
Perl @INC: /usr/share/koha/lib
/etc/perl
/usr/local/lib/perl/5.10
relink headings that have previously been linked to
authority records.
On 2 September 2012 17:26, Doug Kingston wrote:
> I reproduced your 500 error and found these in the logfile. ZOOM is a
> zebra lookup error.
> [Sun Sep 02 23:38:10 2012] [error] [client 75.149.175.233] ZOO
I reproduced your 500 error and found these in the logfile. ZOOM is a
zebra lookup error.
[Sun Sep 02 23:38:10 2012] [error] [client 75.149.175.233] ZOOM error 25
"Specified element set name not valid for specified database" (addinfo:
"F") from diag-set 'Bib-1', referer:
http://catalogue-admin.efd
Better yet, we could use 'uniq -d'. It only prints out the duplicate
lines, so the following should suffice:
marcprint | grep "=001" | sort | uniq -d
I agree we should also be able to do something with SQL or SQL/Perl pretty
easily, but this was quick.
-Doug
On 1 September 2012 17:26, Mark Tomp
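For the SQL/Perl route mentioned above, something along these lines should
list duplicate control numbers straight from the database (a sketch only; it
assumes marcxml lives in the biblioitems table, as in Koha 3.x schemas):

use C4::Context;

my $dbh = C4::Context->dbh;
my $sth = $dbh->prepare(q{
    SELECT ExtractValue(marcxml, '//controlfield[@tag="001"]') AS controlnumber,
           COUNT(*) AS copies
    FROM biblioitems
    GROUP BY controlnumber
    HAVING copies > 1
});
$sth->execute;
while ( my ($controlnumber, $copies) = $sth->fetchrow_array ) {
    print "$controlnumber appears $copies times\n";
}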
September 2012 16:11, Doug Kingston wrote:
> So doing some further research, it definitely looks like we have duplicate
> control numbers (001). This is a data entry mistake and it looks like the
> cataloger copied the biblios for similar entries. I have gone back and
> altered the c
nique then it
> will index OK.
> The better solution is to fix the xsl to probably not use the z:id for
> biblios or maybe get it to use the 999$c, but the zebra config scares me.
> It took ages to find the cause so I hope this helps someone.
> Ian
>
> On 01/09/2012 18:11, Doug Kin
On 1 September 2012 09:46, Jared Camins-Esakov
wrote:
> Doug,
>
> So environment variables are not the issue. We are carefully managing
>> those.
>>
>
> Make sure when you are using cron jobs that you set the environment
> variables IN YOUR CRONTAB. Setting environment variables elsewhere is a
>
So environment variables are not the issue. We are carefully managing
those.
I have tried using the new tool checkNonIndexedBiblios.pl (from patch 6566)
and it indeed finds a few recent biblios that are not indexed. Using the
-z option to mark them for indexing followed by a manual run of
rebuil
I can confirm that applying this patch to my 3.8.4 system and rebuilding
the indexes with rebuild_zebra.pl -r -b, and rebuild_zebra.pl -r -a has
resolved our problem.
-Doug-
On 28 August 2012 01:36, Chris Cormack wrote:
> On 28 August 2012 19:49, Ian Bays wrote:
> > We also found that on a rec
We are trying to fix up some duplicate and wrong authority records (note
that Mr Calvertt below has an extra comma after his middle name in the MAIN
ENTRY--PERSONAL NAME, but not in SUBJECT ADDED ENTRY--PERSONAL NAME, which is
correct). We want to dump auth 2705 and just keep auth 2706. The
merge_au
We start the zebraqueue daemon from the /etc/init.d/koha-zebraqueue-daemon
script at startup. This is a link to
/usr/share/koha/bin/koha-zebraqueue-ctl.sh.
My hack to keep things up to date is to run
'/usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b' every 2 minutes
from cron.
/etc/cron.d/
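An illustrative /etc/cron.d entry for that kind of hack might look like this
(the user field and output redirection are assumptions, not taken from the
original setup):

# Run the incremental biblio reindex every 2 minutes
*/2 * * * * koha /usr/share/koha/bin/migration_tools/rebuild_zebra.pl -b >/dev/null 2>&1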
It seems to have a default of 25 entries per page displayed. I'd like to
be able to significantly increase that. How can I change the default
results_per_page for this other than editing tools/manage-marc-import.pl?
-Doug-
Running Koha 3.4.3, it appears that Koha is only using Memcache for
storing session information and certain very static information:
<28 get KOHAgetTranslatedLanguages-list-opac prog
<28 get KOHAgetTranslatedLanguages-scalar-opac prog en
<28 get KOHAgetAllLanguages-scalar-
There would seem to be m
The basis for this enhancement was a bad batch of MARC records that
had some unknown Unicode characters in them which caused the batch
import function from the web interface to abort after importing only
a portion of the records. At this point you are completely stuck using
the web user interface.
OpenSUSE already packages dkim-milter in their contrib section and there
is a README.suse_postfix there. They also have other config changes and
supporting template files to complete the package. I include their README
here.
-Doug-
Scott Kitterman wrote:
On Friday 25 April 2008 13:27:46 Murr
As the one who reported this bug originally, I can confirm that this
fixes the problem I was seeing with Werner here in London. I like the
PATCH contents - it all makes sense to me.
-Doug-
Murray S. Kucherawy wrote:
> A bug has been identified with the "UseASPDiscard" feature. Its use
> thro
I have this combination working partially. Signing is working fine -
passes various test sites without issue.
What is not working is verification. Logging from dkim-milter and the
resulting header lines make it quite clear that it knows that it's
verifying and it makes correct Authentication-
The function dkimf_checkhost tries to check a hostname against a list of
DNS names for either an exact or lefthand (most significant component)
match. It uses the following code:
/* walk the list */
for (node = list; node != NULL; node = node->peer_next)
{
if
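For illustration only, the exact-or-lefthand style of match being described
could be restated like this in Perl (not the dkim-milter C code, just the
matching rule as I understand it):

# True if $host matches an entry exactly, or is a subdomain of it
# (a "lefthand" match, e.g. mail.example.com against example.com).
sub host_matches {
    my ( $host, @list ) = @_;
    for my $entry (@list) {
        return 1 if lc($host) eq lc($entry);        # exact match
        return 1 if $host =~ /\.\Q$entry\E\z/i;     # lefthand (subdomain) match
    }
    return 0;
}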
I can show you my .amanda.exclude for a Cygwin installation (attached).
There is no problem with spaces.
By DLE I assume you mean the disklist file? Not sure how to get spaces
in that. Have you looked at the source?
-Doug-
Richard Morse wrote:
Hi! This is probably a really easily answered
It turns out that the bonding driver does indeed handle interface
redundancy to two separate switches. Martin was right and the kernel
documentation file (networking/bonding.txt) is packed full of useful
information. The specific section that deals with what I need is under
the heading "High
I am interested in setting up a host with dual ethernet connections to the same
IP subnet (but different switches) for redundancy. We need reasonably
transparent failover if an interface fails. In studying the existing HOWTO
documents and other stuff produced by Google, it looks like the confi