https://bugs.kde.org/show_bug.cgi?id=511011

--- Comment #3 from Matt P. <[email protected]> ---
Created attachment 186219
  --> https://bugs.kde.org/attachment.cgi?id=186219&action=edit
Startup_error_debug_log

Hello Gilles, Hello all,

I have now been able to reproduce the error with Debug_Logging enabled.
The file is attached.
Now, judging from what I read in there, the whole thing becomes even more curious:

- You can see two processes (10184 and 25636) starting, "interwoven" in some
way (one very shortly after the other)
- 25636 seems to be slightly ahead of the other process
- Both seem to run some "test schema update" to check their authorization
against the database
- Process 10184 then gets the DB error at line 94 and following (the German
"Die Abfrage konnte nicht ausgeführt werden" translates to "The query could
not be executed"):
-------------------------
"00000093       0.41654301      [25636] digikam.coredb: Core database: running
schema update    
00000094        0.42391151      [10184] digikam.coredb: Core database: running
schema update    
00000095        0.44741601      [10184] digikam.dbengine: Failure executing
query:      
00000096        0.44741601      [10184]  ""     
00000097        0.44741601      [10184] Error messages: "QMYSQL: Die Abfrage
konnte nicht ausgeführt werden" "Can't DROP COLUMN `name`; check that it
exists" "1091" 2  
00000098        0.44741601      [10184] Bound values:  QList()  
00000099        0.44749770      [10184] digikam.dbengine: Error while executing
DBAction [ "CheckPriv_ALTER_TABLE" ] Statement [ "\n                    ALTER
TABLE PrivCheck DROP COLUMN name;\n                " ]    
00000100        0.44752491      [10184] digikam.coredb: Core database: error
while creating a trigger. Details: QSqlError("1091", "QMYSQL: Die Abfrage
konnte nicht ausgeführt werden", "Can't DROP COLUMN `name`; check that it
exists")       
00000101        0.44767579      [10184] digikam.coredb: Core database:
insufficient rights on database. 
00000102        0.44780830      [10184] digikam.coredb: Core database: cannot
process schema initialization     
------------------
The rest of the log is of little relevance any more; it seems to be from
process 25636 starting up normally. Process 10184 then dies, or rather shows
the splash screen with the error; this does not seem to be reflected in the
log.

I have checked my MariaDB settings regarding transaction timeout etc., and in
fact they are all at their defaults. Contrary to your original comment, I
would have expected the two processes to wait for one another for some time;
that would be the normal behaviour of any RDBMS (I have been doing support for
database systems for over 40 years). But here the situation is a bit
different:

I assume that this "authorisation check" runs (as per MariaDB's default) in
autocommit mode, so every single statement (assuming again: a series of
CREATE TABLE / ALTER TABLE / DROP TABLE against a table "PrivCheck") is
executed and committed immediately, and the locks (against MariaDB's system
catalogs) are freed right away. One of the processes then stumbles over the
"unfinished results" of the other process's series of DDL statements, leading
to the error.
(The "PrivCheck" table seems to be dropped completely at the end of this
check; I could not find it in the database.)
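Under that assumption, the interleaving could look like this. Note that only
the ALTER TABLE statement appears verbatim in the log; the surrounding
statements and column definitions are my guess at what the CheckPriv actions
do:

```sql
-- Hypothetical privilege-check sequence, run by BOTH processes.
-- In autocommit mode every statement commits on its own, so the two
-- sequences can interleave at any point.
CREATE TABLE IF NOT EXISTS PrivCheck (id INT, name VARCHAR(35));
ALTER TABLE PrivCheck DROP COLUMN name;
    -- process 10184 fails here with ERROR 1091
    -- ("Can't DROP COLUMN `name`; check that it exists"),
    -- presumably because 25636 had already dropped the column
DROP TABLE IF EXISTS PrivCheck;
```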

The typical response to this would be to wrap these statements in a START
TRANSACTION / COMMIT block.
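Something along these lines (a sketch only; the statement list is my
assumption, and note that MariaDB performs an implicit commit around most DDL
statements, so the block would not be fully atomic):

```sql
START TRANSACTION;
CREATE TABLE IF NOT EXISTS PrivCheck (id INT, name VARCHAR(35));
ALTER TABLE PrivCheck DROP COLUMN name;
DROP TABLE IF EXISTS PrivCheck;
COMMIT;
```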
Now, would that be 100% waterproof / rock-solid? Presumably not. There would
still be a chance that two of these "authority checks" running in parallel
could suffer some error, but it might be reduced a bit. I think it could be
worth a shot.
Willing to beta-test :-)

Best, 
Matt.
