Gidsik opened a new issue, #5909:
URL: https://github.com/apache/couchdb/issues/5909

   ### Version
   
   3.5.1
   
   ### Describe the problem you're encountering
   
   I'm running a small single-node CouchDB for personal use in Docker Desktop, as the backend for Obsidian Self-hosted LiveSync.
   While initializing an Obsidian vault with ~2k .md files (~2.2 MB vault size) and uploading it to the database (~6.6 MB database size in the Fauxton UI), I hit an HTTP 500 error that prevented syncing.
   The error goes away after restarting the CouchDB container.
   I noticed that `GET /db` requests fail with error 500, while `GET`/`PUT` `/db/_local/<somestr>` requests finish with 200 and 201 and the database size keeps growing, so uploading files works, but reading them back does not.
   I also noticed that these errors start to appear after CouchDB decides to run auto-compaction (presumably triggered by the database growing?).
   
   To confirm, I manually ran compaction on a fresh database, and indeed: after compaction, the database became unavailable.
   
   ### Expected Behaviour
   
   The database keeps working as intended after compaction, without errors.
   
   ### Steps to Reproduce
   
   1. Run a `couchdb:latest` Docker container with a mostly default configuration.
   2. Create a new database `<db-name>` (leave it empty or add some data; it doesn't matter).
   3. Run `curl -H "Content-Type: application/json" -X POST http://<name>:<pass>@localhost:5984/<db-name>/_compact`
   4. Run `curl http://<name>:<pass>@localhost:5984/<db-name>`
   
   It returns `{"error":"error","reason":"{badmatch,{error,enoent}}"}`.
   > If compaction is still running, it returns the normal JSON as intended, with `"compact_running":true`.
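   The steps above can be scripted for repeated reproduction. The sketch below is an illustration, not part of the original report: it assumes hypothetical `admin:password` credentials and the default port 5984, triggers `_compact`, and classifies the response body as either the normal database-info JSON (possibly with `"compact_running":true`) or the post-compaction `enoent` error.

```python
import base64
import json
import urllib.error
import urllib.request

# Assumptions for illustration only; substitute your own host and credentials.
BASE = "http://localhost:5984"
AUTH = base64.b64encode(b"admin:password").decode()  # hypothetical credentials
DB = "test"


def is_enoent_error(body: str) -> bool:
    """True if the body matches the post-compaction error from the report."""
    try:
        doc = json.loads(body)
    except ValueError:
        return False
    return doc.get("error") == "error" and "enoent" in doc.get("reason", "")


def trigger_compaction(db: str = DB) -> None:
    """POST /<db>/_compact (equivalent to step 3's curl command)."""
    req = urllib.request.Request(
        f"{BASE}/{db}/_compact",
        data=b"",
        method="POST",
        headers={
            "Authorization": f"Basic {AUTH}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)


def get_db_info(db: str = DB) -> str:
    """GET /<db> and return the raw body, even when the server answers 500."""
    req = urllib.request.Request(
        f"{BASE}/{db}", headers={"Authorization": f"Basic {AUTH}"}
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        return e.read().decode()
```

   With a running container, `trigger_compaction()` followed by a loop over `get_db_info()` and `is_enoent_error()` shows the transition from `"compact_running":true` to the persistent 500.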
   
   
   
   
   ### Your Environment
   
   I'm running CouchDB as a single node in Docker Desktop under Windows 11 (with WSL).
   Image: couchdb:latest
   Version: 3.5.1
   
   ### Additional Context
   
   Here are the debug logs from a reproduction with an empty test database:
   > `curl http://gidsik:<pass>@localhost:5984/test`
   ```
   [notice] 2026-03-03T00:50:17.729237Z nonode@nohost <0.643.0> 0884c5319c 
localhost:5984 172.18.0.1 gidsik GET /test 200 ok 156
   ```
   > `curl -H "Content-Type: application/json" -X POST 
http://gidsik:<pass>@localhost:5984/test/_compact`
   ```
   [debug] 2026-03-03T00:53:19.917826Z nonode@nohost <0.925.0> -------- 
Starting compaction for db "shards/80000000-ffffffff/test.1772498476" at 0
   [debug] 2026-03-03T00:53:19.917847Z nonode@nohost <0.919.0> -------- 
Starting compaction for db "shards/00000000-7fffffff/test.1772498476" at 0
   [notice] 2026-03-03T00:53:19.917873Z nonode@nohost <0.1212.0> fccfbe44c6 
localhost:5984 172.18.0.1 gidsik POST /test/_compact 202 ok 1
   [debug] 2026-03-03T00:53:19.920434Z nonode@nohost <0.1252.0> -------- 
Compaction process spawned for db "shards/00000000-7fffffff/test.1772498476"
   [debug] 2026-03-03T00:53:19.920432Z nonode@nohost <0.1251.0> -------- 
Compaction process spawned for db "shards/80000000-ffffffff/test.1772498476"
   [debug] 2026-03-03T00:53:20.185521Z nonode@nohost <0.242.0> -------- New 
task status for <0.1252.0>: 
[{changes_done,0},{database,<<"shards/00000000-7fffffff/test.1772498476">>},{phase,document_copy},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   [debug] 2026-03-03T00:53:20.205917Z nonode@nohost <0.242.0> -------- New 
task status for <0.1251.0>: 
[{changes_done,0},{database,<<"shards/80000000-ffffffff/test.1772498476">>},{phase,document_copy},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   [debug] 2026-03-03T00:53:20.361294Z nonode@nohost <0.242.0> -------- New 
task status for <0.1252.0>: 
[{changes_done,0},{database,<<"shards/00000000-7fffffff/test.1772498476">>},{phase,docid_sort},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   [debug] 2026-03-03T00:53:20.383774Z nonode@nohost <0.242.0> -------- New 
task status for <0.1251.0>: 
[{changes_done,0},{database,<<"shards/80000000-ffffffff/test.1772498476">>},{phase,docid_sort},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   [debug] 2026-03-03T00:53:20.461284Z nonode@nohost <0.242.0> -------- New 
task status for <0.1252.0>: 
[{changes_done,0},{database,<<"shards/00000000-7fffffff/test.1772498476">>},{phase,docid_copy},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   [debug] 2026-03-03T00:53:20.490932Z nonode@nohost <0.242.0> -------- New 
task status for <0.1251.0>: 
[{changes_done,0},{database,<<"shards/80000000-ffffffff/test.1772498476">>},{phase,docid_copy},{progress,0},{started_on,1772499200},{total_changes,0},{type,database_compaction},{updated_on,1772499200}]
   ```
   > `curl http://gidsik:<pass>@localhost:5984/test`
   ```
   [error] 2026-03-03T00:53:22.528529Z nonode@nohost <0.1317.0> aaa3bb5814 rpc 
couch_db:get_db_info/1 {badmatch,{error,enoent}} 
[{couch_bt_engine,get_size_info,1,[{file,"src/couch_bt_engine.erl"},{line,264}]},{couch_db,get_db_info,1,[{file,"src/couch_db.erl"},{line,625}]},{fabric_rpc,with_db,3,[{file,"src/fabric_rpc.erl"},{line,376}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,141}]}]
   [error] 2026-03-03T00:53:22.528553Z nonode@nohost <0.1316.0> aaa3bb5814 rpc 
couch_db:get_db_info/1 {badmatch,{error,enoent}} 
[{couch_bt_engine,get_size_info,1,[{file,"src/couch_bt_engine.erl"},{line,264}]},{couch_db,get_db_info,1,[{file,"src/couch_db.erl"},{line,625}]},{fabric_rpc,with_db,3,[{file,"src/fabric_rpc.erl"},{line,376}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,141}]}]
   [notice] 2026-03-03T00:53:22.528732Z nonode@nohost <0.1278.0> aaa3bb5814 
localhost:5984 172.18.0.1 gidsik GET /test 500 ok 2
   ```

