[kstars] [Bug 390124] Job Schedule won't unpark mount a 2nd time
https://bugs.kde.org/show_bug.cgi?id=390124

schwim changed:

What           |Removed   |Added
Resolution     |FIXED     |---
Ever confirmed |0         |1
Status         |RESOLVED  |REOPENED

--- Comment #2 from schwim ---
OK, now I can restart a job that parked the mount and the mount will be unparked. However, about 50% of the time I get the following error in Ekos and the job will not run:

2018-02-25T20:00:55 Dome unparking error.

I see no mention of this error in the logs. Restarting the job skips the error, the mount is unparked (if needed), and the job proceeds. Not sure if this is related, but it seems it may be, so re-opening for guidance.

--
You are receiving this mail because:
You are watching all bug changes.
[kstars] [Bug 391085] New: kstars crash during imaging session
https://bugs.kde.org/show_bug.cgi?id=391085

Bug ID: 391085
Summary: kstars crash during imaging session
Product: kstars
Version: git
Platform: Mint (Debian based)
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

With at least the last two git commits, I noticed I get about 2-3 hours into an imaging run and kstars crashes. The indiserver continues running. The log files reveal nothing; they just stop.

Backtrace from gdb:

Thread 1 "kstars" received signal SIGABRT, Aborted.
0x717f7428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0  0x717f7428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
#1  0x717f902a in __GI_abort () at abort.c:89
#2  0x7213a84d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#3  0x721386b6 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#4  0x72138701 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x72138919 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#6  0x73cba072 in qBadAlloc() () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#7  0x73d60305 in QString::reallocData(unsigned int, bool) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#8  0x73d60b72 in QString::append(QString const&) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#9  0x5584daa0 in ServerManager::processStandardError (this=0x58dbbb60) at /home/schwim/src/kstars/kstars/indi/servermanager.cpp:292
#10 0x55807f66 in ServerManager::qt_static_metacall (_o=0x58dbbb60, _c=QMetaObject::InvokeMetaMethod, _id=6, _a=0x7fffd550) at /home/schwim/src/build/kstars/kstars/moc_servermanager.cpp:107
#11 0x73ee0d2a in QMetaObject::activate(QObject*, int, int, void**) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#12 0x73de7d7c in ?? () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#13 0x73de8568 in ?? () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#14 0x73ee0d2a in QMetaObject::activate(QObject*, int, int, void**) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#15 0x73f6024e in QSocketNotifier::activated(int, QSocketNotifier::QPrivateSignal) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#16 0x73eed1cb in QSocketNotifier::event(QEvent*) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#17 0x74c2005c in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5
#18 0x74c25516 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQt5Widgets.so.5
#19 0x73eb238b in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#20 0x73f08c95 in ?? () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#21 0x7fffee0fd197 in g_main_context_dispatch () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#22 0x7fffee0fd3f0 in ?? () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#23 0x7fffee0fd49c in g_main_context_iteration () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#24 0x73f087cf in QEventDispatcherGlib::processEvents(QFlags) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#25 0x73eafb4a in QEventLoop::exec(QFlags) () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#26 0x73eb7bec in QCoreApplication::exec() () from /usr/lib/x86_64-linux-gnu/libQt5Core.so.5
#27 0x556204fb in main (argc=1, argv=0x7fffdf68) at /home/schwim/src/kstars/kstars/main.cpp:322
[kstars] [Bug 391085] kstars crash during imaging session
https://bugs.kde.org/show_bug.cgi?id=391085

--- Comment #3 from schwim ---
I was using the following drivers:

indi_paramount_telescope
indi_fli_ccd
indi_fli_filter
indi_sx_ccd
indi_gpsd
indi_gemini_focus

I'll try the updates.
[kstars] [Bug 389960] New: Fits Open fails for files with parenthesis in path or name
https://bugs.kde.org/show_bug.cgi?id=389960

Bug ID: 389960
Summary: Fits Open fails for files with parenthesis in path or name
Product: kstars
Version: git
Platform: Other
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

Repeatable on Mint Xenial and OSX as of 2.9.2 and the latest git builds.

- In the kstars main window or the FITS viewer, go to File->Open FITS
- Navigate to a FITS file that has a parenthesis either in the path or in the file name itself.
- Attempt to open the file.

This results in an error and the file does not open. The error message is:

Could not open file . Error could not open the named file

I noticed this because Dropbox forces a parenthesis into the directory name. Workaround for directories: create a link to the directory, e.g.: ln -s
[kstars] [Bug 389960] Fits Open fails for files with parenthesis in path or name
https://bugs.kde.org/show_bug.cgi?id=389960

--- Comment #2 from schwim ---
(In reply to Jasem Mutlaq from comment #1)
> It seems you can't have filenames with brackets or parentheses since they
> have special meanings in the FITS extended file syntax.
>
> More information here:
> https://heasarc.gsfc.nasa.gov/docs/software/fitsio/filters.html
>
> Not sure if we can have a workaround for this.

Confirmed that files with brackets cause the same problem. The workaround above does not appear to work on OSX due to how the OSX software handles links.

I see the CFITSIO docs specify fits_open_diskfile as a possible solution where such characters are used. Not sure if this would help:
https://heasarc.gsfc.nasa.gov/docs/software/fitsio/c/c_user/node35.html
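For context, the distinction the comment points at can be sketched outside of CFITSIO itself: fits_open_file() parses CFITSIO's extended filename syntax, in which brackets (and in some contexts parentheses) carry filtering meaning, while fits_open_diskfile() treats its argument as a literal on-disk path. A minimal, hypothetical helper for deciding which opener a caller should use might look like this (needs_diskfile_open is illustrative only, not a KStars or CFITSIO function):

```python
def needs_diskfile_open(path: str) -> bool:
    """Return True if the path contains characters that CFITSIO's
    extended filename syntax could misinterpret, suggesting the
    caller open it with fits_open_diskfile() instead of
    fits_open_file()."""
    return any(c in path for c in "[]()")

# A Dropbox-style path with parentheses trips the check:
print(needs_diskfile_open("/home/user/Dropbox (Personal)/m42.fits"))  # True
print(needs_diskfile_open("/home/user/images/m42.fits"))              # False
```

This would lose the extended-syntax filtering features for such files, but plain image opening (the case in this bug) would work.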
[kstars] [Bug 390124] New: Job Schedule won't unpark mount a 2nd time
https://bugs.kde.org/show_bug.cgi?id=390124

Bug ID: 390124
Summary: Job Schedule won't unpark mount a 2nd time
Product: kstars
Version: git
Platform: Other
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

Created attachment 110474
--> https://bugs.kde.org/attachment.cgi?id=110474&action=edit
kstars log

This can be reproduced. Running on Mint Xenial, built from git.

If you schedule a job that unparks the mount at the start and parks it after the job completes, and the job fails for some reason, the mount will correctly be parked. On restarting the job, the scheduler will skip the unpark step and go directly to slewing, while INDI appears to still report the mount as parked. The job eventually times out if not cancelled. Disconnecting and reconnecting to the indiserver is a workaround. Logfile attached.
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #8 from schwim ---
(In reply to Jasem Mutlaq from comment #7)
> Can you check with latest GIT if this issue is resolved?

Built and testing in between imaging. Standby...
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #9 from schwim ---
(In reply to schwim from comment #8)
> (In reply to Jasem Mutlaq from comment #7)
> > Can you check with latest GIT if this issue is resolved?
>
> Built and testing in between imaging. Standby...

Here's what I'm seeing for two different job completion conditions:

- Sequence Completion: can re-run the job over and over - GOOD
- Multiple jobs mixing the two conditions: seems to work fine - GOOD
- Repeat for _N_ runs: will repeat N times as specified. An attempt to re-run this job (by restarting it) fails with a "No valid jobs found, aborting..." message. If you edit the job and change N (e.g. to N+1) it will run that new number of times; if you try to run it again, the same failure occurs. If you run the job several times to Sequence Completion as above, then switch to Repeat for ___ runs, it will run.
- Running one "N runs" job, deleting it, then re-adding an identical job gives an "observation job is already complete" message.

One thing I noticed is that if you have multiple jobs with "Repeat for __ runs", they all have to repeat the same number of times; you can't have one job repeat 2 times and another repeat 3 times. I also noticed you can't edit the priority of a job once it is in the list. These are probably different items to be tracked; I'll open bugs on those two independently if you prefer.
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #10 from schwim ---
More testing. I decided a video of what's going on is best. In short, a saved .esq is loaded. The save location for each image is an empty directory; however, this job had been run previously, with all the files since moved or deleted. The sequence will start and the 1st job will run, but all the others will move to Completed. Resetting their status at that point allows them all to run.

https://www.dropbox.com/s/gm909b7c8smg8gf/seq_oddity.mov?dl=0

...and a logfile:

https://www.dropbox.com/s/r9tuxzwm77ofw8w/seq_oddity.txt?dl=0
[kstars] [Bug 389583] New: Kstars crash when re-starting INDI server (reproducible)
https://bugs.kde.org/show_bug.cgi?id=389583

Bug ID: 389583
Summary: Kstars crash when re-starting INDI server (reproducible)
Product: kstars
Version: 2.9.2
Platform: Mint (Ubuntu based)
OS: Linux
Status: UNCONFIRMED
Severity: crash
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

Created attachment 110196
--> https://bugs.kde.org/attachment.cgi?id=110196&action=edit
backtrace

Affects versions 2.8.9 - 2.9.2 from the PPA as well as 2.9.2 built from the git repository. Both systems are running Mint Xenial.

To reproduce:

- Open kstars
- Open Ekos
- Start INDI (automatic connect)
- Disconnect from devices
- Stop INDI
- Restart INDI

This results in a crash. A backtrace is attached; however, the backtrace tool says it is not useful info.
[kstars] [Bug 389583] Kstars crash when re-starting INDI server (reproducible)
https://bugs.kde.org/show_bug.cgi?id=389583

--- Comment #1 from schwim ---
This happens when connecting to an external/remote indi process as well.
[kstars] [Bug 389583] Kstars crash when re-starting INDI server (reproducible)
https://bugs.kde.org/show_bug.cgi?id=389583

--- Comment #3 from schwim ---
Pulled and built the latest. The fix appears to have solved the problem. Thanks!
[kstars] [Bug 389738] New: .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

Bug ID: 389738
Summary: .esq files can't be run again by scheduler if prior captures exist
Product: kstars
Version: 2.9.2
Platform: Other
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

Created attachment 110276
--> https://bugs.kde.org/attachment.cgi?id=110276&action=edit
Example .esq file

Confirmed in the latest pull as of 31 Jan 2018; I think this goes back to prior releases.

If an .esq file is run more than once by the scheduler, the first run is successful but all subsequent runs quit, stating they are already complete. The scheduler log window shows the job "In Progress", then immediately shows "Complete". The debug logs show:

[2018-02-01T00:19:57.275 MST DEBG ][ org.kde.kstars.ekos.capture] - Preparing capture job "Light_L_60_secs_ISO8601" for execution.
[2018-02-01T00:19:57.275 MST DEBG ][ org.kde.kstars.ekos.capture] - Job "Light_L_60_secs_ISO8601" already complete.
[2018-02-01T00:19:57.275 MST DEBG ][ org.kde.kstars.ekos.capture] - All capture jobs complete.

If the original output fits files are deleted, the job can be run again.
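The logged behavior is consistent with a completion check that counts already-captured frames on disk rather than frames captured in the current run. A rough sketch of that kind of check follows; the function name, file layout, and naming scheme are assumptions for illustration, not KStars' actual logic:

```python
import glob
import os

def job_already_complete(outdir: str, prefix: str, required: int) -> bool:
    """Count previously captured frames matching this job's name prefix.
    If enough already exist on disk, the job is treated as complete --
    even if they came from an earlier run of the same .esq file."""
    captured = len(glob.glob(os.path.join(outdir, prefix + "*.fits")))
    return captured >= required
```

Under such a check, deleting the old FITS files (as noted above) makes the job runnable again, because the on-disk count drops back below the required count.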
[kstars] [Bug 389740] New: Odd behavior in ekos.focus when fitting v-curve
https://bugs.kde.org/show_bug.cgi?id=389740

Bug ID: 389740
Summary: Odd behavior in ekos.focus when fitting v-curve
Product: kstars
Version: 2.9.2
Platform: Mint (Debian based)
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: NOR
Component: general
Assignee: mutla...@ikarustech.com
Reporter: sch...@bitrail.com
Target Milestone: ---

I've been having a hard time getting focus to succeed. In troubleshooting I found several occurrences of the following. All looks normal until the V-Curve slope and resultant targetPosition in the example below.

[2018-01-31T23:55:59.841 MST DEBG ][ org.kde.kstars.ekos.focus] - Starting focus with box size: 36 Step Size: 500 Threshold: 150 Tolerance: 1 Frames: 1 Maximum Travel: 5000
[2018-01-31T23:55:59.843 MST DEBG ][ org.kde.kstars.ekos.focus] - State: "In Progress"
[2018-01-31T23:56:01.408 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 1 : Current HFR 1.61869
[2018-01-31T23:56:03.321 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 2 : Current HFR 1.67733
[2018-01-31T23:56:05.239 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 3 : Current HFR 1.77468
[2018-01-31T23:56:05.241 MST DEBG ][ org.kde.kstars.ekos.focus] -
[2018-01-31T23:56:05.241 MST DEBG ][ org.kde.kstars.ekos.focus] - Current HFR: 1.69024 Current Position: 46000
[2018-01-31T23:56:05.241 MST DEBG ][ org.kde.kstars.ekos.focus] - Last minHFR: 3.85693 Last MinHFR Pos: 42350
[2018-01-31T23:56:05.241 MST DEBG ][ org.kde.kstars.ekos.focus] - Delta: "217" %
[2018-01-31T23:56:05.241 MST DEBG ][ org.kde.kstars.ekos.focus] -
[2018-01-31T23:56:05.244 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus out ( 500 )
[2018-01-31T23:56:08.803 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 1 : Current HFR 1.6343
[2018-01-31T23:56:10.724 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 2 : Current HFR 1.57151
[2018-01-31T23:56:12.644 MST DEBG ][ org.kde.kstars.ekos.focus] - Focus newFITS # 3 : Current HFR 1.63131
[2018-01-31T23:56:12.646 MST DEBG ][ org.kde.kstars.ekos.focus] -
[2018-01-31T23:56:12.646 MST DEBG ][ org.kde.kstars.ekos.focus] - Current HFR: 1.61237 Current Position: 46500
[2018-01-31T23:56:12.646 MST DEBG ][ org.kde.kstars.ekos.focus] - Last minHFR: 1.69024 Last MinHFR Pos: 46000
[2018-01-31T23:56:12.646 MST DEBG ][ org.kde.kstars.ekos.focus] - Delta: "7.79" %
[2018-01-31T23:56:12.646 MST DEBG ][ org.kde.kstars.ekos.focus] -
[2018-01-31T23:56:12.650 MST DEBG ][ org.kde.kstars.ekos.focus] - Using slope to calculate target pulse...
[2018-01-31T23:56:12.650 MST DEBG ][ org.kde.kstars.ekos.focus] - V-Curve Slope 3.14119e-08 current Position 46500 targetPosition -5.12835e+07
[2018-01-31T23:56:12.650 MST DEBG ][ org.kde.kstars.ekos.focus] - new minHFR 1.61237 @ positioin 46500
[2018-01-31T23:56:12.650 MST DEBG ][ org.kde.kstars.ekos.focus] - targetPosition ( 0 ) - initHFRAbsPos ( 46000 ) exceeds maxTravel distance of 5000
[2018-01-31T23:56:12.651 MST DEBG ][ org.kde.kstars.ekos.focus] - Stopppig Focus
[2018-01-31T23:56:12.652 MST DEBG ][ org.kde.kstars.ekos.focus] - State: "Aborted"
[2018-01-31T23:56:12.652 MST DEBG ][ org.kde.kstars.ekos.focus] - AutoFocus result: false
[2018-01-31T23:56:12.653 MST DEBG ][ org.kde.kstars.ekos.focus] - State: "Failed"
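For what it's worth, the logged targetPosition is numerically consistent with a linear extrapolation of the form target = position - HFR/slope: with a near-zero slope the HFR/slope term blows up, which would explain the huge negative target and the maxTravel abort. This formula is an assumption inferred from the logged numbers, not taken from the Ekos source:

```python
# Values copied from the log above.
slope = 3.14119e-08      # V-Curve slope
position = 46500         # current focuser position
hfr = 1.61237            # current HFR
max_travel = 5000

# Assumed extrapolation to where HFR would reach zero:
target = position - hfr / slope
print(target)  # on the order of -5.13e+07, matching the logged targetPosition

# The sanity check then trips, and autofocus aborts:
print(abs(target - position) > max_travel)  # True
```

So the underlying oddity may be less the extrapolation itself than that a slope this close to zero (a nearly flat V-curve segment) is fed into it at all.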
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #3 from schwim ---
(In reply to Jasem Mutlaq from comment #2)
> Actually, that solution is not complete and can have issues. I have another
> proposal. Would it be OK if we make the path like this:
>
> TargetName/Job_#/Light/OIII/..etc
>
> That is, after the target name, each each a subdirectory containing the job
> number. This way the path is always unique. What do you think?

That would work; however, it would induce a bit of work when collecting all the data for processing (you'd need to descend into multiple directory trees to pull all the files together). Presuming it's a matter of having a unique filename for each image, I have a few alternatives:

1) Require that TS be on for file naming and rely on that -or-
2) Append a part of the job name to the file -or-
3) Use a system-wide counter for all image saves

I've seen 1 & 3 in use elsewhere and they work well. In the case of #3, the user could maybe even set the numbering if they wish. Example: append 4 digits at the end of each image, start at 1234 and increment, never repeating unless reset. It's probably good to have some corner-case collision avoidance as well: in the rare instance a collision does occur, iterate to the next number for the filename. #1 and #3 should both accommodate this.
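Alternative #3 with the suggested collision avoidance can be sketched like this; next_image_name and the naming scheme are hypothetical, not KStars code:

```python
import os

def next_image_name(outdir: str, base: str, counter: int, width: int = 4):
    """Build the next image filename from a system-wide counter,
    stepping past any name that already exists on disk (collision
    avoidance). Returns the filename and the updated counter."""
    while True:
        name = f"{base}_{counter:0{width}d}.fits"
        if not os.path.exists(os.path.join(outdir, name)):
            return name, counter + 1
        counter += 1
```

Starting the counter at 1234, the first save would be Light_1234.fits; if that file already exists from a prior run, the helper steps to Light_1235.fits instead of reusing or overwriting the name.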
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #4 from schwim ---
(In reply to Jasem Mutlaq from comment #2)
> Actually, that solution is not complete and can have issues. I have another
> proposal. Would it be OK if we make the path like this:
>
> TargetName/Job_#/Light/OIII/..etc
>
> That is, after the target name, each each a subdirectory containing the job
> number. This way the path is always unique. What do you think?

Actually, thinking more on this, it may not work that well. Consider processing the data in PixInsight: presuming the file names are the same but in different directories, you'd end up with problems when the tools save them out. Best to have each file have a unique name.
[kstars] [Bug 389738] .esq files can't be run again by scheduler if prior captures exist
https://bugs.kde.org/show_bug.cgi?id=389738

--- Comment #6 from schwim ---
(In reply to Jasem Mutlaq from comment #5)
> Ok I need to think more about now. The solution as applied already solves
> the issue at the scheduler level, but as soon as the job goes to capture, it
> checks the directory and finds that all the required images are already
> captured. It does not know number of other jobs with the exact same path. So
> I'll see if there a graceful way to resolve this.

Yes, this would make multi-night imaging with the same .esq files a breeze. .esq re-use would be very handy to have indeed - basically they become capture profiles that are orchestrated by the scheduler.