Hello,
I just finished my first training with Tesseract 4.0 and ran lstmeval on
the generated model, which I named *mod01*.
I used this command line:
lstmeval --model data/checkpoints/mod01_checkpoint \
  --traineddata ./usr/share/tessdata/mod01.traineddata \
  --eval_listfile data/list.eval
It worked.
See https://github.com/tesseract-ocr/tesseract/blob/master/doc/lstmeval.1.asc
When using a checkpoint, you also need to pass the starter traineddata file
that was used for training. Alternatively, give the final traineddata file
as the model.
So, if after training you have converted the checkpoint to a traineddata,
you can use that as the model. For example:
training/lstmeval --model ~/tesstutorial/engoutput/base_checkpoint \
--traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
--eval_listfile ~/tesstutorial/engeval/eng.training_files.txt

training/lstmeval --model tessdata/best/eng.traineddata \
  --eval_listfile ~/tesstutorial/engeval/eng.training_files.txt
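
If you still need to do the conversion itself, a rough sketch using the same
tesstutorial paths as above would be lstmtraining with --stop_training (the
--model_output path here is just an example, adjust it to your setup):

training/lstmtraining --stop_training \
  --continue_from ~/tesstutorial/engoutput/base_checkpoint \
  --traineddata ~/tesstutorial/engtrain/eng/eng.traineddata \
  --model_output ~/tesstutorial/engoutput/eng.traineddata

The resulting eng.traineddata can then be passed directly to lstmeval's
--model option, as in the second example above.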
I'd like to produce high-quality OCR of books that contain text
interspersed with music. Is it possible to train Tesseract to ignore
musical notation instead of turning it into junk OCR? How would one go
about doing this?
On the Tesseract site, it is mentioned that no GPU is needed (no support).
What does this statement mean?
If I have a machine with a GPU, does it improve training performance, or
does it have no impact on training time?
Please respond.