Hi,

The problem seems to be that you are using the stock pre-trained Tesseract model, which is trained on printed fonts rather than handwriting. For your scenario you would need to fine-tune (or retrain) a Tesseract model on samples of the handwriting you want to recognize; that new model should noticeably improve your accuracy.
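For concreteness, here is a rough sketch of what the fine-tuning setup could look like with the tesstrain project (https://github.com/tesseract-ocr/tesstrain), which drives LSTM fine-tuning from pairs of cropped text-line images and matching .gt.txt transcriptions. The model name, paths, sample data and Makefile variables below are placeholders written from memory, so verify them against the tesstrain README before running:

# Sketch: lay out line images plus transcriptions in the structure tesstrain
# expects, then kick off fine-tuning from the stock "eng" model.
# All names, paths and sample texts below are placeholders.
import shutil
import subprocess
from pathlib import Path

GT_DIR = Path("tesstrain/data/eng_hand-ground-truth")  # hypothetical model name "eng_hand"
GT_DIR.mkdir(parents=True, exist_ok=True)

# Build this list from your own handwriting: one single-line image per
# sample, paired with its exact transcription.
samples = [
    (Path("lines/0001.png"), "example transcription of line one"),
    (Path("lines/0002.png"), "example transcription of line two"),
]

for i, (img, text) in enumerate(samples):
    stem = f"line_{i:05d}"
    shutil.copy(img, GT_DIR / f"{stem}.png")          # cropped text-line image
    (GT_DIR / f"{stem}.gt.txt").write_text(text + "\n", encoding="utf-8")

# Training itself is driven by the tesstrain Makefile (run inside its checkout).
# Variable names follow its README as I recall them -- double-check before use.
subprocess.run(
    ["make", "training",
     "MODEL_NAME=eng_hand",
     "START_MODEL=eng",                               # fine-tune rather than train from scratch
     "TESSDATA=/usr/share/tesseract-ocr/5/tessdata",  # adjust to your tessdata path
     "MAX_ITERATIONS=10000"],
    cwd="tesstrain",
    check=True,
)

Starting from START_MODEL=eng (fine-tuning) usually needs far less data than training from scratch, but expect to label on the order of a few hundred to a few thousand handwritten lines before the new model clearly beats the stock one.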
Regards,
Ajinkya

On Tuesday, 26 November 2024 at 19:27:50 UTC+5:30 samlee...@gmail.com wrote:

> Dear Tesseract Team,
>
> I am currently working on a project that involves recognizing English
> text, and I have implemented a workflow using the CRAFT text detector to
> identify text regions. After isolating these regions, I process each
> segment with Tesseract OCR. While this approach achieves high accuracy
> with printed text, performance drops significantly on handwritten text
> (see the attached image for an example).
>
> To improve accuracy, I have already applied preprocessing steps such as
> grayscale conversion and binarization. However, I would like to ask for
> advice on optimizing preprocessing parameters for images with diverse
> characteristics. Specifically:
>
> 1. Are there recommended preprocessing configurations that generally
> work well for most images when preparing them for OCR?
> 2. Are there additional steps or methods that could further enhance
> handwritten text recognition accuracy?
>
> Thank you for your time and support. I look forward to your valuable
> insights.
>
> Best regards
>
> [image: 螢幕擷取畫面 2024-11-26 214426.png]
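On the preprocessing questions in the quoted message, below is a minimal sketch of one common per-region pass (grayscale, light denoising, Otsu binarization, upscaling, then Tesseract with an explicit page-segmentation mode). The kernel size, scale factor and --psm/--oem values are starting points to tune per image rather than recommended defaults, and the input filename is hypothetical:

# Sketch of a preprocessing pass for one cropped text region before Tesseract.
# Parameter values are starting points to experiment with, not tuned defaults.
import cv2
import pytesseract

def ocr_region(path: str) -> str:
    img = cv2.imread(path)
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Light denoising helps the thresholding step on noisy scans or photos.
    gray = cv2.medianBlur(gray, 3)

    # Otsu picks a global threshold; try cv2.adaptiveThreshold for uneven lighting.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Upscaling small crops often helps Tesseract's LSTM recognizer.
    binary = cv2.resize(binary, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)

    # --psm 6 assumes a uniform block of text; use --psm 7 for a single line.
    return pytesseract.image_to_string(binary, config="--oem 1 --psm 6")

print(ocr_region("region_0.png"))  # hypothetical crop produced by the CRAFT step

That said, preprocessing mainly helps with noise, contrast and layout; it will not turn the stock model into a handwriting recognizer, which is why retraining or fine-tuning as described above is the bigger lever.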