You were right again actually :)
I will stick with the simple fine-tuning.
However I wouldn't have been able to experiment with the other scenarios
without your help. Thanks! All is working perfectly well.
Regards
On Monday, September 10, 2018 at 8:51:54 PM UTC+1, Lorenzo Blz wrote:
>
On Mon, 10 Sep 2018 at 15:38, Raniem wrote:
> I am actually doing that not to limit the number of output chars; I am
> doing it because I thought this way I am only tuning the final layer, as I
> wanted to keep the weights for the other layers.
> I was trying to experiment whether this i
You were right regarding the different model types. Thanks :)
On Monday, September 10, 2018 at 2:38:38 PM UTC+1, Raniem wrote:
>
> I think there is no need to change the network definition appending layers
> with a limited number of output chars. The line you replaced already takes
> care of this with:
>
I am actually doing that not to limit the number of output chars; I am
doing it because I thought this way I am only
I think there is no need to change the network definition appending layers
with a limited number of output chars. The line you replaced already takes
care of this with:
--net_spec "[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c`head
-n1 data/unicharset`]"
I had this error when I was mi
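For context on that command substitution: the first line of a Tesseract unicharset file is the number of symbols it contains, so `head -n1` yields exactly the class count the O1c output layer needs. A minimal sketch (the file and its count are made up for illustration):

```shell
# First line of a unicharset file = symbol count; the rest lists one
# symbol per line. We fake a tiny one here just to show the expansion.
printf '111\nNULL 0 Common 0\n' > unicharset.demo

# The backtick substitution sizes the output layer from that count.
echo "O1c`head -n1 unicharset.demo`"   # prints: O1c111
```

This is why the net_spec stays valid for any charset: the layer size is read from the data rather than hard-coded.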
Thanks Lorenzo.
Your method works like magic; it is exactly what I needed.
One other question please, I am attempting to fine tune only the last
layer, so I have replaced the
--net_spec "[1,36,0,1 Ct3,3,16 Mp3,3 Lfys48 Lfx96 Lrx96 Lfx256 O1c`head -n1
data/unicharset`]" \
in the lstmtraining command with:
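The replacement itself is cut off in this message. For reference, the Tesseract 4 training documentation describes fine-tuning only the top of the network by cutting it at a layer index and appending a fresh output layer via `--append_index`. A sketch along those lines (the index, file names, and model paths here are illustrative, not taken from this thread):

```shell
# Build the appended spec from the unicharset size (illustrative count).
printf '111\n' > data_unicharset.demo
NET_SPEC="[Lfx256 O1c$(head -n1 data_unicharset.demo)]"
echo "$NET_SPEC"   # prints: [Lfx256 O1c111]

# The training invocation would then look roughly like (not executed here):
# lstmtraining --continue_from eng.lstm \
#   --traineddata eng.traineddata \
#   --append_index 5 --net_spec "$NET_SPEC" \
#   --model_output out/base --train_listfile train.list
```

With `--append_index`, layers above the given index are discarded and replaced by the new spec, so only the appended layers start from scratch while the earlier weights are kept.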
Thanks for the detailed answer, I am giving it a shot and hoping to get
some better results :)
Thanks for all your help and support
Best Regards
On Friday, June 29, 2018 at 1:01:08 PM UTC+1, Lorenzo Blz wrote:
>
>
>
> Hi,
> I'm trying to do fine tuning of an existing model using line i
Hi Raniem,
I did 5 fine-tunings for different fonts and text content with roughly
these numbers (iterations : samples of training data):
750 : 208 numbers (4 upper case + 5 digits each)
1000 : 400 MRZ codes (22 uppercase chars each)
1800 : 1000 numbers (10 digits each)
2250
Hi @Lorenzo Blz,
How many data lines and iterations have you used in your fine-tuning?
In your last reply you have mentioned you replaced
merge_unicharsets $(TESSDATA)/$(CONTINUE_FROM).lstm-unicharset
$(TRAIN)/my.unicharset "$@"
with:
cp "$(TRAIN)/my.unicharset" "data/unicharset"
which is
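To unpack the difference: merge_unicharsets unions the base model's charset with the one generated from the new training text, whereas a plain cp only works when the fine-tuning text introduces no characters outside the existing unicharset. A toy sketch of the copy shortcut (file names and contents are illustrative):

```shell
# Pretend this is the unicharset generated from the fine-tuning text,
# containing only characters the base model already knows.
mkdir -p TRAIN data
printf '3\nNULL 0\na 1\nb 2\n' > TRAIN/my.unicharset

# The shortcut: skip merging and install the training unicharset directly.
cp TRAIN/my.unicharset data/unicharset
cmp -s TRAIN/my.unicharset data/unicharset && echo "unicharset installed"
```

If the new text did contain unseen characters, skipping the merge would leave them without output classes, so the copy is safe only for same-charset fine-tuning.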