
Request for Updated Pre-trained Models #423

Open · Meriem-DAHMANI opened this issue Jun 10, 2024 · 11 comments

@Meriem-DAHMANI

Hello,
I want to fine-tune EasyOCR for French (easyocr.Reader(['fr'])), and I followed the instructions provided in this note and this article. However, I ran into a problem: the note suggests downloading the OCR pre-trained model from this Google Drive link, but the latest models available there were uploaded in 2020. Given that the last updates to EasyOCR were made 10 months ago, these models are outdated and do not perform as well as the latest EasyOCR version.
Additionally, I need to improve the French version specifically, but there is no option to specify which language to train. I tried to obtain a .pth file for the latest version of EasyOCR but wasn't sure how to proceed.
Could you please guide me on how to get the latest pre-trained model for EasyOCR, and how to fine-tune it specifically for French?
Thank you.
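
For reference, the repository's custom_model.md describes how a fine-tuned recognition model is plugged back into easyocr.Reader. A minimal sketch, where custom_fr and the directory names are placeholders:

import easyocr

# Assumes custom_fr.pth sits in model_storage_directory and
# custom_fr.yaml / custom_fr.py sit in user_network_directory,
# as described in custom_model.md.
reader = easyocr.Reader(
    ['fr'],
    recog_network='custom_fr',
    model_storage_directory='model',
    user_network_directory='user_network',
)
result = reader.readtext('example.png')

The .yaml and .py files describe the network architecture and character set, and they have to match how the model was trained.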

@charlyjazz-sprockets

I am using the EasyOCR trainer that's inside the repository. I also could not use a pre-trained model; I had to create one from scratch.
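
For reference, a minimal sketch of how the bundled trainer is usually launched, mirroring trainer/trainer.ipynb; the config file name is a placeholder, and get_config is assumed to be the YAML-loading helper from that notebook saved as its own module (as in the snippet later in this thread):

import torch.backends.cudnn as cudnn
from train import train            # trainer/train.py in the EasyOCR repo
from get_config import get_config  # assumed: the config loader from trainer/trainer.ipynb

cudnn.benchmark = True
cudnn.deterministic = False

# placeholder config file: copy one of the examples in trainer/config_files and edit it
opt = get_config("config_files/fr_finetune_config.yaml")
train(opt, amp=False)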

@Meriem-DAHMANI
Author

@charlyjazz-sprockets did you run this code? I think in the configuration file you can set the FT option to True in order to fine-tune EasyOCR, and in that case it will use the pretrained model.
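
For reference, the fine-tuning-related fields in the trainer config look roughly like this (a sketch following the layout of the example configs in trainer/config_files; the paths and the character list are placeholders and must match your data and the model you load):

experiment_name: 'fr_finetune'
train_data: 'all_data'
valid_data: 'all_data/fr_val'
saved_model: 'saved_models/fr_pretrained.pth'  # pretrained weights to start from (placeholder path)
FT: True                                       # fine-tune from saved_model instead of training from scratch
lang_char: 'abcdefghijklmnopqrstuvwxyzàâæçéèêëîïôœùûüÿ'  # example character set, adjust to your labels
Transformation: 'None'
FeatureExtraction: 'VGG'
SequenceModeling: 'BiLSTM'
Prediction: 'CTC'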

@CharlyJazz

I did not try that! Let me know if you want to talk about this repo!

@CharlyJazz

Are you getting weird validation scores? My validation scores look almost too good, but when I use the model in easyocr the performance is bad.

@Meriem-DAHMANI
Author

@CharlyJazz how did you train your model? What code did you use?

@CharlyJazz

https://github.com/JaidedAI/EasyOCR/blob/master/trainer/train.py

[screenshot]

The validation scores look very good:

[screenshot]

But when I use the .pth file, the predictions are really bad:

[screenshot]

@Meriem-DAHMANI
Author

Meriem-DAHMANI commented Jun 22, 2024

What's the size of your dataset? Also, can you please show me the code for how you used the model .pth to get text as output? For me it gives a matrix of numbers, and when I convert it to text it doesn't give the right output.

@CharlyJazz

Same problem for me. My dataset is 5M images of synthetic music chord notation; it's not a big deal, it's a very standard use case.
[screenshot]

import os
import math
import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image
from model import Model
from utils import CTCLabelConverter
from ddevice import device  # local module that exposes the torch device (cuda or cpu)
from dataset import adjust_contrast_grey
from get_config import get_config

opt = get_config("config_files/en_chords_synth_config.yaml")

class PredictAlignCollate(object):
    """Single-image version of the trainer's AlignCollate preprocessing."""

    def __init__(self, imgH=32, imgW=100, keep_ratio_with_pad=False, contrast_adjust=0.):
        self.imgH = imgH
        self.imgW = imgW
        self.keep_ratio_with_pad = keep_ratio_with_pad
        self.contrast_adjust = contrast_adjust

    def __call__(self, image):
        image = np.array(image)  # grayscale PIL image -> HxW uint8 array

        #### augmentation here - change contrast
        if self.contrast_adjust > 0:
            image = adjust_contrast_grey(image, target=self.contrast_adjust)

        if self.keep_ratio_with_pad:  # same concept as the 'Rosetta' paper
            # resize keeping the aspect ratio, capped at imgW
            h, w = image.shape[:2]
            ratio = w / float(h)
            if math.ceil(self.imgH * ratio) > self.imgW:
                resized_w = self.imgW
            else:
                resized_w = math.ceil(self.imgH * ratio)
        else:
            # plain resize to a fixed width, without preserving the aspect ratio
            resized_w = self.imgW

        # ToTensor/Normalize in the Compose below handle the tensor conversion
        resized_image = Image.fromarray(image, 'L').resize((resized_w, self.imgH), Image.BICUBIC)
        return resized_image

def predict(opt, image_path, model_path, text_for_pred):
    """ Predict text from a single image """
    image = Image.open(image_path).convert('L')
    transform = transforms.Compose([
        PredictAlignCollate(imgH=opt.imgH, imgW=opt.imgW, keep_ratio_with_pad=opt.PAD, contrast_adjust = opt.contrast_adjust),
        transforms.ToTensor(),
        transforms.Normalize((0.5,), (0.5,)),
    ])
    image = transform(image).unsqueeze(0).to(device)
    converter = CTCLabelConverter(opt.character)
    opt.num_class = len(converter.character)
    with torch.no_grad():
        model = Model(opt)
        model = torch.nn.DataParallel(model).to(device)
        model.load_state_dict(torch.load(model_path, map_location=device))
        model.eval()
        preds = model(image, text_for_pred, is_train=False)
        preds_size = torch.IntTensor([preds.size(1)])
        _, preds_index = preds.max(2)
        preds_index = preds_index.view(-1)
        preds_str = converter.decode_greedy(preds_index.data, preds_size.data)
        return preds_str[0]

root  = os.path.dirname(os.path.abspath(__file__))
pathi = os.path.join(root, "dataset_structure", "validation_book")
# pathi = os.path.join(root, "jazz_book_dataset", "transformed")
# pathi = "/Users/carlosazuaje/Charlyjazz/Github/OCR-Chord-Notation/synthetic_dataset_chords_v2/batch_1"
files = os.listdir(pathi)

if __name__ == "__main__":
    for filename in files:
        if not filename.endswith('.jpg'):
            continue
        file_path = os.path.join(pathi, filename)
        label = file_path.split('/')[-1]  # the filename (including ".jpg") is used as text_for_pred
        prediction = predict(
            opt, 
            file_path, 
            "/Users/carlosazuaje/Charlyjazz/Github/OCR-Chord-Notation/saved_models/chords_5millions/saved_models_chords_5millions_iter_949999.pth",
            label)
        print(f'Label: {label} Prediction: {prediction}')
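
One thing worth double-checking with a setup like this (my assumption, not something confirmed in this thread): the character string passed to CTCLabelConverter at prediction time has to be exactly the one used during training, otherwise the decoded indices map to the wrong symbols. A quick sanity check against the checkpoint, assuming it was saved from a DataParallel-wrapped model as in the trainer:

import torch
from utils import CTCLabelConverter
from get_config import get_config

opt = get_config("config_files/en_chords_synth_config.yaml")
converter = CTCLabelConverter(opt.character)

state = torch.load("path/to/checkpoint.pth", map_location="cpu")  # placeholder path
# the CTC head's output size should equal the converter's alphabet size
# (key name assumes the DataParallel "module." prefix and a CTC 'Prediction' head)
num_class_in_ckpt = state["module.Prediction.weight"].shape[0]
print(num_class_in_ckpt, len(converter.character))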

@CharlyJazz

Update: after 1M iterations the model seems to produce better predictions.

@CharlyJazz

Let me know if you want to share knowledge about this; I have Discord.

@Meriem-DAHMANI
Author

I'll try to increase the number of iterations too, and I contacted you on LinkedIn.
