
Page level images #7

Open
Shreeshrii opened this issue May 4, 2018 · 49 comments
Assignees
Labels
enhancement New feature or request

Comments

@Shreeshrii
Collaborator

The script works for line level images.

I have a number of scanned page images with ground truth files.

Does the OCR-D project have any tools to segment them into line images with corresponding ground truth text?

@wrznr
Collaborator

wrznr commented May 4, 2018

Unfortunately, not yet. We are working on something in this direction to align the full texts from the German Text Archive with the corresponding images. Hopefully, I can get back to you soon with a tool.
Additionally, @jbaiter is working on some things along these lines...

@Shreeshrii
Collaborator Author

Thanks. It will be a useful tool.

I am trying to use some ocropus tools to split the page into line images. I will either OCR the line images to create text to be corrected into ground truth, or type it in fully.

@jbaiter

jbaiter commented May 4, 2018

@Shreeshrii, you could try this approach:

  1. Split the page image into line images with ocropus/kraken
  2. Run the most suitable OCR model on the line images
  3. For each line in the resulting OCR, find the ground truth line with the lowest edit distance (e.g. Levenshtein)
  4. Every matching line with an edit distance below a certain threshold should have a fairly high chance of being a correct match

One problem with this approach is that segmentation errors (e.g. a line gets cut in two, a few words at the beginning/end are missing, etc.) lead to false positives.
This also assumes that your ground truth is split into lines. If not, you will have to modify step 3 to slide each OCR line over the ground truth and determine the best match that way, with some added heuristics to avoid matching partial words, etc.
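
A minimal sketch of steps 3 and 4 (an illustration, not code from any tool mentioned here), assuming the OCR result and the ground truth are already plain Python lists of line strings; the function names and the 20% relative threshold are arbitrary:

def levenshtein(a, b):
    """Plain dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def align_lines(ocr_lines, gt_lines, max_rel_dist=0.2):
    """Match each OCR line to the closest ground-truth line and keep the
    pair only if the edit distance stays below a relative threshold."""
    pairs = []
    for ocr in ocr_lines:
        best = min(gt_lines, key=lambda gt: levenshtein(ocr, gt))
        if levenshtein(ocr, best) < max_rel_dist * max(len(ocr), len(best), 1):
            pairs.append((ocr, best))
    return pairs

Comparing each OCR line against every ground-truth line is quadratic, but fast enough for typical page sizes.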

@Shreeshrii
Collaborator Author

@jbaiter

I want to use it for Devanagari script. I had looked at ocropus quite some time back. I am not sure if ocropus/kraken supports Devanagari.

Do you know if it has support for complex scripts?

@zuphilip
Contributor

@Shreeshrii There are some papers with text recognition results for Ocropus on Devanagari script. However, I am not aware of any shared model you could reuse. You can find some models for Ocropus here: https://github.com/tmbdev/ocropy/wiki/Models

However, instead of steps 1 and 2 you can also use Tesseract to create hOCR output and then use hocr-extract-images to create the line images and texts.

Moreover, if you have the ground truth in hOCR format you can use hocr-eval for the evaluation against your recognition output. Or do you have the ground truth only as text, without the geometric information?

@Shreeshrii
Collaborator Author

@zuphilip I have also read about Devanagari training for ocropus, but the models are not available (I had looked a couple of years ago or so).

Thank you for the link to specific HOCR tools. I will give them a try.

The ground truth files I have are plain text files matching the scanned images, without any positional info. I was able to use them to evaluate OCR accuracy by comparing with the recognized output.

@Shreeshrii
Collaborator Author

https://github.com/Shreeshrii/imagessan/tree/master/groundtruthimages

Sanskrit language samples in Devanagari script.

@zuphilip
Contributor

zuphilip commented May 12, 2018

Ping @adnanulhasan who may still have some sources from the Ocropus training with Devanagari script texts.

@Shreeshrii
Collaborator Author

you can also use tesseract for creating a hocr output and then use hocr-extract-images to create the line images and texts.

@zuphilip Thank you. I was able to use it for Devanagari script files also. The commands that worked for me (it took a little experimenting to get them right):

:~/hocr-tools$ PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./shree/ -p ./shree/san.pothi-%03d.png  ./shree/Mudgala-Test-01.hocr
:~/hocr-tools$ PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./shree/ -p ./shree/san.pothi-%03d.tif  ./shree/Mudgala-Test-01.hocr

@Shreeshrii
Collaborator Author

The other option which I had used was:


    # perform binarization
    ./ocropus-nlbin tests/devatest?.png -o devatest -n -g

    # perform page layout analysis
    ./ocropus-gpageseg 'devatest/????.bin.png' -n

And then running tesseract to get text and correcting it.

@Shreeshrii
Collaborator Author

Shreeshrii commented Sep 9, 2018

In case it is helpful to others looking for a solution, posting below a bash script I use for:

  1. taking a scanned page image,
  2. running tesseract with hocr option on it,
  3. running hocr tools to split it into lines.

The ground truth needs to be updated manually; if there is an existing page-level ground truth file, copy it line by line into the line-level ground truth files.

#!/bin/bash
# For each page image: OCR it to hOCR, then split it into line images and line texts.
SOURCE="./myfiles/"
lang=san
set -- "$SOURCE"*.png
for img_file; do
    echo -e  "\r\n File: $img_file"
    # Run Tesseract on the page image, producing an .hocr file next to it
    OMP_THREAD_LIMIT=1 tesseract --tessdata-dir ../tessdata_fast   "${img_file}" "${img_file%.*}"  --psm 6  --oem 1  -l $lang -c page_separator='' hocr
    # hocr-tools lives in a Python virtualenv; extract line images and texts
    source venv/bin/activate
    PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./myfiles/ -p "${img_file%.*}"-%03d.exp0.tif  "${img_file%.*}".hocr
    deactivate
done
# Mark the extracted line texts as ground truth files
rename s/exp0.txt/exp0.gt.txt/ ./myfiles/*exp0.txt

echo "Image files converted to tif. Correct the ground truth files and then run ocr-d train to create box and lstmf files"

@wrznr wrznr added the enhancement New feature or request label Sep 20, 2018
@wrznr wrznr self-assigned this Sep 20, 2018
@SultanOrazbayev

SultanOrazbayev commented Nov 23, 2018

Occasionally, the line images are a bit taller than the text, and so they catch letters from the preceding or subsequent lines. Is this a problem for training (i.e. should such images be fixed to ensure that they do not contain the tops/bottoms of neighbouring lines)?

@wrznr
Collaborator

wrznr commented Nov 23, 2018

I think this is a problem. It would be great if you could provide a corresponding example, maybe in a specific GitHub issue. Many thanks in advance!

@Shreeshrii
Collaborator Author

Please see tesseract-ocr/tesseract#2231 for the WordStr format box files.

@wrznr
Collaborator

wrznr commented Aug 29, 2019

@bertsky: Concerning the comment by @SultanOrazbayev, clipping may help here, right? Is it possible to get polygonal line shapes from Tesseract?

@bertsky
Collaborator

bertsky commented Aug 29, 2019

It is possible to get polygon-based segmentation from Tesseract: with BlockPolygon from the page iterator delivered by AnalyseLayout. There is a bug somewhere though: sometimes paths self-intersect, which even Tesseract itself does not cope with very well (as can be seen from the mask images produced internally, available via GetImage when also passing in the raw image again). Maybe this issue can be circumvented by postprocessing – using shapely.geometry functions to make the paths self-disjoint, or similar.
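
A rough sketch of that shapely postprocessing idea (an illustration, not bertsky's actual code; it assumes the outline arrives as a list of (x, y) points, e.g. from BlockPolygon):

from shapely.geometry import Polygon

def repair_outline(points):
    """Return a valid polygon for a possibly self-intersecting outline."""
    poly = Polygon(points)
    if poly.is_valid:
        return poly
    fixed = poly.buffer(0)  # zero-width buffer re-nodes self-intersecting rings
    # buffer(0) can split the shape into several parts; keep the largest one
    if fixed.geom_type == "MultiPolygon":
        fixed = max(fixed.geoms, key=lambda g: g.area)
    return fixed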

But even without polygon-masked line images you could try clipping to get rid of the intrusions from neighbours, yes. Or, alternatively, do resegmentation (i.e. increase coherence via another line segmentation). Both methods are already available as OCR-D processors, as is Tesseract region segmentation (optionally with polygons).

But you want line segmentation with polygons here, right? I am afraid Tesseract's API does not offer that – only for the "block" level!

Should I give details (what/where/how) on using clipping and resegmentation?

@kabilankiruba

Hi,
I am using OCR-D for preparing training data and I am trying to extract data from a PDF with a dot-matrix font. I created some samples as dot-matrix TIF images with gt.txt files and then used Tesseract on my PDF, but it extracts only some letters and sometimes reads 0 as 8. Please suggest how to fix this issue.

@wrznr
Collaborator

wrznr commented Oct 1, 2019

@kabilankiruba This is clearly not related to this thread. Please consider contacting the Tesseract user group.

@Shreeshrii
Collaborator Author

Shreeshrii commented Jan 12, 2020

Is there any tool which will display the line images and gt.txt side by side for easy correction after generating the files from hOCR output (as suggested here)?

I do not want to run a web server to do this.

Can it be done via JavaScript/HTML: show an image and its gt.txt, save the corrected gt.txt, and have an arrow/option to display the next image and gt.txt?

Basically, I would like to run this on my Windows 10 desktop.

@wrznr
Collaborator

wrznr commented Jan 13, 2020

@kba @cneud @stweil Can you recommend a tool for this purpose? Wasn't there such a thing in OCRopy?

@Shreeshrii
Collaborator Author

https://github.com/OpenArabic/OCR_GS_Data/blob/master/_doublecheck_viewer.py creates an HTML5-based webpage for reviewing OCR training/testing data.
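
In the same spirit, such a static review page can be generated with nothing but the Python standard library. A minimal sketch, assuming the line images are PNGs sitting next to their *.gt.txt files (the file layout and names are assumptions):

import glob
import html
import os

def write_review_page(folder, out_name="correction.html"):
    """Write a static HTML page pairing each line image with its
    ground-truth text; the text cells are editable in the browser."""
    rows = []
    for txt in sorted(glob.glob(os.path.join(folder, "*.gt.txt"))):
        img = os.path.basename(txt).replace(".gt.txt", ".png")  # assumed naming
        with open(txt, encoding="utf-8") as f:
            gt = f.read().strip()
        rows.append('<tr><td><img src="%s"></td>'
                    '<td contenteditable="true">%s</td></tr>'
                    % (html.escape(img), html.escape(gt)))
    page = ("<!doctype html><meta charset='utf-8'>"
            "<table border='1'>%s</table>" % "\n".join(rows))
    with open(os.path.join(folder, out_name), "w", encoding="utf-8") as f:
        f.write(page)

Saving the corrections still has to happen by hand (e.g. "save page as"), the same limitation noted for hocrjs below.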

@kba
Collaborator

kba commented Jan 13, 2020

Can you recommend a tool for this purpose?

Can it be done via javascript/html - show an image and its gt.txt - save corrected gt.txt and have an arrow/option to display next image and gt.txt.

Both kraken's and ocropy's transcription interfaces do that. The hocrjs viewer has an option to make items contenteditable but no way to save them.

@Shreeshrii
Collaborator Author

Shreeshrii commented Jan 13, 2020

Thank you. I think the following workflow will do the trick.

./ocropus-nlbin bookpages/*.png -o book
./ocropus-gpageseg 'book/????.bin.png'
./ocropus-gtedit html -f 20 -H 48 ./book/*/*.png

writing correction.html

Transfer correction.html to Windows and open it in a browser. Add the ground truth text for each line image. Save the HTML as a complete webpage. Transfer the file back to Linux.

./ocropus-gtedit extract -p bookgt correction.html

@Shreeshrii
Collaborator Author

@fjp These are two different approaches.
I have used both separately, only on an experimental basis, mostly for testing.

@M3ssman
Contributor

M3ssman commented Apr 2, 2020

Hello,
are there still any plans to integrate some kind of tool into tesstrain?

I was facing similar requirements for the generation of training data in a Windows environment, which ended up in a small script that extracts both coordinates and text data from an existing ALTO file and writes training data pairs.
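
The core of such an extraction is small. A sketch of the idea (not @M3ssman's actual script), assuming ALTO v3 with pixel coordinates on the TextLine elements and the text in String/@CONTENT:

import xml.etree.ElementTree as ET
from PIL import Image

NS = {"alto": "http://www.loc.gov/standards/alto/ns-v3#"}  # adjust to your ALTO version

def alto_to_pairs(alto_path, image_path, prefix):
    """Crop one line image per ALTO TextLine and write the matching .gt.txt."""
    page = Image.open(image_path)
    root = ET.parse(alto_path).getroot()
    for i, line in enumerate(root.iterfind(".//alto:TextLine", NS)):
        x = int(float(line.get("HPOS")))
        y = int(float(line.get("VPOS")))
        w = int(float(line.get("WIDTH")))
        h = int(float(line.get("HEIGHT")))
        text = " ".join(s.get("CONTENT") for s in line.iterfind("alto:String", NS))
        page.crop((x, y, x + w, y + h)).save("%s-%03d.tif" % (prefix, i))
        with open("%s-%03d.gt.txt" % (prefix, i), "w", encoding="utf-8") as f:
            f.write(text + "\n")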

@wrznr
Collaborator

wrznr commented Apr 2, 2020

@M3ssman This would be a great contribution, especially since it opens up a way to use Aletheia-created GT with tesstrain.

@M3ssman
Contributor

M3ssman commented Apr 2, 2020

@wrznr I must confess: there are some caveats.
It adds another dependency, python-opencv, since Pillow kept complaining about images larger than 80 MB.
Further, on Windows 10 one additionally needs to install the C++ 14.0 build tools, whose required version varies with the Python version used by numpy, which in turn is used by OpenCV.

@rraina97

rraina97 commented Jun 1, 2020

In case it is helpful to others looking for a solution, posting below a bash script I use for -

1. taking a scanned page image,

2. running tesseract with hocr option on it,

3. running hocr tools to split it into lines.

The ground truth needs to be updated manually, if there is an existing page level ground truth file, copy line by line into the lines ground truth.

#!/bin/bash
SOURCE="./myfiles/"
lang=san
set -- "$SOURCE"*.png
for img_file; do
    echo -e  "\r\n File: $img_file"
    OMP_THREAD_LIMIT=1 tesseract --tessdata-dir ../tessdata_fast   "${img_file}" "${img_file%.*}"  --psm 6  --oem 1  -l $lang -c page_separator='' hocr
    source venv/bin/activate
    PYTHONIOENCODING=UTF-8 ./hocr-extract-images -b ./myfiles/ -p "${img_file%.*}"-%03d.exp0.tif  "${img_file%.*}".hocr 
    deactivate
done
rename s/exp0.txt/exp0-gt.txt/ ./myfiles/*exp0.txt

echo "Image files converted to tif. Correct the ground truth files and then run ocr-d train to create box and lstmf files"

Could you please explain what each line does? I want to run it on my system but am confused about what to change @Shreeshrii

@Shreeshrii
Collaborator Author

Shreeshrii commented Jun 1, 2020

I want to run it on my system but am confused on what to change

Assuming that you have tesseract and hocr-tools installed, put your image (png) files in the ./myfiles/ folder. Change lang=san in the bash script to whichever language you need, e.g. lang=eng, then save and run the bash script.

For each image file, the script:

  1. runs tesseract on the image file to produce hOCR output
  2. runs hocr-extract-images to split the image into line images along with the OCRed text for each line

It then renames the generated text files from *.txt to *.gt.txt. The correct command is the following (. instead of - in the filename):

rename s/exp0.txt/exp0.gt.txt/ ./myfiles/*exp0.txt

After this the *.gt.txt files need to be manually corrected to match the line images.

@rraina97

rraina97 commented Jun 2, 2020

I want to run it on my system but am confused about what to change

Assuming that you have tesseract and hocr-tools installed […]

Thank you. It has solved some issues but a problem still persists. I'm attaching a screenshot. Please look into the matter @Shreeshrii
[Screenshot attached]

@Shreeshrii
Collaborator Author

Do you have tesseract and hocr-tools installed correctly?

It is not finding the hocr config file. Is your TESSDATA_PREFIX directory set up correctly?

Are the hocr-tools working fine?

Change the paths based on your setup.

@rraina97

rraina97 commented Jun 2, 2020

I installed hocr-tools using "sudo pip3 install hocr-tools". As for tesseract, I cloned the tesstrain repo and used make leptonica tesseract, since I had to train tesseract manually on data.
I guess my tessdata_prefix is fine. It's ./usr/share/tessdata.
I tried several steps but am not able to run it correctly @Shreeshrii

@Shreeshrii
Collaborator Author

Take one image file. Run tesseract on it and see if you get text output. Try again with pdf at the end of the command and see if you get PDF output. Then try with hocr.

Similarly, test the hocr-tools. Check that you can run the hocr-extract-images command.
If you have installed it, then you may not need ./ before the command.

Once you can do this for one file, use the appropriate commands in a for loop for all files.

@Shreeshrii
Collaborator Author

./usr/share/tessdata

Check the files and folders in that directory. Do you have a newer set of files under /usr/share/tessdata/4.00?

@rraina97

rraina97 commented Jun 2, 2020

Both tesseract and hocr-tools are working.
So I manually ran tesseract with hocr on an image file "img.png" and it provided me with an output "img.hocr".
Now, to extract line data from this I ran hocr-extract-images, but am faced with an error which I have attached below. Please help @Shreeshrii
[Screenshot attached]

@kba
Collaborator

kba commented Jun 2, 2020

@rraina97 Please open an issue at https://github.com/tmbdev/hocr-tools for help on invoking hocr-extract-images, to keep this issue uncluttered.

It looks like img.hocr is not in the current directory. Make sure you are in the right location, i.e. ls img.hocr is successful.

@prasad01dalavi

In my case, to make it run I made some minor changes to @Shreeshrii's script. I put the page image files in myfiles and ran the script with bash generate_training_data.sh:

  1. Created a training virtualenv
  2. sudo apt-get install hocr-tools
  3. sudo apt-get install rename
#!/bin/bash
SOURCE="./myfiles/"
lang=eng
set -- "$SOURCE"*.jpg
for img_file; do
    echo -e  "\r\n File: $img_file"
    OMP_THREAD_LIMIT=1 tesseract "${img_file}" "${img_file%.*}"  --psm 6  --oem 1  -l $lang -c page_separator='' hocr
    source training_env/bin/activate
    PYTHONIOENCODING=UTF-8 hocr-extract-images -b ./myfiles/ -p "${img_file%.*}"-%03d.exp0.tif  "${img_file%.*}".hocr 
    deactivate
done
rename s/exp0.txt/exp0-gt.txt/ ./myfiles/*exp0.txt
echo "Image files converted to tif. Correct the ground truth files and then run ocr-d train to create box and lstmf files"

Special thanks to @Shreeshrii!

@sahrawat

Nitpick:

rename s/exp0.txt/exp0-gt.txt/ ./myfiles/*exp0.txt

should be

rename s/exp0.txt/exp0.gt.txt/ ./myfiles/*exp0.txt

@kba
Collaborator

kba commented Nov 26, 2020

Note that @M3ssman has proposed a set of python scripts to generate line image/text pairs from PAGE and ALTO in #205.

@bertzi87

In case it is helpful to others looking for a solution, posting below a bash script I use for: […]

For a simpler and more efficient way, I recommend GNU parallel. The above becomes two lines. First generate the hOCR files:
parallel --bar -j 4 'OMP_THREAD_LIMIT=1 tesseract {} {/.} --psm 4 --oem 1 -l eng hocr' ::: *.png
Then extract the tif/txt pairs:
parallel --bar -j 4 'hocr-extract-images {} -p {/.}-%03d.tif' ::: *.hocr

It is even faster (around 10% for me) if you recompile tesseract without OpenMP (./configure --disable-openmp).

@whisere

whisere commented Apr 28, 2022

If we only have page images and page-level ground truth text, can we use them to train tesseract instead of line images and line-level ground truth? I imagine page images/texts are closer to tesseract's input/output format?

@wrznr
Collaborator

wrznr commented Apr 28, 2022

@whisere The question is whether your page images/texts are aligned on line-level. I.e. for each text line the coordinates of the corresponding part of the page image have to be annotated. If not, training Tesseract with your data is not possible.

@whisere

whisere commented Apr 28, 2022

Thanks. That's not good; there is no text line information in the page texts at all, only multiple blocks with <p>.

@whisere

whisere commented Apr 29, 2022

How about block images and block ground truth text?

@wrznr
Collaborator

wrznr commented Apr 29, 2022

You would have to align them manually or semi-automatically (i.e. you could try to OCR the images to get the line segmentation and then heuristically match the text onto the lines) on the line level. Tesseract text recognition has to be trained on the level of lines. There is no other way (cf. e.g. https://ieeexplore.ieee.org/abstract/document/6628705).

@whisere

whisere commented Apr 29, 2022

Many thanks for the information!

@ssandrews

This is a helpful script. Thank you. However, it ends with "run ocr-d train to create box and lstmf files". Can someone tell me how to do this? Thanks.

@SawatKia

For those who are unfamiliar with bash scripts, I recommend using this Python script to crop the line images automatically.

import pytesseract
from PIL import Image
import os

def segment_lines_tesseract(image_path):
    """
    Segments lines from an image using Tesseract OCR and saves them as separate images.

    Args:
        image_path (str): The path to the input image file.

    Returns:
        list: A list of cropped line images.
    """
    # Open the image using PIL
    image = Image.open(image_path)
    
    # Use Tesseract to get detailed information about text lines in the image
    details = pytesseract.image_to_data(image, output_type=pytesseract.Output.DICT)
    
    # List to hold cropped line images
    line_images = []
    
    # Iterate through the detected text elements
    for i in range(len(details['level'])):
        if details['level'][i] == 4:  # Level 4 corresponds to a text line in Tesseract's output (level 5 is a single word)
            x = details['left'][i]     # X-coordinate of the bounding box
            y = details['top'][i]      # Y-coordinate of the bounding box
            w = details['width'][i]     # Width of the bounding box
            h = details['height'][i]    # Height of the bounding box
            
            # Crop the line from the original image using the bounding box coordinates
            line_image = image.crop((x, y, x + w, y + h))
            
            # Get folder and file names for saving cropped images
            folder_name = os.path.basename(os.path.dirname(image_path))
            file_name = os.path.splitext(os.path.basename(image_path))[0]
            
            # Define output directory for line images
            output_dir = './line_images'
            os.makedirs(output_dir, exist_ok=True)  # Create the directory if it doesn't exist
            
            # Save the cropped line image with a unique filename
            line_image.save(os.path.join(output_dir, f'{folder_name}_{file_name}_box{i}.png'))
            
            # Append the cropped line image to the list
            line_images.append(line_image)
    
    return line_images

def process_directory(directory):
    """
    Processes all images in a given directory and segments lines from each.

    Args:
        directory (str): The path to the directory containing images.
    """
    print("Processing directory:", directory)
    
    # Walk through all files in the directory
    for root, dirs, files in os.walk(directory):
        for file in files:
            if file.endswith(('.jpg', '.jpeg', '.png', '.gif', '.bmp', '.tiff')):
                image_path = os.path.join(root, file)  # Full path to the image file
                print("Processing image:", image_path)
                try:
                    segment_lines_tesseract(image_path)  # Segment lines from the image (it opens the file itself)
                except Exception as e:
                    print(f"Error processing {image_path}: {str(e)}")  # Handle errors gracefully

if __name__ == "__main__":
    print("Segmenting lines from images in the current directory...")
    
    # Get the current working directory
    current_dir = os.getcwd()
    
    # Process all images in the current directory
    process_directory(current_dir)
