Export your OPTC character box automatically from screenshots.
OPTCbx analyzes your character box screenshots (Figure 1) and extracts every character within them, all without manual intervention.
OPTCbx is the first box exporter for One Piece Treasure Cruise that requires no manual work: it runs on just a set of screenshots.
On its own, OPTCbx has no utility beyond showing your character box in a fancy manner.
Its power comes from contributions. I believe OPTCbx can be a key building block when combined with projects such as:
- NakamaNetwork: With OPTCbx you are not far from automatically exporting your teams or your box to this extraordinary website.
- CrewPlanner: Imagine exporting all your characters to CrewPlanner without spending hours entering them one by one. With OPTCbx, you will be able to do so in just a few minutes, or even seconds 😲.
- TM Planner: OPTCbx would provide the ability to automatically select the characters that you don't own.
This project is fully implemented in Python, so make sure you have Python installed on your computer.

- Install the dependencies:

```
$ pip install -r requirements.txt
```
- Optional: download the most recent units from the OPTC DB:

```
$ cd tools
$ sh download-units.sh
$ cd ..
```

After running the above commands you should find `units.json` under the `data` directory.
- Download the portrait images:

```
$ python -m optcbx download-portraits \
    --units data/units.json \
    --output data/Portraits
```
- Download the pretrained CNNs:

```
$ cd ai
$ sh prepare-ai.sh
$ cd ..
```
- Run the demo with your screenshot:

```
$ python -m optcbx demo <screenshot-path>
```
Note: If OpenCV shows warnings about PNG files, run `fix.bat` inside the `tools` directory.
OPTCbx supports different computer vision techniques to retrieve the characters sitting in your box and match them with the corresponding OPTC database entries.
Currently, OPTCbx supports two main approaches to detect and match characters:
- Gradient-based approach: handcrafted steps based on color-change gradients.
- Smart approach: based on object detection models and self-supervision to match characters.
The techniques used are:
- Retrieve character box portraits: First, keep only the high-intensity colors such as white and yellow. Then, on the resulting masked image, I apply Canny edge detection (Figure 2) to obtain the character box borders. With the Canny result, I apply the Hough Line Transform to detect the vertical and horizontal lines that form the box grid. Finally, with the grid I can find the boxes wrapping each character (Figure 3); with these regions, cropping the characters one by one is straightforward. A sketch of this step follows the list below.
- Finding matching characters: I compute a pairwise Mean Squared Error (Equation 1) distance between every character found in your box and every database entry, and pick the entry with the smallest distance (Figure 4). A sketch of the matching step appears after Figure 4.
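Below is a minimal sketch of the grid-extraction step, assuming OpenCV and NumPy; the thresholds and function names here are illustrative, not OPTCbx's actual API.

```python
import cv2
import numpy as np

def crop_box_portraits(screenshot_path: str):
    """Illustrative pipeline: mask bright colors, run Canny edge detection,
    detect grid lines with the Hough transform, and crop each grid cell."""
    img = cv2.imread(screenshot_path)

    # 1. Keep only high-intensity colors (the white/yellow box borders).
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 0, 180), (60, 130, 255))  # illustrative thresholds
    masked = cv2.bitwise_and(img, img, mask=mask)

    # 2. Canny edge detection on the masked image (Figure 2).
    edges = cv2.Canny(cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY), 50, 150)

    # 3. Hough Line Transform: keep near-vertical and near-horizontal lines.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                            minLineLength=img.shape[1] // 6, maxLineGap=10)
    xs, ys = [], []
    for x1, y1, x2, y2 in (lines[:, 0] if lines is not None else []):
        if abs(x1 - x2) < 5:
            xs.append(x1)      # vertical grid line
        elif abs(y1 - y2) < 5:
            ys.append(y1)      # horizontal grid line
    # Real code would cluster nearby lines; here we only deduplicate.
    xs, ys = sorted(set(xs)), sorted(set(ys))

    # 4. Each cell between consecutive grid lines wraps one character (Figure 3).
    return [img[y0:y1, x0:x1]
            for y0, y1 in zip(ys, ys[1:])
            for x0, x1 in zip(xs, xs[1:])]
```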
Results:
Figure 4: Matched characters. For each column, the character on the right is your character and the one on the left is the OPTC DB entry (the exported one).
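As a rough sketch of the matching step (Equation 1 is the standard MSE; the helper names below are hypothetical):

```python
import cv2
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    # Equation 1: MSE(a, b) = (1 / n) * sum_i (a_i - b_i)^2 over all n pixels
    return float(np.mean((a.astype(np.float32) - b.astype(np.float32)) ** 2))

def match_characters(portraits, db_portraits, db_ids, size=(64, 64)):
    """For every cropped portrait, pick the database entry with the
    smallest pairwise MSE. Images are resized to a common shape first."""
    db = [cv2.resize(p, size) for p in db_portraits]
    matches = []
    for portrait in portraits:
        p = cv2.resize(portrait, size)
        distances = [mse(p, d) for d in db]
        matches.append(db_ids[int(np.argmin(distances))])  # smallest distance wins
    return matches
```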
- Object detection model: Using my own SSD implementation, I train an SSD model with a VGG-16 backbone to detect OPTC characters on any screen (character box, crew build, Pirate Festival teams, etc.). An illustrative sketch follows this list.
- Self-supervision to generate image features and character matching: Ideally, instead of comparing large images, we want to compare small vectors that encode the most important features of the given images. To generate these features I use a pretrained CNN. Sadly, ImageNet pretraining alone does not yield feature vectors with enough representational capacity. Therefore, I self-supervise a ResNet-18 so that it learns to reconstruct One Piece characters' portraits. Using this new pretrained model, the resulting matches seem accurate. A sketch of this setup appears at the end of this section.
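OPTCbx uses its own SSD implementation; as a rough illustration of the same idea with an off-the-shelf model (not the project's actual code), torchvision's SSD with a VGG-16 backbone could be trained like this:

```python
import torch
from torchvision.models.detection import ssd300_vgg16

# SSD with a VGG-16 backbone and 2 classes: background + "character".
model = ssd300_vgg16(num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images, targets):
    """One detection training step.
    images: list of 3xHxW float tensors (screenshots).
    targets: list of dicts with 'boxes' (Nx4, xyxy) and 'labels' (N,)."""
    model.train()
    loss_dict = model(images, targets)   # classification + box regression losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# At inference time, model(images) returns boxes and scores per screenshot,
# from which each detected character portrait can be cropped and matched.
```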
NOTE: For those interested in the AI part, all the related code is available inside the `notebooks` directory.
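Below is a minimal sketch of the self-supervision idea described above (an illustrative ResNet-18 reconstruction setup; the exact model lives in the notebooks): the encoder learns to reconstruct portraits, and its bottleneck features replace raw pixels during matching.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PortraitAutoencoder(nn.Module):
    """ResNet-18 encoder plus a small deconvolutional decoder, trained to
    reconstruct 32x32 character portraits (the target is the input itself)."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        backbone = resnet18(weights=None)
        # Everything up to the global average pool -> (B, 512, 1, 1) features.
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])
        self.decoder = nn.Sequential(   # 1x1 -> 2 -> 4 -> 8 -> 16 -> 32
            nn.ConvTranspose2d(feat_dim, 256, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)                  # (B, 512, 1, 1)
        return self.decoder(z), z.flatten(1)

def train_step(model, batch, optimizer, criterion=nn.MSELoss()):
    """Self-supervised step: reconstruct the batch, no labels required."""
    recon, _ = model(batch)                  # batch: (B, 3, 32, 32) in [0, 1]
    loss = criterion(recon, batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

@torch.no_grad()
def embed(model, images):
    """512-d feature vectors used to match box portraits to DB entries."""
    model.eval()
    return model(images)[1]
```

After training, matching compares these small embeddings (e.g., with the same smallest-distance rule) instead of raw pixel grids.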