
script.convert tfjs model to onnx support #1038

Open
JohnRSim opened this issue Nov 18, 2024 · 5 comments
Labels
question Further information is requested

Comments

@JohnRSim

Question

I'm using tfjs-node to create an image-classifier model, but I'm stuck on how to convert model.json into a format that optimum or script.convert can turn into an ONNX file.

I'm able to convert to a graph model using

tensorflowjs_converter --input_format=tfjs_layers_model \
  --output_format=tfjs_graph_model \
  ./saved-model/layers-model/model.json \
  ./saved-model/graph-model

and then I can convert to ONNX using

python3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx

This works fine when I test it in Python, but I'm unable to use the result in transformers.js - do I need to use optimum to convert it?
I tried a number of approaches but was unable to produce an ONNX file that works with it; I then saw script.convert but am having difficulties.

This is an example of the code I'm using to test the model:
import onnxruntime as ort
from PIL import Image
import numpy as np

# Load the ONNX model
session = ort.InferenceSession('./saved-model/model.onnx')

# Get input and output names
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# Load and preprocess the image
img = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))
img_array = np.array(img).astype(np.float32) / 255.0  # Normalize pixel values to [0, 1]
img_array = np.expand_dims(img_array, axis=0)  # Add batch dimension

# Run inference
outputs = session.run([output_name], {input_name: img_array})
print(f"Inference outputs: {outputs}")

Uploading model.onnx.txt…

Any guidance on how to go from a tfjs model.json to an ONNX model supported by transformers.js would really help me out.
Thanks!

@JohnRSim added the question (Further information is requested) label on Nov 18, 2024
@xenova
Collaborator

xenova commented Nov 18, 2024

Hi there 👋 which model are you trying to convert? Also, can you provide the transformers.js code you are trying to run?

Note that our conversion script is only built for Hugging Face transformers models (not arbitrary model conversion)

@JohnRSim
Author

Ah, thanks @xenova.

I created a custom image-classifier model with tfjs-node; the model.onnx (uploaded with a .txt extension) is attached in my previous message.

Let me grab and share the code shortly - it's pretty basic.

@JohnRSim
Author

JohnRSim commented Nov 18, 2024

This is what I'm using to validate and test the generated ONNX:
validate_onnx.py.txt
test_image.py.txt

I'm generating the model using tfjs-node
generate.js.txt

The transformers.js code I'm testing with (not working):
test.js.txt

And then I was playing around with a web worker and your latest ms-florence example, to see if I could fine-tune with the custom images (WIP).
customVision.js.txt

Here is an image from the training data that I was using to test against:
00e745c9-97d9-429d-8c3f-d3db7a2d2991

If there are any guides you can point me to, that would be great - I just want to create a custom mini image classifier (ideally with Node), convert it to ONNX, and use transformers.js to pass images through it and return a classified label (see the sketch after the configs below for roughly what I'm aiming for).

config.json

{
  "model_type": "vit",
  "hidden_size": 768,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "intermediate_size": 3072,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "attention_probs_dropout_prob": 0.1,
  "image_size": 128,
  "patch_size": 16,
  "num_channels": 3,
  "num_labels": 2
}

preprocessor_config.json

{
	"feature_extractor_type": "ViTFeatureExtractor",
	"image_mean": [0.5, 0.5, 0.5],
	"image_std": [0.5, 0.5, 0.5],
	"size": 128
}
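
For reference, this is roughly the transformers.js call I'd expect to work once the ONNX file and these configs are arranged in a model repo. This is only a sketch: the repo id is a placeholder, and the onnx/model.onnx layout is my assumption about how transformers.js model repos are usually organized, not something confirmed in this thread:

// Sketch: image classification with a custom ONNX model via transformers.js (@xenova/transformers package).
// Assumes a Hub (or local) repo containing config.json, preprocessor_config.json and onnx/model.onnx;
// 'your-username/your-classifier' is a placeholder repo id.
import { pipeline } from '@xenova/transformers';

const classifier = await pipeline('image-classification', 'your-username/your-classifier');

// In Node the input can be a local file path or a URL
const output = await classifier('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg');
console.log(output); // e.g. [{ label: 'shirt', score: 0.97 }]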

@xenova
Collaborator

xenova commented Nov 18, 2024

Hmm looks like the link to the model is broken:
[screenshot: broken attachment link]

Feel free to upload it to the Hugging Face Hub for easier transferring (https://huggingface.co/new)

@JohnRSim
Author

Thanks @xenova

I've dropped the files in here:
https://huggingface.co/jrsimuix/issue1038
