Hey,
I want to convert a custom TF model into .kmodel format to run it on the K210 chip.
For testing I wanted to start really simple, with one input neuron and one output neuron (the model is trained to return 1 if the input is > 0.5 and 0 otherwise).
However, after saving it in TFLite format I can't get it to convert using ncc. I write the model out with:

```python
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```
The error I receive is the following:

```
Import graph...
Optimize Pass 1...
Optimize Pass 2...
Quantize...
4.1. Add quantization checkpoints...
4.2. Get activation ranges...
Plan buffers...
Fatal: Invalid dataset, should contain one file at least
```
However, even when I change the path the error persists, e.g. when I give only the directory instead of the file.
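The fatal message hints at what ncc is looking for: the `--dataset` argument is apparently treated as a directory that must contain at least one calibration file, not as a path to a single array. A minimal sketch of a layout that satisfies that check (the `calib_dir` and `sample0.bin` names are illustrative assumptions, not from the original report):

```python
import os
import numpy as np

# Create a calibration directory holding at least one raw sample file;
# ncc scans this folder rather than loading a single .npy array.
os.makedirs("calib_dir", exist_ok=True)
np.array([0.5], dtype=np.float32).tofile(os.path.join("calib_dir", "sample0.bin"))

print(os.listdir("calib_dir"))  # ['sample0.bin']
```

With a folder like this, passing `--dataset calib_dir` should at least get past the "should contain one file at least" check.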
So can you only run computer vision models on the K210? Additionally, for images, what would the directory structure look like? Is there any way to run a time-series model on it?
Hey,
I want to convert a custom TF model into .kmodel format to run it on the K210 chip.
For testing I wanted to start really simple, with one input neuron and one output neuron (the model is trained to return 1 if the input is > 0.5 and 0 otherwise).
However, after saving it in TFLite format I can't get it to convert using ncc. I currently use the command:
```
./ncc compile k210_model.tflite k210_model.kmodel --dataset X_train.npy --input-type uint8 --output-type uint8 --inference-type uint8
```

However, that does not seem to work: it crashes during quantization. How would I need to format the dataset for it to work properly?
The code base is the following:

```python
import tensorflow as tf
import numpy as np

X_train = np.random.rand(10000, 1)     # 10000 samples, 1 feature
y_train = (X_train > 0.5).astype(int)  # 1 if > 0.5, else 0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(1, activation='sigmoid')  # output layer for binary classification
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=100, batch_size=32, verbose=1)

test_samples = np.array([[0.3], [0.7]])
print(model.predict(test_samples))  # expect outputs close to [0, 1]

np.save("mnt/data/X_train.npy", X_train)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```
The error I receive is the following:

```
Import graph...
Optimize Pass 1...
Optimize Pass 2...
Quantize...
4.1. Add quantization checkpoints...
4.2. Get activation ranges...
Plan buffers...
Fatal: Invalid dataset, should contain one file at least
```
However, even when I change the path the error persists, e.g. when I give only the directory instead of the file.
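Since ncc evidently expects `--dataset` to point at a directory containing at least one calibration file, one way around the fatal error would be to split `X_train` into per-sample raw files and pass the folder instead of the `.npy`. This is only a sketch under that assumption; the `calib_dataset` name and the 100-sample subset are illustrative choices:

```python
import os
import numpy as np

X_train = np.random.rand(10000, 1).astype(np.float32)

# Dump a representative subset of samples as raw little-endian float32
# files; ncc reads its calibration data from a directory of such files.
os.makedirs("calib_dataset", exist_ok=True)
for i, sample in enumerate(X_train[:100]):
    sample.tofile(os.path.join("calib_dataset", f"sample_{i}.bin"))

print(len(os.listdir("calib_dataset")))  # 100
```

The compile step would then be invoked with `--dataset calib_dataset`; on nncase releases that support it, adding `--dataset-format raw` tells ncc the files are raw tensors rather than images.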