Matrix multiplication #747
Hi all, I am working with a custom ML model that was originally written using the subclassing method. I am recreating this model with the Functional API so that I can use hls4ml to deploy it on an FPGA. However, I've run into a major issue: the original model performs a `tf.matmul(t1, t2)` operation where tensor 1 has shape (1, 9, 3, 128, 25) and tensor 2 has shape (1, 3, 25, 25), the first dimension being the batch size. To convert this into a Keras layer supported by hls4ml, I tried the Dot layer. Unfortunately, this was unsuccessful, and I ran into the following issue:
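(The actual error isn't reproduced above. For context, here is a minimal NumPy sketch of the broadcasting that `tf.matmul` performs on these shapes; `np.matmul` follows the same broadcasting rules, and this batched-with-broadcasting product is what Keras's Dot layer cannot express:)

```python
import numpy as np

# Shapes from the question: leading dims are batch/broadcast dims,
# the last two dims are the actual matrix dims (128x25) @ (25x25).
t1 = np.zeros((1, 9, 3, 128, 25))
t2 = np.zeros((1, 3, 25, 25))

# Batch dims (1, 9, 3) and (1, 3) broadcast together to (1, 9, 3).
out = np.matmul(t1, t2)
print(out.shape)  # (1, 9, 3, 128, 25)
```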
Then, I tried to create a custom Keras layer that uses the `tf.matmul` operation. My model looks like this:
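(The model code isn't reproduced above. A minimal sketch of such a custom layer might look like the following; the name `MatMulLayer` is my own illustration, not an hls4ml or Keras API:)

```python
import tensorflow as tf

class MatMulLayer(tf.keras.layers.Layer):
    """Hypothetical custom layer wrapping tf.matmul on a pair of inputs."""
    def call(self, inputs):
        t1, t2 = inputs
        # tf.matmul broadcasts the leading batch dims (1, 9, 3) x (1, 3)
        return tf.matmul(t1, t2)

layer = MatMulLayer()
t1 = tf.zeros((1, 9, 3, 128, 25))
t2 = tf.zeros((1, 3, 25, 25))
print(layer([t1, t2]).shape)  # (1, 9, 3, 128, 25)
```

Wiring this layer into a Functional model via `tf.keras.Input` would follow the same pattern, though whether hls4ml can then convert it is exactly the open question here.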
Next, I tried to create the hls model with the following code:
The configuration file that is parsed was created manually and looks like this:
But this results in the following error:
So, my question is: is it possible to use a custom Keras layer that uses `tf.matmul` inside a Keras model? If so, how do I save the model correctly? Do I use the SavedModel or the h5 format? And how do I create a correct configuration file?

P.S.: Am I correct that subclassed Keras models won't work with hls4ml?

I look forward to your feedback! Thanks in advance!

Arthur
We don't have a direct implementation of `matmul`. We use matrix-vector multiplication in most places (like the Dense layer or convolutional layers). Also, `tf_to_hls` hasn't seen development for a long time, so it probably doesn't work well anymore. But the biggest issue I see here, which I mentioned in the other issues you opened, is that those tensors are simply too large for `io_parallel`.
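(To illustrate the point about matrix-vector multiplication, here is my own NumPy sketch, not hls4ml code: a matrix-matrix product decomposes into one matrix-vector product per row of the left operand, which is the kind of primitive hls4ml's Dense and convolutional kernels implement:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 25))  # one (128, 25) slice of t1
B = rng.standard_normal((25, 25))   # one (25, 25) slice of t2

full = A @ B  # the matrix-matrix product, shape (128, 25)

# Same result built row by row: each row A[i] times B is a single
# matrix-vector product (B.T @ A[i] == A[i] @ B for a 1-D vector).
rows = np.stack([B.T @ A[i] for i in range(A.shape[0])])

assert np.allclose(full, rows)
```

Even decomposed this way, the sheer number of such products implied by a (1, 9, 3, 128, 25) tensor is what makes `io_parallel` impractical here.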