Breaking change: #7951 (tfjs-v4.11.0) #7974
Hi @AmitMY, thank you for bringing this issue to our attention. I attempted to replicate it on my end and I'm getting the error message below. To confirm: are you also getting the same error message, or am I doing something wrong here? If so, please share the steps to replicate the issue, which will help us investigate further. I'm following this official documentation to replicate it. Thank you! I received the error message below:
I'm getting the same issue now. Here, I wrote a minimal reproduction. Here's a directory with a new model and an HTML file. To reproduce, I ran:

<!DOCTYPE html>
<html lang="en-US">
  <head>
    <meta charset="utf-8" />
    <title>TensorFlow.js browser example</title>
    <!-- Load TensorFlow.js from a script tag -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
  </head>
  <body>
    <h1>TensorFlow.js example</h1>
    <h2>Open the console to see the results.</h2>
    <script>
      function resetDropout(layers) {
        for (const layer of layers) {
          // For Sequential models, the nested layers are in the `layers` property
          if (layer.layers) {
            resetDropout(layer.layers);
          }
          // For TimeDistributed wrappers, the wrapped layer is in the `layer` property
          if (layer.layer) {
            resetDropout([layer.layer]);
          }
          // If the layer is a dropout layer, reset its rate to 0
          if (layer.rate) {
            layer.rate = 0;
          }
        }
      }

      async function main() {
        const model = await tf.loadLayersModel('model_new.json');
        resetDropout(model.layers);
        const tensor = tf.zeros([1, 1, 256, 256, 3]).toFloat();
        tensor.print();
        // Must apply the model with training=true to avoid using aggregated norm statistics
        model.apply(tensor, {training: true}).print();
      }

      main();
    </script>
  </body>
</html>
Hi @AmitMY, thank you for providing the minimal code example to replicate the issue. I'm getting the error message below, so to confirm: are you also getting the same error message, or a different one? If possible, could you please share your error log or a screenshot, which would help us investigate this issue further? Thank you! For reference, I have added a screenshot below. When I tried without
Yes, the error I am getting is the same:
This is happening with 4.11.0 but not with 4.10.0. Just so it is clear:
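(As a side note, one way to double-check which TF.js build the page actually loaded, assuming the version export shipped by the @tensorflow/tfjs union package, is a quick log in the console; this is a hedged sketch, not taken from the thread:)

// Hedged sketch: print the loaded TensorFlow.js version to confirm
// whether the page is really running 4.10.0 vs 4.11.0
console.log('tfjs version:', tf.version.tfjs);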
Hi @AmitMY, thank you for the confirmation. I really appreciate your findings and observations, which will help us investigate this issue further. We will have to dig more into this issue and will update you soon. Thank you!
Hi @haoyunfeix, I saw you fixed #6699 before; could you help check this again?
Yunfei has already left Intel, so he may not be able to double-check this. We will take a look once we get back to the office after the holidays (Oct 7).
Thank you, Yang, and take your time.
This is likely a bug in #7951. I'm trying to reproduce it, but I'm running into a different error than expected:
Did the model file shared at that drive link change? The sha256sum I'm seeing for the zip file is |
I redownloaded the zip file and can confirm the same hash:
However, the error I am seeing is still
I tried the webgpu, webgl, and cpu backends; all report the same errors:
Reverting this change (#7951) works well on webgpu, webgl, and cpu. Code is required to change the backend before predicting:
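The exact snippet referred to above is not shown in this thread; as a rough, hypothetical sketch (assuming the standard tf.setBackend and tf.ready calls), switching the backend before predicting could look like this:

// Hypothetical helper, not the commenter's actual code
async function runWithBackend(backendName, model, tensor) {
  // Select the TF.js backend (e.g. 'webgpu', 'webgl', or 'cpu') ...
  await tf.setBackend(backendName);
  // ... and wait for it to finish initializing before running the model
  await tf.ready();
  model.apply(tensor, {training: true}).print();
}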
I drafted a fix here: #7993
Hi @AmitMY, I apologize for the delayed response. I see that PR #7993 has been merged, and I have tested the same code snippet, which is now working as expected. Could you please confirm this from your end? If so, please feel free to close this issue. Thank you!
I can confirm that I am using 4.12.0. My reproduction still reproduces the issue in 4.12.0.
@AmitMY, it seems 4.12.0 doesn't include change #7993. So @gaikwadrahul8 @Linchenn, could you help explain why #7993 is not included in v4.12.0? BTW, @AmitMY, if you want to try this feature on the latest code, you can download the TFJS code and build it, then put the file below under the folder tfjs\e2e\benchmarks\local-benchmark:
Since @gaikwadrahul8 says it works, I'll just wait for it to be released. Do you know why it was not released, and whether it will be in 4.13.0?
Sorry, we're working on a new RC-based release structure, and our branch cut for 4.12.0 happened before the fix was merged. This will be included in 4.13.0 next week.
Any news on this? 4.13.0 does not seem to be released yet, and it's been more than a week. Thanks!
Ditto, we're waiting for a new release with this fix so we can enable dropout in our training models.
No bug in 4.13.0 🥳
Describe the current behavior
Old models fail now that #7951 is merged. Error:
(This is not the case with v4.10.0)
Describe the expected behavior
Minor updates should not break functionality
Standalone code to reproduce the issue
Same as #6699