feat: allow to preload ML models when running inference (#224) #126

Triggered via push on September 28, 2023 at 09:54
Status: Success
Total duration: 14m 11s
Artifacts: 2
This run and its associated checks have been archived and are scheduled for deletion.
Jobs:
Matrix: Build Rust binaries
Build Container Image (1m 46s)

Artifacts

Produced during runtime
Name          Size      Status
wws-aarch64   37.7 MB   Expired
wws-x86_64    40.3 MB   Expired