[AOT] Added a test for detecting output size post MLF export #13655
@@ -225,6 +225,64 @@ def test_packed_global_variables():
    assert f"{func}_packed" not in tvmgen_names


def test_io_size_definition():
    """Check network IO size definitions in the codegen output."""
    dtype = "float32"
    ishape = (1, 32, 14, 14)
    wshape = (32, 32, 3, 3)
    interface_api = "c"
    use_unpacked_api = True

    data0 = relay.var("data", shape=ishape, dtype=dtype)
    weight0 = relay.var("weight", shape=wshape, dtype=dtype)
    out = relay.nn.conv2d(data0, weight0, kernel_size=(3, 3), padding=(1, 1), groups=1)
    main_f = relay.Function([data0, weight0], out)
    mod = tvm.IRModule()
    mod["main"] = main_f
    mod = transform.InferType()(mod)

    i_data = np.random.uniform(0, 1, ishape).astype(dtype)
    w1_data = np.random.uniform(0, 1, wshape).astype(dtype)

    inputs = OrderedDict([("data", i_data), ("weight", w1_data)])

    output_list = generate_ref_data(mod, inputs)
    compiled_models_list = compile_models(
        models=AOTTestModel(module=mod, inputs=inputs, outputs=output_list),
        interface_api=interface_api,
        use_unpacked_api=use_unpacked_api,
        workspace_byte_alignment=8,
        enable_op_fusion=True,
        pass_config=AOT_DEFAULT_RUNNER.pass_config,
        use_runtime_executor=True,
        target=tvm.target.Target("c"),
    )
    ref_output_size = output_list["output"].size * np.dtype(dtype).itemsize
    compiled_model = compiled_models_list[0]

    tmp_path = utils.tempdir()
    base_path = tmp_path.temp_dir

    model = compiled_model.model
    tar_file = os.path.join(base_path, f"{model.name}.tar")
    export_model_library_format(compiled_model.executor_factory, tar_file)
    with tarfile.open(tar_file) as tar:
        tar.extractall(base_path)

    file_list = []
    for path in (pathlib.Path(base_path) / "codegen" / "host" / "include").iterdir():
        if path.is_file():
            file_list.append(path)
    assert len(file_list) > 0

    for path in file_list:
        with open(path, "r") as header:
            contents = header.read()
        assert contents.count("_SIZE") == 4
        assert str(ref_output_size) in contents
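For reference, the `ref_output_size` value asserted above can be worked out by hand: the conv2d uses a 3x3 kernel with padding (1, 1), so the output keeps the data tensor's shape (1, 32, 14, 14). A small standalone sketch of that arithmetic (pure NumPy, no TVM required):

```python
import numpy as np

# conv2d with a 3x3 kernel and padding (1, 1) preserves the spatial dims,
# and wshape has 32 output channels, so the output shape equals the data
# shape: (1, 32, 14, 14).
oshape = (1, 32, 14, 14)
num_elements = int(np.prod(oshape))  # 6272 elements
ref_output_size = num_elements * np.dtype("float32").itemsize
print(ref_output_size)  # 25088 bytes
```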
Review discussion on the header checks:

- "We should probably check the …"
- "I tried doing that initially. Any short cuts to do that?"
- "Something like: … ?"
- "Ah ok. I misunderstood what you were asking for. This makes sense. Thanks for the help."
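As an alternative to looping over every file in the include directory, the interface header could be located by name. A minimal sketch, assuming (as the test above does) that MLF export places interface headers under `codegen/host/include` and that the file is named `tvmgen_{model_name}.h`:

```python
import pathlib

def interface_header(base_path, model_name):
    # Hypothetical helper: build the path to the generated interface header
    # directly instead of iterating over the whole include directory.
    include_dir = pathlib.Path(base_path) / "codegen" / "host" / "include"
    return include_dir / f"tvmgen_{model_name}.h"

header = interface_header("/tmp/mlf", "default")
print(header.name)  # tvmgen_default.h
```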
@parametrize_aot_options
def test_concatenate(interface_api, use_unpacked_api, test_runner):
    """Tests compilation of concatenate"""
Review discussion on header lookup:

- "Given we know the model_name, can we not just look for tvmgen_{model_name}.h rather than looping?"
- "Can we also do this for the input sizes?"
- "I will extend the check for inputs. We could directly look for the file, but I thought that check may not work for multiple models. But it does, so I will update that too."
- "I assume this just requires looking for both headers?"
- "Oh, there is just one header in those cases too. Both models' sizes appear in a single header. So it need not be tested additionally."
- "That sounds wrong; shouldn't there be a tvmgen_model1.h and a tvmgen_model2.h?"
- "My bad. I confused this with the multi-model test, which it is not. In the case of the multi-model test, I do see two separate headers being produced."
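The review asks to extend the check to input sizes as well. A hedged sketch of the size bookkeeping such a check would need, using the same shapes as the test above (`expected_io_sizes` is a hypothetical helper, not part of the test suite):

```python
import numpy as np
from collections import OrderedDict

def expected_io_sizes(inputs, outputs):
    """Byte sizes the generated interface header should mention, per tensor."""
    sizes = {name: arr.size * arr.itemsize for name, arr in inputs.items()}
    sizes.update({name: arr.size * arr.itemsize for name, arr in outputs.items()})
    return sizes

# Same shapes as test_io_size_definition above.
inputs = OrderedDict(
    [
        ("data", np.zeros((1, 32, 14, 14), dtype="float32")),
        ("weight", np.zeros((32, 32, 3, 3), dtype="float32")),
    ]
)
outputs = {"output": np.zeros((1, 32, 14, 14), dtype="float32")}
sizes = expected_io_sizes(inputs, outputs)
print(sizes)  # {'data': 25088, 'weight': 36864, 'output': 25088}
```

Each of these byte counts would then be asserted to appear in the generated header, mirroring the existing `ref_output_size` check.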