
[ci] Fix: don't skip add_version job for nightly release #3

Closed · wants to merge 862 commits

Changes from 1 commit
7ddf34f
[doc] Update docs about kernels and functions (#4044)
lin-hitonami Jan 25, 2022
42b0768
[doc] Add doc about difference between taichi and python programs (#3…
lin-hitonami Jan 25, 2022
7dd646d
[refactor] Remove dependency on lang::current_ast_builder() in lang::…
PGZXB Jan 25, 2022
bf6cd41
[misc] Version bump: v0.8.11 -> v0.8.12 (#4125)
ailzhang Jan 25, 2022
2340fd4
[ci] Use GHA workflow to control the concurrency (#4116)
frostming Jan 25, 2022
2b94355
[build] Handle empty TAICHI_EMBIND_SOURCE (#4127)
ailzhang Jan 25, 2022
13190de
[misc] Fix the changelog generator to only count current branch commi…
frostming Jan 25, 2022
a86f256
[Refactor] Remove exposure of internal functions in taichi.lang.ops (…
lin-hitonami Jan 25, 2022
a1fb291
[test] Replace make_temp_file with tempfile
Jan 25, 2022
033a118
[test] Remove get_rel_eps() at top level.
Jan 25, 2022
e9e4cd6
[test] Remove approx at top level.
Jan 25, 2022
10d80fb
[test] Remove allclose at top level.
Jan 25, 2022
9dc8ad3
[javascript] [misc] Remove redundant pybind include in Taichi Core Li…
AmesingFlank Jan 25, 2022
31da0f9
[javascript] Avoid all usages of glfw/vulkan/volk when TI_EMSCRIPTENE…
AmesingFlank Jan 25, 2022
8ea0eb8
Support compiling taichi in x86 (#4107)
AmesingFlank Jan 25, 2022
569e5a0
[bug] Fix starred expression when the value is not a list (#4130)
lin-hitonami Jan 26, 2022
1d605fd
[ci] Fix approx in autodiff example test (#4132)
ailzhang Jan 26, 2022
e7dcf8e
[Refactor] Remove supported_log_levels (#4120)
qiao-bo Jan 26, 2022
8a6ccd3
[refactor] Remove critical/debug/error/trace/warn/info/is_logging_eff…
ailzhang Jan 26, 2022
6c528f7
[doc] Revise doc for GUI system. (#4006)
Leonz5288 Jan 26, 2022
1412e0b
[Refactor] Do not expose main to users (#4136)
strongoier Jan 26, 2022
ac84d9a
[Refactor] Do not expose functions in taichi.lang.util to users (#4128)
strongoier Jan 26, 2022
3305f66
[Refactor] Avoid exposing real and integer types API (#4129)
qiao-bo Jan 26, 2022
c3807d5
Move __getattr__ back to __init__.py (#4142)
lin-hitonami Jan 26, 2022
36880b8
[Refactor] Prevent modules in lang being wild imported and exposed (#…
qiao-bo Jan 26, 2022
4cb9b71
[Refactor] Do not expose internal function in ti.lang.impl (#4134)
mzmzm Jan 27, 2022
7be68aa
[refactor] [ir] Remove load_if_ptr and move pointer dereferencing to …
re-xyr Jan 27, 2022
d8356a8
[Refactor] Do not expose taichi.snode (#4149)
qiao-bo Jan 27, 2022
2854186
[Refactor] Do not expose internal function in field, exception, expr,…
mzmzm Jan 27, 2022
3227c31
[Refactor] Remove unnecessary exposure related to matrix and mesh (#4…
lin-hitonami Jan 27, 2022
f2892e6
[Refactor] Do not expose TapeImpl to users (#4148)
strongoier Jan 27, 2022
7e3abda
[Doc] Update README.md (#4139)
k-ye Jan 27, 2022
a990779
[refactor] Expose ti.abs and ti.pow (#4157)
lin-hitonami Jan 27, 2022
5b4f4bd
[refactor] Remove dependency on get_current_program() and lang::curre…
PGZXB Jan 27, 2022
32687b7
[refactor] Move functions in __init__ to misc (#4150)
mzmzm Jan 27, 2022
0f27bb5
[doc] Improve Fields documentation (#4063)
yolo2themoon Jan 27, 2022
c9baf93
[spirv] Move external arrays into seperate buffers (#4121)
bobcao3 Jan 27, 2022
967c599
[Refactor] Rename and move scoped profiler info under ti.profiler (#4…
strongoier Jan 27, 2022
42a9138
[ci] Fix concurrent run issue (#4168)
frostming Jan 28, 2022
d4ea35f
[Refactor] Move core_vec(i) to gui and hide (#4172)
strongoier Jan 28, 2022
3dcc9a0
[opengl] Use && instead of and in C++ code (#4171)
AmesingFlank Jan 28, 2022
c5aa99c
[Refactor] Expose runtime/snode ops properly (#4167)
strongoier Jan 28, 2022
dd0312b
remove KernelDefError KernelArgError InvalidOperationError (#4166)
lin-hitonami Jan 28, 2022
2a4e646
[Refactor] Remove inspect for modules in lang init (#4173)
qiao-bo Jan 28, 2022
0719320
[test] [example] Add a test for taichi_logo example (#4170)
nasnoisaac Jan 28, 2022
2c03cc6
[refactor] Remove bit_vectorize from top level. (#4158)
ailzhang Jan 28, 2022
11f8777
[Refactor] Clean up helper functions in tools.util (#4174)
strongoier Jan 28, 2022
847ce08
[refactor] Remove dependency on get_current_program() in lang::Ndarra…
PGZXB Jan 28, 2022
f8e5563
[refactor] Remove locale_encode from top level. (#4179)
ailzhang Jan 28, 2022
b54d720
[refactor] Move version_check out of taichi.lang. (#4178)
ailzhang Jan 28, 2022
2f7a9bb
[refactor] Remove is_signed/is_integral from top level. (#4182)
ailzhang Jan 28, 2022
00183bb
[refactor] Add TI_DLL_EXPORT to control symbol visibility (#4177)
k-ye Jan 28, 2022
b5f50fe
[refactor] Remove dependency on current_ast_builder() in lang::For an…
PGZXB Jan 29, 2022
1d14b2d
[bug] [opengl] Process child nodes to compute alignment (#4191)
ailzhang Jan 29, 2022
f4d11e6
[Doc] update demo code in readme doc (#4193)
Tiny-Box Jan 29, 2022
0f8aa26
[dx11] Fix parse_reference_count signature (#4189)
quadpixels Jan 29, 2022
f636355
[misc] Add atomic operators to micro-benchmarks (#4169)
yolo2themoon Jan 29, 2022
957f052
[misc] Add math operators to micro-benchmarks (#4122)
yolo2themoon Jan 29, 2022
e45b0ea
[doc] Update the doc for differentiable programming (#4057)
erizmr Jan 29, 2022
27abb2a
[Refactor] Rename and move kernel profiler APIs (#4194)
strongoier Jan 29, 2022
b036161
[javascript] Support TI_EMSCRIPTENED option as an env var (JS 3/n) (#…
AmesingFlank Jan 29, 2022
1e094fe
[javascript] Disable stack trace logging when TI_EMSCRIPTENED (JS 9/n…
AmesingFlank Jan 29, 2022
a0676f0
[javascript] Avoid using C++ inline asm when TI_EMSCRIPTENED (JS 6/n)…
AmesingFlank Jan 29, 2022
6feecd3
[Refactor] Remove redundant set_gdb_trigger (#4198)
strongoier Jan 29, 2022
20390cf
[Error] Add function name to traceback (#4195)
lin-hitonami Jan 29, 2022
c26dcf7
[Refactor] Do not expose StructField and SourceBuilder to users (#4200)
strongoier Jan 29, 2022
5cdcc58
[refactor] Export some functions which depend on current_ast_builder(…
PGZXB Jan 29, 2022
de1c7da
[refactor] Remove dependency on get_current_program() in exported fun…
PGZXB Jan 29, 2022
fe0ae9a
[misc] Test unified doc & api preview. (#4186)
ailzhang Jan 30, 2022
9898326
[misc] Export visibility of symbols required for Vulkan AOT execution…
ghuau-innopeak Feb 1, 2022
49b4318
[spirv] SPIR-V / Vulkan NDArray (#4202)
bobcao3 Feb 7, 2022
c85c9e5
[Refactor] Rename tools.util to tools.async_utils and hide functions …
strongoier Feb 7, 2022
8a53ce2
[vulkan] Fix MoltenVK support (#4205)
bobcao3 Feb 7, 2022
9473c60
[Refactor] Move ti.taichi_logo to examples (#4216)
strongoier Feb 7, 2022
c8d5b3f
[CI] Update release workflow (#4215)
knight42 Feb 7, 2022
6636922
[vulkan] Support Vulkan 1.3 (#4211)
bobcao3 Feb 7, 2022
1e14221
[refactor] Remove top level __all__ (#4214)
strongoier Feb 7, 2022
12328cd
[Refactor] Move ti.parallel_sort under _kernels (#4217)
strongoier Feb 7, 2022
41f03ed
[Refactor] Move public APIs of ti.tools outside top level (#4218)
strongoier Feb 7, 2022
de88306
[doc] Add instruction to install clang-format-10 on M1 Mac (#4219)
lin-hitonami Feb 7, 2022
0a664a2
[doc] Add the step of setting "TI_WITH_VULKAN" for linux (#4209)
jerrylususu Feb 7, 2022
0e417be
[autodiff] Refactor the IB identification and optimize the checker fo…
erizmr Feb 7, 2022
f8e93cf
[metal] Give random seeds a unique value (#4206)
k-ye Feb 8, 2022
d4343ce
[refactor] Remove dependency on get_current_program() in lang::Fronte…
PGZXB Feb 8, 2022
35a5cc1
[Bug] Ban passing torch view tensors into taichi kernel (#4225)
ailzhang Feb 8, 2022
118b278
[Refactor] Rename and move memory profiler info under ti.profiler (#4…
mzmzm Feb 8, 2022
937ab8c
[Refactor] Merge ti.tools.image.imdisplay() into ti.tools.image.imsho…
neozhaoliang Feb 8, 2022
b36f614
[misc] Add memcpy to micro-benchmarks (#4220)
qiao-bo Feb 8, 2022
b391acb
[refactor] Remove legacy ti.benchmark() and ti.benchmark_plot() (#4222)
mzmzm Feb 8, 2022
8bdb4ab
[lang] Annotate constants with dtype without casting. (#4224)
ailzhang Feb 8, 2022
2cc3204
[Refactor] Avoid exposing ti.tape (#4234)
qiao-bo Feb 8, 2022
7705f68
[vulkan] Add buffer device address (physical pointers) support & othe…
bobcao3 Feb 9, 2022
a1bf85a
[refactor] Remove lang::current_ast_builder() and dependencies on it …
PGZXB Feb 9, 2022
4a2db2e
[refactor] Remove global scope_stack and dependencies on it (#4237)
PGZXB Feb 9, 2022
67d9c6c
[ci] Disable Vulkan backend for mac1014 release. (#4241)
ailzhang Feb 9, 2022
526de2b
[Refactor] Add require_version configuration in ti.init() (#4151)
tczhangzhi Feb 9, 2022
ff80471
[refactor] Remove dependency on get_current_program() in lang::Binary…
PGZXB Feb 9, 2022
459ebac
[ci] Create PR card in projects automatically (#4229)
frostming Feb 9, 2022
bae96c4
[CUDA] Fix random generator routines for f32 and f64 to make sure the…
neozhaoliang Feb 9, 2022
06cb1c5
[vulkan] Disable buffer device address if int64 is not supported (#4244)
bobcao3 Feb 9, 2022
c0c4f82
[doc] Major revision to the field (advanced) document (#4156)
turbo0628 Feb 9, 2022
f6482e2
[Refactor] Move ti.quant & ti.type_factory under ti.types.quantized_t…
strongoier Feb 9, 2022
a6763b6
[ci] Disable Vulkan backend for mac1015 release. (#4245)
ailzhang Feb 9, 2022
0cc9961
[Error] Raise an error when breaking the outermost loop (#4235)
lin-hitonami Feb 9, 2022
99018c9
[Doc] Update sparse compuation doc (#4060)
FantasyVR Feb 9, 2022
9f793e5
[refactor] Remove get_current_program() and global variable current_p…
PGZXB Feb 9, 2022
5680b84
[ci] Move _testing.py into tests folder (#4247)
ailzhang Feb 10, 2022
e8ed9b4
[misc] Use test_utils.approx directly (#4252)
ailzhang Feb 10, 2022
3df5cb9
[misc] Update master version to 0.9.0 (#4248)
ailzhang Feb 10, 2022
2b70abd
[ci] Run on pull_request_target to access the secrets (#4253)
frostming Feb 10, 2022
973c04d
[Bug] Only ban passing non contiguous torch tensors to taichi kernels…
ailzhang Feb 11, 2022
a7f09f2
[lang] Expose mesh_patch_idx at top level (#4260)
ailzhang Feb 11, 2022
150097e
[misc] Remove some unnecessary #include lines (#4265)
PGZXB Feb 14, 2022
f0660d0
[vulkan] Use TI_VISIBLE_DEVICE to select vulkan device (#4255)
qiao-bo Feb 14, 2022
6aadfa7
[vulkan] Test & build macOS 10.15 MoltenVK (#4259)
bobcao3 Feb 14, 2022
ec8887d
Add more camera controls (#4212)
YuCrazing Feb 14, 2022
73f01fe
[llvm] Add missing pre-processor macro in cpp-tests when LLVM is disa…
PGZXB Feb 14, 2022
4811226
[refactor] Re-expose important implementation classes (#4268)
strongoier Feb 14, 2022
d627830
[refactor] Remove support for raise statement (#4262)
lin-hitonami Feb 14, 2022
2a8a0f7
[misc] Add stencil_2d to micro-benchmarks (#4176)
yolo2themoon Feb 14, 2022
b303a52
[refactor] Remove global instance of DecoratorRecorder (#4254)
PGZXB Feb 14, 2022
8dabf83
[bug] Disallow function definition inside ti.func/kernel (#4274)
lin-hitonami Feb 15, 2022
1c3b125
[spirv] Fix buffer info compare to fix external array bind point (#4277)
bobcao3 Feb 15, 2022
a14ed16
[doc] Improve operators page (#4073)
lin-hitonami Feb 15, 2022
d6380a0
[misc] Code cleanup in benchmarks (#4280)
yolo2themoon Feb 15, 2022
7ebca9c
[lang] Fix bls_buffer allocation of x64 crashed in py3.10 (#4275)
g1n0st Feb 15, 2022
e6b7279
[llvm] Use GEP for array access instead of ptrtoint/inttoptr (#4276)
strongoier Feb 15, 2022
2163596
[docs] Hide unnessary methods in annotation classes (#4287)
ailzhang Feb 16, 2022
c231aa3
[lang] Only expose start_recording/stop_recording for now (#4289)
ailzhang Feb 16, 2022
e342409
[lang] Hide get_addr and type_assert in api docs (#4290)
ailzhang Feb 16, 2022
c4f92f2
[lang] Remove CompoundType from taichi.types (#4291)
ailzhang Feb 16, 2022
6008bfb
Hide data handle (#4292)
qiao-bo Feb 16, 2022
7e5d82e
[lang] Hide fill_by_kernel in Ndarray (#4293)
qiao-bo Feb 16, 2022
83d643c
[lang] Hide get_element_size and get_nelement in Ndarray (#4294)
qiao-bo Feb 16, 2022
2c1dfaa
[lang] Hide internal functions in TaichiOperation (#4288)
lin-hitonami Feb 16, 2022
9d03269
[lang] Hide initialize_host_accessor in Ndarray (#4296)
qiao-bo Feb 16, 2022
a8a4847
[lang] Hide subscript in Matrix (#4299)
lin-hitonami Feb 16, 2022
8cc2d88
[lang] Hide internal functions in Matrix and Struct (#4295)
lin-hitonami Feb 16, 2022
6e0a682
[lang] Hide ndarray*_from_numpy (#4297)
qiao-bo Feb 16, 2022
2c5f09e
[lang] Hide internal functions in SNode and _Root (#4303)
strongoier Feb 16, 2022
d343fac
[lang] Hide pad_key and ndarray*_to_numpy in Ndarray (#4298)
qiao-bo Feb 16, 2022
72a6310
[lang] Hide internal APIs of FieldsBuilder (#4305)
strongoier Feb 16, 2022
9683e59
[lang] Remove Matrix.value (#4300)
lin-hitonami Feb 16, 2022
063d630
[vulkan] Reduce runtime host overhead (#4282)
bobcao3 Feb 16, 2022
be74dbd
[lang] Hide dtype and needs_grad from SNode (#4308)
strongoier Feb 17, 2022
bb33ca5
[refactor] Refactor ForLoopDecoratorRecorder (#4309)
PGZXB Feb 17, 2022
b4214e4
[lang] Make ti.cfg an alias of runtime cfg (#4264)
ailzhang Feb 17, 2022
142b473
[lang] Move sparse_matrix_builder from taichi.linalg to taichi.types …
ailzhang Feb 17, 2022
61c215a
[Doc] Revise "Why a new programming language" (#4306)
k-ye Feb 17, 2022
00636fa
[refactor] Remove Ndarray torch implementation and tests (#4307)
qiao-bo Feb 17, 2022
f29faa9
[Doc] Avoid log(0) problem in _funcs._randn() and update primitive_ty…
neozhaoliang Feb 17, 2022
d7853fe
[lang] Hide internal apis about Fields (#4302)
mzmzm Feb 17, 2022
73c02fe
[doc] More revision on a new language (#4321)
k-ye Feb 18, 2022
ff7ec33
[refactor] make class Expr constructor explicit (#4272)
Retrospection Feb 18, 2022
cc725d1
[refactor] Allow more build types from setup.py (#4313)
qiao-bo Feb 18, 2022
c6aa84b
[lang] Add deprecation warnings to atomic ops (#4325)
lin-hitonami Feb 18, 2022
b98d9df
[lang] Remove logical_and and logical_or from TaichiOperation (#4326)
lin-hitonami Feb 18, 2022
66161d5
[gui] [refactor] Avoid exposing different APIs with different GGUI_AV…
strongoier Feb 18, 2022
d6943eb
[ci] Install requirements and matplotlib for GPU tests (#4336)
qiao-bo Feb 20, 2022
05a96e9
[test] Add a test for autodiff/regression (#4322)
Tiny-Box Feb 21, 2022
2a71258
[ci] Fix generate_example_videos.py (#4347)
ailzhang Feb 21, 2022
a07a53e
[error] Let deprecation warnings display only once (#4346)
lin-hitonami Feb 21, 2022
b8a7569
[opengl] Remove support for dynamic snode
Feb 21, 2022
33047c7
[ci] Increase ci test parallelism (#4348)
qiao-bo Feb 21, 2022
e651441
[lang] Add support for operators "is" and "is not" in static scope an…
lin-hitonami Feb 21, 2022
16e6e19
[ci] Use conda python for m1 jobs (#4351)
ailzhang Feb 21, 2022
e9e2389
[gui] Update GGUI examples to use vulkan backend if available (#4353)
ailzhang Feb 21, 2022
d152248
[bug] Update children_offsets & stride info to align as elem_stride (…
ailzhang Feb 21, 2022
c9fb269
[ci] Exit on error windows test script (#4354)
qiao-bo Feb 22, 2022
f2aaf7b
[opengl] Use element shape as compile information for OpenGL backend …
turbo0628 Feb 22, 2022
c466797
[example] Add implicit fem example (#4352)
BillXu2000 Feb 22, 2022
0c432a3
[test] Add a test for simple_derivative example (#4323)
Tiny-Box Feb 22, 2022
cc0b48a
[dx11] Materialize runtime, map and unmap (#4339)
quadpixels Feb 23, 2022
2977fa9
[misc] Version bump: v0.9.0->v0.9.1 (#4363)
ailzhang Feb 23, 2022
d5e7f86
[Vulkan] Enable Vulkan device selection when using cuda (#4330)
qiao-bo Feb 23, 2022
58c565f
[Doc] Re-structure the articles: getting-started, gui (#4360)
k-ye Feb 23, 2022
248565a
[ci] Run vulkan and metal separately on M1 (#4367)
ailzhang Feb 23, 2022
79b30a5
[Doc] Fix broken links (#4368)
k-ye Feb 23, 2022
708d340
[test] Add a test for the nbody example (#4366)
0xzhang Feb 23, 2022
37bad44
[test] Add a test for the game_of_life example (#4365)
0xzhang Feb 23, 2022
6996999
[test] [example] Add a test for print_offset example (#4355)
LittleFall Feb 23, 2022
3266324
[build] Build with Apple clang-13 (#4370)
ailzhang Feb 23, 2022
55f027b
[refactor] Move arch files (#4373)
k-ye Feb 23, 2022
fbc45d0
[test] Add test for exposed top-level APIs (#4361)
strongoier Feb 23, 2022
c83981d
[refactor] Move aot_module files (#4374)
k-ye Feb 23, 2022
ba235ff
[mesh] Constructing mesh from data in memory (#4375)
BillXu2000 Feb 24, 2022
2d89c9e
[example] Fix implicit_fem example command line arguments (#4372)
BillXu2000 Feb 24, 2022
66bd441
[lang] External Ptr alias analysis & demote atomics (#4273)
bobcao3 Feb 24, 2022
d420249
[vulkan] Refactor Runtime to decouple the SNodeTree part (#4380)
k-ye Feb 25, 2022
2fbf3db
[test] Merge the py38 only cases into the main test suite (#4378)
frostming Feb 25, 2022
1e6904a
[aot] [vulkan] Output shapes/dims to AOT exported module (#4382)
ghuau-innopeak Feb 25, 2022
3664b9d
[misc] Upgrade test and docker image to support python 3.10 (#3986)
qiao-bo Feb 25, 2022
1b870e0
[ci] Windows build exits on the first error (#4391)
qiao-bo Feb 25, 2022
da21b4b
[test] disable serveral workflows on forks (#4393)
knight42 Feb 25, 2022
a54af78
[llvm] Remove LLVM functions related to a SNode tree from the module …
lin-hitonami Feb 25, 2022
9244144
[vulkan] [aot] Move add_root_buffer to public members (#4396)
ghuau-innopeak Feb 26, 2022
67a038f
[aot] [vulkan] Add AotKernel and its Vulkan impl (#4387)
k-ye Feb 26, 2022
35f6297
[ci] Reduce test parallelism for m1 (#4394)
qiao-bo Feb 28, 2022
01917b1
[ci] Update tag to projects (#4400)
qiao-bo Feb 28, 2022
8373b83
[build] Enforce compatibility with manylinux2014 when TI_WITH_VULKAN=…
strongoier Feb 28, 2022
1449091
[CI] Cleanup workspace before window test (#4405)
knight42 Feb 28, 2022
b33d5d3
[Doc] Update docstring for functions in operations (#4392)
neozhaoliang Mar 1, 2022
5616671
[test] Add a test for the ad_gravity example (#4404)
cyberkillor Mar 1, 2022
46fc8f5
[Lang] Support kernel to return a matrix type value (#4062)
mzmzm Mar 1, 2022
451db9f
[vulkan] [aot] Add aot namespace Vulkan (#4419)
qiao-bo Mar 1, 2022
437cf65
[vulkan] Support templated kernel in aot module (#4417)
ailzhang Mar 1, 2022
1915a80
[Doc] Update docstring for functions in operations (#4413)
neozhaoliang Mar 1, 2022
01cfd83
[metal] Add AotModuleLoader (#4423)
k-ye Mar 2, 2022
f1cf7c0
[Error] Remove the mentioning of ti.pyfunc in the error message (#4429)
lin-hitonami Mar 2, 2022
35a9435
[misc] Add matrix operations to micro-benchmarks (#4190)
yolo2themoon Mar 2, 2022
bf501f7
[misc] Add deserialization tool for benchmarks (#4278)
yolo2themoon Mar 2, 2022
f05bc50
[Doc] Update PyTorch interface documentation (#4311)
victoriacity Mar 2, 2022
1a8ec6e
[refactor] Refactor llvm-offloaded-task-name mangling (#4418)
PGZXB Mar 2, 2022
99a1a4b
[metal] Add Unified Device API skeleton code (#4431)
k-ye Mar 3, 2022
6b8fc65
[Doc] Update docstring for functions in operations (#4427)
neozhaoliang Mar 3, 2022
b531a3d
[Lang] Support simple matrix slicing (#4420)
mzmzm Mar 3, 2022
1af1a3f
[metal] Expose BufferMemoryView (#4432)
k-ye Mar 3, 2022
4794f65
[misc] Remove a unnecessary function (#4443)
PGZXB Mar 4, 2022
ef8f669
[Lang] Support type annotations for literals (#4440)
strongoier Mar 4, 2022
5a37718
[llvm] Support real function which has scalar arguments (#4422)
lin-hitonami Mar 4, 2022
0fe0db2
[Doc] Update docstrings in misc (#4446)
neozhaoliang Mar 4, 2022
58d3417
[bug] [lang] Enable break in the outermost for not in the outermost s…
lin-hitonami Mar 4, 2022
c8501f0
[refactor] Move literal construction to expr module (#4448)
strongoier Mar 4, 2022
0a1b9a6
[misc] Remove some warnings (#4453)
PGZXB Mar 7, 2022
2f4f457
[Error] Add error messages for wrong type annotations of literals (#4…
strongoier Mar 7, 2022
3d0bb2e
[misc] Optimize verison check (#4461)
Leonz5288 Mar 7, 2022
d8b2229
[aot] [refactor] Refactor AOT runtime API to use module (#4437)
qiao-bo Mar 7, 2022
7836995
[Error] Add error for invalid snode size (#4460)
lin-hitonami Mar 7, 2022
8db1ccd
[Lang] Support sparse matrix builder datatype configuration (#4411)
FantasyVR Mar 7, 2022
475ebf2
[bug] Fix metal linker error when TI_WITH_METAL=OFF (#4469)
qiao-bo Mar 7, 2022
c3a9704
[ci] Add python 3.10 into nightly test and release (#4467)
qiao-bo Mar 7, 2022
31582c5
[lang] Add decorator ti.experimental.real_func (#4458)
lin-hitonami Mar 7, 2022
f714872
[refactor] Remove LLVM logic from the generic Device interface (#4470)
PGZXB Mar 8, 2022
2c04cb2
[llvm] Support real function with single scalar return value (#4452)
lin-hitonami Mar 8, 2022
17f8615
update docstring for exceptions (#4475)
neozhaoliang Mar 8, 2022
98553ca
[metal] Support device memory allocation/deallocation (#4439)
k-ye Mar 8, 2022
d51fe99
[Doc] Update docstring for functions in misc (#4474)
neozhaoliang Mar 8, 2022
159895c
[Error] Add error message when the number of elements in kernel argum…
mzmzm Mar 8, 2022
c270d67
[Doc] Update docstrings for functions in ops (#4465)
neozhaoliang Mar 8, 2022
e2e0e66
[bug] [llvm] Initialize the field to 0 when finalizing a field (#4463)
lin-hitonami Mar 8, 2022
af7e5a9
[ci] Update gpu docker image to test python 3.10 (#4472)
qiao-bo Mar 8, 2022
651b510
[misc] Version bump: v0.9.1 -> v0.9.2 (#4484)
rexwangcc Mar 8, 2022
b369ae7
[misc] Add a convenient script for testing compatibility of Taichi re…
rexwangcc Mar 9, 2022
2caf37c
[Doc] Add initial variable and fragments (#4457)
Justinterest Mar 9, 2022
4a7328b
[test] Add test for recursive real function (#4477)
lin-hitonami Mar 9, 2022
89a4ddb
[ir] Fix a bug in simplify pass (#4489)
mzmzm Mar 9, 2022
a2714dc
[fix] dangling ti.func decorator in euler.py (#4492)
lucifer1004 Mar 9, 2022
be843e4
[ci] Automate release publishing (#4428)
frostming Mar 9, 2022
f2e9670
[ci] Add a Dockerfile for building manylinux2014-compatible Taichi wh…
strongoier Mar 9, 2022
b0f9378
[ci] Skip in steps rather than the whole job
frostming Mar 10, 2022
[Refactor] Do not expose functions in taichi.lang.util to users (taichi-dev#4128)

Co-authored-by: Taichi Gardener <[email protected]>
strongoier and taichi-gardener authored Jan 26, 2022
commit ac84d9ac0bca9640a41e9dc1b852fe7c41156e1c
3 changes: 0 additions & 3 deletions python/taichi/lang/__init__.py
Original file line number Diff line number Diff line change
@@ -48,9 +48,6 @@
 from taichi.lang.struct import Struct, StructField
 from taichi.lang.tape import TapeImpl
 from taichi.lang.type_factory_impl import type_factory
-from taichi.lang.util import (cook_dtype, has_clangpp, has_pytorch,
-                              is_taichi_class, python_scope, taichi_scope,
-                              to_numpy_type, to_pytorch_type, to_taichi_type)
 from taichi.profiler import KernelProfiler, get_default_kernel_profiler
 from taichi.profiler.kernelmetrics import (CuptiMetric, default_cupti_metrics,
                                            get_predefined_cupti_metrics)
10 changes: 5 additions & 5 deletions python/taichi/lang/kernel_impl.py
@@ -8,22 +8,22 @@
 import numpy as np
 import taichi.lang
 from taichi._lib import core as _ti_core
-from taichi.lang import impl, runtime_ops, util
+from taichi.lang import impl, runtime_ops
 from taichi.lang.ast import (ASTTransformerContext, KernelSimplicityASTChecker,
                              transform_tree)
 from taichi.lang.enums import Layout
 from taichi.lang.exception import TaichiCompilationError, TaichiSyntaxError
 from taichi.lang.expr import Expr
 from taichi.lang.matrix import MatrixType
 from taichi.lang.shell import _shell_pop_print, oinspect
-from taichi.lang.util import to_taichi_type
+from taichi.lang.util import has_pytorch, to_taichi_type
 from taichi.linalg.sparse_matrix import sparse_matrix_builder
 from taichi.tools.util import obsolete
 from taichi.types import any_arr, primitive_types, template
 
 from taichi import _logging
 
-if util.has_pytorch():
+if has_pytorch():
     import torch

@@ -535,7 +535,7 @@ def func__(*args):
     tmps = []
     callbacks = []
     has_external_arrays = False
-    has_torch = util.has_pytorch()
+    has_torch = has_pytorch()
     ndarray_use_torch = impl.get_runtime().ndarray_use_torch
 
     actual_argument_slot = 0
@@ -654,7 +654,7 @@ def func__(*args):
 @staticmethod
 def match_ext_arr(v):
     has_array = isinstance(v, np.ndarray)
-    if not has_array and util.has_pytorch():
+    if not has_array and has_pytorch():
         has_array = isinstance(v, torch.Tensor)
     return has_array

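The `has_pytorch()` guard used in the diff above is a feature probe: import `torch` only when it is actually available. The sketch below shows one common way such a helper is built (the name `has_package` and the `importlib`-based implementation are illustrative assumptions, not taichi's actual code):

```python
import functools
import importlib.util


@functools.lru_cache(maxsize=None)
def has_package(name: str) -> bool:
    """Return True if top-level package `name` is importable, without importing it."""
    return importlib.util.find_spec(name) is not None


# Guard an optional dependency the same way kernel_impl.py guards torch above.
if has_package("torch"):
    import torch  # only executed when PyTorch is actually installed

print(has_package("os"))  # True: stdlib modules are always importable
```

Caching the probe matters because `find_spec` hits the filesystem; the guard is typically evaluated once per process.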
3 changes: 3 additions & 0 deletions python/taichi/lang/util.py
@@ -224,3 +224,6 @@ def wrapped(*args, **kwargs):
         return func(*args, **kwargs)
 
     return wrapped
+
+
+__all__ = []
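The empty `__all__` added to `taichi/lang/util.py` is what keeps these helpers out of wildcard imports: `from taichi.lang.util import *` then binds nothing, while explicit imports still work. A self-contained demonstration (the module name `util_demo` is illustrative, not taichi's):

```python
import sys
import types

# Build a throwaway module containing one helper plus the empty __all__
# introduced by this commit.
util_demo = types.ModuleType("util_demo")
exec("def has_pytorch():\n    return False\n\n__all__ = []", util_demo.__dict__)
sys.modules["util_demo"] = util_demo

# A wildcard import honors __all__ and therefore binds no names...
ns = {}
exec("from util_demo import *", ns)
print("has_pytorch" in ns)  # False: hidden from `import *`

# ...while an explicit import still works for internal callers and tests.
from util_demo import has_pytorch
print(has_pytorch())  # False
```

This is why the tests in this commit switch to explicit `from taichi.lang.util import has_pytorch` instead of relying on the symbol being re-exported at the top level.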
3 changes: 2 additions & 1 deletion tests/python/test_ast_refactor.py
@@ -1,6 +1,7 @@
 import numpy as np
 import pytest
 from taichi._testing import approx
+from taichi.lang.util import has_pytorch
 
 import taichi as ti
 
@@ -891,7 +892,7 @@ def foo(n: ti.template(), m: ti.template()) -> ti.i32:
     foo(5, 3)
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda, ti.opengl])
 def test_ndarray():
     n = 4
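The recurring `@pytest.mark.skipif(not has_pytorch(), ...)` decorators in these test files use pytest's standard gate for optional dependencies: the condition is evaluated at collection time and the test is reported as skipped, with the given reason, instead of erroring. A minimal runnable sketch (the `importlib`-based `has_pytorch` stand-in is an assumption, not taichi's implementation):

```python
import importlib.util

import pytest


def has_pytorch() -> bool:
    # Stand-in for taichi.lang.util.has_pytorch: True if torch is importable.
    return importlib.util.find_spec("torch") is not None


# Skipped (not failed) on machines without PyTorch; runs normally otherwise.
@pytest.mark.skipif(not has_pytorch(), reason="Pytorch not installed.")
def test_uses_torch():
    import torch
    assert torch.zeros(3).shape[0] == 3
```

Because the condition runs at import time of the test module, the guarded `import torch` can stay inside the test body without breaking collection.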
5 changes: 3 additions & 2 deletions tests/python/test_external_func.py
@@ -4,11 +4,12 @@
 import tempfile
 
 import pytest
+from taichi.lang.util import has_clangpp
 
 import taichi as ti
 
 
-@pytest.mark.skipif(not ti.has_clangpp(), reason='Clang not installed.')
+@pytest.mark.skipif(not has_clangpp(), reason='Clang not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda])
 def test_source_builder_from_source():
     source_bc = '''
@@ -44,7 +45,7 @@ def func_bc() -> ti.i32:
     assert func_bc() == 11**8
 
 
-@pytest.mark.skipif(not ti.has_clangpp(), reason='Clang not installed.')
+@pytest.mark.skipif(not has_clangpp(), reason='Clang not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda])
 def test_source_builder_from_file():
     source_code = '''
5 changes: 3 additions & 2 deletions tests/python/test_f16.py
@@ -3,6 +3,7 @@
 import numpy as np
 import pytest
 from taichi._testing import approx
+from taichi.lang.util import has_pytorch
 
 import taichi as ti
 
@@ -61,7 +62,7 @@ def init():
     assert (z[i] == i * 3)
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=archs_support_f16)
 def test_to_torch():
     n = 16
@@ -79,7 +80,7 @@ def init():
     assert (y[i] == 2 * i)
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=archs_support_f16)
 def test_from_torch():
     import torch
7 changes: 4 additions & 3 deletions tests/python/test_get_external_tensor_shape.py
@@ -1,9 +1,10 @@
 import numpy as np
 import pytest
+from taichi.lang.util import has_pytorch
 
 import taichi as ti
 
-if ti.has_pytorch():
+if has_pytorch():
     import torch
 
 
@@ -40,7 +41,7 @@ def func(x: ti.ext_arr()) -> ti.i32:
         y_ref, y_hat)
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @pytest.mark.parametrize('size', [[1, 2, 3, 4]])
 @ti.test(exclude=ti.opengl)
 def test_get_external_tensor_shape_access_torch(size):
@@ -55,7 +56,7 @@ def func(x: ti.ext_arr(), index: ti.template()) -> ti.i32:
         idx, y_ref, y_hat)
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @pytest.mark.parametrize('size', [[1, 2, 3, 4]])
 @ti.test(arch=[ti.cpu, ti.cuda, ti.opengl])
 def test_get_external_tensor_shape_access_ndarray(size):
9 changes: 5 additions & 4 deletions tests/python/test_image_io.py
@@ -3,6 +3,7 @@
 import numpy as np
 import pytest
 from taichi._testing import make_temp_file
+from taichi.lang.util import to_numpy_type
 
 import taichi as ti
 
@@ -21,7 +22,7 @@ def test_image_io(resx, resy, comp, ext, is_field, dt):
     shape = (resx, resy)
     if is_field:
         pixel_t = ti.field(dt, shape)
-    pixel = np.random.randint(256, size=shape, dtype=ti.to_numpy_type(dt))
+    pixel = np.random.randint(256, size=shape, dtype=to_numpy_type(dt))
     if is_field:
         pixel_t.from_numpy(pixel)
     fn = make_temp_file(suffix='.' + ext)
@@ -43,12 +44,12 @@ def test_image_io(resx, resy, comp, ext, is_field, dt):
 @ti.test(arch=ti.get_host_arch_list())
 def test_image_io_vector(resx, resy, comp, ext, dt):
     shape = (resx, resy)
-    pixel = np.random.rand(*shape, comp).astype(ti.to_numpy_type(dt))
+    pixel = np.random.rand(*shape, comp).astype(to_numpy_type(dt))
     pixel_t = ti.Vector.field(comp, dt, shape)
     pixel_t.from_numpy(pixel)
     fn = make_temp_file(suffix='.' + ext)
     ti.imwrite(pixel_t, fn)
-    pixel_r = (ti.imread(fn).astype(ti.to_numpy_type(dt)) + 0.5) / 256.0
+    pixel_r = (ti.imread(fn).astype(to_numpy_type(dt)) + 0.5) / 256.0
     assert np.allclose(pixel_r, pixel, atol=2e-2)
     os.remove(fn)
 
@@ -59,7 +60,7 @@ def test_image_io_vector(resx, resy, comp, ext, dt):
 @ti.test(arch=ti.get_host_arch_list())
 def test_image_io_uint(resx, resy, comp, ext, dt):
     shape = (resx, resy)
-    np_type = ti.to_numpy_type(dt)
+    np_type = to_numpy_type(dt)
     # When saving to disk, pixel data will be truncated into 8 bits.
     # Be careful here if you want lossless saving.
     np_max = np.iinfo(np_type).max // 256
35 changes: 18 additions & 17 deletions tests/python/test_ndarray.py
@@ -2,6 +2,7 @@
 
 import numpy as np
 import pytest
+from taichi.lang.util import has_pytorch
 
 import taichi as ti
 
@@ -28,7 +29,7 @@ def _test_scalar_ndarray(dtype, shape):
 
 @pytest.mark.parametrize('dtype', data_types)
 @pytest.mark.parametrize('shape', ndarray_shapes)
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=ti.get_host_arch_list(), ndarray_use_torch=True)
 def test_scalar_ndarray_torch(dtype, shape):
     _test_scalar_ndarray(dtype, shape)
@@ -57,7 +58,7 @@ def _test_vector_ndarray(n, dtype, shape):
 @pytest.mark.parametrize('n', vector_dims)
 @pytest.mark.parametrize('dtype', data_types)
 @pytest.mark.parametrize('shape', ndarray_shapes)
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=ti.get_host_arch_list(), ndarray_use_torch=True)
 def test_vector_ndarray_torch(n, dtype, shape):
     _test_vector_ndarray(n, dtype, shape)
@@ -88,7 +89,7 @@ def _test_matrix_ndarray(n, m, dtype, shape):
 @pytest.mark.parametrize('n,m', matrix_dims)
 @pytest.mark.parametrize('dtype', data_types)
 @pytest.mark.parametrize('shape', ndarray_shapes)
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=ti.get_host_arch_list(), ndarray_use_torch=True)
 def test_matrix_ndarray_torch(n, m, dtype, shape):
     _test_matrix_ndarray(n, m, dtype, shape)
@@ -103,7 +104,7 @@ def test_matrix_ndarray(n, m, dtype, shape):
 
 
 @pytest.mark.parametrize('dtype', [ti.f32, ti.f64])
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 def test_default_fp_ndarray_torch(dtype):
     ti.init(default_fp=dtype, ndarray_use_torch=True)
 
@@ -122,7 +123,7 @@ def test_default_fp_ndarray(dtype):
 
 
 @pytest.mark.parametrize('dtype', [ti.i32, ti.i64])
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 def test_default_ip_ndarray_torch(dtype):
     ti.init(default_ip=dtype, ndarray_use_torch=True)
 
@@ -191,7 +192,7 @@ def run(x: ti.any_arr(), y: ti.any_arr()):
         assert b[i, j] == i * j + (i + j + 1) * 2
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
 def test_ndarray_2d_torch():
     _test_ndarray_2d()
@@ -241,7 +242,7 @@ def _test_ndarray_copy_from_ndarray():
     assert x[4][1, 0] == 6
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
 def test_ndarray_copy_from_ndarray_torch():
     _test_ndarray_copy_from_ndarray()
@@ -349,7 +350,7 @@ def test_ndarray_rw_cache():
     c_a[None] = c_b[10]
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
 def test_ndarray_deepcopy_torch():
     _test_ndarray_deepcopy()
@@ -387,7 +388,7 @@ def _test_ndarray_numpy_io():
     assert (x.to_numpy() == y.to_numpy()).all()
 
 
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=supported_archs_taichi_ndarray, ndarray_use_torch=True)
 def test_ndarray_numpy_io_torch():
     _test_ndarray_numpy_io()
@@ -411,7 +412,7 @@ def _test_matrix_ndarray_python_scope(layout):
 
 
 @pytest.mark.parametrize('layout', layouts)
-@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
+@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
 @ti.test(arch=supported_archs_taichi_ndarray, ndarray_use_torch=True)
def test_matrix_ndarray_python_scope_torch(layout):
_test_matrix_ndarray_python_scope(layout)
@@ -440,7 +441,7 @@ def func(a: ti.any_arr()):


@pytest.mark.parametrize('layout', layouts)
@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_matrix_ndarray_taichi_scope_torch(layout):
_test_matrix_ndarray_taichi_scope(layout)
@@ -469,7 +470,7 @@ def func(a: ti.any_arr()):


@pytest.mark.parametrize('layout', layouts)
@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_matrix_ndarray_taichi_scope_struct_for_torch(layout):
_test_matrix_ndarray_taichi_scope_struct_for(layout)
@@ -482,7 +483,7 @@ def test_matrix_ndarray_taichi_scope_struct_for(layout):


@pytest.mark.parametrize('layout', layouts)
@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_vector_ndarray_python_scope_torch(layout):
a = ti.Vector.ndarray(10, ti.i32, 5, layout=layout)
@@ -511,7 +512,7 @@ def test_vector_ndarray_python_scope(layout):


@pytest.mark.parametrize('layout', layouts)
@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_vector_ndarray_taichi_scope_torch(layout):
@ti.kernel
@@ -572,7 +573,7 @@ def func(a: ti.any_arr(element_dim=1)):
assert ti.get_runtime().get_num_compiled_functions() == 3


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_compiled_functions_torch():
_test_compiled_functions()
@@ -666,7 +667,7 @@ def func7(a: ti.any_arr(field_dim=2)):
func7(x)


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=ti.get_host_arch_list(), ndarray_use_torch=True)
def test_arg_not_match_torch():
_test_arg_not_match()
@@ -687,7 +688,7 @@ def _test_size_in_bytes():
assert b.get_nelement() == 50


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.cuda], ndarray_use_torch=True)
def test_size_in_bytes_torch():
_test_size_in_bytes()
7 changes: 4 additions & 3 deletions tests/python/test_torch_ad.py
@@ -2,14 +2,15 @@

import numpy as np
import pytest
from taichi.lang.util import has_pytorch

import taichi as ti

if ti.has_pytorch():
if has_pytorch():
import torch


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_torch_ad():
n = 32
@@ -49,7 +50,7 @@ def backward(ctx, outp_grad):
assert ret[j] == 4


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(sys.platform == 'win32', reason='not working on Windows.')
@ti.test(exclude=ti.opengl)
def test_torch_ad_gpu():
27 changes: 14 additions & 13 deletions tests/python/test_torch_io.py
@@ -1,13 +1,14 @@
import numpy as np
import pytest
from taichi.lang.util import has_pytorch

import taichi as ti

if ti.has_pytorch():
if has_pytorch():
import torch


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_devices():
n = 32
@@ -44,7 +45,7 @@ def store(y: ti.ext_arr()):
assert y[i] == (11 + i) * 2


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io():
n = 32
@@ -84,7 +85,7 @@ def backward(ctx, outp_grad):
assert ret[i] == 4


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_2d():
n = 32
@@ -108,7 +109,7 @@ def forward(ctx, inp):
assert val == 2 * 2 * n * n


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_3d():
n = 16
@@ -134,7 +135,7 @@ def forward(ctx, inp):
assert val == 2 * 2 * n * n * n


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_simple():
n = 32
@@ -161,7 +162,7 @@ def test_io_simple():
assert (t2 == t3).all()


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_zeros():
mat = ti.Matrix.field(2, 6, dtype=ti.f32, shape=(), needs_grad=True)
@@ -175,7 +176,7 @@ def test_io_zeros():
assert zeros[1, 2] == 4


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_io_struct():
n = 16
@@ -195,7 +196,7 @@ def test_io_struct():
assert (t1[k] == t2[k]).all()


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_fused_kernels():
n = 12
@@ -207,7 +208,7 @@ def test_fused_kernels():
assert ti.get_runtime().get_num_compiled_functions() == s + 2


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_device():
n = 12
@@ -218,7 +219,7 @@ def test_device():
assert X.to_torch(device='cuda:0').device == torch.device('cuda:0')


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_shape_matrix():
n = 12
@@ -238,7 +239,7 @@ def test_shape_matrix():
assert (X == X1).all()


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_shape_vector():
n = 12
@@ -257,7 +258,7 @@ def test_shape_vector():
assert (X == X1).all()


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(exclude=ti.opengl)
def test_torch_zero():
@ti.kernel
3 changes: 2 additions & 1 deletion tests/python/test_type_check.py
@@ -1,5 +1,6 @@
import numpy as np
import pytest
from taichi.lang.util import has_pytorch

import taichi as ti

@@ -43,7 +44,7 @@ def select():
select()


@pytest.mark.skipif(not ti.has_pytorch(), reason='Pytorch not installed.')
@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
@ti.test(arch=[ti.cpu, ti.opengl])
def test_subscript():
a = ti.ndarray(ti.i32, shape=(10, 10))
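The refactor applied throughout these files follows one pattern: probe for PyTorch with a module-level helper, gate the `import torch` behind it, and reuse the same check in `pytest.mark.skipif` so tests degrade gracefully when torch is absent. A minimal standalone sketch of that pattern (the local `has_pytorch` helper here is a hypothetical stand-in for `taichi.lang.util.has_pytorch`, not its actual implementation):

```python
import importlib.util

import pytest


def has_pytorch():
    # Hypothetical stand-in for taichi.lang.util.has_pytorch:
    # report whether the torch package is importable without importing it.
    return importlib.util.find_spec('torch') is not None


# Guarded import: only executed when torch is actually available,
# so collecting this test module never fails on a torch-less machine.
if has_pytorch():
    import torch


@pytest.mark.skipif(not has_pytorch(), reason='Pytorch not installed.')
def test_torch_roundtrip():
    # Body may freely use torch; the skipif marker guarantees it exists here.
    x = torch.zeros(4)
    assert x.shape[0] == 4
```

Calling the free function `has_pytorch()` instead of the `ti.has_pytorch()` attribute matches this PR's broader goal of not exposing internal utilities on the top-level `taichi` namespace.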