1.5.0-DEV-7206b56e94.log
Julia Version 1.5.0-DEV.485
Commit 7206b56e94 (2020-03-18 17:25 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-9.0.1 (ORCJIT, skylake)
Environment:
  JULIA_DEPOT_PATH = ::/usr/local/share/julia
  JULIA_NUM_THREADS = 2
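
The block above is the standard output of Julia's versioninfo(); a minimal sketch for reproducing the same header in a fresh session:

    using InteractiveUtils  # provides versioninfo(); loaded by default in the REPL
    versioninfo()           # prints the version, commit, platform info, and any JULIA_* environment variables
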
Resolving package versions...
Installed TimerOutputs ─────── v0.5.3
Installed VersionParsing ───── v1.2.0
Installed Conda ────────────── v1.4.1
Installed FFTW ─────────────── v1.1.0
Installed GPUArrays ────────── v2.0.1
Installed OffsetArrays ─────── v1.0.3
Installed PassiveTracerFlows ─ v0.2.0
Installed LLVM ─────────────── v1.3.4
Installed CEnum ────────────── v0.2.0
Installed AxisAlgorithms ───── v1.0.0
Installed CodecZlib ────────── v0.7.0
Installed Adapt ────────────── v1.0.1
Installed JLD2 ─────────────── v0.1.3
Installed StaticArrays ─────── v0.12.1
Installed Zlib_jll ─────────── v1.2.11+8
Installed CuArrays ─────────── v1.7.0
Installed CUDAapi ──────────── v2.1.0
Installed JSON ─────────────── v0.21.0
Installed Reexport ─────────── v0.2.0
Installed Parsers ──────────── v0.3.12
Installed WoodburyMatrices ─── v0.5.1
Installed AbstractFFTs ─────── v0.5.0
Installed BinaryProvider ───── v0.5.8
Installed DataStructures ───── v0.17.10
Installed FileIO ───────────── v1.2.3
Installed CUDAnative ───────── v2.9.1
Installed Requires ─────────── v1.0.1
Installed Ratios ───────────── v0.4.0
Installed OrderedCollections ─ v1.1.0
Installed MacroTools ───────── v0.5.4
Installed FourierFlows ─────── v0.4.1
Installed NNlib ────────────── v0.6.6
Installed CUDAdrv ──────────── v5.1.0
Installed Interpolations ───── v0.12.5
Installed TranscodingStreams ─ v0.9.5
Updating `~/.julia/environments/v1.5/Project.toml`
  [dc26d6a1] + PassiveTracerFlows v0.2.0
Updating `~/.julia/environments/v1.5/Manifest.toml`
  [621f4979] + AbstractFFTs v0.5.0
  [79e6a3ab] + Adapt v1.0.1
  [13072b0f] + AxisAlgorithms v1.0.0
  [b99e7846] + BinaryProvider v0.5.8
  [fa961155] + CEnum v0.2.0
  [3895d2a7] + CUDAapi v2.1.0
  [c5f51814] + CUDAdrv v5.1.0
  [be33ccc6] + CUDAnative v2.9.1
  [944b1d66] + CodecZlib v0.7.0
  [8f4d0f93] + Conda v1.4.1
  [3a865a2d] + CuArrays v1.7.0
  [864edb3b] + DataStructures v0.17.10
  [7a1cc6ca] + FFTW v1.1.0
  [5789e2e9] + FileIO v1.2.3
  [2aec4490] + FourierFlows v0.4.1
  [0c68f7d7] + GPUArrays v2.0.1
  [a98d9a8b] + Interpolations v0.12.5
  [033835bb] + JLD2 v0.1.3
  [682c06a0] + JSON v0.21.0
  [929cbde3] + LLVM v1.3.4
  [1914dd2f] + MacroTools v0.5.4
  [872c559c] + NNlib v0.6.6
  [6fe1bfb0] + OffsetArrays v1.0.3
  [bac558e1] + OrderedCollections v1.1.0
  [69de0a69] + Parsers v0.3.12
  [dc26d6a1] + PassiveTracerFlows v0.2.0
  [c84ed2f1] + Ratios v0.4.0
  [189a3867] + Reexport v0.2.0
  [ae029012] + Requires v1.0.1
  [90137ffa] + StaticArrays v0.12.1
  [a759f4b9] + TimerOutputs v0.5.3
  [3bb67fe8] + TranscodingStreams v0.9.5
  [81def892] + VersionParsing v1.2.0
  [efce3f68] + WoodburyMatrices v0.5.1
  [83775a58] + Zlib_jll v1.2.11+8
  [2a0f44e3] + Base64
  [ade2ca70] + Dates
  [8ba89e20] + Distributed
  [b77e0a4c] + InteractiveUtils
  [76f85450] + LibGit2
  [8f399da3] + Libdl
  [37e2e46d] + LinearAlgebra
  [56ddb016] + Logging
  [d6f4376e] + Markdown
  [a63ad114] + Mmap
  [44cfe95a] + Pkg
  [de0858da] + Printf
  [3fa0cd96] + REPL
  [9a3f8284] + Random
  [ea8e919c] + SHA
  [9e88b42a] + Serialization
  [1a1011a3] + SharedArrays
  [6462fe0b] + Sockets
  [2f01184e] + SparseArrays
  [10745b16] + Statistics
  [8dfed614] + Test
  [cf7118a7] + UUIDs
  [4ec0a83e] + Unicode
Building Conda → `~/.julia/packages/Conda/3rPhK/deps/build.log`
Building FFTW ─→ `~/.julia/packages/FFTW/loJ3F/deps/build.log`
Building NNlib → `~/.julia/packages/NNlib/FAI3o/deps/build.log`
Testing PassiveTracerFlows
Status `/tmp/jl_sXQEAE/Project.toml`
  [a2441757] Coverage v1.0.0
  [3a865a2d] CuArrays v1.7.0
  [7a1cc6ca] FFTW v1.1.0
  [2aec4490] FourierFlows v0.4.1
  [033835bb] JLD2 v0.1.3
  [dc26d6a1] PassiveTracerFlows v0.2.0
  [189a3867] Reexport v0.2.0
  [37e2e46d] LinearAlgebra
  [9a3f8284] Random
  [10745b16] Statistics
  [8dfed614] Test
Status `/tmp/jl_sXQEAE/Manifest.toml`
  [621f4979] AbstractFFTs v0.5.0
  [79e6a3ab] Adapt v1.0.1
  [13072b0f] AxisAlgorithms v1.0.0
  [b99e7846] BinaryProvider v0.5.8
  [fa961155] CEnum v0.2.0
  [3895d2a7] CUDAapi v2.1.0
  [c5f51814] CUDAdrv v5.1.0
  [be33ccc6] CUDAnative v2.9.1
  [944b1d66] CodecZlib v0.7.0
  [8f4d0f93] Conda v1.4.1
  [a2441757] Coverage v1.0.0
  [c36e975a] CoverageTools v1.1.0
  [3a865a2d] CuArrays v1.7.0
  [864edb3b] DataStructures v0.17.10
  [7a1cc6ca] FFTW v1.1.0
  [5789e2e9] FileIO v1.2.3
  [2aec4490] FourierFlows v0.4.1
  [0c68f7d7] GPUArrays v2.0.1
  [cd3eb016] HTTP v0.8.12
  [83e8ac13] IniFile v0.5.0
  [a98d9a8b] Interpolations v0.12.5
  [033835bb] JLD2 v0.1.3
  [682c06a0] JSON v0.21.0
  [929cbde3] LLVM v1.3.4
  [1914dd2f] MacroTools v0.5.4
  [739be429] MbedTLS v0.7.0
  [872c559c] NNlib v0.6.6
  [6fe1bfb0] OffsetArrays v1.0.3
  [bac558e1] OrderedCollections v1.1.0
  [69de0a69] Parsers v0.3.12
  [dc26d6a1] PassiveTracerFlows v0.2.0
  [c84ed2f1] Ratios v0.4.0
  [189a3867] Reexport v0.2.0
  [ae029012] Requires v1.0.1
  [90137ffa] StaticArrays v0.12.1
  [a759f4b9] TimerOutputs v0.5.3
  [3bb67fe8] TranscodingStreams v0.9.5
  [81def892] VersionParsing v1.2.0
  [efce3f68] WoodburyMatrices v0.5.1
  [83775a58] Zlib_jll v1.2.11+8
  [2a0f44e3] Base64
  [ade2ca70] Dates
  [8ba89e20] Distributed
  [b77e0a4c] InteractiveUtils
  [76f85450] LibGit2
  [8f399da3] Libdl
  [37e2e46d] LinearAlgebra
  [56ddb016] Logging
  [d6f4376e] Markdown
  [a63ad114] Mmap
  [44cfe95a] Pkg
  [de0858da] Printf
  [3fa0cd96] REPL
  [9a3f8284] Random
  [ea8e919c] SHA
  [9e88b42a] Serialization
  [1a1011a3] SharedArrays
  [6462fe0b] Sockets
  [2f01184e] SparseArrays
  [10745b16] Statistics
  [8dfed614] Test
  [cf7118a7] UUIDs
  [4ec0a83e] Unicode
┌ Warning: Incompatibility detected between CUDA and LLVM 8.0+; disabling debug info emission for CUDA kernels
└ @ CUDAnative ~/.julia/packages/CUDAnative/JfXpo/src/CUDAnative.jl:88
testing on CPU
Test Summary: | Pass Total
TracerAdvDiff | 6 6
testing on GPU
[ Info: Building the CUDAnative run-time library for your sm_75 device, this might take a while...
TracerAdvDiff: Error During Test at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:34
Test threw exception
Expression: test_constvel(stepper, dt, nsteps, dev)
MethodError: no method matching Base.CodegenParams(; cached=false, track_allocations=false, code_coverage=false, static_alloc=false, prefer_specsig=true, module_setup=CUDAnative.var"#hook_module_setup#93"(Core.Box(#undef)), module_activation=CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), CUDAnative.var"#postprocess#92"(), Core.Box(nothing), DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}(Dict{Core.MethodInstance,Array{LLVM.Function,1}}()), Core.Box(#undef), Core.Box(#undef)), emit_function=CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.MethodInstance[]), emitted_function=CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.Box(nothing), Core.MethodInstance[]), gnu_pubnames=false, debug_info_kind=0)
Closest candidates are:
Base.CodegenParams(; track_allocations, code_coverage, static_alloc, prefer_specsig, gnu_pubnames, debug_info_kind, module_setup, module_activation, raise_exception, emit_function, emitted_function) at reflection.jl:986 got unsupported keyword argument "cached"
Stacktrace:
[1] kwerr(::NamedTuple{(:cached, :track_allocations, :code_coverage, :static_alloc, :prefer_specsig, :module_setup, :module_activation, :emit_function, :emitted_function, :gnu_pubnames, :debug_info_kind),Tuple{Bool,Bool,Bool,Bool,Bool,CUDAnative.var"#hook_module_setup#93",CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}},CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},Bool,Int32}}, ::Type{T} where T) at ./error.jl:157
[2] compile_method_instance(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:148
[3] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[4] irgen(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:165
[5] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[6] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:104 [inlined]
[7] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[8] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:103
[9] emit_function!(::LLVM.Module, ::VersionNumber, ::Function, ::Tuple{DataType}, ::String) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:144
[10] build_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:154
[11] (::CUDAnative.var"#139#142"{VersionNumber,String})() at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:189
[12] get!(::CUDAnative.var"#139#142"{VersionNumber,String}, ::Dict{String,LLVM.Module}, ::String) at ./dict.jl:450
[13] load_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:182
[14] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:99
[15] compile(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:52
[16] #compile#150 at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:33 [inlined]
[17] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:393 [inlined]
[18] cufunction(::GPUArrays.var"#25#26", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}}}}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[19] cufunction(::Function, ::Type{T} where T) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[20] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:179 [inlined]
[21] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/pkgeval/.julia/packages/CuArrays/1njKF/src/gpuarray_interface.jl:62
[22] gpu_call(::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Int64) at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:151
[23] gpu_call at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:128 [inlined]
[24] copyto! at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/broadcast.jl:48 [inlined]
[25] copyto! at ./broadcast.jl:864 [inlined]
[26] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:840
[27] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:820
[28] TwoDGrid(::Int64, ::Float64, ::Int64, ::Float64; x0::Float64, y0::Float64, nthreads::Int64, effort::UInt32, T::Type{T} where T, dealias::Float64, ArrayType::Type{T} where T) at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/domains.jl:145
[29] #TwoDGrid#74 at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[30] Problem(; nx::Int64, Lx::Float64, ny::Int64, Ly::Float64, kap::Float64, eta::Float64, u::Function, v::Function, dt::Float64, stepper::String, steadyflow::Bool, T::Type{T} where T, dev::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/src/traceradvdiff.jl:43
[31] test_constvel(::String, ::Float64, ::Int64, ::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:15
[32] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:34 [inlined]
[33] macro expansion at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1114 [inlined]
[34] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:28 [inlined]
[35] top-level scope at ./util.jl:234 [inlined]
[36] top-level scope at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:0
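
The MethodError above is a plain keyword-argument mismatch: CUDAnative v2.9.1 passes cached=false to Base.CodegenParams, but this Julia build (1.5.0-DEV.485) no longer accepts that keyword, so every GPU test fails before any kernel runs. A minimal sketch of the same failure mode, using a hypothetical stand-in function f rather than the real internal API:

    # Hypothetical stand-in for Base.CodegenParams after the `cached` keyword was removed.
    f(; track_allocations=false, code_coverage=false) = nothing
    try
        f(; cached=false, track_allocations=false)  # an old caller still passing `cached`
    catch err
        showerror(stdout, err)  # MethodError: ... got unsupported keyword argument "cached"
    end

The four errors that follow are this same failure reached through the other GPU test entry points.
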
[ Info: Building the CUDAnative run-time library for your sm_75 device, this might take a while...
TracerAdvDiff: Error During Test at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:36
Test threw exception
Expression: test_timedependentvel(stepper, dt, tfinal, dev)
MethodError: no method matching Base.CodegenParams(; cached=false, track_allocations=false, code_coverage=false, static_alloc=false, prefer_specsig=true, module_setup=CUDAnative.var"#hook_module_setup#93"(Core.Box(#undef)), module_activation=CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), CUDAnative.var"#postprocess#92"(), Core.Box(nothing), DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}(Dict{Core.MethodInstance,Array{LLVM.Function,1}}()), Core.Box(#undef), Core.Box(#undef)), emit_function=CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.MethodInstance[]), emitted_function=CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.Box(nothing), Core.MethodInstance[]), gnu_pubnames=false, debug_info_kind=0)
Closest candidates are:
Base.CodegenParams(; track_allocations, code_coverage, static_alloc, prefer_specsig, gnu_pubnames, debug_info_kind, module_setup, module_activation, raise_exception, emit_function, emitted_function) at reflection.jl:986 got unsupported keyword argument "cached"
Stacktrace:
[1] kwerr(::NamedTuple{(:cached, :track_allocations, :code_coverage, :static_alloc, :prefer_specsig, :module_setup, :module_activation, :emit_function, :emitted_function, :gnu_pubnames, :debug_info_kind),Tuple{Bool,Bool,Bool,Bool,Bool,CUDAnative.var"#hook_module_setup#93",CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}},CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},Bool,Int32}}, ::Type{T} where T) at ./error.jl:157
[2] compile_method_instance(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:148
[3] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[4] irgen(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:165
[5] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[6] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:104 [inlined]
[7] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[8] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:103
[9] emit_function!(::LLVM.Module, ::VersionNumber, ::Function, ::Tuple{DataType}, ::String) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:144
[10] build_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:154
[11] (::CUDAnative.var"#139#142"{VersionNumber,String})() at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:189
[12] get!(::CUDAnative.var"#139#142"{VersionNumber,String}, ::Dict{String,LLVM.Module}, ::String) at ./dict.jl:450
[13] load_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:182
[14] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:99
[15] compile(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:52
[16] #compile#150 at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:33 [inlined]
[17] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:393 [inlined]
[18] cufunction(::GPUArrays.var"#25#26", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}}}}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[19] cufunction(::Function, ::Type{T} where T) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[20] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:179 [inlined]
[21] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/pkgeval/.julia/packages/CuArrays/1njKF/src/gpuarray_interface.jl:62
[22] gpu_call(::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Int64) at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:151
[23] gpu_call at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:128 [inlined]
[24] copyto! at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/broadcast.jl:48 [inlined]
[25] copyto! at ./broadcast.jl:864 [inlined]
[26] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:840
[27] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:820
[28] TwoDGrid(::Int64, ::Float64, ::Int64, ::Float64; x0::Float64, y0::Float64, nthreads::Int64, effort::UInt32, T::Type{T} where T, dealias::Float64, ArrayType::Type{T} where T) at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/domains.jl:145
[29] #TwoDGrid#74 at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[30] Problem(; nx::Int64, Lx::Float64, ny::Int64, Ly::Float64, kap::Float64, eta::Float64, u::Function, v::Function, dt::Float64, stepper::String, steadyflow::Bool, T::Type{T} where T, dev::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/src/traceradvdiff.jl:43
[31] test_timedependentvel(::String, ::Float64, ::Float64, ::GPU; uvel::Float64, αv::Float64) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:55
[32] test_timedependentvel(::String, ::Float64, ::Float64, ::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:45
[33] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:36 [inlined]
[34] macro expansion at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1114 [inlined]
[35] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:28 [inlined]
[36] top-level scope at ./util.jl:234 [inlined]
[37] top-level scope at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:0
[ Info: Building the CUDAnative run-time library for your sm_75 device, this might take a while...
TracerAdvDiff: Error During Test at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:38
Test threw exception
Expression: test_diffusion(stepper, dt, tfinal, dev; steadyflow = true)
MethodError: no method matching Base.CodegenParams(; cached=false, track_allocations=false, code_coverage=false, static_alloc=false, prefer_specsig=true, module_setup=CUDAnative.var"#hook_module_setup#93"(Core.Box(#undef)), module_activation=CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), CUDAnative.var"#postprocess#92"(), Core.Box(nothing), DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}(Dict{Core.MethodInstance,Array{LLVM.Function,1}}()), Core.Box(#undef), Core.Box(#undef)), emit_function=CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.MethodInstance[]), emitted_function=CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.Box(nothing), Core.MethodInstance[]), gnu_pubnames=false, debug_info_kind=0)
Closest candidates are:
Base.CodegenParams(; track_allocations, code_coverage, static_alloc, prefer_specsig, gnu_pubnames, debug_info_kind, module_setup, module_activation, raise_exception, emit_function, emitted_function) at reflection.jl:986 got unsupported keyword argument "cached"
Stacktrace:
[1] kwerr(::NamedTuple{(:cached, :track_allocations, :code_coverage, :static_alloc, :prefer_specsig, :module_setup, :module_activation, :emit_function, :emitted_function, :gnu_pubnames, :debug_info_kind),Tuple{Bool,Bool,Bool,Bool,Bool,CUDAnative.var"#hook_module_setup#93",CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}},CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},Bool,Int32}}, ::Type{T} where T) at ./error.jl:157
[2] compile_method_instance(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:148
[3] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[4] irgen(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:165
[5] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[6] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:104 [inlined]
[7] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[8] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:103
[9] emit_function!(::LLVM.Module, ::VersionNumber, ::Function, ::Tuple{DataType}, ::String) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:144
[10] build_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:154
[11] (::CUDAnative.var"#139#142"{VersionNumber,String})() at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:189
[12] get!(::CUDAnative.var"#139#142"{VersionNumber,String}, ::Dict{String,LLVM.Module}, ::String) at ./dict.jl:450
[13] load_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:182
[14] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:99
[15] compile(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:52
[16] #compile#150 at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:33 [inlined]
[17] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:393 [inlined]
[18] cufunction(::GPUArrays.var"#25#26", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}}}}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[19] cufunction(::Function, ::Type{T} where T) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[20] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:179 [inlined]
[21] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/pkgeval/.julia/packages/CuArrays/1njKF/src/gpuarray_interface.jl:62
[22] gpu_call(::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Int64) at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:151
[23] gpu_call at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:128 [inlined]
[24] copyto! at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/broadcast.jl:48 [inlined]
[25] copyto! at ./broadcast.jl:864 [inlined]
[26] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:840
[27] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:820
[28] TwoDGrid(::Int64, ::Float64, ::Int64, ::Float64; x0::Float64, y0::Float64, nthreads::Int64, effort::UInt32, T::Type{T} where T, dealias::Float64, ArrayType::Type{T} where T) at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/domains.jl:145
[29] #TwoDGrid#74 at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[30] Problem(; nx::Int64, Lx::Float64, ny::Int64, Ly::Float64, kap::Float64, eta::Float64, u::Function, v::Function, dt::Float64, stepper::String, steadyflow::Bool, T::Type{T} where T, dev::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/src/traceradvdiff.jl:43
[31] test_diffusion(::String, ::Float64, ::Float64, ::GPU; steadyflow::Bool) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:91
[32] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:38 [inlined]
[33] macro expansion at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1114 [inlined]
[34] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:28 [inlined]
[35] top-level scope at ./util.jl:234 [inlined]
[36] top-level scope at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:0
[ Info: Building the CUDAnative run-time library for your sm_75 device, this might take a while...
TracerAdvDiff: Error During Test at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:40
Test threw exception
Expression: test_diffusion(stepper, dt, tfinal, dev; steadyflow = false)
MethodError: no method matching Base.CodegenParams(; cached=false, track_allocations=false, code_coverage=false, static_alloc=false, prefer_specsig=true, module_setup=CUDAnative.var"#hook_module_setup#93"(Core.Box(#undef)), module_activation=CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), CUDAnative.var"#postprocess#92"(), Core.Box(nothing), DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}(Dict{Core.MethodInstance,Array{LLVM.Function,1}}()), Core.Box(#undef), Core.Box(#undef)), emit_function=CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.MethodInstance[]), emitted_function=CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.Box(nothing), Core.MethodInstance[]), gnu_pubnames=false, debug_info_kind=0)
Closest candidates are:
Base.CodegenParams(; track_allocations, code_coverage, static_alloc, prefer_specsig, gnu_pubnames, debug_info_kind, module_setup, module_activation, raise_exception, emit_function, emitted_function) at reflection.jl:986 got unsupported keyword argument "cached"
Stacktrace:
[1] kwerr(::NamedTuple{(:cached, :track_allocations, :code_coverage, :static_alloc, :prefer_specsig, :module_setup, :module_activation, :emit_function, :emitted_function, :gnu_pubnames, :debug_info_kind),Tuple{Bool,Bool,Bool,Bool,Bool,CUDAnative.var"#hook_module_setup#93",CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}},CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},Bool,Int32}}, ::Type{T} where T) at ./error.jl:157
[2] compile_method_instance(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:148
[3] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[4] irgen(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:165
[5] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[6] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:104 [inlined]
[7] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[8] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:103
[9] emit_function!(::LLVM.Module, ::VersionNumber, ::Function, ::Tuple{DataType}, ::String) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:144
[10] build_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:154
[11] (::CUDAnative.var"#139#142"{VersionNumber,String})() at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:189
[12] get!(::CUDAnative.var"#139#142"{VersionNumber,String}, ::Dict{String,LLVM.Module}, ::String) at ./dict.jl:450
[13] load_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:182
[14] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:99
[15] compile(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:52
[16] #compile#150 at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:33 [inlined]
[17] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:393 [inlined]
[18] cufunction(::GPUArrays.var"#25#26", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}}}}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[19] cufunction(::Function, ::Type{T} where T) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[20] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:179 [inlined]
[21] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/pkgeval/.julia/packages/CuArrays/1njKF/src/gpuarray_interface.jl:62
[22] gpu_call(::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Int64) at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:151
[23] gpu_call at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:128 [inlined]
[24] copyto! at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/broadcast.jl:48 [inlined]
[25] copyto! at ./broadcast.jl:864 [inlined]
[26] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:840
[27] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:820
[28] TwoDGrid(::Int64, ::Float64, ::Int64, ::Float64; x0::Float64, y0::Float64, nthreads::Int64, effort::UInt32, T::Type{T} where T, dealias::Float64, ArrayType::Type{T} where T) at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/domains.jl:145
[29] #TwoDGrid#74 at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[30] Problem(; nx::Int64, Lx::Float64, ny::Int64, Ly::Float64, kap::Float64, eta::Float64, u::Function, v::Function, dt::Float64, stepper::String, steadyflow::Bool, T::Type{T} where T, dev::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/src/traceradvdiff.jl:43
[31] test_diffusion(::String, ::Float64, ::Float64, ::GPU; steadyflow::Bool) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:91
[32] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:40 [inlined]
[33] macro expansion at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1114 [inlined]
[34] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:28 [inlined]
[35] top-level scope at ./util.jl:234 [inlined]
[36] top-level scope at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:0
[ Info: Building the CUDAnative run-time library for your sm_75 device, this might take a while...
TracerAdvDiff: Error During Test at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:42
Test threw exception
Expression: test_hyperdiffusion(stepper, dt, tfinal, dev)
MethodError: no method matching Base.CodegenParams(; cached=false, track_allocations=false, code_coverage=false, static_alloc=false, prefer_specsig=true, module_setup=CUDAnative.var"#hook_module_setup#93"(Core.Box(#undef)), module_activation=CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), CUDAnative.var"#postprocess#92"(), Core.Box(nothing), DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}(Dict{Core.MethodInstance,Array{LLVM.Function,1}}()), Core.Box(#undef), Core.Box(#undef)), emit_function=CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.MethodInstance[]), emitted_function=CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}}(CUDAnative.CompilerJob(CUDAnative.Runtime.unbox_uint64, Tuple{Any}, v"7.5.0", false, nothing, nothing, nothing, nothing, nothing), Core.Box(nothing), Core.MethodInstance[]), gnu_pubnames=false, debug_info_kind=0)
Closest candidates are:
Base.CodegenParams(; track_allocations, code_coverage, static_alloc, prefer_specsig, gnu_pubnames, debug_info_kind, module_setup, module_activation, raise_exception, emit_function, emitted_function) at reflection.jl:986 got unsupported keyword argument "cached"
Stacktrace:
[1] kwerr(::NamedTuple{(:cached, :track_allocations, :code_coverage, :static_alloc, :prefer_specsig, :module_setup, :module_activation, :emit_function, :emitted_function, :gnu_pubnames, :debug_info_kind),Tuple{Bool,Bool,Bool,Bool,Bool,CUDAnative.var"#hook_module_setup#93",CUDAnative.var"#hook_module_activation#94"{CUDAnative.CompilerJob,CUDAnative.var"#postprocess#92",DataStructures.MultiDict{Core.MethodInstance,LLVM.Function}},CUDAnative.var"#hook_emit_function#97"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},CUDAnative.var"#hook_emitted_function#98"{CUDAnative.CompilerJob,Array{Core.MethodInstance,1}},Bool,Int32}}, ::Type{T} where T) at ./error.jl:157
[2] compile_method_instance(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:148
[3] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[4] irgen(::CUDAnative.CompilerJob, ::Core.MethodInstance, ::UInt64) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/irgen.jl:165
[5] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[6] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:104 [inlined]
[7] macro expansion at /home/pkgeval/.julia/packages/TimerOutputs/7Id5J/src/TimerOutput.jl:228 [inlined]
[8] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:103
[9] emit_function!(::LLVM.Module, ::VersionNumber, ::Function, ::Tuple{DataType}, ::String) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:144
[10] build_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:154
[11] (::CUDAnative.var"#139#142"{VersionNumber,String})() at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:189
[12] get!(::CUDAnative.var"#139#142"{VersionNumber,String}, ::Dict{String,LLVM.Module}, ::String) at ./dict.jl:450
[13] load_runtime(::VersionNumber) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/rtlib.jl:182
[14] codegen(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:99
[15] compile(::Symbol, ::CUDAnative.CompilerJob; libraries::Bool, dynamic_parallelism::Bool, optimize::Bool, strip::Bool, strict::Bool) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:52
[16] #compile#150 at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/compiler/driver.jl:33 [inlined]
[17] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:393 [inlined]
[18] cufunction(::GPUArrays.var"#25#26", ::Type{Tuple{CuArrays.CuKernelState,CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{CUDAnative.CuRefValue{typeof(^)},Base.Broadcast.Extruded{CUDAnative.CuDeviceArray{Float64,2,CUDAnative.AS.Global},Tuple{Bool,Bool},Tuple{Int64,Int64}},CUDAnative.CuRefValue{Val{2}}}}}}}}; name::Nothing, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[19] cufunction(::Function, ::Type{T} where T) at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:360
[20] macro expansion at /home/pkgeval/.julia/packages/CUDAnative/JfXpo/src/execution.jl:179 [inlined]
[21] _gpu_call(::CuArrays.CuArrayBackend, ::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Tuple{Tuple{Int64},Tuple{Int64}}) at /home/pkgeval/.julia/packages/CuArrays/1njKF/src/gpuarray_interface.jl:62
[22] gpu_call(::Function, ::CuArray{Float64,2,Nothing}, ::Tuple{CuArray{Float64,2,Nothing},Base.Broadcast.Broadcasted{Nothing,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},Base.Broadcast.Extruded{CuArray{Float64,2,Nothing},Tuple{Bool,Bool},Tuple{Int64,Int64}},Base.RefValue{Val{2}}}}}}}, ::Int64) at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:151
[23] gpu_call at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/abstract_gpu_interface.jl:128 [inlined]
[24] copyto! at /home/pkgeval/.julia/packages/GPUArrays/1wgPO/src/broadcast.jl:48 [inlined]
[25] copyto! at ./broadcast.jl:864 [inlined]
[26] copy(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Tuple{Base.OneTo{Int64},Base.OneTo{Int64}},typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:840
[27] materialize(::Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(+),Tuple{Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}},Base.Broadcast.Broadcasted{Base.Broadcast.ArrayStyle{CuArray},Nothing,typeof(CuArrays.culiteral_pow),Tuple{Base.RefValue{typeof(^)},CuArray{Float64,2,Nothing},Base.RefValue{Val{2}}}}}}) at ./broadcast.jl:820
[28] TwoDGrid(::Int64, ::Float64, ::Int64, ::Float64; x0::Float64, y0::Float64, nthreads::Int64, effort::UInt32, T::Type{T} where T, dealias::Float64, ArrayType::Type{T} where T) at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/domains.jl:145
[29] #TwoDGrid#74 at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[30] TwoDGrid at /home/pkgeval/.julia/packages/FourierFlows/Es33H/src/CuFourierFlows.jl:8 [inlined]
[31] test_hyperdiffusion(::String, ::Float64, ::Float64, ::GPU; steadyflow::Bool) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:135
[32] test_hyperdiffusion(::String, ::Float64, ::Float64, ::GPU) at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/test_traceradvdiff.jl:121
[33] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:42 [inlined]
[34] macro expansion at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Test/src/Test.jl:1114 [inlined]
[35] macro expansion at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:28 [inlined]
[36] top-level scope at ./util.jl:234 [inlined]
[37] top-level scope at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:0
Test Summary: | Pass Error Total
TracerAdvDiff | 1 5 6
ERROR: LoadError: Some tests did not pass: 1 passed, 0 failed, 5 errored, 0 broken.
in expression starting at /home/pkgeval/.julia/packages/PassiveTracerFlows/Lpy5T/test/runtests.jl:21
ERROR: Package PassiveTracerFlows errored during testing
Stacktrace:
[1] pkgerror(::String, ::Vararg{String,N} where N) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/Types.jl:53
[2] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, julia_args::Cmd, test_args::Cmd, test_fn::Nothing) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/Operations.jl:1523
[3] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}; coverage::Bool, test_fn::Nothing, julia_args::Cmd, test_args::Cmd, kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:316
[4] test(::Pkg.Types.Context, ::Array{Pkg.Types.PackageSpec,1}) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:303
[5] #test#68 at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:297 [inlined]
[6] test at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:297 [inlined]
[7] #test#67 at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:296 [inlined]
[8] test at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:296 [inlined]
[9] test(::String; kwargs::Base.Iterators.Pairs{Union{},Union{},Tuple{},NamedTuple{(),Tuple{}}}) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:295
[10] test(::String) at /workspace/srcdir/usr/share/julia/stdlib/v1.5/Pkg/src/API.jl:295
[11] top-level scope at none:13
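
For reference, the run above corresponds to the standard Pkg workflow; a minimal sketch of the calls that produce a log of this shape (registry state as of the commit date in the header assumed):

    using Pkg
    Pkg.add("PassiveTracerFlows")   # the "Resolving package versions..." / "Installed ..." section
    Pkg.test("PassiveTracerFlows")  # the "Testing PassiveTracerFlows" section; throws the PkgError above when a test set errors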