
PINNErrorVsTime Benchmark Updates #1159

Open · wants to merge 17 commits into base: master

Conversation

ParamThakkar123
Contributor

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API


@ParamThakkar123
Contributor Author

@ChrisRackauckas I got an error when running the iterations saying that maxiters must be at least 1000, so I set all the maxiters to 1100. The choice was a bit arbitrary; is that a good number?
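For context, a sketch of where that setting lives, assuming the benchmark builds its quadrature strategy roughly like this (keyword names from NeuralPDE's QuadratureTraining; the tolerances are illustrative):

```julia
using NeuralPDE, Integrals, Cuba

# CubaCuhre's wrapper asserts maxiters > 1000, so 1100 clears the check.
strategy = QuadratureTraining(; quadrature_alg = CubaCuhre(),
                              reltol = 1e-6, abstol = 1e-3,
                              maxiters = 1100)
```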

@ChrisRackauckas
Member

https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/

It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Jan 19, 2025

> https://docs.sciml.ai/SciMLBenchmarksOutput/v0.5/PINNErrorsVsTime/diffusion_et/
>
> It's supposed to just show the error over time and then get cut off. I don't see why making it longer would help.

Yes. I set it to that number just to get rid of that error.

@ChrisRackauckas
Member

Wait, what's the error?

@ParamThakkar123
Contributor Author

> Wait, what's the error?

The error went away when I ran it again.

@ChrisRackauckas
Member

What error?

@ParamThakkar123
Contributor Author

maxiters should be a number greater than 1000

@ChrisRackauckas
Member

Can you please just show the error...

@ParamThakkar123
Contributor Author

```
AssertionError: maxiters for CubaCuhre(0, 0, 0) should be larger than 1000

Stacktrace:
  [1] __solvebp_call(prob::IntegralProblem{…}, alg::CubaCuhre, sensealg::Integrals.ReCallVJP{Integrals.ZygoteVJP}, lb::Vector{Float64}, ub::Vector{Float64}, p::ComponentVector{…}; reltol::Float64, abstol::Float64, maxiters::Int64)
      @ IntegralsCuba C:\Users\Hp\.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:139
  [2] __solvebp_call
      @ C:\Users\Hp\.julia\packages\IntegralsCuba\xueKH\src\IntegralsCuba.jl:134 [inlined]
  [3] #__solvebp_call#4
      @ C:\Users\Hp\.julia\packages\Integrals\d3rQd\src\common.jl:95 [inlined]
  [4] __solvebp_call
      @ C:\Users\Hp\.julia\packages\Integrals\d3rQd\src\common.jl:94 [inlined]
  [5] #rrule#5
      @ C:\Users\Hp\.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:17 [inlined]
  [6] rrule
      @ C:\Users\Hp\.julia\packages\Integrals\d3rQd\ext\IntegralsZygoteExt.jl:14 [inlined]
  [7] rrule
      @ C:\Users\Hp\.julia\packages\ChainRulesCore\U6wNx\src\rules.jl:144 [inlined]
  [8] chain_rrule_kw
      @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\chainrules.jl:236 [inlined]
  [9] macro expansion
      @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:0 [inlined]
 [10] _pullback
      @ C:\Users\Hp\.julia\packages\Zygote\TWpme\src\compiler\interface2.jl:91 [inlined]
 [11] solve!
      @ C:\Users\Hp\.julia\packages\Integrals\d3rQd\src\common.jl:84 [inlined]
 ...
      @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\szsYq\src\solve.jl:162
 [55] solve(::OptimizationProblem{…}, ::Optimisers.Adam; kwargs::@Kwargs{callback::var"#11#18"{…}, maxiters::Int64})
      @ SciMLBase C:\Users\Hp\.julia\packages\SciMLBase\szsYq\src\solve.jl:83
 [56] allen_cahn(strategy::QuadratureTraining{CubaCuhre, Float64}, minimizer::Optimisers.Adam, maxIters::Int64)
      @ Main e:\SciMLBenchmarks.jl\benchmarks\PINNErrorsVsTime\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:105
```

(long NeuralPDE/ComponentArrays type parameters elided as {…})

@ChrisRackauckas
Member

I see, that's for the sampling algorithm. You should only need that on Cuhre?

@ParamThakkar123
Contributor Author

> I see, that's for the sampling algorithm. You should only need that on Cuhre?

Yes, but since Cuhre was just the first one in line, I thought setting maxiters to 1100 only for it would not solve the problem, so I set it to 1100 for all of them.

@ParamThakkar123
Contributor Author

The CI has passed here, and all the code seems to run correctly. Can you please review?

@ChrisRackauckas
Member

@ArnoStrouwen, can you remind me what the purpose behind SciML/Integrals.jl#124 was?

@ArnoStrouwen
Member

I don't remember myself, but that PR links to:
SciML/Integrals.jl#47

@ChrisRackauckas
Member

Uninitialized memory in the original C: giordano/Cuba.jl#12 (comment). Fantastic stuff, numerical community; that's the classic method everyone points to when they say "all of the old stuff is robust" 😅

@ChrisRackauckas
Member

Can you force latest majors and make sure the manifest resolves?

@ParamThakkar123
Contributor Author

I force-bumped to the latest versions and resolved the manifest, but initially there were a lot of version conflicts. I removed IntegralsCuba and IntegralsCubature for a while to resolve them. The manifest resolved, but adding both of them back again introduces more version conflicts.
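(For reference, the resolution workflow is roughly the following; the project path is illustrative, and forcing latest majors additionally means raising the [compat] bounds in the Project.toml.)

```julia
using Pkg

Pkg.activate("benchmarks/PINNErrorsVsTime")  # benchmark project; path illustrative
Pkg.update()     # move packages up to the newest versions [compat] allows
Pkg.resolve()    # check that the Manifest resolves against the Project
Pkg.status()     # inspect the resulting versions
```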

@ChrisRackauckas
Member

Can you share the resolution errors?

@ParamThakkar123
Contributor Author

[Screenshots: Pkg version-resolution error output]

@ChrisRackauckas These are the resolution errors that occur.

@ChrisRackauckas
Member

Oh, those were turned into extensions. Change `using IntegralsCuba, IntegralsCubature` into `using Integrals, Cuba, Cubature`, and change the dependencies to depend directly on Cuba and Cubature.
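A minimal sketch of that change, assuming the benchmark loads the quadrature backends at the top of the script:

```julia
# Before: the old glue packages
# using IntegralsCuba, IntegralsCubature

# After: loading Cuba and Cubature alongside Integrals activates the
# package extensions that provide CubaCuhre(), CubatureJLh(), etc.
using Integrals, Cuba, Cubature
```

The benchmark's Project.toml then lists Integrals, Cuba, and Cubature as direct dependencies in place of IntegralsCuba and IntegralsCubature.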

@ParamThakkar123
Contributor Author

Sure! 🫡

@ParamThakkar123
Contributor Author

Should I run this again, @ChrisRackauckas?

@ParamThakkar123
Contributor Author

@ChrisRackauckas The maximum value in the times array is 66.796. This value comes from QuasiRandomTraining + BFGS.

@ChrisRackauckas
Member

Is it just one time point? It's definitely an outlier. You got rid of the maximum time part: put that back?

@ParamThakkar123
Contributor Author

No, it's an array of two values:
"11" => [66.5992, 66.796]

I reported the maximum of the two. I haven't removed the maximum-time part from the implementation; it's still there, but the whole benchmark takes around 30 hours to run across all the algorithms, so I only re-ran the most suspect outlier.

@ParamThakkar123
Contributor Author

Any suggestions for further changes to this implementation, @ChrisRackauckas?

@ParamThakkar123
Contributor Author

Training with QuasiRandomTraining:
Dict{Any, Any} with 2 entries:
"12" => [9.60293, 10.3536]
"11" => [79.1825, 79.3383]

Training with QuadratureTraining CubaCuhre():
Dict{Any, Any} with 2 entries:
"12" => [0.0060337, 0.0161921]
"11" => [0.0056472, 0.0096335]

Training with QuadratureTraining HCubatureJL():
Dict{Any, Any} with 2 entries:
"12" => [0.0048781, 0.0403565]
"11" => [0.0055059, 0.0097088]

Training with QuadratureTraining CubatureJLh():
Dict{Any, Any} with 2 entries:
"12" => [0.0099027, 0.0466078]
"11" => [0.0041709, 0.0073577]

Training with QuadratureTraining CubatureJLp():
Dict{Any, Any} with 2 entries:
"12" => [0.0048316, 0.292919]
"11" => [0.0078547, 0.0157797]

Training with GridTraining:
Dict{Any, Any} with 2 entries:
"12" => [0.0088805, 0.120971]
"11" => [0.0086563, 0.0147051]

Training with StochasticTraining:
Dict{Any, Any} with 2 entries:
"12" => [0.0083882, 0.0354763]
"11" => [0.0110269, 0.0161864]

@ParamThakkar123
Contributor Author

> Is it just one time point? It's definitely an outlier. You got rid of the maximum time part: put that back?

So I should remove them, right?

@ParamThakkar123
Contributor Author

@ChrisRackauckas Any further code changes required here? Are we ready to merge this?

@ChrisRackauckas
Member

That plot hasn't changed.

@ParamThakkar123
Contributor Author

> Is it just one time point? It's definitely an outlier. You got rid of the maximum time part: put that back?

Yeah. So what parameters exactly should I consider changing?

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Feb 22, 2025

The times array? Or should I remove the outliers? Or some algorithms?

@ChrisRackauckas
Member

The previous version had time caps to fix this. They should probably be re-added.
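A minimal sketch of such a cap, assuming it lives in the solve callback (the timeout value and logging arrays are illustrative; in Optimization.jl, a callback returning true halts the run):

```julia
using Optimization, OptimizationOptimisers

times, losses = Float64[], Float64[]
start_time = time()
timeout = 100.0  # seconds; illustrative wall-clock cap

callback = function (p, loss)
    elapsed = time() - start_time
    push!(times, elapsed)     # error-vs-time log
    push!(losses, loss)
    return elapsed > timeout  # true => stop optimizing at the cap
end

# prob: the OptimizationProblem built from the PINN discretization
res = solve(prob, OptimizationOptimisers.Adam(); callback = callback, maxiters = 1100)
```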

@ParamThakkar123
Contributor Author

[Screenshot: updated error-vs-time plot]

Dict{Any, Any} with 14 entries:
"32" => [0.0336096, 0.124406]
"12" => [8.69697, 9.07318]
"41" => [0.0038361, 0.0086331]
"11" => [115.273, 115.443]
"22" => [0.0079326, 0.114546]
"51" => [0.0037149, 0.008581]
"31" => [0.0042777, 0.0096933]
"61" => [0.0062327, 0.0128282]
"42" => [0.0071377, 0.0256434]
"62" => [0.0254246, 0.145936, 0.199219, 0.222898]
"52" => [0.0057367, 0.0159064, 0.0257179, 0.1677, 0.175674, 0.225474, 0.29907…
"21" => [0.0050241, 0.0088035]
"72" => [0.0043152, 0.137986, 0.261313, 0.279204, 0.307742, 0.319465, 0.33178…
"71" => [0.0051946, 0.520155, 0.525374, 0.529693, 0.535312, 0.5403, 0.56997, …

@ChrisRackauckas This is the new output after adding time caps.

@ChrisRackauckas
Member

Is there something under the legend? Move it out.

@ParamThakkar123
Contributor Author

Oh! I see why this was happening: the code is actually correct, but the graphs were coming out wonky and too small because of the number of iterations being performed.

@ParamThakkar123
Contributor Author

When I reduced the iterations to 100, this is what I got:

[Screenshot: error-vs-time plot at 100 iterations]

which I think looks reasonable.

@ParamThakkar123
Contributor Author

@ChrisRackauckas Your views on this? Is this how it should be?

@ParamThakkar123
Contributor Author

@ChrisRackauckas Could you please review?

@ChrisRackauckas
Member

Can you log the t there?

@ParamThakkar123
Contributor Author

> Can you log the t there?

Added the code to log t for each algorithm.
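A sketch of what the log-scaled t axis looks like with Plots.jl; the error_vs_time dictionary is an illustrative stand-in for the logged data, and legend = :outertopright addresses the earlier note about the legend covering the curves:

```julia
using Plots

# Illustrative stand-in: algorithm label => (times, errors)
error_vs_time = Dict("GridTraining" => ([0.01, 0.1, 1.0], [1e-1, 1e-2, 1e-3]))

plt = plot(; xscale = :log10, yscale = :log10,
           xlabel = "time (s)", ylabel = "error",
           legend = :outertopright)  # keep the legend clear of the curves
for (label, (ts, errs)) in error_vs_time
    plot!(plt, ts, errs; label = label)
end
display(plt)
```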

@ParamThakkar123
Contributor Author

@ChrisRackauckas All checks have passed; the graph still appears the same though 😅. Could you please review? I have logged t there like you said.

@ParamThakkar123
Contributor Author

@ChrisRackauckas All checks have passed; the graph still appears the same though 😅.

@ChrisRackauckas Probably the graph is a bit zoomed out?

@ParamThakkar123
Contributor Author

ParamThakkar123 commented Feb 26, 2025

@ChrisRackauckas Further changes to the code don't seem to change the graph much. Any more tweaks you might suggest?

@ParamThakkar123
Contributor Author

The rest of the code works fine and is updated to the latest version bump.

@ParamThakkar123
Contributor Author

Are we ready to merge this?

@ParamThakkar123
Contributor Author

@ChrisRackauckas Could you please review?

@ChrisRackauckas @avik-pal ???

@ChrisRackauckas
Member

You keep pinging but the plot still isn't fixed.

@ParamThakkar123
Contributor Author

> You keep pinging but the plot still isn't fixed.

Yes, the plot isn't fixed. The reason I kept pinging was to ask whether there are alternate methods to fix this, because I have tried a lot of variations and still observed no significant changes. And every push I make to the repo takes 48+ hours to get through CI, and the results turn out to be something I didn't expect or get when I ran them locally.
