Performance decreasing after eval() calls #19013

Closed
tmptrash opened this issue Oct 18, 2016 · 19 comments
Labels: performance (Must go faster)

Comments

@tmptrash

tmptrash commented Oct 18, 2016

I use tasks in my app and generate a lot of code with the eval function, and I found that every subsequent eval call slows the application down a little bit. Look at this code (and my comment in it):

type Code
  code::Expr
  codeFn::Function
  task::Task
  Code(c::Expr, cf::Function) = new(c, cf)
end

function born(c::Code, task::Task)
  while true
    c.codeFn(task)
  end
end

function loop()
  local count::Int = 500
  local i::Int = 0
  local k::Int
  local l::Int
  local s::Float64 = time()
  local ex::Expr = :(function (t) local i::Int = 1; for i=1:3 yieldto(t, i) end end)
  local fn::Function
  local code::Code
  local curTask::Task = current_task()
  local t::Task
  local idx::Int
  local tasks::Array{Code, 1} = [(fn=eval(ex); code=Code(ex,fn); t=Task(()->born(code,curTask)); code.task=t; code) for i=1:count]

  while true
    for l = 1:count
      yieldto(tasks[l].task)
    end

    # mutation of random task/code
    idx = rand(1:count)
    tasks[idx].codeFn = eval(ex) # if i comment this line everything works fine
    tasks[idx].task   = Task(()->born(tasks[idx],curTask))

    if time() - s > 1.0
      println("times: ", i)
      s = time()
      i = 0
    end
    i += 1
  end
end

loop()

Here is the output:

julia> include("tmp\\taskPerf.jl")
times: 0
times: 89
times: 92
times: 102
times: 112
times: 99
times: 90
times: 90
times: 91
...

and 2 minutes later:

times: 27
times: 28
times: 28
times: 27
times: 28
times: 28
times: 28
times: 28
times: 26
times: 25
times: 27
times: 28
...

After a few more minutes of running, the times drop to zero.

julia> versioninfo()
Julia Version 0.5.0
Commit 3c9d753 (2016-09-19 18:14 UTC)
Platform Info:
  System: NT (x86_64-w64-mingw32)
  CPU: Intel(R) Core(TM) i7-4700HQ CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.7.1 (ORCJIT, haswell)

So, can you guys explain why eval() causes these delays?
Thanks.

@yuyichao
Contributor

This is expected: you are creating more and more types.

@tmptrash
Author

tmptrash commented Oct 18, 2016

Can you explain?
I'm only evaluating ex. What types are you talking about?

Thanks.

@yuyichao
Contributor

Each eval creates a new function, and the compiler is optimized for the opposite case: a fixed set of functions that are compiled once and called many times.
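
A minimal sketch of that point (ex, f1, and f2 are throwaway illustrations, not from the original report): evaluating the same anonymous-function Expr twice yields two functions with distinct types, so code compiled for the first cannot be reused for the second.

ex = :(t -> t + 1)                 # any anonymous-function expression

f1 = eval(ex)
f2 = eval(ex)

println(typeof(f1) == typeof(f2))  # false: two distinct generated function types
println(f1(1) == f2(1))            # true: identical behavior, but compiled twice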

@JeffBezanson
Member

Probably a duplicate of #18446

@tmptrash
Author

Is it possible to fix this somehow by changing my code, or is this specific to eval?

@tmptrash
Author

Yes, #18446 looks very similar to me too.

@JeffBezanson
Member

I would try to use closures instead of eval if possible.
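
A hedged sketch of that suggestion (make_codefn is a hypothetical helper, not part of the original code): the task body is written once as a closure, so every call reuses the same compiled method and only the captured data varies.

# Sketch: the closure is defined once in the source, so make_codefn(3),
# make_codefn(5), etc. all reuse the same compiled method; only the
# captured value `n` differs.
function make_codefn(n::Int)
  function codefn(t)
    for i = 1:n
      yieldto(t, i)
    end
  end
  return codefn
end

# In the original loop this would replace `tasks[idx].codeFn = eval(ex)`:
# tasks[idx].codeFn = make_codefn(3)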

@tmptrash
Author

tmptrash commented Oct 18, 2016

This example is simple, but in my real code I use many ASTs, modify them at run time, and recompile them using eval. So in my case it's impossible to avoid the many eval calls :(

@jpsamaroo
Member

@tmptrash I know this isn't a real fix for this issue, but I managed to implement a simple interpreter for ASTs for my own use case, which is pretty fast and stable in terms of memory usage (sub-linear memory growth per new "function", with growth decreasing over time).

For initial compilation, it evals small segments of the AST and caches the resulting function together with its associated Expr tree in a Dict for later use. It then swaps out the Expr segment with a user-defined type CachedExpr, which references that AST segment and its function. When the interpreter is run on this modified AST, each CachedExpr's attached function is called (passing in arguments as needed), recursively calling other attached CachedExprs in the process. It's definitely not the solution to this problem, but it's the lesser of two evils in this case. It also requires that you define each Expr segment that you'll be using, so it's a bit more work up front. Still, if you're interested, I'd be happy to share my code and write some documentation detailing how it's used.

In the meantime, I'll take Jeff's suggestion and try to profile calls to eval to see what's going on.
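
A rough sketch of the caching approach described above, under the assumption that Expr segments repeat; CachedExpr, EXPR_CACHE, cache_segment, and run_segment are illustrative names, not the actual implementation.

# Sketch: eval each Expr segment at most once, cache the resulting function
# keyed by the Expr, and wrap both in a CachedExpr node so later runs call
# the compiled function instead of calling eval again.
struct CachedExpr
  ex::Expr
  fn::Function
end

const EXPR_CACHE = Dict{Expr, Function}()

function cache_segment(ex::Expr)
  fn = get!(EXPR_CACHE, ex) do
    eval(ex)                 # compiled only the first time this Expr is seen
  end
  return CachedExpr(ex, fn)
end

# Later runs call the stored function; on Julia 0.6+ this call may need
# Base.invokelatest (see the world-age error further down in the thread).
run_segment(c::CachedExpr, args...) = c.fn(args...)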

@tmptrash
Author

Thanks for the advice, but that's a really huge piece of work for me :( It's simpler just to restart the app from time to time...

@JeffBezanson changed the title from "Perormance decreasing after eval() calls" to "Performance decreasing after eval() calls" Oct 20, 2016
@tmptrash
Author

tmptrash commented Dec 2, 2016

Any updates regarding this issue?

@StefanKarpinski
Member

I'm afraid that this is not a high-priority use case for the time being – we're trying to get the 0.6 release in good shape by the end of the month. Honestly, even in the pre-1.0 time frame it's hard to see this getting prioritized, which means that it will only get investigated if someone really wants to do it...

@tmptrash
Author

tmptrash commented Dec 3, 2016

:(

@johnmyleswhite
Member

@tmptrash, I know this isn't the answer you want, but note that @StefanKarpinski's response isn't that Julia couldn't optimize for this use case: he's just noting that you yourself would probably need to do the work on the compiler to make this happen. I'm pretty confident that a full 100% of the Julia user base would appreciate having another person in the community dedicate themselves to becoming an expert compiler developer.

@kshyatt added the performance (Must go faster) label Dec 3, 2016
@StefanKarpinski
Member

Just so. This is a fairly niche use case – one that needs to be driven forward by someone who has need for it. That person could be you.

@tmptrash
Author

tmptrash commented Dec 5, 2016

I really want to contribute, but in real life I'm just a JavaScript developer :-D And it's hard for me to fix the compiler's code :( The only workaround I found is to restart the process and continue from a backup... Anyway, I'm waiting for good news :)

@fredrikekre
Member

Master:

julia> loop()
times: 100
times: 164
times: 162
ERROR (unhandled task failure): MethodError: no method matching (::##1887#1888)(::Task)
The applicable method may be too new: running in world age 22491, while current world is 22929.
Closest candidates are:
  #1887(::Any) at REPL[45]:7 (method too new to be called from this world context.)
Stacktrace:
 [1] born at ./REPL[44]:3 [inlined]
 [2] (::##10#13{Task})() at ./REPL[45]:13 # hangs here, then ^C
^Cfatal: error thrown and no exception handler available.
InterruptException()
jl_run_once at /home/fredrik/julia-master/src/jl_uv.c:132
process_events at ./libuv.jl:82 [inlined]
wait at ./event.jl:216
task_done_hook at ./task.jl:256
unknown function (ip: 0x7f1bfa4802eb)
jl_call_fptr_internal at /home/fredrik/julia-master/src/julia_internal.h:353 [inlined]
jl_call_method_internal at /home/fredrik/julia-master/src/julia_internal.h:372 [inlined]
jl_apply_generic at /home/fredrik/julia-master/src/gf.c:1923
jl_apply at /home/fredrik/julia-master/src/julia.h:1424 [inlined]
finish_task at /home/fredrik/julia-master/src/task.c:232
start_task at /home/fredrik/julia-master/src/task.c:275
unknown function (ip: 0xffffffffffffffff)

@martinholters
Member

The OP's example would need to be updated with an invokelatest.
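
A minimal sketch of that update, assuming only born needs to change: Base.invokelatest always dispatches in the latest world, so it can call a codeFn that was eval'd after born was compiled.

# Sketch: c.codeFn may have been eval'd after this method was compiled, which
# makes it "too new" for the caller's world age on Julia 0.6+; invokelatest
# sidesteps that by dispatching in the current world.
function born(c::Code, task::Task)
  while true
    Base.invokelatest(c.codeFn, task)
  end
end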

@vtjnash
Member

vtjnash commented Feb 10, 2024

Closing as duplicate of #18446

@vtjnash closed this as completed Feb 10, 2024