
Evaluation into the closed module Transformers breaks incremental compilation (enable_gpu) #108

Closed
Broever101 opened this issue Aug 5, 2022 · 5 comments

@Broever101

module Outer

using Transformers
enable_gpu()

end

On the REPL:

using Outer

[ Info: Precompiling Outer [7d29fca4-1b1c-4df5-92e9-f71c99fd1fc4]
ERROR: LoadError: Evaluation into the closed module `Transformers` breaks incremental compilation because the side effects will not be permanent. This is likely due to some other module mutating `Transformers` with `eval` during precompilation - don't do this.
Stacktrace:
  [1] eval
    @ .\boot.jl:373 [inlined]
  [2] enable_gpu(t::Bool)
    @ Transformers C:\Users\Admin\.julia\packages\Transformers\dAmXK\src\Transformers.jl:45
  [3] enable_gpu()
    @ Transformers C:\Users\Admin\.julia\packages\Transformers\dAmXK\src\Transformers.jl:43
  [4] top-level scope
    @ C:\Users\Admin\Desktop\work\tweet-classification\Outer\src\Outer.jl:4
  [5] include
    @ .\Base.jl:418 [inlined]
  [6] include_package_for_output(pkg::Base.PkgId, input::String, depot_path::Vector{String}, dl_load_path::Vector{String}, load_path::Vector{String}, concrete_deps::Vector{Pair{Base.PkgId, UInt64}}, source::Nothing)
    @ Base .\loading.jl:1318
  [7] top-level scope
    @ none:1
  [8] eval
    @ .\boot.jl:373 [inlined]
  [9] eval(x::Expr)
    @ Base.MainInclude .\client.jl:453
 [10] top-level scope
    @ none:1
in expression starting at C:\Users\Admin\Desktop\work\tweet-classification\Outer\src\Outer.jl:1
ERROR: Failed to precompile Outer [7d29fca4-1b1c-4df5-92e9-f71c99fd1fc4] to C:\Users\Admin\.julia\compiled\v1.7\Outer\jl_BFE0.tmp.
Stacktrace:
 [1] error(s::String)
   @ Base .\error.jl:33
 [2] compilecache(pkg::Base.PkgId, path::String, internal_stderr::IO, internal_stdout::IO, ignore_loaded_modules::Bool)
   @ Base .\loading.jl:1466
 [3] compilecache(pkg::Base.PkgId, path::String)
   @ Base .\loading.jl:1410
 [4] _require(pkg::Base.PkgId)
   @ Base .\loading.jl:1120
 [5] require(uuidkey::Base.PkgId)
   @ Base .\loading.jl:1013
 [6] require(into::Module, mod::Symbol)
   @ Base .\loading.jl:997

The culprit is the @eval inside Transformers.jl:

function enable_gpu(t::Bool=true)
    if t
        CUDA.functional() || error("CUDA not functional")
        @eval todevice(args...) = togpudevice(args...)
    else
        @eval todevice(args...) = tocpudevice(args...)
    end
end
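
For context, the failure is not specific to Outer: @eval here defines a method inside the already-closed Transformers module, and when that happens while another package is being precompiled the new definition cannot be cached, which is exactly what the error message says. A precompile-safe variant of the same toggle (purely illustrative, not the package's actual implementation) could switch on a runtime flag instead of redefining the method:

# Illustrative sketch only; togpudevice/tocpudevice are the helpers used above.
const use_gpu = Ref(false)

enable_gpu(t::Bool = true) = (use_gpu[] = t)

todevice(args...) = use_gpu[] ? togpudevice(args...) : tocpudevice(args...)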
@chengchingwen
Owner

enable_gpu should not be called during precompilation. You can either call it at the beginning of your script or put it in the __init__() of your module.
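
For the module case, a minimal sketch of the __init__ approach (using the Outer module from the report; the body is illustrative):

module Outer

using Transformers

function __init__()
    # __init__ runs when the module is loaded, after the precompile step,
    # so the @eval inside enable_gpu no longer executes while Outer is
    # being precompiled.
    enable_gpu()
end

end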

@Broever101
Author

Putting it in __init__ worked. Also, is there an inference example for pre-trained BERT?

@chengchingwen
Owner

chengchingwen commented Aug 5, 2022

Currently, no. What do you want to do? We can try to add it to the examples.

Here is a simple example of using an NER model from Hugging Face (with Transformers.jl v0.1.20 and Julia v1.7):

using Flux
using Transformers.Basic
using Transformers.HuggingFace

# Load the tokenizer, the token-classification model, and the config from the Hub.
tkr = hgf"dslim/bert-base-NER:tokenizer"
bert_model = hgf"dslim/bert-base-NER:fortokenclassification"
cfg = hgf"dslim/bert-base-NER:config"

a = encode(tkr, ["My name is Wolfgang and I live in Berlin"])
# One label id per token; cfg.id2label uses 0-based Hugging Face ids, hence the i-1 below.
y = Flux.onecold(bert_model(a.input.tok; token_type_ids = a.input.segment).logits)

julia> [decode(tkr, a.input.tok);; map(i->cfg.id2label[i-1], y)]
11×2 Matrix{String}:
 "[CLS]"     "O"
 "My"        "O"
 "name"      "O"
 "is"        "O"
 "Wolfgang"  "B-PER"
 "and"       "O"
 "I"         "O"
 "live"      "O"
 "in"        "O"
 "Berlin"    "B-LOC"
 "[SEP]"     "O"

@Broever101
Author

Broever101 commented Aug 5, 2022


I want to do sentiment analysis. I've got the classifier code running (like the CoLA example) without the GPU. I want something like:

pipeline("This is a positive comment")
>Sentiment: 1 (positive)

@chengchingwen
Owner

Unfortunately, we don't have an inference API right now. Let's track that in another issue.
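
In the meantime, a rough way to approximate such a pipeline is to mirror the NER example above with a sequence-classification head. The checkpoint name and the :forsequenceclassification loader below are assumptions (whether a given model loads depends on the architectures Transformers.jl v0.1.20 supports), so treat this as a sketch rather than a tested recipe:

using Flux
using Transformers.Basic
using Transformers.HuggingFace

# Assumed checkpoint: a BERT-style model fine-tuned for sentiment classification.
tkr = hgf"nlptown/bert-base-multilingual-uncased-sentiment:tokenizer"
clf = hgf"nlptown/bert-base-multilingual-uncased-sentiment:forsequenceclassification"
cfg = hgf"nlptown/bert-base-multilingual-uncased-sentiment:config"

# Hypothetical helper wrapping encode + forward pass + label lookup.
function sentiment(text)
    a = encode(tkr, [text])
    y = Flux.onecold(clf(a.input.tok; token_type_ids = a.input.segment).logits)
    cfg.id2label[only(y) - 1]   # id2label uses 0-based Hugging Face label ids
end

sentiment("This is a positive comment")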
