Merge branch 'JuliaLogging:master' into fix-MultilineChartContent-not-defined
ldeso authored Oct 10, 2023
2 parents 2ad5f15 + 37bae6b commit 00f4adb
Showing 134 changed files with 10,970 additions and 9,796 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/UnitTest.yml
@@ -13,7 +13,7 @@ jobs:
fail-fast: false
matrix:
os: [macos-latest, ubuntu-latest]
julia_version: ["1.3", "1", "nightly"]
julia_version: ["1.6", "1", "nightly"]

runs-on: ${{ matrix.os }}
env:
2 changes: 1 addition & 1 deletion .gitignore
@@ -6,4 +6,4 @@ test/test_logs
docs/Manifest.toml

gen/proto
- gen/protojl
+ gen/protojl
13 changes: 8 additions & 5 deletions Project.toml
@@ -1,7 +1,7 @@
name = "TensorBoardLogger"
uuid = "899adc3e-224a-11e9-021f-63837185c80f"
authors = ["Filippo Vicentini <[email protected]>"]
version = "0.1.20"
version = "0.1.23"

[deps]
CRC32c = "8bf52ea8-c179-5cab-976a-9e18b702a9bc"
@@ -14,17 +14,20 @@ StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
[compat]
FileIO = "1.2.3"
ImageCore = "0.8.1, 0.9"
ProtoBuf = "0.10, 0.11"
ProtoBuf = "1.0.11"
Requires = "0.5, 1"
StatsBase = "0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33"
julia = "1.3"
StatsBase = "0.27, 0.28, 0.29, 0.30, 0.31, 0.32, 0.33, 0.34"
julia = "1.6"

[extras]
Minio = "4281f0d9-7ae0-406e-9172-b7277c1efa20"
Cairo = "159f3aea-2a34-519c-b102-8c37f9878175"
Fontconfig = "186bb1d3-e1f7-5a2c-a377-96d770f13627"
Gadfly = "c91e804a-d5a3-530f-b6f0-dfbca275c004"
ImageMagick = "6218d12a-5da1-5696-b52f-db25d2ecc6d1"
LightGraphs = "093fc24a-ae57-5d10-9952-331d41423f4d"
Logging = "56ddb016-857b-54e1-b83d-db4d58db5568"
MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
Minio = "4281f0d9-7ae0-406e-9172-b7277c1efa20"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
PyPlot = "d330b81b-6aea-500a-939a-2ce795aea3ee"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
6 changes: 3 additions & 3 deletions README.md
@@ -39,8 +39,8 @@ logger in Julia:

You can log to TensorBoard any type. Numeric types will be logged as scalar,
arrays will be binned into histograms, images and audio will be logged as such,
- and we even support [Plots](https://github.com/JuliaPlots/Plots.jl) and
- [PyPlot](https://github.com/JuliaPlots/Plots.jl) figures!
+ and we even support [Plots](https://github.com/JuliaPlots/Plots.jl),
+ [PyPlot](https://github.com/JuliaPlots/Plots.jl) and [Gadfly](https://github.com/GiovineItalia/Gadfly.jl) figures!

For details about how types are logged by default, or how to customize this behaviour for your custom types,
refer to the documentation or the examples folder.
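As a quick illustration of the default behaviour described above, here is a minimal sketch; the logger directory and tag names are made up for this example, and only APIs that appear elsewhere in this diff are used:

```julia
using TensorBoardLogger, Logging

lg = TBLogger("readme_demo_logs")          # illustrative directory name
with_logger(lg) do
    @info "metrics" loss = 0.42            # a Real is sent to the scalar backend
    @info "weights" w = randn(100)         # a numeric vector is binned into a histogram
    @info "notes" remark = "any other type falls back to the text backend"
end
```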
@@ -71,7 +71,7 @@ end
```

## Integration with third party packages
- We also support native logging of the types defined by a few third-party packages, such as `Plots` and `PyPlot` plots.
+ We also support native logging of the types defined by a few third-party packages, such as `Plots`, `PyPlot` and `Gadfly` plots.
If there are other libraries that you think we should include in the list, please open an issue.

## Roadmap
4 changes: 3 additions & 1 deletion docs/make.jl
@@ -10,11 +10,13 @@ makedocs(
"Backends" => "custom_behaviour.md",
"Reading back data" => "deserialization.md",
"Extending" => "extending_behaviour.md",
"Explicit Interface" => "explicit_interface.md"
"Explicit Interface" => "explicit_interface.md",
"Hyperparameter logging" => "hyperparameters.md"
],
"Examples" => Any[
"Flux.jl" => "examples/flux.md"
"Optim.jl" => "examples/optim.md"
"Hyperparameter tuning" => "examples/hyperparameter_tuning.md"
]
],
format = Documenter.HTML(
2 changes: 1 addition & 1 deletion docs/src/custom_behaviour.md
@@ -5,7 +5,7 @@ value is sent to:

- `::AbstractVector{<:Real}` -> [Histogram backend](https://www.tensorflow.org/guide/tensorboard_histograms) as a vector
- `::StatsBase.Histogram` -> [Histogram backend](https://www.tensorflow.org/guide/tensorboard_histograms)
- - `(bin_edges, weights)::Tuple{AbstractVector,AbstractVector}` where `length(bin_edges)==length(weights)+1`, is interpreted as an histogram. (*Will be deprecated.* Please use `TBHistogram(edges, weights)` for this).
+ <!-- - `(bin_edges, weights)::Tuple{AbstractVector,AbstractVector}` where `length(bin_edges)==length(weights)+1`, is interpreted as an histogram. (*Will be deprecated.* Please use `TBHistogram(edges, weights)` for this). -->
- `::Real` -> Scalar backend
- `::AbstractArray{<:Colorant}` -> [Image backend](https://www.tensorflow.org/tensorboard/r2/image_summaries)
- `::Any` -> Text Backend
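Since the tuple form above is being phased out, here is a short sketch of the `TBHistogram` style the note recommends; the constructor signature is taken from the doc text itself, while the directory and tag names are illustrative:

```julia
using TensorBoardLogger, Logging

lg = TBLogger("hist_demo_logs")     # illustrative directory name
edges = collect(-5:0.5:5)
weights = rand(length(edges) - 1)   # one weight per bin
with_logger(lg) do
    # the explicit wrapper recommended above, instead of the deprecated tuple
    @info "histogram" manual = TBHistogram(edges, weights)
end
```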
53 changes: 53 additions & 0 deletions docs/src/examples/hyperparameter_tuning.md
@@ -0,0 +1,53 @@
# Hyperparameter tuning

We will start this example by setting up a simple random walk experiment and examining the effect of the hyperparameter `bias` on the results.

First, import the packages we will need with:
```julia
using TensorBoardLogger, Logging
using Random
```
Next, we will create a function which runs the experiment and logs the results, including the hyperparameters stored in the `config` dictionary.
```julia
function run_experiment(id, config)
logger = TBLogger("random_walk/run$id", tb_append)

# Specify all the metrics we want to track in a list
metric_names = ["scalar/position"]
write_hparams!(logger, config, metric_names)

epochs = config["epochs"]
sigma = config["sigma"]
bias = config["bias"]
with_logger(logger) do
x = 0.0
for i in 1:epochs
x += sigma * randn() + bias
@info "scalar" position = x
end
end
nothing
end
```
Now we can write a script which runs an experiment over a set of parameter values.
```julia
id = 0
for bias in LinRange(-0.1, 0.1, 11)
for epochs in [50, 100]
config = Dict(
"bias"=>bias,
"epochs"=>epochs,
"sigma"=>0.1
)
run_experiment(id, config)
id += 1
end
end
```

Below is an example of the dashboard you get when you open TensorBoard with the command:
```sh
tensorboard --logdir=random_walk
```

![tuning plot](tuning.png)
Binary file added docs/src/examples/tuning.png
2 changes: 1 addition & 1 deletion docs/src/explicit_interface.md
@@ -44,7 +44,7 @@ See [TensorBoard Custom Scalar page](https://github.com/tensorflow/tensorboard/t

For example, to combine the two curves logged under tags `"Curve/1"` and `"Curve/2"` in the same plot panel, you can run the following command once:
```julia
- layout = Dict("Cat" => Dict("Curve" => ("Multiline", ["Curve/1", "Curve/2"])))
+ layout = Dict("Cat" => Dict("Curve" => (tb_multiline, ["Curve/1", "Curve/2"])))

log_custom_scalar(lg, layout)

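To show the corrected `tb_multiline` constant in context, here is a sketch of the full round trip — registering the layout once, then logging the two curves it groups. The directory name and the logged values are arbitrary; `log_value` is the package's explicit scalar-logging function:

```julia
using TensorBoardLogger

lg = TBLogger("custom_scalar_logs")   # illustrative directory name
layout = Dict("Cat" => Dict("Curve" => (tb_multiline, ["Curve/1", "Curve/2"])))
log_custom_scalar(lg, layout)         # register the combined panel once

for i in 1:100
    log_value(lg, "Curve/1", sin(i / 10), step = i)
    log_value(lg, "Curve/2", cos(i / 10), step = i)
end
```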
2 changes: 1 addition & 1 deletion docs/src/extending_behaviour.md
@@ -12,7 +12,7 @@ At the end of this step, every pair in `objects` will be logged to a specific
backend, according to the following rules:

- `::AbstractVector{<:Real}` -> [Histogram backend](https://www.tensorflow.org/guide/tensorboard_histograms) as a vector
- - `::Tuple{AbstractVector,AbstractVector}` [Histogram backend](https://www.tensorflow.org/guide/tensorboard_histograms) as an histogram
+ <!-- - `::Tuple{AbstractVector,AbstractVector}` [Histogram backend](https://www.tensorflow.org/guide/tensorboard_histograms) as an histogram -->
- `::Real` -> Scalar backend
- `::AbstractArray{<:Colorant}` -> [Image backend](https://www.tensorflow.org/tensorboard/r2/image_summaries)
- `::Any` -> Text Backend
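For reference, the extension hook this page documents is `preprocess`; below is a sketch of decomposing a custom type into loggable pairs so each piece is routed by the rules above. The `MyType` struct is hypothetical, invented for this illustration:

```julia
using TensorBoardLogger

# hypothetical container type, used only for this illustration
struct MyType
    x::Float64
    y::Vector{Float64}
end

# split a MyType into pairs; each resulting pair is dispatched by the rules above
function TensorBoardLogger.preprocess(name, val::MyType, data)
    push!(data, name * "/x" => val.x)   # Real -> scalar backend
    push!(data, name * "/y" => val.y)   # numeric vector -> histogram backend
    return data
end
```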
10 changes: 10 additions & 0 deletions docs/src/hyperparameters.md
@@ -0,0 +1,10 @@
# Hyperparameter logging

In addition to logging the experiments, you may also wish to visualise the effect of hyperparameters on some plotted metrics. This can be done by logging the hyperparameters via the `write_hparams!` function, which takes a dictionary mapping hyperparameter names to their values (currently limited to `Real`, `Bool` or `String` types), along with the names of any metrics that you want to view the effects of.

You can see how the HParams dashboard in TensorBoard can be used to tune hyperparameters on the [tensorboard website](https://www.tensorflow.org/tensorboard/hyperparameter_tuning_with_hparams).

## API
```@docs
write_hparams!
```
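A minimal sketch of the call described above, matching the usage in `examples/HParams.jl` later in this diff; the directory, parameter names, and metric tag are illustrative:

```julia
using TensorBoardLogger, Logging

lg = TBLogger("hparam_demo_logs")   # illustrative directory name
config = Dict{String,Any}(
    "lr" => 1e-3,                   # Real
    "use_bias" => true,             # Bool
    "method" => "MC",               # String
)
# metric names refer to tags that the run will log, e.g. a "scalar/loss" series
write_hparams!(lg, config, ["scalar/loss"])
```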
10 changes: 7 additions & 3 deletions docs/src/index.md
@@ -111,11 +111,15 @@ at [Reading back TensorBoard data](@ref)
We also support logging custom types from the following third-party libraries:
- [Plots.jl](https://github.com/JuliaPlots/Plots.jl): the `Plots.Plot` type will be rendered to PNG at the resolution specified by the object and logged as an image
- [PyPlot.jl](https://github.com/JuliaPy/PyPlot.jl): the `PyPlot.Figure` type will be rendered to PNG at the resolution specified by the object and logged as an image
+ - [Gadfly.jl](https://github.com/GiovineItalia/Gadfly.jl): the `Gadfly.Plot` type will be rendered to PNG at the resolution specified by the object and logged as an image. The `Cairo` and `Fontconfig` packages must be imported for this functionality to work, as they are required by `Gadfly`.
- [Tracker.jl](https://github.com/FluxML/Tracker.jl): the `TrackedReal` and `TrackedArray` types will be logged as vector data
- [ValueHistories.jl](https://github.com/JuliaML/ValueHistories.jl): the `MVHistory` type is used to store the deserialized content of .proto files.
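As one concrete instance of the integrations listed above, here is a sketch of logging a `Plots.Plot` as an image; the plot contents and directory name are arbitrary:

```julia
using TensorBoardLogger, Logging
using Plots

lg = TBLogger("plots_demo_logs")    # illustrative directory name
p = plot(1:100, cumsum(randn(100)), title = "random walk")
with_logger(lg) do
    @info "figures" walk = p        # a Plots.Plot is rendered to PNG and logged as an image
end
```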

## Explicit logging

- In alternative, you can also log data to TensorBoard through its functional interface,
- by calling the relevant method with a tag string and the data. For information
- on this interface refer to [Explicit interface](@ref)...
+ As an alternative, you can also log data to TensorBoard through its functional interface, by calling the relevant method with a tag string and the data. For information on this interface refer to [Explicit interface](@ref).

+ ## Hyperparameter tuning
+
+ Many experiments rely on hyperparameters, which can be difficult to tune. TensorBoard allows you to visualise the effect of your hyperparameters on your metrics, giving you an intuition for the correct hyperparameters for your task. For information on this API, see the [Hyperparameter logging](@ref) manual page.

14 changes: 14 additions & 0 deletions examples/Gadfly.jl
@@ -0,0 +1,14 @@
using TensorBoardLogger #import the TensorBoardLogger package
using Logging #import Logging package
using Gadfly, Cairo, Fontconfig

logger = TBLogger("Gadflylogs", tb_append) #create tensorboard logger

################log scalars example: y = x²################
#using logger interface
x = rand(100)
y = rand(100)
p = plot(x=x, y=y, Geom.point);
with_logger(logger) do
@info "gadfly" plot=p
end
38 changes: 38 additions & 0 deletions examples/HParams.jl
@@ -0,0 +1,38 @@
using TensorBoardLogger #import the TensorBoardLogger package
using Logging #import Logging package
using Random # Exports randn

# Run 10 experiments to see a plot
for j in 1:10
logger = TBLogger("random_walks/run$j", tb_append)

sigma = 0.1
epochs = 200
bias = (rand()*2 - 1) / 10 # create a random bias
use_seed = false
# Add in a dummy loss metric
with_logger(logger) do
x = 0.0
for i in 1:epochs
x += sigma * randn() + bias
@info "scalar" loss = x
end
end

# The hyperparameter config is a dictionary from parameter names to their values. This
# supports numerical types, bools and strings. Non-bool numerical types
# are converted to Float64 to be displayed.
hparams_config = Dict{String, Any}(
"sigma"=>sigma,
"epochs"=>epochs,
"bias"=>bias,
"use_seed"=>use_seed,
"method"=>"MC"
)
# Specify a list of tags that you want to show up in the hyperparameter
# comparison
metrics = ["scalar/loss"]

# Write the hyperparameters and metrics config to the logger.
write_hparams!(logger, hparams_config, metrics)
end
9 changes: 5 additions & 4 deletions examples/Histograms.jl
@@ -10,9 +10,10 @@ with_logger(logger) do
x0 = 0.5+i/30; s0 = 0.5/(i/20);
edges = collect(-5:0.1:5)
centers = collect(edges[1:end-1] .+0.05)
- histvals = [exp(-((c-x0)/s0)^2) for c = centers]
+ histvals = s0 * randn(length(centers)) .+ x0
data_tuple = (edges, histvals)
@info "histogram/loggerinterface" autobin=rand(10).+0.1*i manualbin=data_tuple
@info "histogram/loggerinterface" autobin=s0 .* randn(100) .+ x0
@info "histogram/loggerinterface" manualbin=data_tuple
end
end

@@ -21,8 +22,8 @@ for i in 1:100
x0 = 0.5+i/30; s0 = 0.5/(i/20);
edges = collect(-5:0.1:5)
centers = collect(edges[1:end-1] .+0.05)
- histvals = [exp(-((c-x0)/s0)^2) for c = centers]
+ histvals = s0 * randn(length(centers)) .+ x0
data_tuple = (edges, histvals)
- log_histogram(logger, "histogram/explicitinterface/autobin", rand(10).+0.1*i, step = i) #automatic bins
+ log_histogram(logger, "histogram/explicitinterface/autobin", s0 .* randn(100) .+ x0, step = i) #automatic bins
log_histogram(logger, "histogram/explicitinterface/manualbin", data_tuple, step = i) #manual bins
end
25 changes: 25 additions & 0 deletions examples/Scalars.jl
@@ -23,3 +23,28 @@ with_logger(logger) do
@info "scalar/complex" y = z
end
end


################control step increments with context################
with_logger(logger) do
for epoch in 1:10
for i=1:100
# increments global_step by default
with_TBLogger_hold_step() do
# all of these are logged at the same global_step
# and the logger global_step is only then increased
@info "train1/scalar" val=i
@info "train2/scalar" val2=i/2
@info "train3/scalar" val3=100-i
end
end
# step increment at end can be disabled for easy train/test sync
with_TBLogger_hold_step(;step_at_end=false) do
# all of these are logged at the same global_step
# and the logger global_step is only then increased
@info "test1/scalar" epoch=epoch
@info "test2/scalar" epoch2=epoch^2
@info "test3/scalar" epoch3=epoch^3
end
end
end
2 changes: 1 addition & 1 deletion gen/Project.toml
@@ -4,5 +4,5 @@ FilePathsBase = "48062228-2e41-5def-b9a4-89aafe57970f"
Glob = "c27321d9-0574-5035-807b-f59d2c89b15c"
ProtoBuf = "3349acd9-ac6a-5e09-bcdb-63829b23a429"

- [comapt]
+ [compat]
ProtoBuf = "0.9.1"
54 changes: 18 additions & 36 deletions gen/compile_proto.jl
@@ -21,63 +21,45 @@ pbpath =dirname(dirname(PosixPath(pathof(ProtoBuf))))/p"gen"
cur_path = cwd()
TBL_root = dirname(cur_path)

- src_dir = cur_path/"proto"
- out_dir = cur_path/"protojl"
+ # src_dir = cur_path/"proto"
+ src_dir = PosixPath(".")/"proto"
+ out_dir = cur_path/"protojl"

## Clean the output directory
rm(out_dir, force=true, recursive=true)

## First module
function process_module(cur_module::AbstractString; base_module::AbstractString=cur_module, input_path=cur_module)
# Include search paths
includes = [src_dir, src_dir/base_module]

# Output folder
- module_out_dir = out_dir/cur_module
+ module_out_dir = out_dir/cur_module

# Input files
- infiles = glob("*.proto", src_dir/input_path)
+ infiles = split.(string.(glob("*.proto", src_dir/input_path)), '/') .|> (a -> a[3:end]) .|> a -> joinpath(a...)

mkpath(module_out_dir)
- includes_str=["--proto_path=$path" for path=includes]
- run(ProtoBuf.protoc(`$includes_str --julia_out=$module_out_dir $infiles`))

- nothing
+ relative_paths = string.(infiles)
+ search_directories = joinpath(@__DIR__, "proto")
+ output_directory = string(module_out_dir)
+ # println("relative_paths=$relative_paths")
+ # println("search_directories=$search_directories")
+ # println("output_directory=$output_directory")
+ ProtoBuf.protojl(relative_paths, search_directories, output_directory)
+ files_to_include = [string(module_out_dir/basename(file)) for file in infiles]
+ return files_to_include
end

#process_module("tensorflow", input_path="tensorflow/core/protobuf")

process_module("tensorboard", input_path="tensorboard/compat/proto")
files_to_include = process_module("tensorboard", input_path="tensorboard/compat/proto")

#plugins = ["audio", "custom_scalar", "hparams", "histogram", "image", "scalar", "text"]
plugins = ["custom_scalar", "hparams", "text"]
- for plugin in plugins
- process_module("tensorboard/plugins/$plugin", base_module="tensorboard")
- end

+ append!(files_to_include, (process_module("tensorboard/plugins/$plugin", base_module="tensorboard") for plugin in plugins)...)

- ## this fails but would be better
- #cur_module = "tensorboard"
- #base_module = cur_module
- #
- ## Include search paths
- #includes = [src_dir, src_dir/base_module]
- #
- ## Output folder
- #module_out_dir = out_dir/("$cur_module"*"2")
- #
- ## Input files
- #infiles = glob("*.proto", src_dir/cur_module/"compat/proto")
- #
- #for plugin in plugins
- #    plugin_proto_files = glob("*.proto", src_dir/cur_module/"plugins/$plugin")
- #    append!(infiles, plugin_proto_files)
- #end
- #
- #mkpath(module_out_dir)
- #includes_str=["--proto_path=$path" for path=includes]
- #run(ProtoBuf.protoc(`$includes_str --julia_out=$module_out_dir $infiles`))

+ # files_to_include contains all the proto files, can be used for printing and inspection
+ println("generated code for \n$files_to_include")

# Finally move the output directory to the src folder
mv(out_dir, TBL_root/"src"/"protojl")