Add high-level CUDA support, doc improvements, NLDD performance increases, CSR backend as default #77

Merged (122 commits, Nov 12, 2024)

Commits
e8b310c: Improve EDIT/TRANXYZ support (moyner, Oct 16, 2024)
6c209b2: Add docs to function (moyner, Oct 18, 2024)
4f86848: Fix to co2-brine extra_out in setup (moyner, Oct 23, 2024)
ea7a430: Move extensions into folder (moyner, Oct 23, 2024)
ef6c0a1: Add a basic reservoir version of CUDA linear solver (moyner, Oct 23, 2024)
f778b4f: Update utils.jl (moyner, Oct 24, 2024)
e2ff69f: Generalize CUDA a bit (moyner, Oct 24, 2024)
93bb1e1: Put in place some Schur stuff (moyner, Oct 24, 2024)
b55ba61: Put in place some more factorization updates (moyner, Oct 24, 2024)
e3c230e: Schur additions (moyner, Oct 24, 2024)
ca0572f: Update mul (moyner, Oct 24, 2024)
267a1ca: Rough draft of schur mostly on GPU (moyner, Oct 24, 2024)
391b16f: Make GPU and CPU operations independent in schur multiply (moyner, Oct 24, 2024)
a5d12e8: Add timing to CUDA calls (moyner, Oct 24, 2024)
281e652: Update interface (moyner, Oct 25, 2024)
e5e20ea: Draft extension to upcoming AMGX release (moyner, Oct 25, 2024)
5b3251b: Add some AMGX init code (moyner, Oct 25, 2024)
d914ceb: More CPR testing (moyner, Oct 25, 2024)
253ff5c: More work on GPU CPR (moyner, Oct 25, 2024)
7ee843d: CPR apply seems to run once... (moyner, Oct 25, 2024)
03a5655: Fixes to kernel launches (moyner, Oct 25, 2024)
eba4fb6: Sort-of-working GPU CPR with wells (moyner, Oct 25, 2024)
d0cffe6: Update format for AMGX config (moyner, Oct 25, 2024)
3833251: Update copy of buffer in CUDA (moyner, Oct 27, 2024)
b5ab2cc: Updates to CPR code (moyner, Oct 27, 2024)
e54d62b: Updates to CUDA (moyner, Oct 27, 2024)
f7f7d39: Fix for wells (moyner, Oct 28, 2024)
a2c05c7: Some CUDA code backup (moyner, Oct 28, 2024)
017c1ef: Use CUDA threads in CPR code (moyner, Oct 28, 2024)
c6f15b1: Use view instead of custom kernel for CPR dp (moyner, Oct 28, 2024)
b1a070c: Performance tweaks (moyner, Oct 28, 2024)
a58af9c: Remove some code duplication (moyner, Oct 28, 2024)
96c9495: Reorder destructors for AMGX (moyner, Oct 28, 2024)
0824cfc: Bump Jutul compat & version (moyner, Oct 28, 2024)
c3d398c: Add simple preconditioner version of AMGX (moyner, Oct 28, 2024)
240b37c: Clean up linear solver set option and re-enable disabled test (moyner, Oct 29, 2024)
9a76302: Merge pull request #76 from sintefmath/cuda (moyner, Oct 29, 2024)
1961f8b: Update cuda_utils.jl (moyner, Oct 30, 2024)
ed22c95: Factor well groups into function (moyner, Oct 30, 2024)
d9d0207: Reverse well splitting in partitioner (moyner, Oct 30, 2024)
c5a671c: Replace NLDD partitioner with builtin code (moyner, Oct 30, 2024)
3df9008: Fix bug in partial update for CPR (moyner, Nov 4, 2024)
92d1cca: Handle THCONR (moyner, Nov 5, 2024)
4f09647: Fix bug in partial update for CPR (moyner, Nov 4, 2024)
f31db92: Merge branch 'nldd-updates' of https://github.com/sintefmath/JutulDar… (moyner, Nov 5, 2024)
fcd3f68: Fix to non-historical RESV producers (moyner, Nov 5, 2024)
b9b63bf: NLDD: Do secondary update on full model (WIP) (moyner, Nov 5, 2024)
8652ccc: Make timings more accurate for CUDA (moyner, Nov 6, 2024)
0fd726b: Add sync point to CUDA AMGX (moyner, Nov 6, 2024)
10e2461: NLDD: Refactor change checking (moyner, Nov 6, 2024)
026da81: Speed up the NLDD adaptivity check (moyner, Nov 6, 2024)
c3b3360: NLDD: Fix secondary timing (moyner, Nov 6, 2024)
8ab543e: NLDD: Performance fixes (moyner, Nov 6, 2024)
09dd7db: Improve EDIT/TRANXYZ support (moyner, Oct 16, 2024)
a0fc7a3: Add docs to function (moyner, Oct 18, 2024)
c7a9b66: Fix to co2-brine extra_out in setup (moyner, Oct 23, 2024)
e008f35: Move extensions into folder (moyner, Oct 23, 2024)
673773d: Add a basic reservoir version of CUDA linear solver (moyner, Oct 23, 2024)
d81d571: Update utils.jl (moyner, Oct 24, 2024)
da35c27: Generalize CUDA a bit (moyner, Oct 24, 2024)
f91a17c: Put in place some Schur stuff (moyner, Oct 24, 2024)
8a46b24: Put in place some more factorization updates (moyner, Oct 24, 2024)
9a655d6: Schur additions (moyner, Oct 24, 2024)
64a36af: Update mul (moyner, Oct 24, 2024)
3dd23f0: Rough draft of schur mostly on GPU (moyner, Oct 24, 2024)
5dbc7ad: Make GPU and CPU operations independent in schur multiply (moyner, Oct 24, 2024)
12f39d7: Add timing to CUDA calls (moyner, Oct 24, 2024)
5e0e995: Update interface (moyner, Oct 25, 2024)
06eb25a: Draft extension to upcoming AMGX release (moyner, Oct 25, 2024)
9311196: Add some AMGX init code (moyner, Oct 25, 2024)
4a07224: More CPR testing (moyner, Oct 25, 2024)
6b71aff: More work on GPU CPR (moyner, Oct 25, 2024)
9d5cedc: CPR apply seems to run once... (moyner, Oct 25, 2024)
8d67381: Fixes to kernel launches (moyner, Oct 25, 2024)
f07a7ce: Sort-of-working GPU CPR with wells (moyner, Oct 25, 2024)
07f11ac: Update format for AMGX config (moyner, Oct 25, 2024)
baaf79f: Update copy of buffer in CUDA (moyner, Oct 27, 2024)
6d8a074: Updates to CPR code (moyner, Oct 27, 2024)
e0d4ce3: Updates to CUDA (moyner, Oct 27, 2024)
85e84cf: Fix for wells (moyner, Oct 28, 2024)
8c54b68: Some CUDA code backup (moyner, Oct 28, 2024)
af5ba61: Use CUDA threads in CPR code (moyner, Oct 28, 2024)
c2ae393: Use view instead of custom kernel for CPR dp (moyner, Oct 28, 2024)
778d604: Performance tweaks (moyner, Oct 28, 2024)
136f68d: Remove some code duplication (moyner, Oct 28, 2024)
4fa318e: Reorder destructors for AMGX (moyner, Oct 28, 2024)
2afcb24: Bump Jutul compat & version (moyner, Oct 28, 2024)
5fec6d5: Add simple preconditioner version of AMGX (moyner, Oct 28, 2024)
a4e685f: Clean up linear solver set option and re-enable disabled test (moyner, Oct 29, 2024)
52c1911: Update cuda_utils.jl (moyner, Oct 30, 2024)
9fda4e8: Factor well groups into function (moyner, Oct 30, 2024)
207d72e: Reverse well splitting in partitioner (moyner, Oct 30, 2024)
177d775: Replace NLDD partitioner with builtin code (moyner, Oct 30, 2024)
04e1276: Handle THCONR (moyner, Nov 5, 2024)
e27dc27: Fix bug in partial update for CPR (moyner, Nov 4, 2024)
1906d44: Fix to non-historical RESV producers (moyner, Nov 5, 2024)
f26c85e: NLDD: Do secondary update on full model (WIP) (moyner, Nov 5, 2024)
11b4b2c: Make timings more accurate for CUDA (moyner, Nov 6, 2024)
64d5971: Add sync point to CUDA AMGX (moyner, Nov 6, 2024)
b585b5b: NLDD: Refactor change checking (moyner, Nov 6, 2024)
5ebc9bd: Speed up the NLDD adaptivity check (moyner, Nov 6, 2024)
b1a23d9: NLDD: Fix secondary timing (moyner, Nov 6, 2024)
35960fa: NLDD: Performance fixes (moyner, Nov 6, 2024)
a559e1e: Merge branch 'nldd-updates' into dev (moyner, Nov 10, 2024)
665f3b8: Pin memory in CUDA (moyner, Nov 10, 2024)
56d086c: Add detailed timing to GPU AMG (moyner, Nov 11, 2024)
a5cebe9: Partial update support for CUDA-CPR (moyner, Nov 11, 2024)
d4be2c9: Update example and reduce output. (moyner, Nov 11, 2024)
faff8cd: Update example (moyner, Nov 11, 2024)
0ae56a8: AMGX: Use resetup! (moyner, Nov 11, 2024)
5ff62fd: AMGX cleanup and support for resetup! (moyner, Nov 12, 2024)
40acd8b: Make AMGX constructor more user friendly (moyner, Nov 12, 2024)
20a0a37: Switch default backend to CSR (moyner, Nov 12, 2024)
1f0ac8d: AMGX: unpin memory in finalizer (moyner, Nov 12, 2024)
f833e0a: Adjust default pre/post sweeps in AMGX (moyner, Nov 12, 2024)
37e3f94: Update tests (moyner, Nov 12, 2024)
21d69dd: Add CUDA support to high level interface + docs (moyner, Nov 12, 2024)
b9b60dd: Fix missing thread batch for CSR CPR (moyner, Nov 12, 2024)
4267f03: Update cpr.jl (moyner, Nov 12, 2024)
ab667cc: Update two_phase_gravity_segregation.jl (moyner, Nov 12, 2024)
ac54bfb: Update examples to look a bit nicer (moyner, Nov 12, 2024)
ff7e0c0: Update example and set next version (moyner, Nov 12, 2024)
14 changes: 11 additions & 3 deletions Project.toml
@@ -1,7 +1,7 @@
name = "JutulDarcy"
uuid = "82210473-ab04-4dce-b31b-11573c4f8e0a"
authors = ["Olav Møyner <[email protected]>"]
version = "0.2.35"
version = "0.2.36"

[deps]
AlgebraicMultigrid = "2169fc97-5a83-5252-b627-83903c6c433c"
@@ -34,20 +34,26 @@ TimerOutputs = "a759f4b9-e2f1-59dc-863e-4aeb61b1ea8f"
Tullio = "bc48ee85-29a4-5162-ae0b-a64e1601d4bc"

[weakdeps]
AMGX = "c963dde9-0319-47f5-bf0c-b07d3c80ffa6"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
HYPRE = "b5ffcf37-a2bd-41ab-a3da-4bd9bc8ad771"
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
Makie = "ee78f7c6-11fb-53f2-987a-cfe4a2b5a57a"
PartitionedArrays = "5a9dfac6-5c52-46f7-8278-5e2210713be9"

[extensions]
JutulDarcyAMGXExt = "AMGX"
JutulDarcyCUDAExt = "CUDA"
JutulDarcyGLMakieExt = "GLMakie"
JutulDarcyMakieExt = "Makie"
JutulDarcyPartitionedArraysExt = ["PartitionedArrays", "MPI", "HYPRE"]

[compat]
AlgebraicMultigrid = "0.5.1, 0.6.0"
Artifacts = "1"
AMGX = "0.2"
CUDA = "5"
DataStructures = "0.18.13"
Dates = "1"
DelimitedFiles = "1.6"
@@ -56,7 +62,7 @@ ForwardDiff = "0.10.30"
GLMakie = "0.10.13"
GeoEnergyIO = "1.1.12"
HYPRE = "1.6.0, 1.7"
Jutul = "0.2.40"
Jutul = "0.2.42"
Krylov = "0.9.1"
LazyArtifacts = "1"
LinearAlgebra = "1"
@@ -78,6 +84,8 @@ Tullio = "0.3.4"
julia = "1.7"

[extras]
AMGX = "c963dde9-0319-47f5-bf0c-b07d3c80ffa6"
CUDA = "052768ef-5323-5732-b1bb-66c8b64840ba"
GLMakie = "e9467ef8-e4e7-5192-8a1a-b1aee30e663a"
HYPRE = "b5ffcf37-a2bd-41ab-a3da-4bd9bc8ad771"
MPI = "da04e1cc-30fd-572f-bb4f-1f8673147195"
@@ -87,4 +95,4 @@ TestItemRunner = "f8b46487-2199-4994-9208-9a1283c18c0a"
TestItems = "1c621080-faea-4a02-84b6-bbd5e436b8fe"

[targets]
test = ["Test", "TestItemRunner", "TestItems", "HYPRE", "MPI", "PartitionedArrays"]
test = ["Test", "CUDA", "TestItemRunner", "TestItems", "HYPRE", "MPI", "PartitionedArrays"]
1 change: 1 addition & 0 deletions docs/make.jl
@@ -137,6 +137,7 @@ function build_jutul_darcy_docs(build_format = nothing;
],
"Further reading" => [
"man/advanced/mpi.md",
"man/advanced/gpu.md",
"man/advanced/compiled.md",
"Jutul functions" => "ref/jutul.md",
"Bibliography" => "extras/refs.md"
4 changes: 2 additions & 2 deletions docs/src/index.md
@@ -33,8 +33,8 @@ features:
link: /examples/intro_sensitivities

- icon: 🏃
title: High performance
details: Fast execution with support for MPI and thread parallelism
title: High performance on CPU & GPU
details: Fast execution with support for MPI, CUDA and thread parallelism
link: /man/advanced/mpi
---
````
57 changes: 57 additions & 0 deletions docs/src/man/advanced/gpu.md
@@ -0,0 +1,57 @@
# GPU support

JutulDarcy includes experimental support for running linear solves on the GPU. For many simulations, the linear systems are the most compute-intensive part and a natural choice for acceleration. At the moment, support is limited to CUDA GPUs through [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl). The most efficient CPR preconditioner requires [AMGX.jl](https://github.com/JuliaGPU/AMGX.jl), which is currently limited to Linux systems. Windows users may have luck by running Julia inside [WSL](https://learn.microsoft.com/en-us/windows/wsl/install).

## How to use

If you have installed JutulDarcy, you should start by adding the CUDA and optionally the AMGX packages using the package manager:

```julia
using Pkg
Pkg.add("CUDA") # Requires a CUDA-capable GPU
Pkg.add("AMGX") # Requires CUDA + Linux
```

Once the packages have been added to the same environment as JutulDarcy, you can load them to enable GPU support. Let us grab the first ten steps of the EGG benchmark model:

```julia
using Jutul, JutulDarcy
dpth = JutulDarcy.GeoEnergyIO.test_input_file_path("EGG", "EGG.DATA")
case = setup_case_from_data_file(dpth)
case = case[1:10]
```

### Running on CPU

If we wanted to run this on CPU we would simply call `simulate_reservoir`:

```julia
result_cpu = simulate_reservoir(case);
```

### Running on GPU with block ILU(0)

If we now load `CUDA`, we can run the same simulation using the CUDA-accelerated linear solver. With CUDA alone, only the ILU(0) preconditioner is available. JutulDarcy will automatically pick this preconditioner when CUDA is requested without AMGX, but we write it explicitly here:

```julia
using CUDA
result_ilu0_cuda = simulate_reservoir(case, linear_solver_backend = :cuda, precond = :ilu0);
```

### Running on GPU with CPR AMGX-ILU(0)

Loading the AMGX package makes a pure GPU-based two-stage CPR available. Again, we are explicit in requesting CPR, but if both `CUDA` and `AMGX` are available and functional this is redundant:

```julia
using AMGX
result_amgx_cuda = simulate_reservoir(case, linear_solver_backend = :cuda, precond = :cpr);
```

In short, load `AMGX` and `CUDA` and run `simulate_reservoir(case, linear_solver_backend = :cuda)` to get GPU results. The EGG model is quite small, so a larger case is necessary to see significant performance increases. `AMGX` also exposes a large number of options that advanced users can configure.
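As a rough way to gauge the speedup on your own hardware, the three variants can be timed side by side. This is a sketch rather than part of the manual; it only combines calls already shown above, and the first call to each variant includes compilation time, so run each once before timing:

```julia
# Time CPU, GPU ILU(0) and GPU CPR back to back (info_level = -1 silences output):
t_cpu  = @elapsed simulate_reservoir(case, info_level = -1)
t_ilu0 = @elapsed simulate_reservoir(case, linear_solver_backend = :cuda, precond = :ilu0, info_level = -1)
t_cpr  = @elapsed simulate_reservoir(case, linear_solver_backend = :cuda, precond = :cpr, info_level = -1)
println("CPU: $t_cpu s, GPU ILU(0): $t_ilu0 s, GPU CPR: $t_cpr s")
```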

## Technical details and limitations

The GPU implementation relies on assembly on the CPU and pinned memory for transfers to the GPU. This means that performance can be significantly improved by launching Julia with multiple threads to speed up the non-GPU parts of the code. AMGX is currently single-GPU only and does not work with MPI. Currently, only `Float64` is supported for CPR, while pure ILU(0) solves also support `Float32`.
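For example, a script using the GPU backend can be launched with several threads (eight here, as an arbitrary choice) via `julia --project=. --threads=8 script.jl`, and the thread count can be verified from within Julia:

```julia
# Number of threads available to this Julia session; should be > 1 for the
# CPU-side assembly to benefit.
Threads.nthreads()
```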

!!! warning "Experimental status"
Multiple successive runs with different `AMGX` instances have resulted in crashes when old instances are garbage collected. This part of the code is still considered experimental, with contributions welcome if you are using it.
4 changes: 1 addition & 3 deletions docs/src/man/advanced/mpi.md
@@ -4,7 +4,7 @@ JutulDarcy can use threads by default, but advanced options can improve performa…

## Overview of parallel support

There are two main ways of exploiting multiple cores in Jutul/JutulDarcy: Threads are automatically used for assembly and can be used for parts of the linear solve. If you require the best performance, you have to go to MPI where the linear solvers can use a parallel [BoomerAMG preconditioner](https://hypre.readthedocs.io/en/latest/solvers-boomeramg.html) via [HYPRE.jl](https://github.com/fredrikekre/HYPRE.jl).
There are two main ways of exploiting multiple cores in Jutul/JutulDarcy: Threads are automatically used for assembly and can be used for parts of the linear solve. If you require the best performance, you have to go to MPI where the linear solvers can use a parallel [BoomerAMG preconditioner](https://hypre.readthedocs.io/en/latest/solvers-boomeramg.html) via [HYPRE.jl](https://github.com/fredrikekre/HYPRE.jl). In addition, there is experimental GPU support described in [GPU support](@ref).

### MPI parallelization

@@ -25,7 +25,6 @@ Starting Julia with multiple threads (for example `julia --project=. --threads=4`)

Threads are easy to use and can give a bit of benefit for large models.


### Mixed-mode parallelism

You can mix the two approaches: adding threads to each MPI process speeds up assembly and property evaluations within each rank.
Expand All @@ -42,7 +41,6 @@ A few hints when you are looking at performance:

Example: a 200k cell model on a laptop runs in 235 s with 1 process and 145 s with 4 processes.


## Solving with MPI in practice

There are a few adjustments needed before a script can be run in MPI.
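The rest of this section is collapsed in the diff view above. As a minimal, hedged sketch, the MPI boilerplate such a script needs looks like the following; the JutulDarcy-specific setup is not shown here:

```julia
# Assumes MPI.jl is installed; launch with an MPI runner, e.g. the mpiexecjl
# wrapper shipped with MPI.jl: mpiexecjl -n 4 julia --project=. script.jl
using MPI
MPI.Init()
rank = MPI.Comm_rank(MPI.COMM_WORLD)
println("Running on rank $rank of $(MPI.Comm_size(MPI.COMM_WORLD))")
```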
7 changes: 4 additions & 3 deletions examples/co2_sloped.jl
@@ -133,7 +133,7 @@ krog_t = so.^2
krog = PhaseRelativePermeability(so, krog_t, label = :og)

# Higher resolution for second table:
sg = range(0, 1, 50)
sg = range(0, 1, 50);

# Evaluate Brooks-Corey to generate tables:
tab_krg_drain = brooks_corey_relperm.(sg, n = 2, residual = 0.1)
@@ -251,7 +251,7 @@ wd, states, t = simulate_reservoir(state0, model, dt,
parameters = parameters,
forces = forces,
max_timestep = 90day
)
);
# ## Plot the CO2 mole fraction
# We plot the overall CO2 mole fraction. We scale the color range to log10 to
# account for the fact that the mole fraction in cells made up of only the
@@ -339,7 +339,8 @@ for cell in 1:nc
x, y, z = centers[:, cell]
is_inside[cell] = sqrt((x - 720.0)^2 + 20*(z-70.0)^2) < 75
end
plot_cell_data(mesh, is_inside)
fig, ax, plt = plot_cell_data(mesh, is_inside)
fig
# ## Plot inventory in ellipsoid
# Note that a small mobile dip can be seen when free CO2 passes through this region.
inventory = co2_inventory(model, wd, states, t, cells = findall(is_inside))
22 changes: 18 additions & 4 deletions examples/data_input_file.jl
@@ -5,28 +5,42 @@
# regular JutulDarcy simulation, allowing modification and use of the case in
# differentiable workflows.
#
# We begin by loading the SPE9 dataset via the GeoEnergyIO package.
# We begin by loading the SPE9 dataset via the GeoEnergyIO package. This package
# includes a set of open datasets that can be used for testing and benchmarking.
# The SPE9 dataset is a 3D model with a corner-point grid and a set of wells
# produced by the Society of Petroleum Engineers. The specific version of the
# file included here is taken from the [OPM
# tests](https://github.com/OPM/opm-tests) repository.
using JutulDarcy, GeoEnergyIO
pth = GeoEnergyIO.test_input_file_path("SPE9", "SPE9.DATA")
pth = GeoEnergyIO.test_input_file_path("SPE9", "SPE9.DATA");
# ## Set up and run a simulation
# If we do not need the case, we could also have done:
# ws, states = simulate_data_file(pth)
case = setup_case_from_data_file(pth)
ws, states = simulate_reservoir(case)
ws, states = simulate_reservoir(case);
# ## Show the input data
# The input data takes the form of a Dict:
case.input_data
# We can also examine, for example, the RUNSPEC section, which is also
# represented as a Dict.
case.input_data["RUNSPEC"]
# ## Plot the simulation model
# These plots are interactive when run outside of the documentation.
# These plots are normally interactive, but if you are reading the published
# online documentation, static screenshots will be inserted instead.
using GLMakie
plot_reservoir(case.model, states)
# ## Plot the well responses
# We can plot the well responses (rates and pressures) in an interactive viewer.
# Multiple wells can be plotted simultaneously, with options to select which
# units are to be used for plotting.
plot_well_results(ws)
# ## Plot the field responses
# Similar to the wells, we can also plot field-wide measurables. We plot the
# field gas production rate and the average pressure as the initial selection.
# If you are running this case interactively you can select which measurables to
# plot.
#
# We observe that the field pressure steadily decreases over time as a result
# of the gas production. The drop is not uniform: during periods where little
# gas is produced, the field pressure declines more slowly.
plot_reservoir_measurables(case, ws, states, left = :fgpr, right = :pres)
16 changes: 5 additions & 11 deletions examples/five_spot_ensemble.jl
@@ -4,8 +4,7 @@
# injector in one corner and the producer in the opposing corner, with a
# significant volume of fluids injected into the domain.
using JutulDarcy, Jutul
nx = 50
#-
nx = 50;
# ## Setup
# We define a function that, for a given porosity field, computes a solution
# with an estimated permeability field. For assumptions and derivation of the
@@ -55,10 +54,9 @@ function simulate_qfs(porosity = 0.2)
forces = setup_reservoir_forces(model, control = controls)
return simulate_reservoir(state0, model, dt, parameters = parameters, forces = forces, info_level = -1)
end
#-
# ## Simulate base case
# This will give the solution with uniform porosity of 0.2.
ws, states, report_time = simulate_qfs()
ws, states, report_time = simulate_qfs();
# ### Plot the solution of the base case
# We observe a radial flow pattern initially, before coning occurs near the
# producer well once the fluid has reached the opposite corner. The uniform
@@ -75,7 +73,6 @@ ax = Axis(fig[1, 2])
h = contourf!(ax, get_sat(states[nt]))
Colorbar(fig[1, end+1], h)
fig
#-
# ## Create 10 realizations
# We create a small set of realizations of the same model, with porosity that is
# uniformly varying between 0.05 and 0.3. This is not especially sophisticated
@@ -89,11 +86,10 @@ wells = []
report_step = nt
for i = 1:N
poro = 0.05 .+ 0.25*rand(Float64, (nx*nx))
ws, states, rt = simulate_qfs(poro)
push!(wells, ws)
push!(saturations, get_sat(states[report_step]))
ws_i, states_i, rt = simulate_qfs(poro)
push!(wells, ws_i)
push!(saturations, get_sat(states_i[report_step]))
end
#-
# ### Plot the oil rate at the producer over the ensemble
using Statistics
fig = Figure()
@@ -106,15 +102,13 @@ end
xlims!(ax, [mean(report_time), report_time[end]])
ylims!(ax, 0, 0.0075)
fig
#-
# ### Plot the average saturation over the ensemble
avg = mean(saturations)
fig = Figure()
h = nothing
ax = Axis(fig[1, 1])
h = contourf!(ax, avg)
fig
#-
# ### Plot the isocontour lines over the ensemble
fig = Figure()
h = nothing
2 changes: 1 addition & 1 deletion examples/model_coarsening.jl
@@ -27,7 +27,7 @@ fine_case = setup_case_from_data_file(data_pth);
fine_model = fine_case.model
fine_reservoir = reservoir_domain(fine_model)
fine_mesh = physical_representation(fine_reservoir)
ws, states = simulate_reservoir(fine_case, info_level = 1);
ws, states = simulate_reservoir(fine_case, info_level = -1);
# ## Coarsen the model and plot partition
# We coarsen the model using a partition size of 20x20x2 and the IJK method
# where the underlying structure of the mesh is used to subdivide the blocks.
13 changes: 7 additions & 6 deletions examples/optimize_simple_bl.jl
@@ -1,12 +1,13 @@
using Jutul
using JutulDarcy
using LinearAlgebra
using GLMakie
# # Example demonstrating optimization of parameters against observations
# We create a simple test problem: A 1D nonlinear displacement. The observations
# are generated by solving the same problem with the true parameters. We then
# match the parameters against the observations using a different starting guess
# for the parameters, but otherwise the same physical description of the system.
using Jutul
using JutulDarcy
using LinearAlgebra
using GLMakie

function setup_bl(;nc = 100, time = 1.0, nstep = 100, poro = 0.1, perm = 9.8692e-14)
T = time
tstep = repeat([T/nstep], nstep)
@@ -39,10 +40,10 @@ poro_ref = 0.1
perm_ref = 9.8692e-14
# ## Set up and simulate reference
model_ref, state0_ref, parameters_ref, forces, tstep = setup_bl(nc = N, nstep = Nt, poro = poro_ref, perm = perm_ref)
states_ref, = simulate(state0_ref, model_ref, tstep, parameters = parameters_ref, forces = forces, info_level = -1)
states_ref, = simulate(state0_ref, model_ref, tstep, parameters = parameters_ref, forces = forces, info_level = -1);
# ## Set up another case where the porosity is different
model, state0, parameters, = setup_bl(nc = N, nstep = Nt, poro = 2*poro_ref, perm = 1.0*perm_ref)
states, rep = simulate(state0, model, tstep, parameters = parameters, forces = forces, info_level = -1)
states, rep = simulate(state0, model, tstep, parameters = parameters, forces = forces, info_level = -1);
# ## Plot the results
fig = Figure()
ax = Axis(fig[1, 1], title = "Saturation")
1 change: 0 additions & 1 deletion examples/relperms.jl
@@ -316,7 +316,6 @@ lines(simulate_bl(model, prm), axis = (title = "Parametric LET function simulati…
# ### Check out the parameters
# The LET parameters are now numerical parameters in the reservoir:
rmodel = reservoir_model(model)
display(rmodel)

# ## Conclusion
# We have explored a few aspects of relative permeabilities in JutulDarcy. There
2 changes: 0 additions & 2 deletions examples/two_phase_buckley_leverett.jl
@@ -52,7 +52,6 @@ end
n, n_f = 100, 1000
states, model, report = solve_bl(nc = n)
print_stats(report)
#-
# ## Run refined version (1000 cells, 1000 steps)
# Using a grid with 100 cells will not yield a fully converged solution. We can
# increase the number of cells at the cost of increasing the runtime a bit. Note
@@ -61,7 +60,6 @@
# use an iterative solver.
states_refined, _, report_refined = solve_bl(nc = n_f);
print_stats(report_refined)
#-
# ## Plot results
# We plot the saturation front for the base case at different times together
# with the final solution for the refined model. In this case, refining the grid