Replies: 6 comments
-
Assuming that scipy implements it in a reasonable fashion (with regard to sparsity etc.), I don't mind the example being modified.
-
Yes. I'm not immediately sure how to check that, but will look into it. Perhaps repeatedly refine with each method until the machine falls over? Or I could read the source.

This touches on a big area which hasn't been addressed much yet: scaling up. Most or all of the examples to date only demonstrate formal correctness, whereas very often the approaches that work best for small problems have to be swapped out to scale up; e.g. switching from direct linear algebraic solvers to iterative ones, with the attendant need for preconditioning. I wonder whether there are good testing frameworks for this kind of thing, which would record not only correctness but speed and memory consumption. As these wouldn't have to run online, they might not introduce an ongoing dependency.

In general I do think that it is reasonable to offer multiple ways of solving the same problem: one method simpler and clearer though only suitable for small problems, another that scales better but is more complicated; a sketch of the contrast follows. Of course iterative methods are usually slower than direct ones where the latter are applicable, and small problems do arise in practice; e.g. most one- and many two-dimensional problems can be treated as small. I shall give this general matter some thought.
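As a minimal sketch of the direct-versus-iterative trade-off (not from any scikit-fem example; the 1-D Laplacian here is just a stand-in for an assembled stiffness matrix), SciPy already offers both routes:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system: a 1-D Laplacian; in practice A would come from assembly.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

# Direct: simple and robust, but fill-in makes memory grow quickly in 3-D.
x_direct = spla.spsolve(A, b)

# Iterative: conjugate gradients with an incomplete-LU preconditioner;
# scales better, but needs a good preconditioner to converge quickly.
M = spla.LinearOperator(A.shape, spla.spilu(A).solve)
x_iter, info = spla.cg(A, b, M=M)
assert info == 0               # info == 0 signals convergence
```

A benchmark harness along these lines could time both branches and record peak memory as the mesh is refined, which is the kind of off-line test contemplated above.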
-
More specifically, another thing is that …
-
On further reflection, I think that the use of a canned numerical continuation algorithm (e.g. docs/examples/ex23.py, lines 85 to 87 at c8ad891) is better than a simple Newton iteration. The former will typically contain a robust implementation of the latter, and in real examples there is often a natural parameter in which to continue; for example, the Reynolds number for the backward facing step (ex24). A sketch of the idea follows.
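Here is a toy sketch of natural-parameter continuation, assuming nothing from the repository: the cubic residual is a hypothetical stand-in for an assembled finite element residual, and the parameter ramp plays the role of, say, stepping up the Reynolds number.

```python
import numpy as np
from scipy.optimize import root

def residual(x: np.ndarray, param: float) -> np.ndarray:
    # Hypothetical stand-in for an assembled residual;
    # the nonlinearity gets harder as `param` grows.
    return x**3 + x - param

x = np.zeros(1)                            # easy starting guess
for param in np.linspace(0.0, 100.0, 11):  # ramp the parameter up
    sol = root(lambda y: residual(y, param), x)
    if not sol.success:
        raise RuntimeError(f'solve failed at param={param}')
    x = sol.x                              # warm start the next step
```

Each solve starts from the converged solution at the previous parameter value, which is what makes the embedded Newton iteration robust in practice.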
-
Taking another look at this, there is an unforeseen difficulty with … Indeed, passing a function … , which is because … , which mucks things up by turning that into an array of one object, which is the Jacobian. Perhaps this was forewarned in the stipulation …
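The offending call isn't quoted above, but a guess at the failure mode being described (an assumption, not a quote from the discussion) is `np.asarray` applied to a scipy.sparse matrix, which yields a 0-d object array rather than a 2-d matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix

J = csr_matrix(np.eye(3))             # a sparse Jacobian
wrapped = np.asarray(J)               # does NOT densify it
print(wrapped.shape, wrapped.dtype)   # () object -- one object: the Jacobian
```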
-
Following the (offline) discussion of the 'action' of a bilinear form (Kirby & Logg, 'Finite element variational forms') and then the tantalizing last paragraph of Hannukainen & Juntunen about the then-planned future work on 'matrix free implementation of the finite element assembly' (did they ever get to that?), I was reminded of Jacobian-free Newton–Krylov methods and took another look at this using `scipy.optimize.root` with `method='krylov'`, which needs only residual evaluations, approximating Jacobian-vector products by finite differences:

```python
def residual(u: np.ndarray) -> np.ndarray:
    # asm, rhs, basis, and D are defined in the complete file linked below
    res = asm(rhs, basis, w=basis.interpolate(u))
    res[D] = 0.                # enforce the Dirichlet conditions
    return res

x = root(residual, x, method='krylov', options={'disp': True}).x
```

Complete file: ex10_jfnk.py.txt
-
#118 referred to the translation of one of the FEniCS tutorial examples into scikit-fem: https://github.com/gdmcbain/fenics-tuto-in-skfem/tree/master/05_poisson_nonlinear. As discussed there, it's a nonlinear Poisson equation like docs/examples/ex10.py.

In ex10, Newton iteration is coded from scratch. I wondered whether it would be possible to use a canned implementation such as scipy.optimize.root. This was done. Actually, in the scikit-fem translation the optional Jacobian is not passed to root; it solves in a reasonable time (4 seconds) without it for this very small problem. It shouldn't be that hard to add the Jacobian (by hand as suggested in #27, or perhaps using SymPy, #118); a rough sketch follows.

Would ex10 be better for using a canned nonlinear solver? I tend to prefer to encapsulate algorithms like Newton iteration rather than recode them each time. SciPy is already a dependency.
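A rough sketch of supplying the Jacobian by hand, under the assumption of a toy system (this is not the repository's code; `residual` and `jacobian` stand in for the assembled nonlinear form and its derivative):

```python
import numpy as np
from scipy.optimize import root

def residual(u: np.ndarray) -> np.ndarray:
    return u**3 + u - 1.0               # toy nonlinear system

def jacobian(u: np.ndarray) -> np.ndarray:
    return np.diag(3.0 * u**2 + 1.0)    # dense derivative of the residual

sol = root(residual, np.zeros(3), jac=jacobian, method='hybr')
print(sol.success, sol.x)
```

Passing `jac` spares root from building the Jacobian by finite differences, which is where the savings would show up on larger problems.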