Referee reports, SISC October 2015 (decision: reject) #18
Referee # 1, comment # 1:
The book by Saad (inventor of GMRES) does indeed present the Krylov subspace in terms of the residual (p. 157). But other textbooks write it in terms of the right-hand-side vector b, e.g., Trefethen and Bau, p. 266. Of course, the two are equivalent when the initial guess x_0 is zero, which is our case. We have changed how it is written in the paper to match Saad's presentation. See commit b205d95.
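For the record, the equivalence relied on above, written out in standard notation (the symbols here are generic, not the paper's equation numbering):

```latex
% Krylov subspace built from the initial residual (Saad's convention):
\mathcal{K}_m(A, r_0) = \operatorname{span}\{\, r_0,\ A r_0,\ \dots,\ A^{m-1} r_0 \,\},
\qquad r_0 = b - A x_0 .
% With a zero initial guess, x_0 = 0, the residual is r_0 = b, and the subspace
% reduces to the form used by Trefethen & Bau:
\mathcal{K}_m(A, b) = \operatorname{span}\{\, b,\ A b,\ \dots,\ A^{m-1} b \,\}.
```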
Referee # 1, comment # 3:
Because the coefficient matrix A is not, in general, symmetric. Element A_ij is the integral over panel j of the Green's function evaluated at the collocation point of panel i. Because the panels have different shapes and orientations, A_ij ≠ A_ji in general.
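In symbols, for a generic collocation-BEM entry (the notation below is ours, for illustration only):

```latex
A_{ij} = \int_{\Gamma_j} G(\mathbf{x}_i, \mathbf{y}) \,\mathrm{d}\Gamma(\mathbf{y}),
\qquad
A_{ji} = \int_{\Gamma_i} G(\mathbf{x}_j, \mathbf{y}) \,\mathrm{d}\Gamma(\mathbf{y}).
```

Even though the Green's function itself is symmetric in its two arguments, the two entries involve integrals over different panels, with different shapes and orientations, so they do not coincide in general.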
Referee # 1, comment # 4:
The referee may not appreciate a citation to Wikipedia, but here goes: "…the Laplacian … is usually denoted by the symbols ∇·∇, ∇², or Δ." Plenty of textbooks adopt the ∇² notation, in fact. One of our favorites is "Vectors, Tensors, and the Basic Equations of Fluid Mechanics" by Aris. Wonderful little book! Even the classic old "Foundations of Potential Theory" by Kellogg (1929) uses the ∇² notation. The best answer we can find to "since when" ∇² has been used to denote the Laplacian operator is: 1873, "A Treatise on Electricity and Magnetism," Vol. 1, by James Clerk Maxwell, page 29. See: http://visualiseur.bnf.fr/CadresFenetre?O=NUMM-95175&I=3&M=tdm
Referee # 1, comment # 5:
In version 1 of the manuscript (arXiv:1506.05957v1), we chose p (10 for Laplace and 16 for Poisson) from experience, to give accuracy close to single precision on the finest mesh. Then, p was kept the same on coarser meshes, simply following the criterion of "changing one thing at a time." The iterative tolerance is tighter for the grid-convergence analysis to ensure that the discretization error is what dominates. The tests of residual-over-p do not use the finest mesh, and thus the tolerance can be loosened. (In later versions of the manuscript, we have made an extra effort to choose these parameters.) The value of θ_MAC = 0.5 is standard in fast-multipole evaluations: it gives an acceptance criterion based on well-separated boxes.

In version 2 of the manuscript (arXiv:1506.05957v2), we matched the tolerance between the grid-convergence and the residual-over-p results, to avoid any misunderstanding. It does not affect the results in any substantial way.

In version 3 of the manuscript, we did two things. For the grid-convergence analysis, we show results with "tight" parameters for high accuracy (high p, high k, small tolerance) and with "loose" parameters (reducing p, etc., until accuracy begins to deteriorate, but by no more than 10%). For the other tests, we made an effort to find the optimal combination of parameters to match the discretization error at each grid coarseness. The whole process is documented in detail in additional supplementary materials (Jupyter notebooks).
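For reference, this is what the standard multipole acceptance criterion with θ_MAC = 0.5 amounts to (written in generic terms; the symbols here are ours, not the paper's):

```latex
% A cell of size (radius) a, whose center lies at distance r from the evaluation
% point, is accepted for evaluation via its multipole expansion when
\frac{a}{r} \;\le\; \theta_{\mathrm{MAC}} = \frac{1}{2}
\quad\Longleftrightarrow\quad
r \;\ge\; 2a ,
% i.e., only well-separated boxes are approximated; otherwise the tree is
% traversed further or the interaction is computed directly.
```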
Referee # 1, comment # 6:
The referee is right to point out that we don't state the norm of the relative error used for the grid-convergence study. In version 2 of the manuscript, we added this information: we used the L2-norm of the error with respect to the analytical solution. The referee is also right to point out that we should report the actual error, rather than only the residuals. Responding to comment # 11, we added the residual history in version 2 of the manuscript (it was, indeed, a mistake with the figure version used). But in version 3 of the manuscript, we removed the result showing residual history altogether, and instead report the relative error for each test case (in table form).
The tests show that only a few dozen iterations are needed to converge (without using a preconditioner), indicating that the system is not badly conditioned. Furthermore, we report the actual error, showing that the system is indeed converging to the right solution.
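To be explicit about the error measure, a minimal sketch in NumPy (the array names are placeholders for illustration, not variables from our code):

```python
import numpy as np

def relative_l2_error(phi_computed, phi_exact):
    """Relative L2-norm of the error with respect to the analytical solution."""
    phi_computed = np.asarray(phi_computed, dtype=float)
    phi_exact = np.asarray(phi_exact, dtype=float)
    return np.linalg.norm(phi_computed - phi_exact) / np.linalg.norm(phi_exact)

# Example: error of a computed surface distribution against the analytical one.
# err = relative_l2_error(phi_bem, phi_analytical)
```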
Referee # 1, comment # 7:
We were not simulating ill-conditioned problems, but simply exploring what the speed-up might be in cases that required more iterations. Nevertheless, we removed this result in version 2 of the manuscript. It made no major claims: it only showed a trend toward higher speed-ups.
Referee # 1, comment # 8:
It is related to the bound. As explained in the text below the FMM error bound, we take r / a ≥ 2, which comes from a geometrical consideration, and that gives the base-2 log in equation 2.19.
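The reasoning, spelled out using the generic truncation-error bound for a multipole expansion (the exact constants in the paper's equation 2.19 may differ):

```latex
% Truncating the expansion after p terms gives an error bound of the form
\bigl| \phi - \phi_p \bigr| \;\le\; C \left(\frac{a}{r}\right)^{p+1},
% and with the geometric restriction r / a \ge 2 this becomes
\left(\frac{a}{r}\right)^{p+1} \;\le\; \left(\frac{1}{2}\right)^{p+1},
% so reaching a target accuracy \varepsilon requires roughly
p + 1 \;\ge\; \log_2 \frac{1}{\varepsilon},
% which is where the base-2 logarithm comes from.
```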
Referee # 1, comment # 12:
The footnotes appeared on the following pages: this is just a LaTeX quirk that we had no control over (and it would have been fixed in copy-editing).
Decision: Reject (October 14, 2015)
Refers to v1 of arXiv:1506.05957.
SISC Associate Editor Comments
Referee # 1:
This paper by Layton and Barba is basically centred around the following ideas:
The BEM for PDEs leads to dense linear systems, which, however, contain a lot of distance-based degradation of the importance of the coupling between two unknowns. This naturally leads to the idea of using the FMM to accelerate matrix-vector products.
Results from the previous decade indicate that Krylov solvers can be inexact, or better put, that matrix-vector products can be computed with some error without severely compromising the result. The results also indicate that the error can increase (within certain bounds) over iterations.
The contribution of this paper is the combination of these (existing) ideas, and to evaluate the approach with extensive numerical tests.
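To make the combination of those two ideas concrete, here is a minimal sketch of a GMRES loop with a relaxed-accuracy matrix-vector product. It is not the authors' implementation: the helper matvec(q, p) stands in for any FMM-accelerated product A·q truncated at p expansion terms, and the rule that loosens p as the residual shrinks is only an illustrative choice.

```python
import numpy as np

def inexact_gmres(matvec, b, tol=1e-6, max_iter=50, p_max=16, p_min=4):
    """GMRES sketch in which each matrix-vector product is an FMM evaluation
    whose truncation order p is loosened as the residual norm drops."""
    n = b.size
    beta = np.linalg.norm(b)                   # r0 = b, since x0 = 0
    Q = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    Q[:, 0] = b / beta
    res = 1.0                                  # relative residual ||r|| / ||b||
    for j in range(max_iter):
        # Illustrative relaxation rule: drop expansion terms as the residual shrinks.
        p = max(p_min, p_max + int(np.floor(np.log10(res))))
        w = np.asarray(matvec(Q[:, j], p), dtype=float)   # inexact matvec
        for i in range(j + 1):                 # Arnoldi (modified Gram-Schmidt)
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            Q[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem: minimize ||beta e1 - H y|| over y.
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(e1 - H[:j + 2, :j + 1] @ y) / beta
        if res < tol or H[j + 1, j] <= 1e-14:
            break
    return Q[:, :j + 1] @ y                    # x = x0 + Q y, with x0 = 0
```

With an exact product (matvec = lambda q, p: A @ q) this reduces to plain GMRES; the inexactness only enters through how the FMM evaluation uses the relaxed order p.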
I think the idea is nice and deserves publication, despite not being earth-shatteringly novel. In its present form, however, I cannot recommend acceptance for one key reason: the paper is "too sloppy" for SISC, both in terms of language and, more importantly, of maths.
Now, the sloppy math. Assuming that most of this can be addressed without severe impact on the results, the speedup results are nice, by the way.
Some minor comments:
Referee # 3:
A BEM solution strategy with relaxed application of the multipole method is proposed for solving the large, dense linear system. The idea of relaxing the accuracy of matrix-vector products was first introduced by Bouras & Fraysse, and then completely analyzed and developed in [17,18].
Since then, its applicability in multipole, multilevel and other complex strategies has been explored with success. Therefore, it is not surprising that also for this application the approach works well.
The idea is interesting, and the extensive experimental analysis is also nice. However, it is not clear to me whether the content of this manuscript is sufficiently novel to warrant publication in SISC. The experimental discussion is not backed up by any theoretical discussion. Given the fact that the strategy has been used quite a bit, a deeper analysis would be welcome. Therefore, this manuscript might be more suitable for a more application-oriented journal, or for a proceedings.
From a more technical point of view, I do not see any need to include Algorithms 1 and 2 for the given readership.