From f126d3cfaba645b2aaf46aea522634a288bd5f4e Mon Sep 17 00:00:00 2001 From: Martin Thoma Date: Fri, 1 Jul 2022 06:56:12 +0200 Subject: [PATCH] PyPDF2 and PyMuPDF update --- README.md | 12 +- benchmark.py | 4 +- cache.json | 138 ++++----- read/results/pymupdf/1601.03642.txt | 8 + read/results/pymupdf/1602.06541.txt | 16 ++ read/results/pymupdf/1707.09725.txt | 134 +++++++++ read/results/pymupdf/2201.00021.txt | 10 + read/results/pymupdf/2201.00022.txt | 11 + read/results/pymupdf/2201.00029.txt | 12 + read/results/pymupdf/2201.00037.txt | 33 +++ read/results/pymupdf/2201.00069.txt | 15 + read/results/pymupdf/2201.00151.txt | 12 + read/results/pymupdf/2201.00178.txt | 16 ++ read/results/pymupdf/2201.00200.txt | 7 + read/results/pymupdf/2201.00201.txt | 9 + read/results/pymupdf/2201.00214.txt | 22 ++ read/results/pymupdf/GeoTopo-book.txt | 117 ++++++++ read/results/pypdf2/1601.03642.txt | Bin 29155 -> 29163 bytes read/results/pypdf2/1602.06541.txt | 47 ++-- read/results/pypdf2/1707.09725.txt | 387 +++++++++++++++++--------- read/results/pypdf2/2201.00021.txt | Bin 47163 -> 47173 bytes read/results/pypdf2/2201.00022.txt | Bin 44520 -> 44531 bytes read/results/pypdf2/2201.00029.txt | 29 +- read/results/pypdf2/2201.00037.txt | Bin 99958 -> 99991 bytes read/results/pypdf2/2201.00069.txt | Bin 53112 -> 53127 bytes read/results/pypdf2/2201.00151.txt | 35 ++- read/results/pypdf2/2201.00178.txt | Bin 45421 -> 45437 bytes read/results/pypdf2/2201.00200.txt | 20 +- read/results/pypdf2/2201.00201.txt | 26 +- read/results/pypdf2/2201.00214.txt | 65 +++-- read/results/pypdf2/GeoTopo-book.txt | 350 +++++++++++++++-------- 31 files changed, 1139 insertions(+), 396 deletions(-) diff --git a/README.md b/README.md index 8d481c7..8e0e65a 100644 --- a/README.md +++ b/README.md @@ -30,8 +30,8 @@ This benachmark is about reading pure PDF files - notscanned documents and not d | pdfminer.six | 2022-05-24 | MIT/X | 20220524 | | | pdfplumber | 2022-05-31 | MIT | 0.7.1 | | | pdftotext | - | 
GPL | 0.86.1 | build-essential libpoppler-cpp-dev pkg-config python3-dev | -| PyMuPDF | 2022-05-05 | GNU AFFERO GPL 3.0 / Commerical | 1.19.6 | MuPDF | -| PyPDF2 | 2022-06-14 | BSD 3-Clause | 2.2.0 | | +| PyMuPDF | 2022-06-27 | GNU AFFERO GPL 3.0 / Commercial | 1.20.0 | MuPDF | +| PyPDF2 | 2022-06-30 | BSD 3-Clause | 2.4.1 | | | Tika | 2020-03-21 | Apache v2 | 1.24 | Apache Tika | @@ -39,11 +39,11 @@ This benachmark is about reading pure PDF files - notscanned documents and not d | # | Library | Average | [ 1 ](https://arxiv.org/pdf/2201.00214.pdf) | [ 2 ](https://github.com/py-pdf/sample-files/raw/main/009-pdflatex-geotopo/GeoTopo.pdf) | [ 3 ](https://arxiv.org/pdf/2201.00151.pdf) | [ 4 ](https://arxiv.org/pdf/1707.09725.pdf) | [ 5 ](https://arxiv.org/pdf/2201.00021.pdf) | [ 6 ](https://arxiv.org/pdf/2201.00037.pdf) | [ 7 ](https://arxiv.org/pdf/2201.00069.pdf) | [ 8 ](https://arxiv.org/pdf/2201.00178.pdf) | [ 9 ](https://arxiv.org/pdf/2201.00201.pdf) | [ 10 ](https://arxiv.org/pdf/1602.06541.pdf) | [ 11 ](https://arxiv.org/pdf/2201.00200.pdf) | [ 12 ](https://arxiv.org/pdf/2201.00022.pdf) | [ 13 ](https://arxiv.org/pdf/2201.00029.pdf) | [ 14 ](https://arxiv.org/pdf/1601.03642.pdf) | | :- | :-------------------------------------------------------- | :------ | :---------------------------------------------- | :------------------------------------------------------------------------------------------ | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- |
:---------------------------------------------- | -| 1 | [PyMuPDF ](https://pypi.org/project/PyMuPDF/) | 0.1s | 0.4s | 0.2s | 0.2s | 0.2s | 0.1s | 0.1s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | +| 1 | [PyMuPDF ](https://pypi.org/project/PyMuPDF/) | 0.1s | 0.4s | 0.3s | 0.2s | 0.2s | 0.1s | 0.1s | 0.0s | 0.1s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | 0.0s | | 2 | [pypdfium2 ](https://pypi.org/project/pypdfium2/) | 0.2s | 0.5s | 0.6s | 0.2s | 0.5s | 0.1s | 0.2s | 0.1s | 0.1s | 0.0s | 0.1s | 0.0s | 0.1s | 0.0s | 0.0s | | 3 | [Tika ](https://pypi.org/project/tika/) | 0.2s | 1.0s | 0.5s | 0.4s | 0.4s | 0.1s | 0.2s | 0.1s | 0.1s | 0.1s | 0.1s | 0.1s | 0.1s | 0.1s | 0.0s | | 4 | [pdftotext ](https://poppler.freedesktop.org/) | 0.3s | 0.7s | 0.9s | 0.3s | 0.8s | 0.1s | 0.3s | 0.2s | 0.1s | 0.0s | 0.1s | 0.1s | 0.1s | 0.0s | 0.0s | -| 5 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 2.6s | 20.6s | 4.3s | 5.8s | 1.6s | 0.7s | 0.8s | 0.3s | 0.3s | 0.3s | 0.4s | 0.5s | 0.2s | 0.4s | 0.1s | +| 5 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 2.3s | 17.3s | 4.0s | 5.2s | 2.0s | 0.4s | 0.7s | 0.3s | 0.3s | 0.3s | 0.4s | 0.4s | 0.2s | 0.3s | 0.1s | | 6 | [pdfminer.six ](https://pypi.org/project/pdfminer.six/) | 7.1s | 41.7s | 20.8s | 10.9s | 8.4s | 1.7s | 3.5s | 1.3s | 2.1s | 1.5s | 2.0s | 1.6s | 1.6s | 1.2s | 0.7s | | 7 | [pdfplumber ](https://pypi.org/project/pdfplumber/) | 7.9s | 53.7s | 13.5s | 14.1s | 8.0s | 2.7s | 4.2s | 2.3s | 1.8s | 1.6s | 3.0s | 1.9s | 1.6s | 1.1s | 1.1s | | 8 | [Borb ](https://pypi.org/project/borb/) | 63.2s | 208.5s | 301.5s | 2.8s | 108.9s | 26.5s | 30.1s | 95.8s | 28.0s | 23.5s | 11.4s | 8.9s | 28.6s | 6.4s | 3.8s | @@ -53,7 +53,7 @@ This benachmark is about reading pure PDF files - notscanned documents and not d | # | Library | Average | [ 1 ](https://arxiv.org/pdf/2201.00214.pdf) | [ 2 ](https://github.com/py-pdf/sample-files/raw/main/009-pdflatex-geotopo/GeoTopo.pdf) | [ 3 ](https://arxiv.org/pdf/2201.00151.pdf) | [ 4 
](https://arxiv.org/pdf/1707.09725.pdf) | [ 5 ](https://arxiv.org/pdf/2201.00021.pdf) | [ 6 ](https://arxiv.org/pdf/2201.00037.pdf) | [ 7 ](https://arxiv.org/pdf/2201.00069.pdf) | [ 8 ](https://arxiv.org/pdf/2201.00178.pdf) | [ 9 ](https://arxiv.org/pdf/2201.00201.pdf) | [ 10 ](https://arxiv.org/pdf/1602.06541.pdf) | [ 11 ](https://arxiv.org/pdf/2201.00200.pdf) | [ 12 ](https://arxiv.org/pdf/2201.00022.pdf) | [ 13 ](https://arxiv.org/pdf/2201.00029.pdf) | [ 14 ](https://arxiv.org/pdf/1601.03642.pdf) | | :- | :-------------------------------------------------- | :------ | :---------------------------------------------- | :------------------------------------------------------------------------------------------ | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | :---------------------------------------------- | -| 1 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 7.3s | 20.6s | 4.3s | 5.8s | 1.6s | 0.7s | 0.8s | 0.3s | 0.3s | 0.3s | 0.4s | 0.5s | 0.2s | 0.4s | 0.1s | +| 1 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 5.9s | 17.3s | 4.0s | 5.2s | 2.0s | 0.4s | 0.7s | 0.3s | 0.3s | 0.3s | 0.4s | 0.4s | 0.2s | 0.3s | 0.1s | ## Text Extraction Quality @@ -62,7 +62,7 @@ This benachmark is about reading pure PDF files - notscanned documents and not d | 1 | [pypdfium2 ](https://pypi.org/project/pypdfium2/) | 98% | 99% | 97% | 95% | 97% | 98% | 96% | 99% | 96% | 99% | 99% | 98% | 98% | 99% | 99% | | 2 | [PyMuPDF ](https://pypi.org/project/PyMuPDF/) | 97% | 98% | 97% | 94% | 95% | 
98% | 96% | 99% | 95% | 99% | 98% | 98% | 98% | 98% | 99% | | 3 | [Tika ](https://pypi.org/project/tika/) | 97% | 99% | 99% | 94% | 99% | 98% | 97% | 94% | 99% | 99% | 93% | 98% | 94% | 98% | 96% | -| 4 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 96% | 97% | 86% | 93% | 94% | 97% | 94% | 96% | 93% | 98% | 98% | 97% | 97% | 98% | 99% | +| 4 | [PyPDF2 ](https://pypi.org/project/PyPDF2/) | 96% | 97% | 87% | 93% | 94% | 97% | 94% | 96% | 93% | 98% | 98% | 97% | 97% | 98% | 99% | | 5 | [pdftotext ](https://poppler.freedesktop.org/) | 93% | 96% | 93% | 91% | 92% | 92% | 96% | 96% | 94% | 97% | 83% | 94% | 97% | 97% | 79% | | 6 | [pdfminer.six ](https://pypi.org/project/pdfminer.six/) | 90% | 95% | 79% | 87% | 90% | 86% | 94% | 96% | 91% | 92% | 92% | 94% | 86% | 98% | 86% | | 7 | [pdfplumber ](https://pypi.org/project/pdfplumber/) | 74% | 93% | 84% | 61% | 94% | 61% | 93% | 61% | 86% | 57% | 59% | 67% | 59% | 97% | 67% | diff --git a/benchmark.py b/benchmark.py index 0990b10..05a5728 100644 --- a/benchmark.py +++ b/benchmark.py @@ -461,7 +461,7 @@ def get_text_extraction_score(doc: Document, library_name: str): version=PyPDF2.__version__, watermarking_function=pypdf2_watermarking, license="BSD 3-Clause", - last_release_date="2022-06-14", + last_release_date="2022-06-30", ), "pdfminer": Library( "pdfminer.six", @@ -490,7 +490,7 @@ def get_text_extraction_score(doc: Document, library_name: str): watermarking_function=None, dependencies="MuPDF", license="GNU AFFERO GPL 3.0 / Commerical", - last_release_date="2022-05-05", + last_release_date="2022-06-27", ), "pdftotext": Library( "pdftotext", diff --git a/cache.json b/cache.json index 62ccf27..59e5966 100644 --- a/cache.json +++ b/cache.json @@ -222,103 +222,103 @@ }, "pymupdf": { "1601.03642": { - "read": 0.021236896514892578 + "read": 0.023758411407470703 }, "1602.06541": { - "read": 0.04923844337463379 + "read": 0.048156023025512695 }, "1707.09725": { - "read": 0.18419218063354492 + "read": 0.19010353088378906 }, 
"2201.00021": { - "read": 0.0506289005279541 + "read": 0.052664756774902344 }, "2201.00022": { - "read": 0.03642082214355469 + "read": 0.03505825996398926 }, "2201.00029": { - "read": 0.023745298385620117 + "read": 0.02772665023803711 }, "2201.00037": { - "read": 0.10241866111755371 + "read": 0.09012722969055176 }, "2201.00069": { - "read": 0.04053354263305664 + "read": 0.04471254348754883 }, "2201.00151": { - "read": 0.21468496322631836 + "read": 0.17026329040527344 }, "2201.00178": { - "read": 0.043529510498046875 + "read": 0.06370210647583008 }, "2201.00200": { - "read": 0.03151369094848633 + "read": 0.02881169319152832 }, "2201.00201": { - "read": 0.03332996368408203 + "read": 0.031729936599731445 }, "2201.00214": { - "read": 0.41216444969177246 + "read": 0.3913586139678955 }, "GeoTopo-book": { - "read": 0.24680590629577637 + "read": 0.275968074798584 } }, "pypdf2": { "1601.03642": { - "read": 0.13181662559509277 + "read": 0.11479783058166504 }, "1602.06541": { - "read": 0.44907617568969727, - "watermark": 1.4312434196472168 + "read": 0.3940451145172119, + "watermark": 1.0545539855957031 }, "1707.09725": { - "read": 1.632399320602417, - "watermark": 6.472326278686523 + "read": 1.992173433303833, + "watermark": 5.0714006423950195 }, "2201.00021": { - "read": 0.744328498840332, - "watermark": 0.911888599395752 + "read": 0.43038320541381836, + "watermark": 0.9083621501922607 }, "2201.00022": { - "read": 0.22876715660095215, - "watermark": 0.8920032978057861 + "read": 0.20012378692626953, + "watermark": 0.7139835357666016 }, "2201.00029": { - "read": 0.3686819076538086, - "watermark": 0.11223363876342773 + "read": 0.33080124855041504, + "watermark": 0.08133554458618164 }, "2201.00037": { - "read": 0.8160533905029297, - "watermark": 2.120100975036621 + "read": 0.6840682029724121, + "watermark": 1.6109654903411865 }, "2201.00069": { - "read": 0.32007670402526855, - "watermark": 1.1236357688903809 + "read": 0.2880227565765381, + "watermark": 0.9331471920013428 }, 
"2201.00151": { - "read": 5.836317300796509, - "watermark": 16.197925567626953 + "read": 5.154994010925293, + "watermark": 13.142579793930054 }, "2201.00178": { - "read": 0.33651185035705566, - "watermark": 1.1913495063781738 + "read": 0.337890625, + "watermark": 0.8712546825408936 }, "2201.00200": { - "read": 0.48441600799560547, - "watermark": 0.53401780128479 + "read": 0.4142639636993408, + "watermark": 0.4322509765625 }, "2201.00201": { - "read": 0.2817673683166504, - "watermark": 0.7810215950012207 + "read": 0.25218749046325684, + "watermark": 0.6003615856170654 }, "2201.00214": { - "read": 20.626332998275757, - "watermark": 56.71852135658264 + "read": 17.280361890792847, + "watermark": 45.37559413909912 }, "GeoTopo-book": { - "read": 4.307093143463135, - "watermark": 13.88920783996582 + "read": 4.041578054428101, + "watermark": 11.496635913848877 } }, "tika": { @@ -448,36 +448,36 @@ "GeoTopo-book": 0.9338226113014887 }, "pymupdf": { - "1601.03642": 0.98920850946833, - "1602.06541": 0.9831085375326601, - "1707.09725": 0.9500410452707244, - "2201.00021": 0.9814699454128877, - "2201.00022": 0.9807169262536218, - "2201.00029": 0.9780213199185531, - "2201.00037": 0.9610911701363962, - "2201.00069": 0.9901584319162327, - "2201.00151": 0.9405941590124848, - "2201.00178": 0.9541360733822826, - "2201.00200": 0.9819674770568346, - "2201.00201": 0.9865230158884503, - "2201.00214": 0.9783667787959633, - "GeoTopo-book": 0.9654720602997376 + "1601.03642": 0.9891412666231401, + "1602.06541": 0.9830822024204595, + "1707.09725": 0.950309201655275, + "2201.00021": 0.9813663284804848, + "2201.00022": 0.9802966619965753, + "2201.00029": 0.9776700191570882, + "2201.00037": 0.9601950117423229, + "2201.00069": 0.9902783350964871, + "2201.00151": 0.9404806212467183, + "2201.00178": 0.9543100759072476, + "2201.00200": 0.9818752599310429, + "2201.00201": 0.9864366899689843, + "2201.00214": 0.9784680209521233, + "GeoTopo-book": 0.9658770842721947 }, "pypdf2": { - "1601.03642": 
0.988204501231209, - "1602.06541": 0.980342863635718, - "1707.09725": 0.9402626747634515, - "2201.00021": 0.9670789764515855, - "2201.00022": 0.97262612848869, - "2201.00029": 0.9768263473053892, - "2201.00037": 0.9390333156211463, - "2201.00069": 0.9634704048681239, - "2201.00151": 0.9336876943135092, - "2201.00178": 0.9294026050543948, - "2201.00200": 0.9749882273797511, - "2201.00201": 0.9825060668375452, - "2201.00214": 0.9725277368821147, - "GeoTopo-book": 0.8648303132107446 + "1601.03642": 0.9882405605964084, + "1602.06541": 0.9804445075623092, + "1707.09725": 0.940554454886422, + "2201.00021": 0.9670187219754766, + "2201.00022": 0.9726852781683069, + "2201.00029": 0.9767748114449898, + "2201.00037": 0.9389782426736301, + "2201.00069": 0.963615494442941, + "2201.00151": 0.9335953951986522, + "2201.00178": 0.9295827985946015, + "2201.00200": 0.9749233170101705, + "2201.00201": 0.9824198955328604, + "2201.00214": 0.9727259608446774, + "GeoTopo-book": 0.8653162506825754 }, "tika": { "1601.03642": 0.9558922725104059, diff --git a/read/results/pymupdf/1601.03642.txt b/read/results/pymupdf/1601.03642.txt index 70d8184..ab22ff4 100644 --- a/read/results/pymupdf/1601.03642.txt +++ b/read/results/pymupdf/1601.03642.txt @@ -115,6 +115,7 @@ optimization technique called gradient descent. The gradient descent algorithm takes a function which has to be derivable, starts at any point of the surface of this error function and arXiv:1601.03642v1 [cs.CV] 12 Jan 2016 + 2 makes a step in the direction which goes downwards. Hence it tries to find a minimum of this high-dimensional function. @@ -186,6 +187,7 @@ choice of features it is possible to separate the general style of an image in terms of local image appearance from the content of an image. They support their claim by applying the style of different artists to an arbitrary image of their choice. + 3 (a) Original Image (b) Style image @@ -266,6 +268,7 @@ With that training data, the models can generate similar texts. 
New works which look like Shakespeare plays, new Wikipedia articles, new Linux code and new papers about algebraic geometry can thus automatically be generated. At a first + 4 glance, they do look authentic. The syntax was mostly used correctly, the formatting looks as expected, the sentences are @@ -365,6 +368,7 @@ can be found at [Vit15]. C. Audio Synthesization Audio synthesization is generating new audio files. This can either be music or speech. With the techniques described before, + 5 neural networks can be trained to generate music note by note. However, it is desirable to allow multiple notes being played @@ -547,6 +551,7 @@ W.H.Freeman & Co Ltd, 1976. M. D. Zeiler and R. Fergus, “Visualizing and understanding con- volutional networks,” in Computer Vision–ECCV 2014. Springer, 2014, pp. 818–833. + 6 APPENDIX A AUTOMATICALLY GENERATED TEXTS @@ -592,6 +597,7 @@ was swear to advance to the resources for those Socialism’s rule, was starting to signing a major tripad of aid exile.]] C. Linux Code, 1 /* + 7 * Increment the size file of the new incorrect UI_FILTER group information * of the size generatively. @@ -657,6 +663,7 @@ Inc., 675 Mass Ave, Cambridge, MA 02139, USA. #include #include #include + 8 #include #include @@ -689,3 +696,4 @@ PUT_PARAM_RAID(2, sel) = get_state_state(); set_pid_sum((unsigned long)state, current_state_str(), (unsigned long)-1->lr_full; low; } + diff --git a/read/results/pymupdf/1602.06541.txt b/read/results/pymupdf/1602.06541.txt index 8c9d5ba..f95bfd4 100644 --- a/read/results/pymupdf/1602.06541.txt +++ b/read/results/pymupdf/1602.06541.txt @@ -95,6 +95,7 @@ class affiliation segmentation [LRAL08]. Similarly, recent publications in pixel-level object segmentation used layered models [YHRF12]. arXiv:1602.06541v2 [cs.CV] 11 May 2016 + 2 C. Input Data The available data which can be used for the @@ -197,6 +198,7 @@ than 30 % with a priori knowledge only possible. 
For example, a system might learn that a certain position of the image is most of the time “sky” while another position is most of the time “road”. + 3 P2 The manually labeled images could have a more coarse labeling. For example, a human classifier @@ -320,6 +322,7 @@ Once every year from 2005 to 2012 [EVGW+b]. 1pattern analysis, statistical modelling and computational learning, an EU network of excellence 2Visual Object Classes + 4 Beginning with 2007, a segmentation challenge was added [EVGW+a]. @@ -423,6 +426,7 @@ However, there are alternatives. Namely MRFs and Conditional Random Fields (CRFs) which take the information of the complete image and segment it in an holistic approach. + 5 V. TRADITIONAL APPROACHES Image segmentation algorithms which use traditional @@ -528,6 +532,7 @@ image to a low-resolution variant. Another way of doing dimensionality reduction is principal component analysis (PCA), which is applied by [COWR11]. The idea behind PCA is to find a hyperplane on which all + 6 feature vectors can be projected with a minimal loss of information. A detailed description of PCA is given @@ -631,6 +636,7 @@ segment white blood cells. As the authors describe, the segmentation by watershed transform has two flaws: Over-segmentation due to local minima and thick watersheds due to plateaus. + 7 C. Random Decision Forests Random Decision Forests were first proposed @@ -762,6 +768,7 @@ m � i=1 αiyi = 0 + 8 4) Not every dataset is linearly separable. This prob- lem is approached by transforming the feature @@ -903,6 +910,7 @@ if x1 = x2 According to [Mur12], the most common way of inference over the posterior MRF in computer vision problems is Maximum A Posteriori (MAP) estimation. + 9 Detailed introductions to MRFs are given by [BKR11], [Mur12]. MRFs are used by [ZBS01] and @@ -1015,6 +1023,7 @@ Since AlexNet was developed, a lot of different neural networks have been proposed. 
One interesting example is [PC13], where a recurrent CNN for semantic segmentation is presented. + 10 Another notable paper is [LSD14]. The algorithm presented there makes use of a classifying network such @@ -1094,6 +1103,7 @@ example, an image captioning system which was trained on photographs of professional photographers might not have photos from the point of view of a child. This is visualized in Figure 4(f). + 11 VIII. DISCUSSION Ohta et al. wrote [OKS78] 38 years ago. It is one @@ -1236,6 +1246,7 @@ segmentation via adaptive k-mean clustering and knowledge-based morphological operations with biomedical applications,” Image Processing, IEEE Transactions on, vol. 7, no. 12, pp. 1673–1683, Dec. + 12 1998. [Online]. Available: http://ieeexplore.ieee.org/ xpls/abs_all.jsp?arnumber=730379 @@ -1485,6 +1496,7 @@ sensing, vol. 23, no. 4, pp. 725–749, 2002. S. Hu, E. Hoffman, and J. Reinhardt, “Automatic lung segmentation for accurate quantitation of volumetric x-ray ct images,” Medical Imaging, IEEE + 13 Transactions on, vol. 20, no. 6, pp. 490–498, Jun. 2001. @@ -1792,6 +1804,7 @@ Learning, vol. 3, no. 4, pp. 319–342, 1989. [MSB12] G. Moser, S. B. Serpico, and J. A. Benediktsson, “Markov random field models for supervised land + 14 cover classification from very high resolution multispectral remote sensing images,” in Advances @@ -2087,6 +2100,7 @@ object instances and occlusion ordering,” in Computer Vision and + 15 Pattern Recognition @@ -2245,6 +2259,7 @@ PCA principal component analysis. 5 RBF radial basis function. 8 SIFT scale-invariant feature transform. 5 SVM Support Vector Machine. 4, 6–8 + 16 APPENDIX A TABLES @@ -2307,3 +2322,4 @@ Warwick-QU 3 [CSM09] Table I: An overview over publicly available image databases with a semantic segmentation ground trouth. 
+ diff --git a/read/results/pymupdf/1707.09725.txt b/read/results/pymupdf/1707.09725.txt index 9932856..44e5d03 100644 --- a/read/results/pymupdf/1707.09725.txt +++ b/read/results/pymupdf/1707.09725.txt @@ -19,17 +19,21 @@ Research Period: 03. May 2017 KIT – University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association www.kit.edu arXiv:1707.09725v1 [cs.CV] 31 Jul 2017 + + Analysis and Optimization of Convolutional Neural Network Architectures by Martin Thoma Master Thesis August 2017 + Master Thesis, FZI Department of Computer Science, 2017 Gutachter: Prof. Dr.–Ing. R. Dillmann, Prof. Dr.–Ing. J. M. Zöllner Abteilung Technisch Kognitive Assistenzsysteme FZI Research Center for Information Technology + Affirmation Ich versichere wahrheitsgemäß, die Arbeit selbstständig angefertigt, alle benutzten Hilfs- mittel vollständig und genau angegeben und alles kenntlich gemacht zu haben, was aus @@ -38,6 +42,8 @@ Karlsruhe, Martin Thoma August 2017 v + + Abstract Convolutional Neural Networks (CNNs) dominate various computer vision tasks since Alex Krizhevsky showed that they can be trained effectively and reduced the top-5 error @@ -55,6 +61,7 @@ has only one million learned parameters for an input size of 32 × 32 × 3 and 1 which beats the state of the art on the benchmark dataset Asirra, GTSRB, HASYv2 and STL-10 was developed. vii + Zusammenfassung Modelle welche auf Convolutional Neural Networks (CNNs) basieren sind in verschiedenen Aufgaben der Computer Vision dominant seit Alex Krizhevsky gezeigt hat dass diese @@ -74,6 +81,7 @@ experimentell bestätigt. Andere Beobachtungen, wie beispielsweise der positive gelernter Farbraumtransformationen konnten nicht bestätigt werden. Ein Modell welches weniger als eine Millionen Parameter nutzt und auf den Benchmark-Datensätzen Asirra, GTSRB, HASYv2 und STL-10 den Stand der Technik neu definiert wurde entwickelt. 
+ Acknowledgment I would like to thank Stephan Gocht and Marvin Teichmann for the many inspiring conversations we had about various topics, including machine learning. @@ -82,6 +90,7 @@ study without having to worry about anything besides my studies. Thank you! Finally, I want to thank Timothy Gebhard, Daniel Schütz and Yang Zhang for proof-reading my masters thesis and Stephan Gocht for giving me access to a GTX 1070. ix + This work can be cited the following way: @MastersThesis{Thoma:2017, Title @@ -107,6 +116,7 @@ Url } A DVD with a digital version of this master thesis and the source code as well as the used data is part of this work. + Contents 1 Introduction @@ -215,6 +225,7 @@ Genetic approaches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 xi + 3.5 Convolutional Neural Fabrics . . . . . . . . . . . . . . . . . . . . . . . . . . @@ -325,6 +336,7 @@ C Calculating Network Characteristics 87 C.1 Parameter Numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 + C.2 FLOPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 C.3 Memory Footprint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . @@ -359,6 +371,8 @@ H Bibliography I Glossary 119 + + 1. Introduction Computer vision is the academic field which aims to gain a high-level understanding of the low-level information given by raw pixels from digital images. @@ -389,6 +403,7 @@ CNNs as well as nine methods for analysis of CNNs is given in Chapter 2. 1Classification is also called identification if the classes are humans. Another name is object recognition, although the classes can be humans and animals as well. 1 + 1. Introduction Despite the fact that most researchers and developers do not use topology learning, a couple of algorithms have been proposed for this task. 
Five classes of topology learning algorithms @@ -404,6 +419,7 @@ simple as possible. The described techniques are relevant to all six described c vision problems due to the fact that Encoder-Decoder architectures are one component of state-of-the-art algorithms for all six of them. 2 + 2. Convolutional Neural Networks In the following, it is assumed that the reader knows what a multilayer perceptron (MLP) is and how they are designed for classification problems, what activation functions are and @@ -539,6 +555,7 @@ I′ ∈ R7×7 Figure 2.1.: Visualization of the application of a linear k × k × 1 image filter. For each pixel of the output image, k2 multiplications and k2 additions of the products have to be calculated. 3 + 2. Convolutional Neural Networks One important detail is how boundaries are treated. There are four common ways of boundary treatment: @@ -576,6 +593,7 @@ this is by learning image filters in so called convolutional layers. While MLPs vectorize the input, the input of a layer in a CNN are feature maps. A feature map is a matrix m ∈ Rw×h, but typically the width equals the height (w = h). For an RGB 4 + 2.2. CNN Layer Types input image, the number of feature maps is d = 3. Each color channel is a feature map. Since AlexNet [KSH12] almost halved the error in the ImageNet challenge, CNNs are @@ -608,6 +626,7 @@ like MLPs. In fact, every CNN has an equivalent MLP which computes the same func if only the flattened output is compared. 1also called activation maps or channels 5 + 2. Convolutional Neural Networks This is easier to see when the filtering operation is denoted formally: o(i)(x) = b + @@ -663,6 +682,7 @@ apply Figure 2.2.: Application of a single convolutional layer with n filters of size k × k × 3 with stride s = 1 to input data of size width × height with three channels. 6 + 2.2. 
CNN Layer Types A convolutional layer with n filters of size kw × kh and SAME padding after d(i−1) feature maps of size sx × sy has n · d(i−1) · (kw · kh) parameters if no bias is used. In contrast, a fully @@ -696,6 +716,7 @@ in Table 2.1, spatial pyramid pooling as introduced in [HZRS14] and generalizing functions as introduced in [LGT16]. 2But convolutional layers only have equivalent fully connected layers if the output feature map is 1 × 1 7 + 2. Convolutional Neural Networks Name Definition @@ -806,6 +827,7 @@ for the dimension i and the zero matrix � for all other dimensions i = 1, . . . , d(i−1). 8 + 2.2. CNN Layer Types 2.2.3. Dropout Dropout is a technique used to prevent overfitting and co-adaptations of neurons by setting @@ -843,6 +865,7 @@ those lower layers parameters are also adapted. This leads to the parameters in layers being worse. A very low learning rate has to be chosen to adjust for the fact that the input features might drastically change over time. 9 + 2. Convolutional Neural Networks One way to approach this problem is by normalizing mini-batches as described in [IS15]. A Batch Normalization layer with d-dimensional input x = (x(1), . . . , x(d)) is first normalized @@ -896,6 +919,7 @@ which includes ℓ2 normalization as described in [WWQ13]. Those two normalizati however, are superseded by Batch Normalization. 3also called inference time 10 + 2.3. CNN Blocks 2.3. CNN Blocks This section describes more complex building blocks than simple layers. CNN blocks act @@ -923,6 +947,7 @@ Figure 2.4.: ResNet module Image source: [HZRS15a] [HM16] provides some insights why deep residual networks are successful. 11 + 2. Convolutional Neural Networks 2.3.2. Aggregation Blocks Two common ways to add more parameters to neural networks are increasing their depth @@ -954,6 +979,7 @@ The hyperparameters of an aggregation block are: • The cardinality C ∈ N≥1. 
Note that a cardinality of C = 1 is equivalent in every aspect to using the group network without an aggregation block. 12 + 2.3. CNN Blocks 2.3.3. Dense Blocks Dense blocks are collections of convolutional layers which are introduced in [HLW16]. The @@ -985,6 +1011,7 @@ Dense block have five hyperparameters: • The number k of filters added per layer (called growth rate in the paper) It might be necessary use 1 × 1 convolutions to reduce the number of L · k feature maps. 13 + 2. Convolutional Neural Networks 2.4. Transition Layers Transition layers are used to overcome constraints imposed by resource limitations or @@ -999,6 +1026,7 @@ Global pooling is another type of transition layer. It applies pooling over the feature map size to shrink the input to a constant 1 × 1 feature map and hence allows one network to have different input sizes. 14 + 2.5. Analysis Techniques 2.5. Analysis Techniques CNNs have dozens of hyperparameters and ways to tune them. @@ -1032,6 +1060,7 @@ its quality decreases. On the other hand, this can show differences in the distribution of validation data which are not covered by the training set and thus indicate the need to collect more data. 15 + 2. Convolutional Neural Networks 2.5.2. Confusion Matrices A confusion matrix is a matrix (c)ij ∈ NK×K @@ -1085,6 +1114,7 @@ typical quality metrics. Other quality metrics can be found in [OHIL16]. In case that the number of training epochs are used as the examined hyperparameter, validation curves give an indicator if training longer improves the model’s performance. By 16 + 2.5. Analysis Techniques plotting the error on the training set as well as the error on a validation set, one can also estimate if overfitting might become a problem. See Figure 2.7 for an example. @@ -1159,6 +1189,7 @@ k is the output of the classification algorithm which depends on the weights. λ1, λ2 ∈ [0, ∞) weights the regularization and is typically smaller than 0.1. 17 + 2. 
Convolutional Neural Networks Figure 2.8.: Example for a validation curve (plotted loss function) with plateaus. The dark orange curve is smoothed, but the non-smoothed curve is also plotted in light orange. @@ -1181,6 +1212,7 @@ by zero or taking the logarithm of zero. In both cases, adding a small constant • If the loss-epoch validation curve has a plateau at the beginning, the weight initializa- tion might be bad. 18 + 2.5. Analysis Techniques Quality criteria There are several quality criteria for classification models. Most quality criteria are based @@ -1223,6 +1255,7 @@ As reducing the floating point accuracy allows to process more data on a given analysis under this aspect is also highly relevant in some scenarios. However, the following focuses on the quality of the classification result. 19 + 2. Convolutional Neural Networks 2.5.4. Learning Curves A learning curve is a plot where the horizontal axis displays the number of training samples @@ -1270,6 +1303,7 @@ The major drawback of this analysis technique is its computational intensity. In get one point on the training curve and one point on the testing curve, a complete training has to be executed. On the full data set, this can be several days on high-end computers. 20 + 2.5. Analysis Techniques 2.5.5. Input-feature based model explanations Understanding which clues the model took to come to its prediction is crucial to check if @@ -1304,6 +1338,7 @@ image I0 is assigned a score Sc(I0) for a class c of interest. CNNs are non-line but they can be approximated by the first order Taylor expansion Sc(I) ≈ wT I + b where w is the derivative of Sc at I0. 21 + 2. Convolutional Neural Networks 2.5.6. Argmax Method The argmax method has two variants: @@ -1335,6 +1370,7 @@ high or low values. More recent work like [NYC16] tries to make the reconstructions appearance look more natural. 22 + 2.5. Analysis Techniques 2.5.8. 
Filter comparison One question which might lead to some insight is how robust the features are which @@ -1377,6 +1413,7 @@ The order of the weight updates as well as possible implications highly depend o and the training algorithm. See Appendix B.5 for a short overview of training algorithms for neural networks. 23 + 2. Convolutional Neural Networks 2.6. Accuracy boosting techniques There are techniques which can almost always be applied to improve accuracy of CNN @@ -1396,6 +1433,7 @@ an ensemble, it increases the computational cost of inference. Pretraining the classifier on another dataset to obtain start from a good position or finetuning a model which was originally created for another task is also a common technique. 24 + 2.6. Accuracy boosting techniques Figure 2.10.: Occlusion sensitivity analysis by [ZF14]: The left column shows three example images, where a gray square occluded a part of the image. This gray squares center (x, y) was @@ -1407,10 +1445,12 @@ gives the class with the highest predicted probability. In the case of the Pomer it always predicts the correct class if the head is visible. However, if the head of the dog is occluded, it predicts other classes. 25 + 2. Convolutional Neural Networks Figure 2.11.: Filter visualization from [ZF14]: The filters themselves as well as the input feature maps which caused the highest activation are displayed. 26 + 3. Topology Learning The topology of a neural network is crucial for the number of parameters, the number of floating point operations (FLOPs), the required memory, as well as the features being @@ -1436,6 +1476,7 @@ defined by the problem. Create a minimal, fully connected network for those. nected to all inputs. They are not connected to other candidate nodes and not connected to the output nodes. 27 + 3. Topology Learning 4. 
Correlation Maximization: Train the weights of the candidates by maximizing S, the correlation between candidates output value V with the networks residual error: @@ -1472,6 +1513,7 @@ follows a normal distribution: wij ∼ N(µij, σ2 ij) 28 + 3.2. Pruning approaches Hence every connection has two learned parameters: µij and σ2 ij. @@ -1514,6 +1556,7 @@ One family of pruning criterions uses the Hessian matrix. For example, Optimal B Damage (OBD) as introduced in [LDS+89]. For every single parameter k, OBD calculates the effect on the objective function of deleting k. The authors call the effect of the deletion 29 + 3. Topology Learning of parameter k the saliency sk. The parameters with the lowest saliency are deleted, which means they are set to 0 and are not updated anymore. @@ -1553,6 +1596,7 @@ but the state of the art is 17.18 % [HLW16]. Reinforcement learning is a sub-field of machine learning, which focuses on the question how to choose actions that lead to high rewards. 30 + 3.5. Convolutional Neural Fabrics One can think of the search for good neural network topologies as a reinforcement learning problem. The agent is a recurrent neural network which can generate bitstrings. Those @@ -1576,8 +1620,10 @@ of filters as the layer before (iii) double the number of filters than the lay They always use ReLU as an activation function and they always use filters of size 3 × 3. They don’t use pooling at all. 31 + 3. Topology Learning 32 + 4. Hierarchical Classification Designing a classifier for a new dataset is hard for two main reasons: Many design choices are not clearly superior to others and evaluating one design choice takes much time. Especially @@ -1599,6 +1645,7 @@ In this example, the problem has 17 classes. The hierarchical approach introduce 7 clusters of classes and thus uses 8 classifiers. Such a hierarchy of classifiers needs clusters of classes. 33 + 4. Hierarchical Classification 4.1. 
Advantages of classifier hierarchies Having a classifier hierarchy has five advantages: @@ -1635,6 +1682,7 @@ clustering as in [XZY+14]. Those clusterings, however, are hard to interpret and them do not allow a human to improve the found clustering manually. The confusion matrix (c)ij ∈ Nk×k states how often class i was present and class j was 34 + 4.2. Clustering classes predicted. The more often this confusion happens, the more similar those two classes are to the classifier. Based on the confusion matrix, the classes can be clustered as explained in @@ -1681,6 +1729,7 @@ cluster decreases the score. Hence it is beneficial to implement block moving. One advantage of permutating the classes in order to minimize Equation (4.1) in comparison to spectral clustering as used in [XZY+14] is that the adjusted confusion matrix can be 35 + 4. Hierarchical Classification split into many much smaller matrices along the diagonal. In the case of many classes (e.g., 1000 classes of ImageNet or 369 classes of HASYv2) this permutation makes it possible to @@ -1706,6 +1755,7 @@ information about the similarity of classes. One possible solution to this probl the prediction of the class in contrast to using only the argmax in order to find a useful permutation. 36 + 5. Experimental Evaluation All experiments are implemented using Keras 2.0 [Cho15] with Tensorflow 1.0 [AAB+16] and cuDNN 5.1 [CWV+14] as the backend. The experiments were run on different machines @@ -1738,6 +1788,7 @@ SVHN (Street View House Numbers) exists in two formats. For the following experi the cropped digit format was used. It contains the 10 digits cropped from photos of Google Street View. The images are in color and of size 32 px × 32 px. The state of the art 37 + 5. Experimental Evaluation achieves an accuracy of 98.41 % [HLW16]. According to [NWC+11a], human performance is at 98.0 %. @@ -1773,6 +1824,7 @@ the dataset. 
If the input image is larger than 32 px × 32 px, for each power of Conv-Block(2) is added at the input. For MNIST, the images are bilinearly upsampled to 32 px × 32 px. 38 + 5.1. Baseline Model and Training setup # Type @@ -1961,6 +2013,7 @@ BN + Softmax Figure 5.1.: Architecture of the baseline model. C 32@3× 3/1 is a convolutional layer with 32 filters of kernel size 3 × 3 with stride 1. 39 + 5. Experimental Evaluation 5.1.1. Baseline Evaluation The results for the baseline model evaluated on eight datasets are given in Table 5.2. The @@ -2106,6 +2159,7 @@ six Nvidia GPUs and one CPU. The weights for DenseNet-40-12 are taken from [Maj1 Weights the baseline model can be found at [Tho17b]. The optimized Tensorflow build makes use of SSE4.X, AVX, AVX2 and FMA instructions. 40 + 5.1. Baseline Model and Training setup 5.1.2. Weight distribution The distribution of filter weights by layer is visualized in Figure 5.2 and the distribution @@ -2137,6 +2191,7 @@ Finally, the distribution of filter weight ranges is plotted in Figure 5.6 for layer. The ranges are calculated for each channel and filter separately. The smaller the values are, the less information is lost if the filters are replaced by smaller filters. 41 + 5. Experimental Evaluation Figure 5.2.: Violin plots of the distribution of filter weights of a baseline model trained on CIFAR- 100. The weights of the first layer are relatively evenly spread in the interval [−0.4, +0.4]. @@ -2151,17 +2206,20 @@ While the first layers biases are in [−0.1, +0.1], after each max-pooling lay which contains 95 % of the weights and is centered around the mean becomes smaller. In the last three convolutional layer, most bias weights are in [−0.005, +0.005]. 42 + 5.1. Baseline Model and Training setup Figure 5.4.: Violin plots of the distribution of the γ parameter of Batch Normalization layers of a baseline model trained on CIFAR-100. 
Figure 5.5.: The distribution of the β parameter of Batch Normalization layers of a baseline model trained on CIFAR-100. 43 + 5. Experimental Evaluation Figure 5.6.: The distribution of the range of values (max - min) of filters by channel and layer. For each filter, the range of values is recorded by channel. The smaller this range is, the less information is lost if a n × n filter is replaced by a (n − 1) × (n − 1) filter. 44 + 5.1. Baseline Model and Training setup 5.1.3. Training behavior Due to early stopping, the number of epochs which a model was trained differ. The number @@ -2226,6 +2284,7 @@ expected that the absolute value of weight updates during epochs (sum, max, and decrease in later training stages. The intuition was that weights need to be adjusted in a coarse way first. After that, the intuition was that only slight modifications are applied by 45 + 5. Experimental Evaluation the SGD based training algorithm (ADAM). The mean, max and sum of weight updates as displayed in Figures 5.8 to 5.10, however, do not show such a clear pattern. The biggest @@ -2237,10 +2296,12 @@ their weight updates are larger. This pattern does not occur when SGD is used as optimizer. Figure 5.8.: Mean weight updates of the baseline model between epochs by layer. 46 + 5.1. Baseline Model and Training setup Figure 5.9.: Maximum weight updates of the baseline model between epochs by layer. Figure 5.10.: Sum of weight updates of the baseline model between epochs by layer. 47 + 5. Experimental Evaluation 5.2. Confusion Matrix Ordering The visualization of the confusion matrix can give valuable information about which part @@ -2269,6 +2330,7 @@ maximum size of 50 × 50 are displayed, the ordered method can show only 8 matri because the off-diagonal matrices are almost 0. Without sorting, 64 matrices have to be displayed. 48 + 5.2. Confusion Matrix Ordering Figure 5.12.: The first image shows the confusion matrix for the test of GTSRB set after optimization to Equation (4.1). 
The diagonal elements are set to 0 in order to make other elements @@ -2277,9 +2339,11 @@ and the color of the signs. The second image shows the same, but with baseline model. Best viewed in electronic form. 49 + Figure 5.13.: The first 50 entries of the confusion matrix of the HASYv2 dataset. The diagonal elements are set to 0 in order to make other elements easier to see. The top image shows arbitrary class ordering, the bottom image shows the optimized ordering. + 5.3. Spectral Clustering vs CMO 5.3. Spectral Clustering vs CMO This section evaluates the clustering quality of CMO in comparison to the clustering quality @@ -2299,6 +2363,7 @@ The results for the HASYv2 dataset are qualitatively similar (see Table 5.5). It noted that the number of clusters was determined by using the semi-automatic method based on CMO as described in Section 4.2. 51 + 5. Experimental Evaluation Cluster Spectral clustering @@ -2470,6 +2535,7 @@ Total 25 Table 5.5.: Differences in spectral clustering and CMO. 52 + 5.4. Hierarchy of Classifiers 5.4. Hierarchy of Classifiers In a first step, a classifier is trained on the 100 classes of CIFAR-100. The fine-grained root @@ -2572,6 +2638,7 @@ gives the percentage that the root classifiers argmax prediction is within the cluster, but not necessarily the correct class. The columns class identified | cluster only consider data points where the root classifier correctly identified the cluster. 53 + 5. Experimental Evaluation 5.5. Increased width for faster learning More filters in one layer could simplify the optimization problem as each filter needs smaller @@ -2683,6 +2750,7 @@ m13 Table 5.8.: Training time in epochs and wall-clock time for the baseline and models m9, m11, m13 as well as their accuracies. 54 + 5.6. Weight updates 5.6. Weight updates Section 5.5 shows that wider networks learn faster. One hypothesis why this happens is @@ -2704,6 +2772,7 @@ and A.4). Figure 5.15.: Mean weight updates between epochs by layer. 
The model is the baseline model, but with layer 5 reduced to 3 filters. 55 + 5. Experimental Evaluation 5.7. Multiple narrow layers vs One wide layer On a given feature map size one can have an arbitrary number of convolutional layers with @@ -2747,6 +2816,7 @@ the training time per epoch was reduced. For the GTX 980, it was reduced from 22 the baseline model to 15 s of the model with one less convolutional layer, one less Batch Normalization and one less activation layer. The inference time was also reduced from 6 ms 56 + 5.8. Batch Normalization to 4 ms for 1 image and from 32 ms to 23 ms for 128 images. Due to the loss in accuracy of more then one percentage point of the mean model and the increased standard deviation of @@ -2785,6 +2855,7 @@ to have at least one convolutional layer at this feature map scale. In [CUH15], the authors write that Batch Normalization does not improve ELU networks. Hence the effect of removing Batch Normalization from the baseline is investigated in this 57 + 5. Experimental Evaluation experiment. As before, 10 models are trained on CIFAR-100. The training setup and the model mno-bn @@ -2818,6 +2889,7 @@ however, be possible to remove Batch Normalization for the experiments to iterat through different ideas if the relative performance changes behave the same with or without Batch Normalization. 58 + 5.9. Batch size 5.9. Batch size The mini-batch size m ∈ N≥1 influences @@ -2903,6 +2975,7 @@ Removing the biases did not have a noticeable effect on the filter weight rang weight distribution or the distribution of the remaining biases. Also, the γ and β parameters of the Batch Normalization layers did not noticeably change. 59 + 5. Experimental Evaluation 5.11. 
Learned Color Space Transformation In [MSM16] it is described that placing one convolutional layer with 10 filters of size 1 × 1 @@ -2935,6 +3008,7 @@ functions for CNNs is ReLU [KSH12], but others such as ELU [CUH15], parametrized rectified linear unit (PReLU) [HZRS15b], softplus [ZYL+15] and softsign [BDLB09] have been proposed. The baseline uses ELU. 60 + 5.13. Activation Functions Activation functions differ in the range of values and the derivative. The definitions and other comparisons of eleven activation functions are given in Table B.3. @@ -2973,6 +3047,7 @@ Similarly, ReLU was adjusted to have a negative output: ReLU−(x) = max(−1, x) = ReLU(x + 1) − 1 The results of ReLU− are much worse on the training set, but perform similar on the test 61 + 5. Experimental Evaluation set. The result indicates that the possibility of hard zero and thus a sparse representation is either not important or similar important as the possibility to produce negative outputs. @@ -3060,6 +3135,7 @@ No Table 5.10.: Properties of activation functions. 1The dying ReLU problem is similar to the vanishing gradient problem. 62 + 5.13. Activation Functions Function Single model @@ -3271,6 +3347,7 @@ activation functions on GTX 970 GPUs on CIFAR-100. It was expected that the identity is the fastest function. This result is likely an implementation specific problem of Keras 2.0.4 or Tensorflow 1.1.0. 63 + 5. Experimental Evaluation Function Single model @@ -3365,6 +3442,7 @@ the non-smoothed validation set. • Higher accuracy: Using smoothed labels for the optimization could lead to a higher accuracy of the base-classifier due to a smoothed error surface. It might be less likely 64 + 5.14. Label smoothing that the classifier gets into bad local minima. • Label noise: Depending on the way how the labels are obtained, it might not always @@ -3384,6 +3462,7 @@ smoothing has a positive effect on the training speed. Hinton et al. called this method distillation in [HVD15]. Hinton et al. 
used smooth and hard labels for training, this work only used smoothed labels. 65 + 5. Experimental Evaluation 5.15. Optimized Classifier In comparison to the baseline classifier, the following changes are applied to the optimized @@ -3551,6 +3630,7 @@ the feature map size to 1 × 1. If the input feature map is bigger than 32 × 32 power of two there are two Convolution + BN + ELU blocks and one Max pooling block added. This is the framed part in the table. 66 + 5.15. Optimized Classifier 32 × 32 Input @@ -3712,6 +3792,7 @@ evaluated on six Nvidia GPUs and one CPU. The weights for DenseNet-40-12 are tak from [Maj17]. Weights the baseline model can be found at [Tho17b]. The optimized Tensorflow build makes use of SSE4.X, AVX, AVX2 and FMA instructions. 67 + 5. Experimental Evaluation 5.16. Early Stopping vs More Data A separate validation set is necessary for two reasons: (1) Early stopping and (2) preventing @@ -3786,6 +3867,7 @@ pattern, the number of epochs increases with lower model regularization (see Tab 3Only 1 model is trained due to the long training time of 581 epochs and 12 hours for this model. 4Only 3 models are in this ensemble due to the long training time of more than 8 hours per model. 68 + 5.17. Regularization Dataset Early Stopping @@ -3873,8 +3955,10 @@ std Table 5.20.: Training time in epochs of models with early stopping on training loss by different choices of ℓ2 model regularization applied to the optimized model. 69 + 5. Experimental Evaluation 70 + 6. Conclusion and Outlook This master thesis gave an extensive overview over the design patterns of CNNs in Chapter 2, the methods how CNNs can be analyzed and the principle directions of topology learning @@ -3906,6 +3990,7 @@ one percentage point. • Wider networks learn in fewer epochs. This, however, does not mean that the 71 + 6. Conclusion and Outlook wall-clock time is lower due to increased computation in forward- and backward passes. @@ -3943,6 +4028,7 @@ biggest effect? 
Can this question be answered before a deeper network is traine • Is label smoothing helpful for noisy labels? 1The baseline is better than the optimized model on Asirra and on HASYv2. 72 + • How does the choice of activation functions influence residual architectures? Could the results be the same for different activation functions in architectures with hundreds of layers? @@ -3957,7 +4043,9 @@ convergence, total training time, memory consumption, accuracy of the models and deviation of the models was not evaluated. This, and the stopping criterion for training might be crucial for the models quality. 73 + 74 + A. Figures, Tables and Algorithms (a) Original image (b) Smoothing filter @@ -3997,6 +4085,7 @@ bias Table A.1.: 99-percentile intervals for filter weights and bias weights by layer of a baseline model trained on CIFAR-100. 75 + Figure A.2.: The distribution of bias weights of a model without batch normalization trained on CIFAR-100. Algorithm 1 Simulated Annealing for minimizing Equation (4.1). @@ -4032,6 +4121,7 @@ i ← randomInteger(1, . . . , n − (e − s)) Move Block (s, . . . , e) to position i return bestM 76 + Figure A.3.: Maximum weight updates between epochs by layer. The model is the baseline model, but with layer 5 reduced to 3 filters. Function @@ -4138,6 +4228,7 @@ ELU Table A.2.: Test accuracy of adjusted baseline models trained with different activation functions on HASYv2. For LReLU, α = 0.3 was chosen. 77 + Figure A.4.: Sum of weight updates between epochs by layer. The model is the baseline model, but with layer 5 reduced to 3 filters. Function @@ -4234,6 +4325,7 @@ ELU Table A.3.: Test accuracy of adjusted baseline models trained with different activation functions on STL-10. For LReLU, α = 0.3 was chosen. 78 + B. 
Hyperparameters Hyperparameters are parameters of models which are not optimized automatically (e.g., by gradient descent), but by methods like random search [BB12], grid search [LBOM98] or @@ -4260,6 +4352,7 @@ improves the network. – Linear discriminant analysis (LDA) • Zero Components Analysis (ZCA) whitening (used by [KH09]) 79 + B.2. Data augmentation Data augmentation techniques aim at making artificially more data from real data items by applying invariances. For computer vision, they include: @@ -4313,6 +4406,7 @@ Less common, but also reasonable are: • Lens distortion (used by [WYS+15]) 1Vertical flipping combined with 180◦ rotation is equivalent to horizontal flipping 80 + B.3. Initialization Weight initializations are usually chosen to be small and centered around zero. One way to characterize many initialization schemes is by @@ -4393,6 +4487,7 @@ However, regularization terms weighted with a constant λ ∈ (0, +∞) are some • Weight decay: ℓ2 (e.g., λ = 0.0005 as in [MSM16]) • Orthogonality regularization (|(W T · W − I)|, see [VTKP17]) 81 + B.5. Optimization Techniques Most relevant optimization techniques for CNNs are based on SGD, which updates the weights according to the rule @@ -4437,6 +4532,7 @@ until the learning rate is decreased by Decay Scheduling. • Adam and AdaMax [KB14] 82 + • Nadam [Doz15] Some of those are explained in [Rud16]. Other first-order gradient optimization methods are: @@ -4453,6 +4549,7 @@ However, there are alternatives which do not use gradient information: on [Tho14b] There are also approaches which learn the optimization algorithm [ADG+16, LM16]. 83 + B.6. Network Design CNNs have the following hyperparameters: • Depth: The number of layers @@ -4582,6 +4679,7 @@ Softmax is the standard activation function for the last layer of a classificat as it produces a probability distribution. See Figure B.1 for a plot of some of them. 2α is a hyperparameter in leaky ReLU, but a learnable parameter in the parametric ReLU function. 
84 + −2.0 −1.5 −1.0 @@ -4621,7 +4719,9 @@ Regularization techniques are: • Dense-Sparse-Dense training (see [HPN+16]) • Soft targets (see [HVD15]) 85 + 86 + C. Calculating Network Characteristics C.1. Parameter Numbers • A fully connected layer with n nodes, k inputs has n · (k + 1) parameters. The +1 is @@ -4654,6 +4754,7 @@ operations. The total number of FLOPs is (2·n·m·ki−1 −1)·(ki ·w ·h)+ki This is, of course, a naive way of calculating a convolution. There are other ways of calculating convolutions [LG16]. 87 + • A fully connected layer with n nodes after k feature maps of size w×h needs 2n(k·w·h) FLOPs. The total number of FLOPs is 2n · (k · w · h) + n · nϕ. • As Dropout is only calculated during training, the number of FLOPs was set to 0. @@ -4689,6 +4790,7 @@ At inference time, every two consecutive layers have to fit into memory. When t pass of layer A to layer B is calculated, the memory can be freed if no skip connections are used. 88 + D. Common Architectures In the following, some of the most important CNN architectures are explained. Understand- ing the development of these architectures helps understanding critical insights the machine @@ -4699,6 +4801,7 @@ Inception-v4 is also covered. The summation row gives the sum of all floats for the output size column. This allows conclusions about the maximum mini-batch size which can be in memory for training. 89 + D.1. LeNet-5 One of the first CNNs used was LeNet-5 [LBBH98]. LeNet-5 uses two times the common pattern of a single convolutional layer with tanh as a non-linear activation function followed @@ -4772,6 +4875,7 @@ After layer 7, the softmax function is applied. One can see that convolutional l need much fewer parameters, but an order of magnitude more FLOPs per parameter than fully connected layers. 90 + D.2. AlexNet The first CNN which achieved major improvements on the ImageNet dataset was AlexNet [KSH12]. Its architecture is shown in Figure D.2 and described in Table D.2. 
It has about 60·106 param- @@ -4887,6 +4991,7 @@ Contrast Normalization and max pooling. The calculated number of parameters was checked against the downloaded version. It also has 60 965 224 parameters. 91 + D.3. VGG-16 D Another widespread architecture is the VGG-16 (D) [SZ14]. VGG comes from the Visual Geometry Group in Oxford which developed this architecture. It has 16 layers which can @@ -4931,6 +5036,7 @@ Fully Connected 1000 Figure D.3.: Architecture of VGG-16 D. C 512@3 × 3/1 is a convolutional layer with 512 filters of kernel size 3 × 3 with stride 1. All convolutional layers use SAME padding. 92 + # Type Filters @ @@ -5101,6 +5207,7 @@ during training time, the number of FLOPs is 0. The dropout probability is 0.5. The calculated number of parameters was checked against the downloaded version. It also has 138 357 544 parameters. 93 + D.4. GoogleNet, Inception v2 and v3 The large number of parameters and operations is a problem when such models should get applied in practice to thousands of images. In order to reduce the computational cost while @@ -5119,6 +5226,7 @@ Inception v3 introduced Batch Normalization to the network [SVI+15]. Figure D.5.: Inception v2 module Image source: [SVI+15] 94 + D.5. Inception-v4 Inception-v4 as described in [SIV16] consists of four main building blocks: The stem, Inception A, Inception B and Inception C. To quote the authors: Inception-v4 is a deeper, @@ -5185,7 +5293,9 @@ Softmax 42 679 816 Table D.4.: Inception-v4 network. 95 + 96 + E. Datasets Well-known benchmark datasets for classification problems in computer vision are listed in Table E.1. The best results known to me are given in Table E.2. However, every semantic @@ -5295,6 +5405,7 @@ SVHN, have additional unlabeled data which is not given in this table. 2The dimensions are only calculated for the validation set. 
3Asirra is a CAPTCHA created by Microsoft and was used in the “Cats vs Dogs” competition on Kaggle 97 + Dataset Model type / name Result @@ -5378,6 +5489,7 @@ cI ← crop(x, (i, j), (i + w, j + h)) D.append((cI, cL)) return (DC) 98 + F. List of Tables 2.1 Pooling types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . @@ -5460,6 +5572,7 @@ D.3 VGG-16 D architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . D.4 Inception-v4 network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 99 + E.1 Image Benchmark datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 @@ -5467,6 +5580,7 @@ E.2 State of the Art results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 100 + G. List of Figures 2.1 Application of a single image filter (Convolution) . . . . . . . . . . . . . . . @@ -5559,6 +5673,7 @@ A.1 Image Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . A.2 Bias weight distribution without BN . . . . . . . . . . . . . . . . . . . . . . 76 101 + A.3 Maximum weight updates of baseline with bottleneck . . . . . . . . . . . . . 77 A.4 Sum of weight updates of baseline with bottleneck @@ -5579,6 +5694,7 @@ D.4 Inception module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D.5 Inception v2 module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 102 + H. Bibliography [AAB+16] M. Abadi, A. Agarwal et al., “Tensorflow: Large-scale machine learning on @@ -5637,6 +5753,7 @@ B. Baker, O. Gupta et al., “Designing neural network architectures using reinforcement learning,” arXiv preprint arXiv:1611.02167, Nov. 2016. [Online]. Available: https://arxiv.org/abs/1611.02167 103 + [BM93] U. Bodenhausen @@ -5707,6 +5824,7 @@ S. Chetlur, C. Woolley et al., “cuDNN: Efficient primitives for deep learning,” arXiv preprint arXiv:1410.0759, Oct. 2014. [Online]. Available: https://arxiv.org/abs/1410.0759 104 + [DBB+01] C. Dugas, Y. Bengio et al., @@ -5792,6 +5910,7 @@ Oct. 2007. 
[Online]. 105 + Available: https://www.microsoft.com/en-us/research/publication/asirra-a- captcha-that-exploits-interest-aligned-manual-image-categorization/ [EKS+96] @@ -5847,6 +5966,7 @@ Available: https://arxiv.org/abs/1311.2524 P. P. Greg Griffin, Alex Holub, “Caltech-256 object category dataset,” Apr. 2007. [Online]. Available: http://authors.library.caltech.edu/7694/ 106 + [GG16] Y. Gal and Z. Ghahramani, “Bayesian convolutional neural networks with Bernoulli approximate variational inference,” arXiv preprint arXiv:1506.02158, @@ -5922,6 +6042,7 @@ A. G. Howard, “Some improvements on deep convolutional neural network based image classification,” arXiv preprint arXiv:1312.5402, Dec. 2013. [Online]. Available: https://arxiv.org/abs/1312.5402 107 + [HPK11] J. Han, J. Pei, and M. Kamber, Data mining: concepts and techniques. Elsevier, 2011. @@ -5988,6 +6109,7 @@ https://arxiv.org/abs/1502.01852 [Ima12] “Imagenet large scale visual recognition challenge 2012 (ILSVRC2012),” 108 + 2012. [Online]. Available: http://www.image-net.org/challenges/LSVRC/ 2012/nonpub-downloads [IS15] @@ -6039,6 +6161,7 @@ and neural network approximation,” IEEE Transactions on Information Theory, vol. 48, no. 1, pp. 264–275, Jan. 2002. [Online]. Available: http://ieeexplore.ieee.org/abstract/document/971754/ 109 + [KSH12] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with @@ -6126,6 +6249,7 @@ http: [LG16] A. Lavin and S. Gray, “Fast algorithms for convolutional neural networks,” in 110 + Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Sep. 2016, pp. 4013–4021. [Online]. Available: https://arxiv.org/abs/1509.09308 [LGT16] @@ -6193,6 +6317,7 @@ D. Mishkin and J. Matas, “All you need is a good init,” arXiv 111 + preprint arXiv:1511.06422, Nov. @@ -6266,6 +6391,7 @@ weight-sharing,” Neural computation, vol. 4, no. 4, pp. 473–493, 1992. [NH02] R. T. Ng and J. 
Han, “CLARANS: A method for clustering objects for spatial 112 + data mining,” IEEE transactions on knowledge and data engineering, vol. 14, no. 5, pp. 1003–1016, 2002. [NWC+11a] @@ -6315,6 +6441,7 @@ M. T. Ribeiro, S. Singh, and C. Guestrin, “"why should i trust you?": Explaining the predictions of any classifier,” arXiv preprint arXiv:1602.04938, Feb. 2016. [Online]. Available: https://arxiv.org/abs/1602.04938 113 + [Rud16] S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747, Sep. 2016. [Online]. Available: @@ -6382,6 +6509,7 @@ IEEE, Sep. 2015, pp. [SM02] K. O. Stanley and R. Miikkulainen, “Evolving neural networks through 114 + augmenting topologies,” Evolutionary computation, vol. 10, no. 2, pp. 99–127, 2002. [Online]. Available: http://www.mitpressjournals.org/doi/abs/10.1162/ 106365602320169811 @@ -6443,6 +6571,7 @@ Dec. 2016. [Online]. Available: https: //www.tensorflow.org/tutorials/mnist/beginners/ 115 + [tf-16b] “tf.nn.dropout,” Dec. 2016. [Online]. Available: https://www.tensorflow.org/ api_docs/python/nn/activation_functions_#dropout @@ -6504,6 +6633,7 @@ R. J. Williams, “Simple statistical gradient-following algorithms for connec- tionist reinforcement learning,” Machine learning, vol. 8, no. 3-4, pp. 229–256, 1992. 116 + [WWQ13] X. Wang, L. Wang, and Y. Qiao, A Comparative Study of Encoding, Pooling and Normalization Methods for Action Recognition. @@ -6556,6 +6686,7 @@ Curran Associates, Inc., Oct. 2016, pp. 1082–1090. [Online]. Available: http://papers.nips.cc/paper/6340-doubly-convolutional- neural-networks.pdf 117 + [ZDGD14] N. Zhang, J. Donahue et al., “Part-based R-CNNs for fine-grained category detection,” in European Conference on Computer Vision (ECCV). @@ -6628,6 +6759,7 @@ H. Zheng, Z. Yang et al., “Improving deep neural networks using softplus units,” in International Joint Conference on Neural Networks (IJCNN), Jul. 2015, pp. 1–4. 118 + I. Glossary ANN artificial neural network. 
4 ASO Automatic Structure Optimization. 29 @@ -6650,9 +6782,11 @@ NAG Nesterov Accellerated Momentum. 83 NEAT NeuroEvolution of Augmenting Topologies. 83 OBD Optimal Brain Damage. 29 119 + PCA principal component analysis. 79 PReLU parametrized rectified linear unit. 60, 61, 63, 64, 72, 77, 78, 84 ReLU rectified linear unit. 5, 13, 60, 61, 63, 64, 72, 77, 78, 84 SGD stochastic gradient descent. 5, 30, 45, 46, 82 ZCA Zero Components Analysis. 79 120 + diff --git a/read/results/pymupdf/2201.00021.txt b/read/results/pymupdf/2201.00021.txt index 08c9559..3cb4a10 100644 --- a/read/results/pymupdf/2201.00021.txt +++ b/read/results/pymupdf/2201.00021.txt @@ -73,6 +73,7 @@ been identified as masers, including the (5,3), (5,4), (6,1), (6,2), (9,4), (9,5), (9,7), (9,8), (10,7), (10,8), (10,9), and (11,9) transi- Article number, page 1 of 10 arXiv:2201.00021v3 [astro-ph.GA] 9 Apr 2022 + A&A proofs: manuscript no. mainArxiv tions (e.g., Mauersberger et al. 1987, 1988; Walsh et al. 2007; Henkel et al. 2013; Mei et al. 2020). Except for the NH3 (3,3) @@ -198,6 +199,7 @@ tional Science Foundation operated under cooperative agreement by As- sociated Universities, Inc. 4 https://casa.nrao.edu/ Article number, page 2 of 10 + Y. T. Yan (闫耀庭) et al.: Discovery of ammonia (9,6) masers in two high-mass star-forming regions P.A. = 58◦.79 and 1′′.33 × 1′′.06 at P.A. = 5◦.36 toward Cep A and G34.26+0.15, respectively. For the 1.36 cm (20–24 GHz) @@ -276,6 +278,7 @@ position of the NH3 (9,6) emission with an offset of (−0′′.28, in CASA, and can thus be considered, accounting for the uncer- tainties, as unresolved. Article number, page 3 of 10 + A&A proofs: manuscript no. mainArxiv Fig. 3. Cepheus A. White contours mark the 1.36 cm JVLA continuum map of Cep A; levels are −5, 5, 10, 20, 30, 40, 50, 70, 90, and 110 × 0.125 mJy beam−1. The background image is the Spitzer 4.5 µm emission, taken from the Galactic Legacy Infrared Mid-Plane @@ -305,6 +308,7 @@ found toward G34.26+0.15 (Fig. 4). 
The deconvolved NH3 (9,6) component sizes are (1′′.42±0′′.43)×(0′′.54±0′′.62) at P.A. = 97◦ (M1), (0′′.42 ± 0′′.27) × (0′′.15 ± 0′′.27) at P.A. = 150◦ (M2), and Article number, page 4 of 10 + Y. T. Yan (闫耀庭) et al.: Discovery of ammonia (9,6) masers in two high-mass star-forming regions (1′′.17 ± 0′′.34) × (0′′.27 ± 0′′.46) at P.A. = 53◦ (M3) and are thus comparable to or smaller than the beam size. @@ -436,6 +440,7 @@ with Effelsberg in 2020 January. The velocity is similar to that of the JVLA measurements on the NH3 (1,1) absorption line against continuum source C (∼ 7′′ resolution; Keto et al. 1987) Article number, page 5 of 10 + A&A proofs: manuscript no. mainArxiv and the NH3 (3,3) emission surrounding continuum source B as well as the head of C (1′′.4×1′′.2 resolution; Heaton et al. 1989). @@ -562,6 +567,7 @@ G34.26+0.15, three NH3 (9,6) maser spots are observed: one is close to the head of the cometary UC H ii region C, and the other two are emitted from a compact region to the west of the HC H ii Article number, page 6 of 10 + Y. T. Yan (闫耀庭) et al.: Discovery of ammonia (9,6) masers in two high-mass star-forming regions region A. We suggest that the (9,6) masers may be connected to outflowing gas. Higher angular resolution JVLA and VLBI ob- @@ -693,6 +699,7 @@ Zhang, Q. & Ho, P. T. P. 1995, ApJ, 450, L63 Zhang, Q., Hunter, T. R., Sridharan, T. K., & Cesaroni, R. 1999, ApJ, 527, L117 Zheng, X. W., Moran, J. M., & Reid, M. J. 2000, MNRAS, 317, 192 Article number, page 7 of 10 + A&A proofs: manuscript no. mainArxiv Appendix A: Table A.1. Summary of NH3 (9, 6) maser observations. @@ -936,6 +943,7 @@ C 178.0 5070 ± 660 Article number, page 8 of 10 + Y. T. Yan (闫耀庭) et al.: Discovery of ammonia (9,6) masers in two high-mass star-forming regions Table A.3. NH3 (9,6) maser positions derived from the JVLA observations. Source @@ -1007,6 +1015,7 @@ denoting the position of the NH3 (9,6) emission with a purple star at its center OH (Bartkiewicz et al. 
2005), H2O (Sobolev et al. 2018), and CH3OH (Sanna et al. 2017) masers are presented as diamonds, circles, and squares, respectively. The color bar on the right-hand side indicates the velocity range (VLSR) of maser spots. Article number, page 9 of 10 + A&A proofs: manuscript no. mainArxiv Fig. A.2. 1.36 cm JVLA continuum map of G34.26+0.15 presented as gray shaded areas. The reference position is αJ2000 = 18h53m18s.560, and δJ2000 = 01◦14′58′′.201, the peak position, is marked by a red cross. The red ellipses show the positions of NH3 (9,6) emission with stars at their @@ -1015,3 +1024,4 @@ Mookerjea et al. (2007). Contour levels are -3, 3, 10, 20, 30, 40, 50, 70, 90, 1 et al. 2011), and CH3OH (Bartkiewicz et al. 2016) masers are presented as diamonds, circles, and squares, respectively. The color bar indicates the velocity range (VLSR) of maser spots. Article number, page 10 of 10 + diff --git a/read/results/pymupdf/2201.00022.txt b/read/results/pymupdf/2201.00022.txt index f311599..d97554f 100644 --- a/read/results/pymupdf/2201.00022.txt +++ b/read/results/pymupdf/2201.00022.txt @@ -81,6 +81,7 @@ body interactions lead to the merger of stellar-mass BHs (e.g., O’Leary et al. 2006; G¨urkan et al. 2006; Blecha et al. 2006; Freitag et al. 2006; Umbreit et al. 2012; Ro- arXiv:2201.00022v1 [astro-ph.GA] 31 Dec 2021 + 2 Rose et al. driguez et al. 2018; Rodriguez et al. 2019; Fragione et al. @@ -194,6 +195,7 @@ the eccentricity of the BH’s orbit about the SMBH on the collision rate, while n and σ are simply evaluated at the semimajor axis of the orbit (see below). Note + IMBH Formation in Galactic Nuclei 3 Figure 1. We plot the relevant timescales, including col- @@ -288,6 +290,7 @@ a factor of a few for steep density profiles. We include a safe- guard in our code which takes the ratio tcoll/∆t and rounds it to the nearest integer. We take this integer to be the number of collisions and increase the BH mass accordingly. + 4 Rose et al. 2.4. 
Mass Growth @@ -393,6 +396,7 @@ star’s mass. Eq. 7 does not apply for other values of α. When the collision timescale is shorter, corresponding to a larger index α in the density profile (see Figure 1), the growth + IMBH Formation in Galactic Nuclei 5 is very efficient and ∆m quickly approaches 1 M⊙. Con- @@ -511,6 +515,7 @@ average than the surrounding objects, BHs are ex- pected to segregate inwards in the GN (e.g., Shapiro & Marchant 1978; Cohn & Kulsrud 1978; Morris 1993; Miralda-Escud´e & Gould 2000; Baumgardt et al. 2004). + 6 Rose et al. Figure 3. On the right, we plot final masses of 500 BHs using different values of α in the density profile, shallow (α = 1) to @@ -587,6 +592,7 @@ cesses allow BHs that begin further from the SMBH to migrate inwards and grow more efficiently in mass. However, it also impedes the growth of BHs that are initially closer to the SMBH by allowing them to dif- + IMBH Formation in Galactic Nuclei 7 Figure 4. Similar to Figure 3, we plot the initial masses versus initial distance (grey) and final mass versus final distance (red) @@ -668,6 +674,7 @@ a single compact object (e.g., Stephan et al. 2016, 2019; Hoang et al. 2018). Additionally, to be susceptible to evaporation, BH binaries must have a wider configura- tion. Otherwise, they will be more tightly bound that + 8 Rose et al. the average kinetic energy of the surrounding objects, @@ -765,6 +772,7 @@ Blecha, L., Ivanova, N., Kalogera, V., et al. 2006, ApJ, 642, 427, doi: 10.1086/500727 Bondi, H. 1952, MNRAS, 112, 195, doi: 10.1093/mnras/112.2.195 + IMBH Formation in Galactic Nuclei 9 Bondi, H., & Hoyle, F. 1944, MNRAS, 104, 273, @@ -867,6 +875,7 @@ doi: 10.3847/1538-4365/aacb24 —. 2018b, ApJS, 237, 13, doi: 10.3847/1538-4365/aacb24 Lu, C. X., & Naoz, S. 2019, MNRAS, 484, 1506, doi: 10.1093/mnras/stz036 + 10 Rose et al. Lu, J. R., Ghez, A. M., Hornstein, S. D., et al. 2009, ApJ, @@ -967,6 +976,7 @@ Society, 457, 3356, doi: 10.1093/mnras/stw225 Vink, J. S., Higgins, E. 
R., Sander, A. A. C., & Sabhahit, G. N. 2021, MNRAS, 504, 146, doi: 10.1093/mnras/stab842 + IMBH Formation in Galactic Nuclei 11 Wang, H., Stephan, A. P., Naoz, S., Hoang, B.-M., & @@ -981,3 +991,4 @@ Zheng, X., Lin, D. N. C., & Mao, S. 2020, arXiv e-prints, arXiv:2011.04653. https://arxiv.org/abs/2011.04653 Zhu, Z., Li, Z., & Morris, M. R. 2018, ApJS, 235, 26, doi: 10.3847/1538-4365/aab14f + diff --git a/read/results/pymupdf/2201.00029.txt b/read/results/pymupdf/2201.00029.txt index ebe6f0f..36811b2 100644 --- a/read/results/pymupdf/2201.00029.txt +++ b/read/results/pymupdf/2201.00029.txt @@ -11,6 +11,7 @@ Thomas Huckans, Peter Stine Department of Physics and Engineering, Bloomsburg University of Pennsylvania, 400 E 2nd St., Bloomsburg, PA 17815 + 2 Abstract @@ -59,6 +60,7 @@ the analysts of the vast amounts of data being received (Howell et al., 2014). A deactivated in 2018, the data used in this paper came from observations during 2010 and 2012 of white dwarf KIC 8626021 and was obtained from the Kepler Asteroseismic Science Operations Center (KASOC). + 3 The DBV white dwarf KIC 8626021 has an atmosphere rich in helium. Building upon @@ -109,6 +111,7 @@ and Q13. + 4 @@ -128,6 +131,7 @@ between points. Q7 had forty-three interpolated points, and Q13 had sixty-six. + 5 @@ -146,6 +150,7 @@ binning process. + 6 @@ -169,6 +174,7 @@ starspot (Santos et al., 2017). + 7 Q7 Significant @@ -266,6 +272,7 @@ are relative to our significant frequency cutoff of 3𝝈, thus negative numbers + 8 First Iteration (µHz) @@ -387,6 +394,7 @@ product of the method, and we calculated for such errors when finding our averag + 9 First Iteration (µHz) @@ -489,6 +497,7 @@ relates to the rotation of the white dwarf. Through the re-binning process, the SNR clearly improves for both quarters, and for Q7 it improves by approximately 1.3 dB, except for the last data re-bin. 
In the last re-bin, the previous + 10 significant frequency disappears, which becomes increasingly likely after successive re-binning @@ -508,6 +517,7 @@ spots are darker, cooler, and modulate stellar light curves, and with confirmati the harmonic frequencies can be used to calculate the spot’s rotation rate, size, latitude, and contrast (Santos et al., 2017). Using the process of re-binning, a starspot signal, previously dominated by noise, may have been discovered. + 11 Acknowledgments @@ -580,6 +590,7 @@ curve. Astronomy Astrophysics, 599, A1. https://doi.org/10.1051/0004-6361/201629923 + 12 Winget, D.e., & Kepler, S.o. (2008). Pulsating white dwarf stars and precision asteroseismology. @@ -593,3 +604,4 @@ Astrophyics, 157-199. https://doi.org/10.1146/annurev.astro.46.060407.145250 Wolfram Research, Inc., Mathematica, Version 12.3.1, Champaign, IL (2021). + diff --git a/read/results/pymupdf/2201.00037.txt b/read/results/pymupdf/2201.00037.txt index 286ebe2..20dab89 100644 --- a/read/results/pymupdf/2201.00037.txt +++ b/read/results/pymupdf/2201.00037.txt @@ -13,6 +13,7 @@ proaches that expected for a rigid planet. Corresponding author: Mathieu Dumberry, dumberry@ualberta.ca –1– arXiv:2201.00037v1 [astro-ph.EP] 31 Dec 2021 + Confidential manuscript submitted to JGR-Planets Abstract We present a model of the Cassini state of Mercury that comprises an inner core, a fluid core @@ -60,6 +61,7 @@ ronment GEochemistry and Ranging (MESSENGER) spacecraft. Within measurement erro all techniques yield an obliquity which is coplanar with the orbit and Laplace plane normals and consistent with a Cassini state. Furthermore, the observed obliquity angle (2.042 ± 0.08 –2– + Confidential manuscript submitted to JGR-Planets I descending @@ -123,6 +125,7 @@ radial contraction of ∼ 7 km since the late heavy bombardment [Byrne et al., 2 approximate limit of 800 km on the inner core radius [Grott et al., 2011]. 
However, the inner core could be larger if a significant fraction of its growth occurred earlier in Mercury’s history. –3– + Confidential manuscript submitted to JGR-Planets With a fluid core, and possibly a solid inner core, the observed obliquity εm reflects the orientation of the spin-symmetry axis of the precessing mantle and crust alone. Neglecting dis- @@ -172,6 +175,7 @@ obliquity of the mantle spin axis with respect to the gravity field could be us size of the inner core, even though this is difficult to do at present because the different esti- mates of the obliquity of the gravity field do not match well with one another. –4– + Confidential manuscript submitted to JGR-Planets There is thus a significant interest in properly assessing how the presence of a solid in- ner core at the centre of Mercury may affect its Cassini state equilibrium. Here, we present a @@ -241,6 +245,7 @@ mξm + ρcR5ξr where ¯A is the mean equatorial moment of inertia defined below. The same procedure was used in Peale et al. [2016] and the mathematical details are given in Equations (18-20) of Dumberry –5– + Confidential manuscript submitted to JGR-Planets Mercury Parameter Numerical value @@ -331,6 +336,7 @@ J2 , C22 . (4) –6– + Confidential manuscript submitted to JGR-Planets θm θn @@ -392,6 +398,7 @@ mantle (b), the Cassini plane is rotating at frequency ωΩo tion. The oblateness of all three regions and the amplitude of all angles are exaggerated for purpose of illustration. –7– + Confidential manuscript submitted to JGR-Planets 2.2 The rotational model Mercury’s rotation is characterized by a 3:2 spin-orbit resonance in which it completes @@ -462,6 +469,7 @@ or equivalently, by Equation (19e) of Stys and Dumberry [2018], ω sin(θp) + sin(θm + θp) = 0 . (7) –8– + Confidential manuscript submitted to JGR-Planets This expresses a formal connection between θp and θm which is independent of the interior struc- ture of Mercury. 
Using Equation (5) and cos(θm) → 1, this connection can be rewritten as @@ -515,6 +523,7 @@ The notation ˜m, ˜mf, ˜ms, ˜ns follows that introduced in the original model Note that all tilded amplitudes are complex: their imaginary part reflects the out-of-phase re- sponse to the applied torque as a result of dissipation, for instance from viscous or EM coupling –9– + Confidential manuscript submitted to JGR-Planets at the boundaries of the fluid core. In the absence of dissipation, all tilded variables are purely real. We concentrate our analysis in this work on the real part of the solutions, which corre- @@ -598,6 +607,7 @@ o ¯A (14) where –10– + Confidential manuscript submitted to JGR-Planets φm = 3 2 @@ -669,6 +679,7 @@ of equations for the five rotational variables ˜m, ˜mf, ˜ms, ˜ns and ˜εm. of Mercury, in the frequency domain, when subject to a periodic solar torque applied at fre- quency ω. The system can be written in a matrix form as –11– + Confidential manuscript submitted to JGR-Planets M · x = y , (22a) @@ -767,6 +778,7 @@ a larger span of the parameter space. One drawback, however, is that our model d ture time-dependent variations at any other frequencies, including the precession of the peri- center of Mercury’s orbit about the Sun. –12– + Confidential manuscript submitted to JGR-Planets 2.3 Analytical solutions and limiting cases 2.3.1 @@ -829,6 +841,7 @@ responds to the Eulerian wobble, or Chandler wobble, and represents the prograde of the rotation axis about the symmetry axis. The second mode is the free retrograde axial pre- cession of Mercury. 
As seen in the inertial frame, its frequency is given by –13– + Confidential manuscript submitted to JGR-Planets ωfp = nMR2 C @@ -917,6 +930,7 @@ Kicb − α1es λs = ¯σs − Ωp cos I , (33c) –14– + Confidential manuscript submitted to JGR-Planets and where we have introduced the frequencies ¯σf = Ωo @@ -980,6 +994,7 @@ rameters listed in Table 1, and an observed obliquity of εm = 2.04 arcmin [Marg this gives ˆC = C/MR2 = 0.3455 and all our interior models are consistent with this choice. Obviously, this reflects a Cassini state equilibrium in which the fluid core and inner core are –15– + Confidential manuscript submitted to JGR-Planets perfectly aligned with the mantle, which is not strictly correct. Hence, we make an error in es- timating ˆC from Equation (28), or conversely in predicting εm based on a given choice for ˆC. @@ -1037,6 +1052,7 @@ numerical values shown on Figure 3a but not their trends with rs. Figure 3b shows how the FCN and FICN periods vary with rs for each of the two inner core density scenarios and in the absence of viscous and EM coupling (i.e. Kcmb = Kicb = –16– + Confidential manuscript submitted to JGR-Planets 0 200 @@ -1133,6 +1149,7 @@ is retrograde). This is also the case for the Moon [e.g. Dumberry and Wieczorek, Dumberry, 2018], but it is different for Earth, where α1 > α3αg because of its faster rotation and the FICN mode is prograde [Mathews et al., 1991]. Note also that our approximate expres- –17– + Confidential manuscript submitted to JGR-Planets sion for the FICN differs by a factor ( ¯A+ ¯As)/( ¯A− ¯As) compared to that given in Dumberry and Wieczorek [2016] and Stys and Dumberry [2018] for the Moon. @@ -1185,6 +1202,7 @@ the mantle is much stronger than the inertial torque acting at the ICB. As a res core must remain in close alignment with the mantle. 
Presented differently, since the FICN pe- riod is more than 3000 times shorter than the forced precession period, the inner core can eas- –18– + Confidential manuscript submitted to JGR-Planets 2.038 2.040 @@ -1280,6 +1298,7 @@ leads to a slightly larger ˜εm compared to a rigid planet. Because the inner c itationally locked to the mantle, deviations from a rigid planet are dominantly caused by the misalignment of the fluid core. In Equation (41), ¯σs ≫ ¯σf, so to a good approximation –19– + Confidential manuscript submitted to JGR-Planets χ ≈ ¯Af @@ -1348,6 +1367,7 @@ rior is not well known but based on theoretical and experimental studies it is e of the order of 10−6 m2 s−1 [e.g. Gans, 1972; de Wijs et al., 1998; Alf`e et al., 2000; Rutter et al., 2002a,b]. –20– + Confidential manuscript submitted to JGR-Planets The above parameterizations are valid only under the assumption that the flow in the bound- ary layer remains laminar. Whether this is reasonable can be assessed by evaluating the Reynolds @@ -1410,6 +1430,7 @@ and ˜mf are qualitatively similar: viscous coupling at the CMB acts to reduce t fluid spin axis from the mantle symmetry axis. Considering the upper bound in turbulent vis- cosity that we have identified above (i.e ν ≈ 5 × 10−4 m2 s−1), the influence of viscous cou- –21– + Confidential manuscript submitted to JGR-Planets εm εg @@ -1490,6 +1511,7 @@ and ˜ns change with inner core size would certainly be different for a turbule coupling. But the general conclusion remains that the addition of viscous coupling at the CMB and ICB does not significantly modify the Cassini state equilibrium angle of the mantle. –22– + Confidential manuscript submitted to JGR-Planets 3.4 Electromagnetic coupling Let us now turn to electromagnetic (EM) coupling. To focus on its role in the equilibrium @@ -1586,6 +1608,7 @@ at the bottom of Mercury’s mantle, for instance by the upward sedimentation an of solid FeS crystals precipitating out of the fluid core [e.g. 
Hauck et al., 2013]. However, even in the extreme case of σm = σf = 106 S m−1, Kcmb ≈ (1.6 × 10−8) · (1 − i), which remains –23– + Confidential manuscript submitted to JGR-Planets smaller by a factor ∼ 60 than the smallest possible viscous coupling constant. Viscous forces dominate the tangential stress on the CMB of Mercury. @@ -1643,6 +1666,7 @@ of Mercury is unknown. The dynamo model of Christensen [2006] showed that the etry inside the core could be dominated by small length scales, yet only the weaker lower har- monics of the field would penetrate through a thermally stratified layer in the upper region of –24– + Confidential manuscript submitted to JGR-Planets the fluid core and reach the surface. If so, the field strength inside the core can exceed the sur- face field strength by a factor 1000. Taking a surface field strength equal to ∼ 300 nT [e.g An- @@ -1695,6 +1719,7 @@ tween ˜εm and ˜εg can be as large as 0.008 arcmin for a large inner core. Coupling models when viscous and EM stresses are both present have been presented in Mathews and Guo [2005] and Deleplace and Cardin [2006]. However, in the light of our results, –25– + Confidential manuscript submitted to JGR-Planets 2.032 2.034 @@ -1780,6 +1805,7 @@ tle, fluid core and inner core associated with the equilibrium Cassini state of model included the tangential viscous stress at the ICB and CMB, but not the EM stress. Their Table 1 gives the obliquities of the mantle, fluid core and inner core, denoted respectively as –26– + Confidential manuscript submitted to JGR-Planets 2.032 2.034 @@ -1880,6 +1906,7 @@ torque (and alter the period of the FICN), but only for a small inner core. The EM torque to the gravitational torque decreases with inner core size, so a large inner core should be more strongly aligned with the mantle. 
The more strongly the inner core and mantle are –27– + Confidential manuscript submitted to JGR-Planets gravitationally locked together, the more they behave as a single rigid body in response to the external torque from the Sun. We expect then that the obliquity of the mantle should be brought @@ -1931,6 +1958,7 @@ of Mercury reflect instead the orientation of the principal moment of the whole two orientations do not coincide when an inner core is present and is misaligned from the man- tle. Since gravitational coupling prevents a large inner core tilt with respect to the mantle, we –28– + Confidential manuscript submitted to JGR-Planets find that the misalignment ∆εg = εg − εm is limited. The maximum offset that we obtain is approximately ∆εg ≈ 0.007 arcmin. This limited magnitude of offset is important in the @@ -1980,6 +2008,7 @@ sis of a rigid planet. However, the smaller measurement errors expected from the Columbo satellite mission may permit this distinction, and thus provide further constraints on Mercury’s interior structure. –29– + Confidential manuscript submitted to JGR-Planets Acknowledgments Figures were created using the GMT software [Wessel et al., 2013]. The source codes, GMT @@ -2028,6 +2057,7 @@ de Koker, N., G. Seinle-Neumann, and V. Vlˇcek (2012), Electrical resistivity a conductivity of liquid Fe alloys at high P and T, and heat flux in Earth’s core, Proc. Nat. Acad. Sci., 109, 4070–4073. –30– + Confidential manuscript submitted to JGR-Planets de Wijs, G. A., G. Kresse, L. Voˇcadlo, D. Dobson, D. Alf´e, M. J. Gillan, and G. D. Price (1998), The viscosity of liquid iron at the physical conditions of the Earth’s core, Nature, @@ -2079,6 +2109,7 @@ tude libration of Mercury reveals a molten core, Science, 316, 710–714. Margot, J. L., S. J. Peale, S. C. Solomon, S. A. Hauck, F. D. Ghigo, R. F. Jurgens, M. Yseboodt, J. D. Giorgini, S. Padovan, and D. B. 
Campbell (2012), Mercury’s –31– + Confidential manuscript submitted to JGR-Planets moment of inertia from spin and gravity data, J. Geophys. Res., 117, E00L09, doi:10.1029/2012JE004161. @@ -2130,6 +2161,7 @@ study of liquid Fe-S (8.5 wt. per cent S), Geophys. Res. Lett., 29, 080,000–1. Rutter, M. D., R. A. Secco, H. Liu, T. Uchida, M. Rivers, S. Sutton, and Y. Wang (2002b), Viscosity of liquid Fe at high pressure, Phys. Rev. B, 66, 060,102, –32– + Confidential manuscript submitted to JGR-Planets doi:10.1029/2001GL014392. Schaefer, L., S. B. Jacobsen, J. L. Remo, M. I. Petaev, and D. D. Sasselov (2017), Metal- @@ -2170,3 +2202,4 @@ A, 303, 327–338. Yseboodt, M., and J. L. Margot (2006), Evolution of Mercury’s obliquity, Icarus, 181, 327–337. –33– + diff --git a/read/results/pymupdf/2201.00069.txt b/read/results/pymupdf/2201.00069.txt index 69d3743..56db061 100644 --- a/read/results/pymupdf/2201.00069.txt +++ b/read/results/pymupdf/2201.00069.txt @@ -37,6 +37,7 @@ S. Zouari,22 N. Żywucka,1 Accepted XXX. Received YYY; in original form ZZZ MNRAS 000, 1–15 (2021) arXiv:2201.00069v1 [astro-ph.HE] 31 Dec 2021 + MNRAS 000, 1–15 (2021) Preprint 4 January 2022 Compiled using MNRAS LATEX style file v3.0 @@ -135,6 +136,7 @@ angular scales of a few arcseconds, but resolved out to scales of Localisations of four one-off FRBs through imaging of 2 https://frbhosts.org/ © 2021 The Authors + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 3 buffered raw voltage data at 1.4 GHz (Bannister et al. 2019; @@ -255,6 +257,7 @@ more details). 4 https://www.meertrap.org/ 5 https://github.com/AstroAccelerateOrg/astro-accelerate MNRAS 000, 1–15 (2021) + 4 Chibueze et al. 
2.2 @@ -377,6 +380,7 @@ Considering the results of the astrometric comparison with NVSS (see Figure 1), we considered potential associations of contin- uum sources in the MeerKAT observations with the FRB loca- MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 5 tion to sources within 5′′. Using this spatial coincidence criterion, @@ -497,6 +501,7 @@ w3pimms.pl 10 https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh. pl MNRAS 000, 1–15 (2021) + 6 Chibueze et al. Figure 1. Astrometric comparison between MeerKAT and NVSS discrete compact sources.The open circles represent the difference in position between the @@ -541,6 +546,7 @@ rated from the position of FRB 20190714A by 0.′′53. The persistent source near FRB 20190714A has a flux broadly consistent with the MeerKAT flux and is unresolved on the e-MERLIN baselines. The MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 7 Figure 2. FRB 20190714A MeerKAT epoch I image (top) and a zoom-in (bottom) around the position of the FRB indicated by the cyan circle. White contours @@ -548,6 +554,7 @@ Figure 2. FRB 20190714A MeerKAT epoch I image (top) and a zoom-in (bottom) aroun white ellipse in the bottom left corner represents the beam size of MeerKAT. The cyan cross indicates the position of the detected compact emission in our e-MERLIN observations. MNRAS 000, 1–15 (2021) + 8 Chibueze et al. Figure 3. FRB 20190714A MeerKAT epoch II image (top) and a zoom-in (bottom) around the position of the FRB indicated by the cyan circle. White contours @@ -555,6 +562,7 @@ Figure 3. FRB 20190714A MeerKAT epoch II image (top) and a zoom-in (bottom) arou white ellipse in the bottom left corner represents the beam size of MeerKAT. The cyan cross indicates the position of the detected compact emission in our e-MERLIN observations. 
MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 9 Figure 4. UVOT summed image of FRB 20171019A region taken during the MWL observation campaign in September-October 2019. The white circles @@ -643,6 +651,7 @@ Our e-MERLIN observations probe a different spatial scale than the size of the persistent radio source associated with FRB 20121102A. At the angular diameter distance of MNRAS 000, 1–15 (2021) + 10 Chibueze et al. Figure 5. XRT summed image of FRB 20171019A region taken during the MWL observation campaign in September - October 2019. The position of the @@ -702,6 +711,7 @@ H.E.S.S. observations did not lead to a detection of a persistent or a transient source associated to FRB 20171019A. We found no X-ray counterparts and thus derived the upper limits to constrain these MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 11 Figure 6. Map of upper limits on the VHE gamma-ray energy flux derived from the H.E.S.S. observations. The limits are valid above 120 GeV and assume @@ -767,6 +777,7 @@ port of the Namibian authorities and of the University of Namibia in facilitating the construction and operation of H.E.S.S. is grate- fully acknowledged, as is the support by the German Ministry for MNRAS 000, 1–15 (2021) + 12 Chibueze et al. Figure 7. FRB 20171019A MeerKAT image and a zoom-in (insert) around the position of the FRB. The white ellipse on the bottom left corner of the insert @@ -828,6 +839,7 @@ Dimoudi S., Armour W., 2015, arXiv e-prints, p. arXiv:1511.07343 Dimoudi S., Adamek K., Thiagaraj P., Ransom S. M., Karastergiou A., Armour W., 2018, ApJS, 239, 28 MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 13 Figure 8. FRB 20190711A MeerKAT epoch I image and a zoom-in (insert) around the position of the FRB. 
The white ellipse on the bottom left corner of the @@ -897,6 +909,7 @@ ods in Physics Research A, 551, 493 Roming P. W. A., et al., 2005, Space Science Reviews, 120, 95–142 Tavani M., et al., 2021, Nature Astronomy, 5, 401–407 MNRAS 000, 1–15 (2021) + 14 Chibueze et al. Figure 9. FRB 20190711A MeerKAT epoch II image and a zoom-in (insert) around the position of the FRB. The white ellipse on the bottom left corner of @@ -960,6 +973,7 @@ versity, 351 95 Växjö, Sweden 24Laboratoire Univers et Théories, Observatoire de Paris, Univer- sité PSL, CNRS, Université de Paris, 92190 Meudon, France MNRAS 000, 1–15 (2021) + MeerKAT, e-MERLIN, Swift and H.E.S.S., observations of three localised FRBs 15 25Sorbonne Université, Université Paris Diderot, Sorbonne Paris @@ -1014,3 +1028,4 @@ gashinada, Kobe, Hyogo 658-8501, Japan 45RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan This paper has been typeset from a TEX/LATEX file prepared by the author. MNRAS 000, 1–15 (2021) + diff --git a/read/results/pymupdf/2201.00151.txt b/read/results/pymupdf/2201.00151.txt index 66494fa..86ff84e 100644 --- a/read/results/pymupdf/2201.00151.txt +++ b/read/results/pymupdf/2201.00151.txt @@ -83,6 +83,7 @@ this technique (Walker & Peñarrubia 2011; Amorisco & Evans 2012; Hayashi et al. 2018) in order to resolve the so-called cusp- core problem. It has been shown to be difficult, however, due Article number, page 1 of 12 + A&A proofs: manuscript no. Populations4 Table 1. Properties of the Illustris galaxy used to create mock data. Property @@ -217,6 +218,7 @@ larities, angular momenta, and axis ratios published by the Illus- tris team (Genel et al. 2015) containing subhalos with the stellar mass larger than 109 M⊙, only a few met our restrictive require- Article number, page 2 of 12 + K. Kowalczyk & E. L. 
Łokas: Multiple stellar populations in Schwarzschild modeling -80 -40 @@ -435,6 +437,7 @@ sities (multiplied by the mean mass of a stellar particle when needed), the application of the half-number radius is more self- consistent. Article number, page 3 of 12 + A&A proofs: manuscript no. Populations4 10-3 10-1 @@ -556,6 +559,7 @@ I(R) = I0 , (1) Article number, page 4 of 12 + K. Kowalczyk & E. L. Łokas: Multiple stellar populations in Schwarzschild modeling 10-3 10-2 @@ -707,6 +711,7 @@ a(log r − log r0)c + log(Υ0) r > r0 (5) Article number, page 5 of 12 + A&A proofs: manuscript no. Populations4 1 2 @@ -811,6 +816,7 @@ algebraic sum of values for both populations. As our parametrization of the mass-to-light ratio is not intu- itive we present its profiles explicitly in the first rows of the left- Article number, page 6 of 12 + K. Kowalczyk & E. L. Łokas: Multiple stellar populations in Schwarzschild modeling 106 107 @@ -1047,6 +1053,7 @@ and more biased for the remaining lines of sight. We notice a trend from under- to overestimation of the anisotropy when go- ing from the major to the minor axis. Article number, page 7 of 12 + A&A proofs: manuscript no. Populations4 -1 0 @@ -1193,6 +1200,7 @@ or blue for population I and II, respectively. As we have shown in our earlier work, the light profile of the Fornax dSph can be well reproduced with the three-parameter Article number, page 8 of 12 + K. Kowalczyk & E. L. Łokas: Multiple stellar populations in Schwarzschild modeling Table 2. Properties of the data samples for the Fornax dSph. Property @@ -1338,6 +1346,7 @@ the results from Kowalczyk et al. (2019) for comparison. As a result of freeing the steepness of the mass-to-light ratio profile (parameter c) with respect to the previous study Article number, page 9 of 12 + A&A proofs: manuscript no. Populations4 0 0.5 @@ -1462,6 +1471,7 @@ ders of magnitude larger than the estimated masses of classical dwarfs. 
Still, the galaxy possessed appropriate qualitative char- acteristics, such as the lack of gas and an almost spherical shape, Article number, page 10 of 12 + K. Kowalczyk & E. L. Łokas: Multiple stellar populations in Schwarzschild modeling 101 103 @@ -1618,6 +1628,7 @@ Fornax dSph galaxy. Due to the addition of another free param- eter in our functional form for the mass-to-light ratio, our re- sults for modeling all stars are slightly different from the ones Article number, page 11 of 12 + A&A proofs: manuscript no. Populations4 obtained in Kowalczyk et al. (2019). However, in terms of the total density and mass distribution the estimates obtained here @@ -1725,3 +1736,4 @@ Vogelsberger, M., Genel, S., Springel, V., et al. 2014b, MNRAS, 444, 1518 Walker, M. G., & Peñarrubia, J. 2011, ApJ, 742, 20 Wang, M. Y., de Boer, T., Pieres, A., et al. 2019, ApJ, 881, 118 Article number, page 12 of 12 + diff --git a/read/results/pymupdf/2201.00178.txt b/read/results/pymupdf/2201.00178.txt index f19265d..7b745f8 100644 --- a/read/results/pymupdf/2201.00178.txt +++ b/read/results/pymupdf/2201.00178.txt @@ -48,6 +48,7 @@ validated by Hanson et al. (2021) (hereafter H21) by examining the power-spectru comparing with previous time-distance studies (Langfellner et al. 2018). prasad.subramanian@tifr.res.in arXiv:2201.00178v1 [astro-ph.SR] 1 Jan 2022 + 2 Mani et al. Normal-mode coupling refers to the concept of expressing solar-oscillation eigenfunctions as a linear weighted combi- @@ -111,6 +112,7 @@ exploited to expedite inversions. Note that Pqj = P ∗ −qj for the flow field to be real in the spatio- temporal domain. 
To infer flows from wavefields φ scattered by a perturbation of length scale q, cross-correlate them in the manner + Imaging near-surface flows using mode-coupling analysis 3 φω∗ @@ -201,6 +203,7 @@ random process and the wavefields are uncorrelated across wavenumber and freque Every independent realization of a mode can be understood as the output of a damped harmonic oscillator driven by a random forcing function (see Duvall & Harvey 1986). Modes are thus generated with random phases and amplitudes and with finite lifetimes. This stochasticity leads to realization noise in repeated measurements of mode parameters + 4 Mani et al. Figure 1. Dispersion relation for the radial orders used in this analysis; f (blue), p1 (orange) and p2 (green). The shaded @@ -243,6 +246,7 @@ affecting the quality of the seismic measurements. Owing to these factors, to m inspecting the power-spectrum), the parameters describing the extent of coupling over different ranges of kR⊙ at fixed radial order are different. In wavenumber, we restrict our analysis to within 200 ≤ kR⊙ ≤ 2000 and qR⊙ ≤ 300. Our frequency range is confined to span the range over which acoustic modes are observed (2 ≤ ω/2π ≤ 5 in mHz). + Imaging near-surface flows using mode-coupling analysis 5 Coupling @@ -313,6 +317,7 @@ M × M) and pre-multiplying by K⊺, (11) U =(K⊺Λ−1K)−1K⊺Λ−1B. (12) + 6 Mani et al. Figure 2. Left: Averaging kernel for poloidal flow (see section B.2, eq B17, and left panel of Figure 8) for qR⊙ = [−112, −45], @@ -358,6 +363,7 @@ min), the velocities are given by vx = ∆x/∆t and vy = ∆y/∆t. This exerci images I1, I2 and for each consecutive pair of images in the cube. In practice, we use the Fourier LCT algorithm (FLCT, Fisher & Welsch 2008) for computing vx and vy. FLCT requires the input sigma, which we set to 4 pix, that captures the extent of localization desired, and depends on the + Imaging near-surface flows using mode-coupling analysis 7 Figure 3. 
Top: Inverted poloidal flow power-spectrum for the three couplings f-f, p1-p1, and p2-p2 as a function of qxR⊙ and @@ -393,6 +399,7 @@ We seek to show comparisons (see Figures 5, 6, and 7) for qR⊙ = 100, 150, 200 flows at these length scales, we apply a Gaussian filter (see Figure 4) to flows obtained from eqns 16 and 17. The Gaussian is centered at the desired wavenumber with a half-width of 25. We then perform a 2D Fourier transform to obtain a real-space steady-flow map. + 8 Mani et al. Figure 4. Left: Divergence-flow power spectrum |div|2, from eqn 16, obtained from inversion using all the couplings. The @@ -458,6 +465,7 @@ inferred flow moves further away from that of supergranules (Figure 7), the dem An adequate number of modes (and coupling strength between higher radial-orders) thus becomes a necessity to comment substantively on the flows at these scales. 6.1. Amplitudes of mode-coupling flows + Imaging near-surface flows using mode-coupling analysis 9 (a) qR⊙ = 100, f-f + p1-p1 + p2-p2 @@ -483,6 +491,7 @@ Thus, the amplitudes of the mode-coupling flows (and the correlation coefficien Here, we report in Table 2 only the maximum correlation found from among the points in the radial grid close to the surface (within ±0.5 Mm from z=0). For a desired comparison length scale qR⊙, we first fix the coupling(s) and the regularization parameter to be used in the inversion. We then separately compute filtered divergence and + 10 Mani et al. (a) qR⊙ = 100, f-f @@ -505,6 +514,7 @@ and LCT agree closely in amplitudes. But, to recapitulate, a host of factors des for divergence flows owing to the multi-step process involved in obtaining them. For example, there has been a history (see, e.g., De Rosa et al. 2000; Sekii et al. 2007; Zhao et al. 2007; Langfellner et al. 2018; B¨oning et al. 2020; Korda & ˇSvanda 2021) of using travel-time difference as only a proxy for horizontal divergence. However, Langfellner et al. 
+ Imaging near-surface flows using mode-coupling analysis 11 Coupling @@ -613,6 +623,7 @@ jq ∂z � . (A3) + 12 Mani et al. Express the mode eigenfunction describing oscillations in the Cartesian domain by (see Woodard 2006) @@ -752,6 +763,7 @@ the real domain. Setting σ = 0 gives us the linear, invertible equation eq 6. S the noise model obtained in H21 and summing over ±ω establishes the symmetry Gσ k,q = G−σ −k,−q. + Imaging near-surface flows using mode-coupling analysis 13 B. SOLA INVERSIONS @@ -819,6 +831,7 @@ k (B17) As an aside, we note that averaging kernels can similarly be constructed for RLS (see section 3.1) using eqns 13 and B14. + 14 Mani et al. Figure 8. Left: Kernel Kk,q(z) (eq B14) shown vs depth z for the three radial order couplings f-f, p1-p1, and p2-p2. qR⊙ = @@ -885,6 +898,7 @@ Christensen-Dalsgaard, J. 2002, Reviews of Modern Physics, 74, 1073, doi: 10.1103/RevModPhys.74.1073 —. 2021, Living Reviews in Solar Physics, 18, 2, doi: 10.1007/s41116-020-00028-3 + Imaging near-surface flows using mode-coupling analysis 15 Figure 9. Left: Poloidal flow power-spectrum for f-f as a function of qxR⊙ and qyR⊙. Right: Corresponding power-spectrum @@ -934,6 +948,7 @@ Hanson, C. S., Hanasoge, S., & Sreenivasan, K. R. 2021, ApJ, 910, 156, doi: 10.3847/1538-4357/abe770 Hathaway, D. H., Teil, T., Norton, A. A., & Kitiashvili, I. 2015, ApJ, 811, 105, doi: 10.1088/0004-637X/811/2/105 + 16 Mani et al. Hathaway, D. H., Upton, L., & Colegrove, O. 2013, Science, @@ -1009,3 +1024,4 @@ ApJ, 659, 848, doi: 10.1086/512009 Zhao, J., Nagashima, K., Bogart, R. S., Kosovichev, A. G., & Duvall, T. L., J. 2012, ApJL, 749, L5, doi: 10.1088/2041-8205/749/1/L5 + diff --git a/read/results/pymupdf/2201.00200.txt b/read/results/pymupdf/2201.00200.txt index 34fbc57..34504e3 100644 --- a/read/results/pymupdf/2201.00200.txt +++ b/read/results/pymupdf/2201.00200.txt @@ -85,6 +85,7 @@ To reinvigorate the debate, Buldgen et al. 
(2019b) recently highlighted once again how the transition of the temperature gra- 1 arXiv:2201.00200v1 [astro-ph.SR] 1 Jan 2022 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem dient just below the convective envelope can significantly impact the disagreement between solar models and helioseismic con- @@ -180,6 +181,7 @@ The figure shows the local heating in the overshooting layer and its impact on the sub-adiabaticity (∇ − ∇ad), with ∇ = d log T d log P the 2 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem temperature gradient and ∇ad = d log T d log P|S the adiabatic gradient. @@ -313,6 +315,7 @@ with g(r) = sin{[(r − rmid)/(rCB − rmid)]a × π/2}. (3) 3 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem For rov ≤ r < rmid, we use ∇ = ∇rad − h(r)∇ad, @@ -404,6 +407,7 @@ in the first of the trends outlined in Sect. 3.1. Unsurprisingly, such a modification of the temperature gradient is expected to improve the agreement with helioseismic constraints and help 4 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem remove the sound speed anomaly below the convective bound- ary (second trend in Sect. 3.1), as suggested by the results of @@ -495,6 +499,7 @@ shooting region due to convective penetration provides the quali- tative effects required to improve the speed of sound discrepancy below the convective boundary. This discrepancy is persistent in 5 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem Fig. 4. Difference of various structural quantities between a modified model and a reference model calculated with the @@ -584,6 +589,7 @@ grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. 
6 + Baraffe et al.: Local heating due to convective overshooting and the solar modelling problem References Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197 @@ -636,3 +642,4 @@ L14 Zhang, Q. S. & Li, Y. 2012, ApJ, 746, 50 Zhang, Q.-S., Li, Y., & Christensen-Dalsgaard, J. 2019, ApJ, 881, 103 7 + diff --git a/read/results/pymupdf/2201.00201.txt b/read/results/pymupdf/2201.00201.txt index 0ea978b..31d8868 100644 --- a/read/results/pymupdf/2201.00201.txt +++ b/read/results/pymupdf/2201.00201.txt @@ -86,6 +86,7 @@ estimates for these stars. A summary of the main results and prospects emerging from these Hipparcos-era studies is given by Article number, page 1 of 9 arXiv:2201.00201v2 [astro-ph.SR] 17 Jan 2022 + A&A proofs: manuscript no. trabucchi_etal_2022_period_age_relation_of_lpvs Feast (2007). More recently, the study of the Galaxy with LPVs has been stimulated by the wealth of data acquired by large-scale @@ -210,6 +211,7 @@ in the Ks filter is smaller than ∼ 0.1 mag for most of the clus- ters we considered, and at most as large as ∼ 0.3 mag, which is negligible for our purposes. Article number, page 2 of 9 + Trabucchi et al.: The period-age relation of LPVs A detailed membership verification is beyond the scope of this work, and we relied on the checks performed by authors @@ -334,6 +336,7 @@ It is the very fact that DFMP occurs only during the final portion 2 A further version of the PA plane highlighting both chemical types can be found in Fig. A.2 of appendix A.1. Article number, page 3 of 9 + A&A proofs: manuscript no. trabucchi_etal_2022_period_age_relation_of_lpvs Fig. 1. Period-age diagram. Panel (a) shows the predicted period-age distribution (darker tones indicate a higher expected number of LPVs on a linear scale, normalized to maximum). Symbols represent observed LPVs (green: SRVs; purple: Miras; white: unclassified) with the shape @@ -375,6 +378,7 @@ tribution. 
We note that, in contrast with the prescription we adopted, the onset of DFMP in reality is probably sensitive to metallic- Article number, page 4 of 9 + Trabucchi et al.: The period-age relation of LPVs ity. While the good degree of agreement with observations sug- gests that the dependence is weak at most, it is possible for @@ -502,6 +506,7 @@ cated to time-domain astronomy will be highly valuable to probe this possibility. A study of the impact of metallicity on nonlinear pulsation is highly desirable to pursue this line of investigation, Article number, page 5 of 9 + A&A proofs: manuscript no. trabucchi_etal_2022_period_age_relation_of_lpvs as would be a theoretical investigation of the dependence of pho- tometric amplitudes upon global stellar parameters. @@ -649,6 +654,7 @@ Wilson, R. E. & Merrill, P. W. 1942, ApJ, 95, 248 Wyatt, S. P. & Cahn, J. H. 1983, ApJ, 275, 225 Ya’Ari, A. & Tuchman, Y. 1996, ApJ, 456, 350 Article number, page 6 of 9 + Trabucchi et al.: The period-age relation of LPVs Fig. A.1. Absolute-Ks Gaia-2MASS diagram for the stars with or with- out a spectral type (left and right panels, respectively) in the selected @@ -746,6 +752,7 @@ modeling and best-fit, respectively, by means of the gaussian_kde tool from the stats module and the curve_fit function from the optimize module. Article number, page 7 of 9 + A&A proofs: manuscript no. trabucchi_etal_2022_period_age_relation_of_lpvs Fig. A.2. Similar to Fig. 1, except each source is color-coded according to whether it has been classified as O-rich (blue) or C-rich (red). Table B.1. Best-fit coefficients for the PA relation and its boundaries in @@ -854,6 +861,7 @@ This is equivalent to saying that the period grows more slowly after it exceeds a critical value Pb = P(Rb), marked by the gray dotted line in Fig. C.1. The isochrone reaches it at Article number, page 8 of 9 + Trabucchi et al.: The period-age relation of LPVs Fig. B.1. Similar to Fig. 2, but showing initial mass Mi in place of age. 
The best-fit lines to the most populated band and edges of the theoretical PFM – Mi relation are shown. @@ -897,3 +905,4 @@ young TP-AGB stars, and they give rise to the poorly populated portion of the PA relation at the longest periods, as seen in panel (a) of Fig. 2. Article number, page 9 of 9 + diff --git a/read/results/pymupdf/2201.00214.txt b/read/results/pymupdf/2201.00214.txt index a3b3f16..24bbe18 100644 --- a/read/results/pymupdf/2201.00214.txt +++ b/read/results/pymupdf/2201.00214.txt @@ -41,6 +41,7 @@ Detections of coronal waves have a historical preview and have been reported for Berghmans & Clette (1999); De Moortel et al. (2000), Verwichte et al. (2004), De Moortel & Brady (2007), Ballai et al. (2011)). Coronal seismology and MHD waves have been reviewed widely by 1 + De Moortel (2005), Nakariakov & Verwichte (2005), Aschwanden (2006), Banerjee et al. (2007) and De Moortel & Nakariakov (2012). Along with the development of the observations, transverse and longitudinal oscillations have also been studied theoretically (e.g., Gruszecki et al. (2006), @@ -89,6 +90,7 @@ ter that, Aschwanden & Boerner (2011) criticized the method of background subtra Schmelz et al. had applied. They claimed that the background subtraction method caused their inferred result of a multithermal loop. Aschwanden & Boerner (2011) analyzed a set of hundred loops and understood that 66% of the loops could be fitted with a narrowband single-Gaussian + DEM model. In this regard, some attention was paid to the instrumental limitations and abil- ity of AIA and Guennou et al. (2012a,b) discussed on the accuracy of the differential emission measure diagnostics of solar plasmas in respect of the AIA instrument of SDO. The abovemen- @@ -135,6 +137,7 @@ Finally, these loops are chosen: happening around the loops with the specification we are looking for. So this selected LOS X-flare, which occurs near the loops is of rare cases. 
We consider EUV images of NOAA AR 11283, in the time period of 22:10UT till 23:00UT of 2011 September 6 with the cadence + of 12 sec. This period of time is selected since no other flare is happening during it. A few distinct loops are visible and follow-able here during this period. Loop shapes in our active region change permanently; therefore, it is difficult or impossible to follow a loop @@ -187,6 +190,7 @@ empty area around each loop and the distance to the neighbor loop). The area around the loop is needed for calculations of background subtraction. The selected loop segment is cut in 1Based on data on these WebSites: https://solarflare.njit.edu/webapp.html, and https://www.swpc.noaa.gov/ + all wavelengths and at the same considered box from the images set. These loop images are necessary entrances for our thermal analysis process. Then the loop is divided into different strips and its best division in terms of pixel intervals is considered. To do thermal analysis, we @@ -243,6 +247,7 @@ the periods, we computed the probability values (p-values). In the Lomb-Scargle significance returned here is the false alarm probability of the null hypothesis, i.e., as the data is composed of independent Gaussian random variables. Accordingly, low probability values (p-value less than 0.05) indicate a high degree of significance in the associated periodic signal. + IV. Results i. @@ -291,6 +296,7 @@ of (Max(log T)-Min(log T)) is 0.81 through this loop segment. The loops C1 and C 12, and 6 strips, have the lengths of 22.08 and 11.06 (Mm), the mean temperatures of 6.25 ± 0.22, and 6.14 ± 0.25 (log), and the mean (Max(log T)-Min(log T)) of 1.48, 0.88, respectively. We observe that despite the temperature oscillations, the flaring loops show a temperature + rise at the end of the considered time interval (figure3). As their temperature maps also show, the oscillations follow with a relatively sensible rise in the final temperature of the loop segments (Figures 4). 
Although in the case of the transverse oscillations, the loops oscillate as the flare @@ -341,6 +347,7 @@ this flare is associated with could cause this increase in temperature. We can source of this CME is AR 11283 (Romano et al. (2015)). This CME is in our flare region, hence the loops receive energy even after the flare occurrence and it is probably the reason why the expected cooling does not occur. + The thermal oscillations periods obtained the Lomb-Scargle method, do not have the same significance in all strips of the loops, but for most strips of the flaring loops, the significances are very near to one. To be assured about these oscillations, we probed the intensity time-series for @@ -388,6 +395,7 @@ much increase in the temperatures of the strips, which was obvious in the loops region toward the end times, is not observed here. The temperatures are also totally lower in the nonf-loops in comparison with the flaring loops. Conversely, it seems that different strips of the non-flaring loops have relatively more similar temperature fluctuations. + As figure 8 shows, the peaks of the observed temperature periods for the loops’ strips of the flaring active region (blue ones), and non-flaring active region (red ones), are around 18 minutes, and 30 minutes, respectively. The temperature periods’ diversity is higher in the loops’ strips of @@ -436,6 +444,7 @@ the F-loopA, which has a distinct transverse oscillation in the flaring time wi roughly 10 to 28.5 minutes in different segments of this loop. And as the transverse oscillation decays in this interval, no special definite decay is observed in its temperature oscillations. The temperature periods of the flaring loops are rather shorter than the temperature periods + of the non-flaring loops. The loops of the flaring region show some short temperature oscillations periods in which some are less than 10 minutes (Table1). 
These kind of short periods are more frequently observed for the loops of the flaring active region and in the case of the non-flaring @@ -482,6 +491,7 @@ oscillatory manner. Compared with these non-flaring loops, the flaring loops s peratures on average and higher oscillation periods with higher peaks and deeper valleys. More accurate commentary in this respect requires more extensive statistical research and broader ob- servations. + arcsec arcsec 79 @@ -550,6 +560,7 @@ nonf−LoopC b Figure 2: (a) The NOAA AR12194 on 2014 October 26, at 08:00:00UT in 171 recorded by AIA/SDO. (b) Zoom-in view of the area, marked by a box in the left, the loops are distinguished in red. + 5.8 6 6.2 @@ -610,6 +621,7 @@ LogT Figure 3: From up to down: The time-series of the temperature oscillations for the first 3 strips of Loop A (strip 1 to 3 from top to down), and the first 2 strips of LoopB1. Horizontal axis is the time and the vertical axis is the logarithm of the temperature. The red lines mark the initial and final time of the flare x2.1. + 22:10 22:20 22:30 @@ -729,6 +741,7 @@ Loop Length(Mm) Figure 4: Temperature map of the flaring loops A, B1, B2, C1, and C2 (from top to down) as a time series. The vertical axis is the distance along the loop in Mm, and the horizontal axis is the time. The colorbar in the left shows the colors considered for the temperature range. + Table 1: The properties observed for the loop segments of the flaring AR. FLoopA (Strip Number) @@ -939,6 +952,7 @@ FLoopB1 11 18.07 1.6 + Table 2: The properties observed for the loop segments of the non flaring AR. Nonf-LoopA (Strip Number) @@ -1027,6 +1041,7 @@ Min(log(T)) 5 30 0.8 + 22:10 22:20 22:30 @@ -1083,6 +1098,7 @@ acknowledgements The author Narges Fathalian wishes to also express her thanks for the technical support and comments which has received from Dr.Farhad Daii and Dr.Mohsen Javaherian regarding to this work. 
+ @@ -1145,6 +1161,7 @@ time LogT Figure 6: from top to down: The time-series of the temperature for the first 2 strips (from top to down) of the non- flaring Loops A and B. Horizontal axis is the time and the vertical axis is the logarithm of the temperature. + 8:10 8:20 8:30 @@ -1214,6 +1231,7 @@ Loop Length(Mm) Figure 7: from top to down: Temperature map of the non-flaring loops A, B and C as a time-series. The vertical axis is the distance along the loop in Mm, and the horizontal axis is the time. The color-bar in the left shows the colors considered for the temperature range. + 6 7 8 @@ -1279,6 +1297,7 @@ max(log(T))−min(log(T)) Number Figure 9: Hisogram of the parameter of (max(log(T))-min(log(T))) for each strip of the loops of the flaring (blue bars) and non-flaring (red bars) ARs. + References Abedini, A., Safari, H., & Nasiri, S. 2012, Solar Physics, 280 Anfinogentov, S., Nakariakov, V. M., Mathioudakis, M., Van Doorsselaere, T., & Kowalski, A. F. @@ -1310,6 +1329,7 @@ Habbal, S. R., & Rosner, R. 1979, ApJ, 234, 1113 Hindman, B. W., & Jain, R. 2014, ApJ, 784, 103 Jain, R., Maurya, R. A., & Hindman, B. W. 2015, ApJ, 804, L19 Jess, D. B., Reznikova, V. E., Ryans, R. S. I., et al. 2016, Nature Physics, 12, 179 + Kolotkov, D. Y., Nakariakov, V. M., & Zavershinskii, D. I. 2019, A&A, 628, A133 Krishna Prasad, S., Jess, D. B., & Van Doorsselaere, T. 2019, Frontiers in Astronomy and Space Sciences, 6, 57 @@ -1341,6 +1361,7 @@ Schmelz, J. T., Pathak, S., Jenkins, B. S., & Worley, B. T. 2013, ApJ, 764, 53 Ugarte-Urra, I., & Warren, H. P. 2014, ApJ, 783, 12 Van Doorsselaere, T., Kupriyanova, E. G., & Yuan, D. 2016, Solar Physics, 291, 3143 Van Doorsselaere, T., Wardle, N., Del Zanna, G., et al. 2011, ApJ, 727, L32 + VanderPlas, J. T. 2018, ApJ, 236, 16 Verwichte, E., Nakariakov, V. M., Ofman, L., & Deluca, E. E. 2004, Solar Physics, 223, 77 Wang, T. 
2011, Space Science Reviews, 158, 397–419 @@ -1352,3 +1373,4 @@ Wang, T., Ofman, L., Sun, X., Provornikova, E., & Davila, J. M. 2015, ApJ, 811, Wang, T., Ofman, L., Yuan, D., et al. 2021, Space Science Reviews, 217 Warren, H. P., Winebarger, A. R., & Brooks, D. H. 2012, ApJ, 759, 141 Wills-Davey, M. J., & Thompson, B. J. 1999, Solar Physics, 190, 467 + diff --git a/read/results/pymupdf/GeoTopo-book.txt b/read/results/pymupdf/GeoTopo-book.txt index dfa483f..f462df8 100644 --- a/read/results/pymupdf/GeoTopo-book.txt +++ b/read/results/pymupdf/GeoTopo-book.txt @@ -2,6 +2,7 @@ Einführung in die Geometrie und Topologie 0. Auflage, 31. Dezember 2016 Martin Thoma + Vorwort Dieses Skript wurde im Wintersemester 2013/2014 von Martin Thoma geschrieben. Es beinhaltet die Mitschriften aus der Vorlesung von Prof. Dr. Herrlich sowie die Mitschriften einiger Übungen @@ -33,6 +34,7 @@ in „Analysis I“ vermittelt. Außerdem wird vorausgesetzt, dass (affine) Vektorräume, Faktorräume, lineare Unabhängigkeit, der Spektralsatz und der projektive Raum P(R) aus „Lineare Algebra I“ bekannt sind. In „Lineare Algebra II“ wird der Begriff der Orthonormalbasis eingeführt. + iii (a) S2 (b) Würfel @@ -44,6 +46,7 @@ x Abbildung 0.1: Beispiele für verschiedene Formen Obwohl es nicht vorausgesetzt wird, könnte es von Vorteil sein „Einführung in die Algebra und Zahlentheorie“ gehört zu haben. + Inhaltsverzeichnis 1 Topologische Grundbegriffe @@ -147,10 +150,12 @@ Ergänzende Definitionen und Sätze 107 Symbolverzeichnis 108 + 2 Inhaltsverzeichnis Stichwortverzeichnis 111 + 1 Topologische Grundbegriffe 1.1 Topologische Räume Definition 1 @@ -184,6 +189,7 @@ chem Mittelpunkt (vgl. Definition 1.ii). Beobachtungen: • U ∈ TZ ⇔ ∃f ∈ R[X], sodass R \ U = V (f) = { x ∈ R | f(x) = 0 } • Es gibt keine disjunkten offenen Mengen in TZ. + 4 1.1. TOPOLOGISCHE RÄUME 5) X := Rn, TZ = {U ⊆ Rn|Es gibt Polynome f1, . . . , fr ∈ R[X1, . . . 
, Xn] sodass @@ -230,6 +236,7 @@ B = { Br(x) | r ∈ Q>0, x ∈ Qn } ist eine abzählbare Basis von T. 3) Sei (X, T) ein topologischer Raum mit X = { 0, 1, 2 } und T = { ∅, { 0 } , { 0, 1 } , { 0, 2 } , X }. Dann ist S = { ∅, { 0, 1 } , { 0, 2 } } eine Subbasis von T, da gilt: + 5 1.1. TOPOLOGISCHE RÄUME • S ⊆ T @@ -267,6 +274,7 @@ Beispiel 4 (Produkttopologien) R2 überein. 2) X1 = X2 = R mit Zariski-Topologie. T Produkttopologie auf R2: U1 × U2 (Siehe Abbildung 1.2) + 6 1.1. TOPOLOGISCHE RÄUME U1 = R \ N @@ -306,6 +314,7 @@ x ∼ y ⇔ ∃λ ∈ R× mit y = λx ⇔ x und y liegen auf der gleichen Ursprungsgerade X = Pn(R) + 7 1.2. METRISCHE RÄUME Also für n = 1: @@ -355,6 +364,7 @@ falls x = y 1 falls x ̸= y die diskrete Metrik. Die Metrik d induziert die diskrete Topologie. + 8 1.2. METRISCHE RÄUME Beispiel 10 @@ -368,6 +378,7 @@ r (a) Br(0) (b) Euklidische Topologie Abbildung 1.3: Veranschaulichungen zur Metrik d aus Beispiel 10 + 9 1.2. METRISCHE RÄUME Beispiel 11 (SNCF-Metrik1) @@ -407,6 +418,7 @@ x ̸= y. Da (xn) gegen x und y konvergiert, existiert ein n0 mit xn ∈ Ux ∩ U ⇒ x = y ■ 1Diese Metrik wird auch „französische Eisenbahnmetrik“ genannt. + 10 1.3. STETIGKEIT (x1, y1) @@ -448,6 +460,7 @@ f ist stetig Beispiel 13 (Stetige Abbildungen und Homöomorphismen) 1) Für jeden topologischen Raum X gilt: idX : X → X ist Homöomorphismus. 2Es wird die Äquivalenz von Stetigkeit im Sinne der Analysis und Topologie auf metrischen Räumen gezeigt. + 11 1.3. STETIGKEIT 2) Ist (Y, TY ) trivialer topologischer Raum, d. h. TY = Ttriv, so ist jede Abbildung @@ -497,6 +510,7 @@ Bemerkung 12 Sei X ein topologischer Raum, ∼ eine Äquivalenzrelation auf X, X = X/∼ der Bahnenraum versehen mit der Quotiententopologie, π : X → X, x �→ [x]∼. Dann ist π stetig. + 12 1.4. ZUSAMMENHANG Beweis: Nach Definition ist U ⊆ X offen ⇔ π−1(U) ⊆ X offen. @@ -581,6 +595,7 @@ a) Ein Raum X heißt zusammenhängend, wenn es keine offenen, nichtleeren Teilm U1, U2 von X gibt mit U1 ∩ U2 = ∅ und U1 ∪ U2 = X. 
b) Eine Teilmenge Y ⊆ X heißt zusammenhängend, wenn Y als topologischer Raum mit der Teilraumtopologie zusammenhängend ist. + 13 1.4. ZUSAMMENHANG x @@ -614,6 +629,7 @@ Umgebung von z liegt ein Punkt von U1 ⇒ Widerspruch zu U2 offen. Bemerkung 14 Sei X ein topologischer Raum und A ⊆ X zusammenhängend. Dann ist auch A zusammen- hängend. + 14 1.4. ZUSAMMENHANG Beweis: durch Widerspruch @@ -677,6 +693,7 @@ a) Z(x) ist die größte zusammenhängende Teilmenge von X, die x enthält. b) Z(x) ist abgeschlossen. c) X ist disjunkte Vereinigung von Zusammenhangskomponenten. Beweis: + 15 1.5. KOMPAKTHEIT a) Sei Z(x) = A1 ˙∪ A2 mit Ai ̸= ∅ abgeschlossen. @@ -731,6 +748,7 @@ Das Einheitsintervall I := [0, 1] ist kompakt bezüglich der euklidischen Topolo Beweis: Sei (Ui)i∈J eine offene Überdeckung von I. Es genügt zu zeigen, dass es ein δ > 0 gibt, sodass jedes Teilintervall der Länge δ von I in einem der Ui enthalten ist. Wenn es ein solches δ gibt, kann man I in endlich viele Intervalle + 16 1.5. KOMPAKTHEIT der Länge δ unterteilen und alle Ui in die endliche Überdeckung aufnehmen, die Teilintervalle @@ -797,6 +815,7 @@ Beweis: Sei (Wi)i∈I eine offene Überdeckung von X × Y . Für jedes (x, y) offene Teilmengen Ux,y von X und Vx,y von Y sowie ein i ∈ I, sodass Ux,y × Vx,y ⊆ Wi. 3Dies gilt nicht für alle n ≥ n0, da ein Häufungspunkt nur eine konvergente Teilfolge impliziert. 4Sogar für unendlich viele. + 17 1.5. KOMPAKTHEIT Wi @@ -848,6 +867,7 @@ Sei V := n� i=1 Vxi + 18 1.6. WEGE UND KNOTEN ⇒ V ∩ @@ -899,6 +919,7 @@ ist. 1.6 Wege und Knoten Definition 17 Sei X ein topologischer Raum. + 19 1.6. WEGE UND KNOTEN a) Ein Weg in X ist eine stetige Abbildung γ : [0, 1] → X. @@ -951,6 +972,7 @@ dung 1.9 dargestellte Hilbert-Kurve. Definition 19 Sei X ein topologischer Raum. Eine Jordankurve in X ist ein Homöomorphismus γ : [0, 1] → C ⊆ X bzw. γ : S1 → C ⊆ X, wobei C := Bild γ. + 20 1.6. 
WEGE UND KNOTEN (a) Spirale S mit Kreis C @@ -986,6 +1008,7 @@ Beweis: ist technisch mühsam und wird hier nicht geführt. Er kann in „Algebr Eine Einführung“ von R. Stöcker und H. Zieschang auf S. 301f (ISBN 978-3519122265) nachgelesen werden. Idee: Ersetze Weg C durch Polygonzug. + 21 1.6. WEGE UND KNOTEN Definition 20 @@ -1018,6 +1041,7 @@ Ist (π|γ([0,1]))−1(x) = { y1, y2 }, so liegt y1 über y2, wenn gilt: Satz 1.3 (Satz von Reidemeister) Zwei endliche Knotendiagramme gehören genau dann zu äquivalenten Knoten, wenn sie durch endlich viele „Reidemeister-Züge“ ineinander überführt werden können. + 22 1.6. WEGE UND KNOTEN (a) Ω1 @@ -1031,6 +1055,7 @@ werden kann, dass an jeder Kreuzung eine oder 3 Farben auftreten und alle 3 Farb auftreten. Abbildung 1.13: Ein 3-gefärber Kleeblattknoten 5Siehe „Knot Theory and Its Applications“ von Kunio Murasugi. ISBN 978-0817638177. + 23 1.6. WEGE UND KNOTEN Übungsaufgaben @@ -1061,11 +1086,13 @@ Geben Sie, falls möglich, ein Beispiel für folgende Fälle an. Falls es nicht begründen Sie warum. 1) Ein Homomorphismus, der zugleich ein Homöomorphismus ist, 2) ein Homomorphismus, der kein Homöomorphismus ist, + 24 1.6. WEGE UND KNOTEN 3) ein Homöomorphismus, der kein Homomorphismus ist Aufgabe 6 (Begriffe) Definieren Sie die Begriffe „Isomorphismus“, „Isotopie“ und „Isometrie“. + 2 Mannigfaltigkeiten und Simplizialkomplexe 2.1 Topologische Mannigfaltigkeiten @@ -1099,6 +1126,7 @@ Ist n < m und Rm homöomorph zu Rn, so wäre f : Rn → Rm → Rn, (x1, . . . , xn) �→ (x1, x2, . . . , xn, 0, . . . , 0) eine stetige injektive Abbildung. Also müsste f(Rn) offen sein ⇒ Widerspruch + 26 2.1. TOPOLOGISCHE MANNIGFALTIGKEITEN Beispiel 20 (Mannigfaltigkeiten) @@ -1180,6 +1208,7 @@ Als kompakte Mannigfaltigkeit wird Sn auch „geschlossene Mannigfaltigkeit“ g Es gibt keine Umgebung von 0 in [0, 1], die homöomorph zu einem offenem Intervall ist. 1xi wird rausgenommen + 27 2.1. 
TOPOLOGISCHE MANNIGFALTIGKEITEN 6) V1 = @@ -1231,6 +1260,7 @@ Bemerkung 27 Sei n ∈ N, F : Rn → R stetig differenzierbar und X = V (F) := { x ∈ Rn | F(x) = 0 } das „vanishing set“. Dann gilt: + 28 2.1. TOPOLOGISCHE MANNIGFALTIGKEITEN Abbildung 2.1: Durch Verklebung zweier Tori entsteht ein Zweifachtorus. @@ -1303,6 +1333,7 @@ a = 2 Abbildung 2.2: Rechts ist die Neilsche Parabel für verschiedene Parameter a. Daher ist Bemerkung 27.b nicht anwendbar, aber V (F) ist trotzdem eine 1-dimensionale topologische Mannigfaltigkeit. + 29 2.1. TOPOLOGISCHE MANNIGFALTIGKEITEN Definition 26 @@ -1337,6 +1368,7 @@ Für i, j ∈ I mit Ui ∩ Uj ̸= ∅ heißt i ϕi(Ui ∩ Uj) → ϕj(Ui ∩ Uj) Kartenwechsel oder Übergangsfunktion. + 30 2.2. DIFFERENZIERBARE MANNIGFALTIGKEITEN Rn @@ -1378,6 +1410,7 @@ b) f heißt differenzierbar (von Klasse Ck), wenn f in jedem x ∈ X differenz c) f heißt Diffeomorphismus, wenn f differenzierbar von Klasse C∞ ist und es eine differenzierbare Abbildung g : Y → X von Klasse C∞ gibt mit g ◦ f = idX und f ◦ g = idY . + 31 2.2. DIFFERENZIERBARE MANNIGFALTIGKEITEN Bemerkung 29 @@ -1448,6 +1481,7 @@ cos2(v)(cos2(u) + sin2(u)) + sin2(v) cos2(v) + sin2(v) � =R2 + 32 2.2. DIFFERENZIERBARE MANNIGFALTIGKEITEN N @@ -1482,6 +1516,7 @@ y sin x cos x (c) Sinus und Kosinus haben keine gemeinsame Nullstelle + 33 2.2. DIFFERENZIERBARE MANNIGFALTIGKEITEN Die Jacobi-Matrix @@ -1574,6 +1609,7 @@ Fj auf W eine differenzierbar Inverse F −1 j hat. + 34 2.2. DIFFERENZIERBARE MANNIGFALTIGKEITEN Weiter gilt: @@ -1638,6 +1674,7 @@ Ist G eine Lie-Gruppe und g ∈ G, so ist die Abbildung lg : G → G h �→ g · h ein Diffeomorphismus. + 35 2.3. SIMPLIZIALKOMPLEX 2.3 Simplizialkomplex @@ -1696,6 +1733,7 @@ wenn gilt: b) |K| := � ∆∈K ∆ (mit Teilraumtopologie) heißt geometrische Realisierung von K. c) Ist d = max { k ∈ N0 | K enthält k-Simplex }, so heißt d die Dimension von K. + 36 2.3. 
SIMPLIZIALKOMPLEX (a) 1D Simplizialkomplex (b) 2D Simplizialkomplex @@ -1718,6 +1756,7 @@ b) f|∆ : ∆ → f(∆) ist eine affine Abbildung. Beispiel 26 (Simpliziale Abbildungen) 1) ϕ(e1) := b1, ϕ(e2) := b2 ϕ ist eine eindeutig bestimmte lineare Abbildung + 37 2.3. SIMPLIZIALKOMPLEX 0 @@ -1799,6 +1838,7 @@ Beispiel 27 χ(Würfel, unterteilt in Dreiecksflächen) = 8 − (12 + 6) + (6 · 2) = 2 Bemerkung 33 χ(∆n) = 1 für jedes n ∈ N0 + 38 2.3. SIMPLIZIALKOMPLEX Beweis: ∆n ist die konvexe Hülle von (e0, . . . , en) in Rn+1. Jede (k + 1)-elementige Teilmenge @@ -1852,6 +1892,7 @@ b) Ist n = a1(Γ) − a1(T), so ist χ(Γ) = 1 − n. Beweis: a) Siehe „Algorithmus von Kruskal“. 2T wird „Spannbaum“ genannt. + 39 2.3. SIMPLIZIALKOMPLEX b) χ(Γ) = a0(Γ) − a1(Γ) @@ -1901,6 +1942,7 @@ Beweis: 1) Die Aussage ist richtig für den Tetraeder. 2) O. B. d. A. sei 0 ∈ P und P ⊆ B1(0). Projeziere ∂P von 0 aus auf ∂B1(0) = S2. Erhalte Triangulierung von S2. + 40 2.3. SIMPLIZIALKOMPLEX (a) Die beiden markierten Dreiecke schneiden sich im @@ -1911,6 +1953,7 @@ Abbildung 2.11: Fehlerhafte Triangulierungen (a) Einfache Triangulierung (b) Minimale Triangulierung Abbildung 2.12: Triangulierungen des Torus + 41 2.3. SIMPLIZIALKOMPLEX 3) Sind P1 und P2 konvexe Polygone und T1, T2 die zugehörigen Triangulierungen von @@ -1958,6 +2001,7 @@ Beispiel 29 Sei a < b < c. Dann gilt: d2σ = e1 − e2 + e3 d1(e1 − e2 + e3) = (c − b) − (c − a) + (b − a) + 42 2.3. SIMPLIZIALKOMPLEX = 0 @@ -2019,6 +2063,7 @@ k=0 (−1)kak(K) = χ(K) Bemerkung 39 Es gilt nicht ak = bk ∀k ∈ N0. + 43 2.3. SIMPLIZIALKOMPLEX Beweis: @@ -2071,6 +2116,7 @@ d � k=0 (−1)kbk + 44 2.3. SIMPLIZIALKOMPLEX Übungsaufgaben @@ -2079,6 +2125,7 @@ Aufgabe 7 (Zusammenhang) gend ist, wenn sie zusammenhängend ist (b) Betrachten Sie nun wie in Beispiel 20.8 den Raum X := (R\{ 0 })∪{ 01, 02 } versehen mit der dort definierten Topologie. Ist X wegzusammenhängend? 
+ 3 Fundamentalgruppe und Überlagerungen 3.1 Homotopie von Wegen a @@ -2118,6 +2165,7 @@ Bemerkung 41 Durch Homotopie wird eine Äquivalenzrelation auf der Menge aller Wege in X von a nach b definiert. Beweis: + 46 3.1. HOMOTOPIE VON WEGEN • reflexiv: H(t, s) = γ(t) für alle (t, s) ∈ I × I @@ -2151,6 +2199,7 @@ H(0, s) = γ(0) und H(1, s) = γ(1 − s + s) = γ(1) ⇒ H ist Homotopie. ■ + 47 3.1. HOMOTOPIE VON WEGEN a @@ -2225,6 +2274,7 @@ Sind γ1 ∼ γ′ 2, so ist γ1 ∗ γ2 ∼ γ′ 1 ∗ γ′ 2. + 48 3.2. FUNDAMENTALGRUPPE γ1 @@ -2274,6 +2324,7 @@ Für einen Weg γ sei [γ] seine Homotopieklasse. Definition 45 Sei X ein topologischer Raum und x ∈ X. Sei außerdem π1(X, x) := { [γ] | γ ist Weg in X mit γ(0) = γ(1) = x } + 49 3.2. FUNDAMENTALGRUPPE Durch [γ1] ∗G [γ2] := [γ1 ∗ γ2] wird π1(X, x) zu einer Gruppe. Diese Gruppe heißt Funda- @@ -2321,6 +2372,7 @@ Dann ist die Abbildung α : π1(X, a) → π1(X, b) [γ] �→ [δ ∗ γ ∗ δ] ein Gruppenisomorphismus. + 50 3.2. FUNDAMENTALGRUPPE a @@ -2358,6 +2410,7 @@ Beispiel 33 1) f : S1 �→ R2 ist injektiv, aber f∗ : π1(S1, 1) ∼= Z → π1(R2, 1) = { e } ist nicht injektiv. 2) f : R → S1, t �→ (cos 2πt, sin 2πt) ist surjektiv, aber f∗ : π1(R, 0) = { e } → π1(S1, 1) ∼= Z ist nicht surjektiv. + 51 3.2. FUNDAMENTALGRUPPE Bemerkung 48 @@ -2392,6 +2445,7 @@ Sei X ein topologischer Raum, U, V ⊆ X offen mit U ∪ V = X und U ∩ V wegz menhängend. Dann wird π1(X, x) für x ∈ U ∩ V erzeugt von geschlossenen Wegen um x, die ganz in U oder ganz in V verlaufen. + 52 3.3. ÜBERLAGERUNGEN Beweis: Sei γ : I → X ein geschlossener Weg um x. Überdecke I mit endlich vielen offenen @@ -2433,6 +2487,7 @@ sodass p−1(U) disjunkte Vereinigung von offenen Teilmengen Vj ⊆ Y ist (j p|Vj : Vj → U ein Homöomorphismus ist. |I| heißt Grad der Überlagerung p und man schreibt: deg p := |I| + 53 3.3. ÜBERLAGERUNGEN Abbildung 3.10: R → S1, @@ -2504,6 +2559,7 @@ ist Homöomorphismus. D. h. es existiert ein y ∈ Vj, so dass p|Vj(y) = x. 
Da x ∈ X beliebig war und ein y ∈ Y existiert, mit p(y) = x, ist p surjektiv. ■ + 54 3.3. ÜBERLAGERUNGEN 1 @@ -2547,6 +2603,7 @@ b) p−1(x) ist diskret in Y für jedes x ∈ X. Beweis: a) Seien y1, y2 ∈ Y . 1. Fall: p(y1) = p(y2) = x. + 55 3.3. ÜBERLAGERUNGEN Sei U Umgebung von x wie in Definition 48, Vj1 bzw. Vj2 die Komponente von p−1(U), @@ -2588,6 +2645,7 @@ Sei Z zusammenhängend und f0, f1 : Z → Y Liftungen von f. ∃z0 ∈ Z : f0(z0) = f1(z0) ⇒ f0 = f1 Beweis: Sei T = { z ∈ Z | f0(z) = f1(z) }. Z. z.: T ist offen und Z \ T ist auch offen. + 56 3.3. ÜBERLAGERUNGEN 0 @@ -2628,6 +2686,7 @@ p|Vj : Vj → U Homöomorphismus. Bemerkung 55 Wege in X lassen sich zu Wegen in Y liften. Zu jedem y ∈ p−1(γ(0)) gibt es genau einen Lift von γ. + 57 3.3. ÜBERLAGERUNGEN Proposition 3.3 @@ -2666,6 +2725,7 @@ Für geschlossene Wege γ0, γ1 um x gilt: ⇔[γ0 ∗ γ−1 1 ] ∈ p∗(π1(Y, y0)) ⇔[γ0] und [γ1]liegen in der selben Nebenklasse bzgl. p∗(π1(Y, y0)) + 58 3.3. ÜBERLAGERUNGEN Zu i ∈ { 0, . . . , d − 1 } gibt es Weg δi in Y mit δi(0) = y0 und δi(1) = yi @@ -2706,6 +2766,7 @@ Sei Z := p−1(U). Für u ∈ Z sei δ ein Weg in Z von z nach u. ⇒ Z ⊆ ˜ p−1(W) ⇒ ˜p ist stetig + 59 3.3. ÜBERLAGERUNGEN Folgerung 3.6 @@ -2745,6 +2806,7 @@ Annahme: [˜γ] ̸= e Mit Bemerkung 55.a folgt dann: [γ] ̸= e. Dann ist der Lift von γ nach ˜x mit Anfangspunkt ˜x0 ein Weg von ˜x0 nach (x0, [γ]). Wider- spruch. + 60 3.3. ÜBERLAGERUNGEN Definition 54 @@ -2785,6 +2847,7 @@ Beispiel 39 (Decktransformationen) 1) p : R → S1 : Deck(R/S1) = { t �→ t + n | n ∈ Z } ∼= Z 2) p : R2 → T 2 : Deck(R2/T 2) ∼= Z × Z = Z2 3) p : Sn → Pn(R) : Deck(Sn/Pn(R)) = { x �→ ±x } ∼= Z/2Z + 61 3.3. ÜBERLAGERUNGEN Nun werden wir eine Verbindung zwischen der Decktransformationsgruppe und der Fundamen- @@ -2827,6 +2890,7 @@ Es existiert n ∈ Z mit g(0) = n. Da auch fn(0) = 0 + n = n gilt, folgt mit Bem g = fn. Damit folgt: Deck(R/S1) = { fn | n ∈ Z } ∼= Z Nach Satz 3.8 also π1(S1) ∼= Deck(R/S1) ∼= Z + 62 3.4. 
GRUPPENOPERATIONEN 3.4 Gruppenoperationen @@ -2870,6 +2934,7 @@ Def. 55.a = x Beispiel 42 In Beispiel 41.1 operiert Z durch Homöomorphismen. + 63 3.4. GRUPPENOPERATIONEN Bemerkung 59 @@ -2927,12 +2992,14 @@ aus Beispiel 43 einen Gruppenhomomorphismus ϱ : π1(X, x0) → Homöo(X). Nach f : ˜X → ˜X Homöomorphismus ��� p ◦ f = p � + 64 3.4. GRUPPENOPERATIONEN Beispiel 44 Sei X := S2 ⊆ R3 und τ die Drehung um die z-Achse um 180◦. g = ⟨τ⟩ = { id, τ } operiert auf S2 durch Homöomorphismen. Frage: Was ist S2/G? Ist S2/G eine Mannigfaltigkeit? + 4 Euklidische und nichteuklidische Geometrie Definition 57 @@ -2966,6 +3033,7 @@ allel sind und senkrecht auf die erste stehen. Definition 58 Eine euklidische Ebene ist eine Geometrie (X, d, G), die Axiome §1 - §5 erfüllt: §1) Inzidenzaxiome: + 66 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE (i) Zu P ̸= Q ∈ X gibt es genau ein g ∈ G mit { P, Q } ⊆ g. @@ -3012,6 +3080,7 @@ b) „⊇“ ist offensichtlich � d(P, R) = d(P, Q) + d(Q, R) oder d(P, Q) = d(P, R) + d(R, Q) � + 67 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE ⇒ d(Q, R) = 2d(P, Q) + d(Q, R) @@ -3052,6 +3121,7 @@ Satz 4.1 ====⇒ PB schneidet AP ′ ∪ AQ Sei C der Schnittpunkt. Dann gilt: 1Die „Verschiebung“ von P ′Q′ nach PQ und die Isometrie, die zusätzlich an der Gerade durch P und Q spiegelt. + 68 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE P @@ -3087,6 +3157,7 @@ Sei C der Schnittpunkt vom PB und AQ. Dann gilt: (i) d(A, C) + d(C, Q) = d(A, Q) Vor. = d(B, Q) < d(B, C) + d(C, Q) ⇒ d(A, C) < d(B, C) + 69 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE P @@ -3134,6 +3205,7 @@ d(P, Q) = d(P, ϕ(S)) + d(ϕ(S), Q) Proposition 4.2 In einer Geometrie, die §1 - §3 erfüllt, gibt es zu P, P ′, Q, Q′ mit d(P, Q) = d(P ′, Q′) höchstens zwei Isometrien mit ϕ(P) = P ′ und ϕ(Q) = Q′ + 70 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE Aus den Axiomen folgt, dass es in der Situation von §4 höchstens zwei Isometrien mit @@ -3173,6 +3245,7 @@ Sei (X, d, G) eine Geometrie, die §1 - §4 erfüllt. 
Seien außerdem △ABC und Dreiecke, für die gilt: (i) d(A, B) = d(A′, B′) (ii) ∠CAB ∼= ∠C′A′B′ + 71 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE (iii) d(A, C) = d(A′, C′) @@ -3223,6 +3296,7 @@ Beweis: Zeige ∠PRQ < ∠RQP ′. Sei M der Mittelpunkt der Strecke QR und P ′ ∈ PQ+ \ PQ. Sei A ∈ MP − mit d(P, M) = d(M, A). 2Für dieses Skript gilt: ∠R1PR2 = ∠R2PR1. Also sind insbesondere alle Winkel ≤ 180◦. + 72 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE P @@ -3268,6 +3342,7 @@ Dann gibt es zu jeder Geraden g ∈ G und jedem Punkt P ∈ X \ g mindestens ein Parallele h ∈ G mit P ∈ h und g ∩ h = ∅. Beweis: Seien P, Q ∈ f ∈ G und ϕ die Isometrie, die Q auf P und P auf P ′ ∈ f mit d(P, P ′) = d(P, Q) abbildet und die Halbebenen bzgl. f erhält. + 73 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE Q @@ -3294,6 +3369,7 @@ Dreiecke mit drei 90◦-Winkeln. Proposition 4.5 In einer Geometrie mit den Axiomen §1 - §4 ist in jedem Dreieck die Summe der Innenwinkel ≤ π. + 74 4.1. AXIOME FÜR DIE EUKLIDISCHE EBENE Sei im Folgenden „IWS“ die „Innenwinkelsumme“. @@ -3348,6 +3424,7 @@ Beweis: Sei g eine Parallele von AB durch C. • Es gilt α′ = α wegen Proposition 4.3. • Es gilt β′ = β wegen Proposition 4.3. • Es gilt α′′ = α′ wegen Aufgabe 8. + 75 4.2. WEITERE EIGENSCHAFTEN EINER EUKLIDISCHEN EBENE ⇒ IWS(△ABC) = γ + α′′ + β′ = π @@ -3390,6 +3467,7 @@ Abbildung 4.13: Die Dreiecke △ABC und △AB′C′ sind ähnlich. Definition 62 „Simplizialkomplexe“ in euklidischer Ebene (X, d) heißen flächengleich, wenn sie sich in kongruente Dreiecke zerlegen lassen. + 76 4.2. WEITERE EIGENSCHAFTEN EINER EUKLIDISCHEN EBENE (a) Zwei kongruente Dreiecke (b) Zwei weitere kongruente Drei- @@ -3433,6 +3511,7 @@ Im rechtwinkligen Dreieck gilt a2 + b2 = c2, wobei c die Hypotenuse und a, b die Katheten sind. Beweis: (a + b) · (a + b) = a2 + 2ab + b2 = c2 + 4 · ( 1 2 · a · b) + 77 4.2. 
WEITERE EIGENSCHAFTEN EINER EUKLIDISCHEN EBENE c @@ -3479,6 +3558,7 @@ Im Folgenden werden zwei Aussagen gezeigt: (ii) h ist eine Isometrie Da jede Isometrie injektiv ist, folgt aus (i) und (ii), dass h bijektiv ist. Nun zu den Beweisen der Teilaussagen: + 78 4.3. HYPERBOLISCHE GEOMETRIE · @@ -3523,6 +3603,7 @@ H := { z ∈ C | ℑ(z) > 0 } = � (x, y) ∈ R2 �� y > 0 � + 79 4.3. HYPERBOLISCHE GEOMETRIE die obere Halbebene bzw. Poincaré-Halbebene und G = G1 ∪ G2 mit @@ -3585,6 +3666,7 @@ Z2 Abbildung 4.20: Zwei Punkte liegen in der hyperbolischen Geometrie immer auf genau einer Geraden b) Sei g ∈ G1 ˙∪ G2 eine hyperbolische Gerade. + 80 4.3. HYPERBOLISCHE GEOMETRIE Es existieren disjunkte Zerlegungen von H \ g: @@ -3648,6 +3730,7 @@ y 4 5 Abbildung 4.21: Hyperbolische Geraden erfüllen §5 nicht. + 81 4.3. HYPERBOLISCHE GEOMETRIE Definition 64 @@ -3741,6 +3824,7 @@ Die Abbildung bildet also nach H ab. Außerdem gilt: ◦ z = x + iy 1 = x + iy = z + 82 4.3. HYPERBOLISCHE GEOMETRIE und @@ -3861,6 +3945,7 @@ d −a −b � + 83 4.3. HYPERBOLISCHE GEOMETRIE Gehe zu Fall 2. @@ -3986,6 +4071,7 @@ a z Bemerkung 69 Zu hyperbolischen Geraden g1, g2 gibt es σ ∈ PSL2(R) mit σ(g1) = g2. + 84 4.3. HYPERBOLISCHE GEOMETRIE · @@ -4035,6 +4121,7 @@ a) DV(z1, . . . , z4) ̸= 0, da zi paarweise verschieden DV(z1, . . . , z4) ̸= 1, da: Annahme: DV(z1, . . . , z4) = 1 ⇔ (z1 − z2)(z3 − z4) = (z1 − z4)(z3 − z2) + 85 4.3. HYPERBOLISCHE GEOMETRIE ⇔ z1z3 − z2z3 − z1z4 + z2z4 = z1z3 − z3z4 − z1z2 + z2z4 @@ -4092,6 +4179,7 @@ DV(a2, z1, a1, z2) Außerdem gilt: ln 1 x = ln x−1 = (−1) · ln x = − ln x + 86 4.3. HYPERBOLISCHE GEOMETRIE Da der ln im Betrag steht, folgt direkt: @@ -4127,6 +4215,7 @@ Satz 4.10 Die hyperbolische Ebene H mit der hyperbolischen Metrik d und den hyperbolischen Geraden bildet eine „nichteuklidische Geometrie“, d. h. die Axiome §1 - §4 sind erfüllt, aber Axiom §5 ist verletzt. + 87 4.3. HYPERBOLISCHE GEOMETRIE Übungsaufgaben @@ -4154,6 +4243,7 @@ Seien f, g, h ∈ G und paarweise verschieden. 
Zeigen Sie: f ∥ g ∧ g ∥ h ⇒ f ∥ h Aufgabe 11 Beweise den Kongruenzsatz SSS. + 5 Krümmung Definition 67 Sei f : [a, b] → Rn eine eine Funktion aus C∞. Dann heißt f Kurve. @@ -4201,6 +4291,7 @@ Definition 69 Sei γ : I → R2 eine durch Bogenlänge parametrisierte Kurve. a) Für t ∈ I sei n(t) Normalenvektor an γ in t wenn gilt: ⟨n(t), γ′(t)⟩ = 0, ∥n(t)∥ = 1 und det((γ′(t), n(t))) = +1 + 89 5.1. KRÜMMUNG VON KURVEN b) Seit κ : I → R so, dass gilt: @@ -4302,6 +4393,7 @@ r � ⇒ κ(t) = 1 r + 90 5.2. TANGENTIALEBENE Definition 70 @@ -4358,6 +4450,7 @@ Bemerkung 73 (Eigenschaften der Tangentialebene) a) TsS ist 2-dimensionaler Untervektorraum von R3. b) TsS = ⟨˜u, ˜v⟩, wobei ˜u, ˜v die Spaltenvektoren der Jacobi-Matrix JF (p) sind. c) TsS hängt nicht von der gewählten Parametrisierung ab. + 91 5.2. TANGENTIALEBENE d) Sei S = V (f) eine reguläre Fläche in R3, also f : V → R eine C∞-Funktion, V ⊆ R3 @@ -4408,6 +4501,7 @@ Beispiel 46 (Normalenfelder) Auch n2 = −idS2 ist ein stetiges Normalenfeld. 2) S = Möbiusband (vgl. Abbildung 5.1) ist nicht orientierbar. Es existiert ein Norma- lenfeld, aber kein stetiges Normalenfeld. + 92 5.3. GAUSS-KRÜMMUNG Abbildung 5.1: Möbiusband @@ -4438,6 +4532,7 @@ S ∩ E1 = V (X2 + Y 2 − 1) ∩ E, Kreislinie in E ⇒ κNor(s, x1) = ±1 x2 = (0, 0, 1), E2 = R · e1 + R · e3 (x, z-Ebene) 1Siehe z. B. https://github.com/MartinThoma/LaTeX-examples/tree/master/documents/Analysis%20II + 93 5.3. GAUSS-KRÜMMUNG V ∩ E2 ∩ S = @@ -4526,6 +4621,7 @@ eine glatte Funktion und Bild κn Nor(s) ist ein abgeschlossenes Intervall. Definition 75 Sei S eine reguläre Fläche und n = n(s) ein Normalenvektor an S in s. + 94 5.3. GAUSS-KRÜMMUNG a) κn @@ -4576,6 +4672,7 @@ s3 Abbildung 5.3: K(s1) > 0, K(s2) = 0, K(s3) < 0 Bemerkung 79 Sei S eine reguläre Fläche, s ∈ S ein Punkt. + 95 5.4. ERSTE UND ZWEITE FUNDAMENTALFORM a) Ist K(s) > 0, so liegt S in einer Umgebung von s ganz auf einer Seite von TsS + s. @@ -4661,6 +4758,7 @@ z3 = x1y2 − x2y1 1 + z2 2 + z2 3 + 96 5.4. 
ERSTE UND ZWEITE FUNDAMENTALFORM det(IS) = g1,1g2,2 − g2 @@ -4784,12 +4882,14 @@ Soll auf Fläche S bleiben ��� t=0 Die Abbildung dsn heißt Weingarten-Abbildung + 97 5.4. ERSTE UND ZWEITE FUNDAMENTALFORM b) Tn(s)S2 = TsS. c) dsn ist ein Endomorphismus von TsS. d) dsn ist selbstadjungiert bzgl. des Skalarproduktes IS. Hinweis: Die Weingarten-Abbildung wird auch Formoperator genannt. + 98 5.4. ERSTE UND ZWEITE FUNDAMENTALFORM Beweis: @@ -4861,6 +4961,7 @@ dtn(γ(t)) t=0, γ′(0) � + ⟨n(s), γ′′(0)⟩ + 99 5.4. ERSTE UND ZWEITE FUNDAMENTALFORM = ⟨dsn(γ′(0)), γ′(0)⟩ + κNor(s, γ) @@ -4902,6 +5003,7 @@ K(s)dA = 2πχ(S) Dabei ist χ(S) die Euler-Charakteristik von S. Beweis: Der Beweis wird hier nicht geführt. Er kann in „Elementare Differentialgeometrie“ von Christian Bär (2. Auflage), ISBN 978-3-11-022458-0, ab Seite 281 nachgelesen werden. + Lösungen der Übungsaufgaben Lösung zu Aufgabe 1 Teilaufgabe a) Es gilt: @@ -4936,6 +5038,7 @@ nicht offen. ■ Teilaufgabe c) Beh.: Es gibt unendlich viele Primzahlen. + 101 Lösungen der Übungsaufgaben Bew.: durch Widerspruch @@ -5001,6 +5104,7 @@ A ∈ R1×1 �� det A = 1 � ∼= { 1 }. 22 ⇒ SL1(R) ist kompakt. + 102 Lösungen der Übungsaufgaben SLn(R) ⊆ GLn(R) lässt sich mit einer Teilmenge des Rn2 identifizieren. Nach Satz 1.1 @@ -5044,6 +5148,7 @@ Seien (G, ∗) und (H, ◦) Gruppen und ϕ : G → H eine Abbildung. ϕ heißt Isomorphismus, wenn ϕ ein bijektiver Homomorphismus ist. Eine Isotopie ist also für Knoten definiert, Isometrien machen nur in metrischen Räumen Sinn und ein Isomorphismus benötigt eine Gruppenstruktur. + 103 Lösungen der Übungsaufgaben Lösung zu Aufgabe 7 @@ -5094,6 +5199,7 @@ Weg γ2 von a nach 02. Damit existiert ein (nicht einfacher) Weg γ von 01 nach ■ Lösung zu Aufgabe 9 Vor.: Sei (X, d) eine absolute Ebene, A, B, C ∈ X und △ABC ein Dreieck. + 104 Lösungen der Übungsaufgaben (a) Beh.: AB ∼= AC ⇒ ∠ABC ∼= ∠ACB @@ -5139,6 +5245,7 @@ Lösung zu Aufgabe 10 Sei f ∥ h und o. B. d. A. f ∥ g. f ∦ h ⇒ f ∩ h ̸= ∅, sei also x ∈ f ∩ h. 
Mit Axiom §5 folgt: Es gibt höchstens eine Parallele zu g durch x, da x /∈ g. Diese ist f, da x ∈ f und f ∥ g. Da aber x ∈ h, kann h nicht + 105 Lösungen der Übungsaufgaben parallel zu g sein, denn ansonsten gäbe es zwei Parallelen zu g durch x (f ̸= h). ⇒ g ∦ h ■ @@ -5156,6 +5263,7 @@ Bem. 62 =====⇒ C = ϕ(C). Es gilt also ϕ(△A′B′C′) = △ABC. ■ + Bildquellen Alle Bilder, die hier nicht aufgeführt sind, wurden von Martin Thoma erstellt. Teilweise wurden die im folgenden aufgelisteten Bilder noch leicht modifiziert. @@ -5180,6 +5288,7 @@ Abb. 4.7a Sphärisches Dreieck: Dominique Toussaint, commons.wikimedia.org/wiki/File:Spherical_triangle_3d_opti.png Abb. 5.1 Möbiusband: Jake, tex.stackexchange.com/a/118573/5645 Abb. 5.3 Krümmung des Torus: Charles Staats, tex.stackexchange.com/a/149991/5645 + Abkürzungsverzeichnis Beh. Behauptung Bew. Beweis @@ -5199,6 +5308,7 @@ vgl. vergleiche z. B. zum Beispiel zhgd. zusammenhängend z. z. zu zeigen + Ergänzende Definitionen und Sätze Da dieses Skript in die Geometrie und Topologie einführen soll, sollten soweit wie möglich alle benötigten Begriffe definiert und erklärt werden. Die folgenden Begriffe wurden zwar verwendet, @@ -5263,6 +5373,7 @@ a3b1 − a1b3 a1b2 − a2b1 � � + Symbolverzeichnis Mengenoperationen Seien A, B und M Mengen. @@ -5364,6 +5475,7 @@ Rg(M) Rang von M χ(K) Euler-Charakteristik von K + 110 Symbolverzeichnis ∆k @@ -5378,6 +5490,7 @@ A ist isometrisch zu B f∗ Abbildung zwischen Fundamental- gruppen (vgl. 
Seite 49) + 111 Symbolverzeichnis Zahlenmengen @@ -5436,6 +5549,7 @@ Tangentialebene an S ⊆ R3 durch s ∈ S dsn(x) Weingarten-Abbildung 2von Vanishing Set + Stichwortverzeichnis Abbildung affine, 107 @@ -5514,6 +5628,7 @@ Halbgerade, 65 Halbraum, 28 Hauptkrümmung, 92 Hilbert-Kurve, 19, 19 + 113 Stichwortverzeichnis Homöomorphismengruppe, 10 @@ -5608,6 +5723,7 @@ hausdorffscher, 8 kompakter, 14 metrischer, 6 projektiver, 5, 22, 25, 52 + 114 Stichwortverzeichnis topologischer, 2 @@ -5671,3 +5787,4 @@ Winkel, 70 Zusammenhang, 11–14 Zusammenhangskomponente, 13 Zwischenwertsatz, 107 + diff --git a/read/results/pypdf2/1601.03642.txt b/read/results/pypdf2/1601.03642.txt index a6b1d991f20572032bc3aefa38a20a27ae7cb494..f2608ce76e6d10d705b96f062331ce815da20a5d 100644 GIT binary patch delta 60 zcmaF-nDO;v#tjEV7`Zkd5V_9>q~