Independent Review Report, Reviewer 3
-I was happy with the presentation of the framework in the context of other
-projects, but I miss a comparison with other approaches like SPM or pyNN.
SPM and PyNN, both projects differing in significant ways from TVB, have been
added to the discussion. PyNN is mainly an interface to be used with different
neural network simulators, diverging from the computational model at the heart
of TVB. Dynamic Causal Modelling (DCM) in SPM does fit models to data at the
level of a small set of nodes; however, SPM remains primarily a tool for
statistical analysis of data. There is not a strong relationship to TVB, since
our mathematical approach is model-based and theirs is data-driven.
-It is mentioned that V1 can be activated... do you plan to include some
-topography in these maps, and then to support for input stimuli? Similarly,
-can the output be read-out for potential motor actions?
This is certainly one of many interesting avenues for the future. It is
currently possible to stimulate and read out the activity from any area. That
said, such work will want to take advantage of the cortical surface
simulations, as the resolution of the representation of the cortex is much
higher there. A small note confirming this possibility has been added next to
the mention of V1 stimulation.
-The package seems to come "all batteries included". However, could this
-choice still allow the inclusion of other tools. For instance it is very
-helpful to have a tool for managing and tracking projects based on
-numerical simulation or analysis, such as sumatra
-https://neuralensemble.org/sumatra/ .
The batteries-included approach is intended to ease adoption for those users
who do not already have preferred packages and tools, and to provide a set of
components known to work well together. That said, the entire package has been
designed with extensibility in mind, and just about any piece could be swapped
out, from analysis to data formats.
While the authors do not personally use Sumatra day-to-day, such a project
management package (according to its documentation) is perfectly compatible
with the console-based scenario we discuss for using TVB.
-Integrators: allows for a cross-validation of models
This is an ongoing topic of research, precisely the one for which this platform
has been built: validation of the simulation models began in the historical
papers we cite, where links are made between empirical and simulated resting
state data.
Several papers based on TVB are currently under review, and some have been
favorably reviewed. See for instance:
Petra Ritter, Michael Schirner, Anthony Randal McIntosh and Viktor Jirsa. "The
Virtual Brain Integrates Computational Modelling and Multimodal Neuroimaging"
Brain Connectivity. ahead of print. doi:10.1089/brain.2012.0120.
Hence we hope it can be understood why these models are not yet extensively
cross validated. There is no doubt that model validation is a crucial subject
for future development in TVB, but we feel it is fair to say that much original
research remains to be done before validation tools become accepted standards.
In the resting state literature, the covariance matrix of BOLD signals is often
compared to the one generated by the model. But there is evidence that the
covariance matrix changes on shorter time scales. Hence this approach has to be
adapted carefully.
- The results presented in the last figures (11 & 12) are too briefly
- explained in the text. They are certainly of relevance for the rest to
- demonstrate the efficiency of the presented method.
We have expanded the captions and the main text to more clearly explain those
figures.
Minor comments:
* p.1 (abstract) "web based" > "web-based"
It has been fixed
* p.2 " for implementing modeling tools. (Spacek et al., 2008)" > " for
* implementing modeling tools (Spacek et al., 2008)."
It has been fixed
* p.4 in Code 1, you define a speed, but do not give an unit.
The unit is millimeters per millisecond (mm/ms). We have fixed this.
* p.6 "on CPUs Intel(R) Xeon(R) X5672 @ 3.20GHz (Linux kernel
* 3.1.0-1-amd64)." : on a single-core machine?
No, this is a multicore processor. Simulations were run on a cluster whose nodes
are Intel Xeon X5672 (quad-core) processors.
* p.7 I did not see "(deterministic 11A and stochastic 11B integration
* schemes)" explained elsewhere in the text
On p.5, left column, in the paragraph "Integrators", it is explained that
integration methods are provided to solve deterministic and stochastic
differential equations. Two main types of integration schemes are then defined:
deterministic and stochastic. Also, the caption of figure 11 has been extended
to clarify that in 11A a deterministic integration scheme was used, while in
11B a stochastic scheme was chosen since noise was added to the model. This
figure reference has been fixed. Corresponding additions to the text have been
incorporated.
* p.7 it is rather unfair to state that "they, most importantly, do not
* consider the space-time structure of brain connectivity (connection
* weights and transmission associated time delays) constraining the full
* brain network dynamics": most models do, but these models can implement
* such delays (see for instance in the high level API of PyNN).
There is a bit of a theoretical gap between the delays typically considered in
PyNN-style simulations and those implemented in TVB. Simulators like Brian or
NEST can indeed implement delays, but there they are usually treated as just
another way of adding complexity, and they do not correspond to the large scale
of the whole brain. In TVB, however, delays are a crucial component in the
generation of the space-time structure. Therefore, TVB is the only tool
designed to incorporate this realistic information as the base component of the
brain network model dynamics.
* p.7 fix grammar of "Focusing on the brain’s large-scale architecture, in
* addition to the dimension reduction accomplished through the mean field
* methods applied on the mesoscopic scale, allows TVB to make computer
* simulations on the full brain scale on workstations and network
* workstation parallel clusters, with no need to use supercomputing
* resources."
Correct, the sentence had no (grammatical) subject. It has been fixed.
* p.7 no need to cite again some simulators in "brain models at the level
* of neurons (Goodman and Brette, 2008, 2009; Cornelis et al., 2012), "
This has been fixed.
* p.9 check initials: "Goodman, D. and Brette, R. (2008). Brian: a
simulator for spiking neural networks in python. Front Neuroinform, 2:5.
Goodman, D. F. M. and Brette, R. (2009). The Brian simulator. Front Neurosci,
3(2):192–197."
This and the other references have been checked for correct author names.
* p.13 the reference to "TVB svn revision 4355." should be removed xor
* explained.
This reference to the version control revision has been removed as it is
irrelevant to the content of the article.
* p.13 re-order " integration scheme 4th Runge-Kutta"
This has been fixed to read "4th order Runge-Kutta integration scheme"
* p.13 "at 40 Hz." > "at approximately 40 Hz."
It has been fixed
* p.13 rephrase "Variance of the Variance of nodes." ("Mean variance of the
* activity of each node."?)
The corresponding modifications have been introduced in the text, along with
the definitions of these two metrics (the variance of the node variances and
the global variance).
* Fig12 put the legend somewhere else than the plot
This has been fixed.
Reviewer 2
TVB represents, if anything, the integration of diverse, existing,
peer-reviewed work that traditionally requires the expertise of several groups
into one package usable by people from the many backgrounds in neuroscience.
* The current status of the use of software developer practices in TVB:
simulation engine:
- we use standard algorithms from the literature (e.g. Euler-Maruyama
integration) tested in standard ways (testing convergence)
- the simulator is based on several iterative improvements of previously
peer-reviewed simulation paradigms
- continuous integration
- integration testing of the simulator by means of simple programs (demonstration scripts)
- there are qualitative links between simulation dynamics and empirical data.
Models have been taken from peer-reviewed articles.
- comparison with other similar methods is not straightforward because TVB unifies previous methods
- we follow code quality guidelines and have periodic code reviews
- several rounds of acceptance testing (we verify that the software meets the design use cases)
framework:
- as we use Agile techniques for managing the project, each task is considered
done only after an entire validation procedure (Definition of Done): an
automated unit test is added, the person implementing the task marks it as
finished, and the person responsible for the module marks it as closed, which
implies a second test.
- before each release we have a period of manual testing, mostly done by
test groups from the scientific world, checking that no
major issues get passed into the release.
- We also have automated integration tests (running with Selenium and
JMeter on top of a browser engine) testing UI navigation and major TVB workflows.
Ongoing and future work
- Further develop unit & regression testing of the simulator
- Undertake clinical validation in our own group, and coordinate and validate
models developed in TVB by wider scientific groups
Concerning performance, there is no information about error, or a discussion
of why and when the Heun method---which appears to be the preferred numerical
integration method---is applicable and what numerical errors are to be
expected. The timing experiments presented in Fig 7 provide very little
information: Isn't it trivial that simulating 2s takes twice as long as
simulating 1s? This should especially hold for a fixed time-stepping scheme
as used here. Deviations from this would either point to strong fluctuations
in dynamics and thus variations in load (in which case the benchmark case is
ill-defined), or a design problem (or bug) in the software. From Fig. 7A it
also seems that halving the time step doubles the duration of the simulation
time---again a trivial property of fixed time stepping schemes. Finally,
there is no discussion of the trade-off between step size and accuracy---is
there maybe a "sweet spot" providing maximal accuracy at minimal cost?
You argue that you cannot compare TVB runtimes to any other simulator, as
there is no comparable simulator available. In this case it would be
interesting to discuss theoretically expected runtimes (from estimates of
number of operations required), to get a rough idea of how performant your
code is.
We have modified our discussion of runtimes in the following ways:
- Simulations similar to the neural dynamics simulation core have been
constructed in Brian to test relative performance, and we have noted the results
in the manuscript, with the caveat that the architectures are quite different.
- We have developed supplementary analytic estimates of run times, noting that
as TVB simulations can be memory bound, the hardware's memory bandwidth can be
*the* determining factor (a reason why we had not previously developed such an
analysis; we will keep working on this topic). A rough sketch of such an
estimate is given below.
- Adaptive schemes are possible even in the presence of noise and delays;
however, linear interpolation on an already memory-bound algorithm is unlikely
to relieve the speed/accuracy trade-off.
We have revised Figure 7 to include more informative benchmarks, and we have
updated the caption accordingly.
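To illustrate the kind of analytic estimate referred to above, the sketch below
computes a back-of-envelope lower bound on wall time from memory traffic alone.
Every number in it is an illustrative assumption, not a measurement or a value
taken from TVB.

# Back-of-envelope lower bound for a memory-bound simulation.
# All values are illustrative assumptions, not TVB measurements.
n_nodes = 16384          # cortical surface vertices (assumed)
n_svar = 2               # state variables per node (assumed)
n_conn = 6_800_000       # nonzero local connections (assumed, ~2.5% density)
dt_ms = 0.0625           # integration time step in ms (assumed)
bandwidth = 20e9         # sustained memory bandwidth in bytes/s (assumed)
B = 8                    # bytes per double-precision float

# Per step: read and write the state arrays, plus one delayed-state read per connection.
bytes_per_step = n_nodes * n_svar * B * 3 + n_conn * B

steps_per_simulated_second = 1000.0 / dt_ms
wall_time_lower_bound = bytes_per_step * steps_per_simulated_second / bandwidth
print(f"memory-traffic lower bound: {wall_time_lower_bound:.1f} s per simulated second")

Any measured run time sits above such a bound; the gap reflects arithmetic,
cache behaviour and bookkeeping rather than raw memory traffic.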
Given that you point out the parallel capabilities of TVB, you should
provide data on how TVB scales in parallel simulations.
TVB parallelizes different tasks, e.g. simulation and analyses, but not the
tasks themselves. This is the subject of current work (GPU) and has been
clarified in the text.
Concerning question 4, I find that there is far too little information about
the way the simulator works. The only point described in detail are the TVB
Datatypes, and even for them we mainly learn that they are NumPy arrays with
some metadata attached. There is no information about how data flows between
the different parts of the simulator, how the update cycle is organized, how
delays of different length are implemented and causality ensured, and how
parallelism is implemented and integrated in the simulator.
We have outlined both the systemic use of datatypes and the structure of the
main simulation algorithm.
1. Author names in citations should not be in parentheses when used as names
in the text. This needs to be fixed throughout.
This is fixed.
2. While the English is very good overall, there seem to be a few glitches
here and there, so I'd suggest careful proofreading by a "stickler".
It has been corrected.
3. P. 2, left column, about middle: "underwrites the need" should be
"underscore ..."
This is fixed.
4. In the second paragraph of 1.2, you suddenly mention patient data. This
comes abrupt and you should give a better overview over potential data
sources.
We have adjusted the text accordingly.
5. Why is section 2 typeset in a smaller font?
The font size and style are given by the Frontiers LaTeX template.
6. P. 3, left column: I find that "When using TVB web application" and
similar phrases sound incorrect and that "When using the TVB web
application" reads much better---even though it expands to "...the the
virtual brain web application".
The authors acknowledge this issue; however, it is not a matter of grammar. We
have kept the previous notation.
7. Table 1 provides a level of detail unsuitable for a scientific paper: All
packages are standard and easily available and there is no indication in the
manuscript that the (un-)availability of particular packages had design
consequences that need to be balanced against important features of TVB.
Therefore, you should drop the table---it belongs with the installation
guidelines on your website. The same holds for details of Matlab path
settings (p 3, left, towards bottom).
This has been removed.
8. Why does a basic installation of TVB require 50GB disk space?
Installing TVB does not require more than 400 MB. The rest of the space that
TVB will use largely depends on the length and number of simulations users
perform. Additionally, 50 GB is indeed a mistake; it should be 5 GB. This is
just a quota: in the case of multiple users accessing one instance of TVB, an
administrator can specify the maximum hard disk space per user, which defaults
to 5 GB. This has been corrected and improved.
9. P. 4, left, top: Within which scope are GUIDs unique?
GUIDs are generated from the current time and the MAC address of the computer,
using the standard Python uuid module. GUIDs therefore have a genuinely global
scope as unique IDs.
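For illustration, the standard library call below produces the kind of
time-and-MAC based identifier meant here; it is a minimal sketch, not TVB's
internal code.

import uuid

# uuid1() combines the host MAC address, a timestamp and a clock sequence,
# which is what gives these identifiers their globally unique scope.
guid = uuid.uuid1()
print(guid)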
10. P. 4, left, top: ZIP and BZIP2 are general compression formats, listing
them as possible input formats seems unusual---what kind of data is
contained in these files?
We have data import routines that expect a set of ASCII text files compressed
in an archive (e.g. connectivity weights, node centers and distances)
describing a connectivity dataset.
Example: a connectivity dataset (connectome) may be uploaded as a ZIP folder
containing the following files:
- areas.txt
- average_orientations.txt
- info.txt
- positions.txt
- tract_lengths.txt
- weights.txt
We have added notes to the text to clear up this ambiguity.
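As a minimal sketch of how such an archive can be read with standard tools: the
file names follow the example above, while the function and archive name are
illustrative and not TVB's importer.

import io
import zipfile

import numpy as np

def load_connectivity(zip_path):
    # Read two of the ASCII files from the archive into NumPy arrays.
    with zipfile.ZipFile(zip_path) as archive:
        weights = np.loadtxt(io.StringIO(archive.read("weights.txt").decode()))
        tract_lengths = np.loadtxt(io.StringIO(archive.read("tract_lengths.txt").decode()))
    return weights, tract_lengths

# weights, tract_lengths = load_connectivity("connectivity.zip")  # hypothetical archive name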
11. P. 5, Sec 2.3.3: Why is RK4 "only suitable for the integration of single
... instances of the dynamic"?
This has been clarified. It is mainly related to the differing convergence
of the method for SDEs.
There is no agreed-upon stochastic version of RK4, and the differing rates of
convergence are one of the points at which the various attempts to create a
stochastic RK4 fail. It is also an issue for the way we calculate time-delayed
connections: because RK4 has intermediate steps in its algorithm, using it
correctly would require either storing the history at the mid-steps as well, or
generating those values as needed by interpolating the history, either of which
would add complexity.
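For contrast, a predictor-corrector scheme such as stochastic Heun (mentioned
by Reviewer 2 above) needs no intermediate history points: both stages evaluate
the drift at stored time points. The sketch below shows one such step with
additive noise; the names and the toy drift are illustrative, not TVB's
implementation.

import numpy as np

def heun_stochastic_step(x, t, f, dt, sigma, rng):
    # One stochastic Heun step for dx = f(x, t) dt + sigma dW (additive noise).
    # The same noise increment is used in the predictor and the corrector.
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    x_pred = x + dt * f(x, t) + noise                             # Euler-Maruyama predictor
    return x + 0.5 * dt * (f(x, t) + f(x_pred, t + dt)) + noise   # corrector

rng = np.random.default_rng(42)
drift = lambda x, t: -x                                           # toy linear drift
x = heun_stochastic_step(np.array([0.1, 0.0]), 0.0, drift, dt=0.01, sigma=0.05, rng=rng)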
12. P. 6, Code 2: The main simulation loop gives the impression that the
simulation is always only brought a small step forward and that the user
then needs to pull the current state from the model for immediate or delayed
plotting. There seems to be now "pushing" of data from the simulation
directly to visualization. Is this correct, and if yes, why this seemingly
cumbersome implementation.
Following another reviewer's request, we have detailed the main simulation
algorithm to make the dataflow model clearer.
Responding to this particular question: the simulator must internally push raw
simulation state to each monitor at every time step; however, not all monitors
have the same periods, so at each time step it is necessary to test whether a
particular monitor set up for the simulation generated data or not. In the for
loop of a scripted simulation, the user receives time series data as it becomes
available, and is then free to do with it as they like. One can imagine, for
example, a numerical bifurcation analysis routine that wishes to track a
bifurcation branch; such a routine will need the new simulation step data as
soon as it is available, and may manipulate the state or parameters before
continuing, et cetera. Additionally, this scheme permits accessing data while
the simulation is still running and gives a smaller memory footprint (as at
each step, or after a certain number of steps, data can be written to disk),
both important and useful features when dealing with larger simulations.
When combinations of monitors such as EEG & fMRI enter into play, such a
dataflow is also advantageous because it decouples the "timestamps" on the data
from the different monitors.
An additional advantage of this implementation appears in stimulation paradigms
where output signals, e.g. EEG signals, are fed back in the form of stimulation
patterns without the need to stop the simulation. A toy sketch of this
pull-based dataflow follows below.
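The generator and loop below are self-contained and mimic the pattern only,
with a fast "raw" monitor and a slower one; they are not TVB's API, and all
names are illustrative.

import numpy as np

def toy_simulator(n_steps, dt, slow_period):
    # Yield a (raw, slow) pair per step; a slot is None whenever that monitor
    # produced no sample at this step, exactly the per-step test described above.
    state = np.zeros(4)
    rng = np.random.default_rng(0)
    for step in range(n_steps):
        state = state + dt * (-state) + 0.1 * np.sqrt(dt) * rng.standard_normal(4)
        t = step * dt
        raw = (t, state.copy())                                    # every step
        slow = (t, state.mean()) if step % slow_period == 0 else None
        yield raw, slow

raw_series, slow_series = [], []
for raw, slow in toy_simulator(n_steps=1000, dt=0.1, slow_period=10):
    raw_series.append(raw)
    if slow is not None:
        # the user is free here to store, plot, or act on the data immediately
        slow_series.append(slow)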
13. P 8. "Future work": I find it curious that the authors discuss work that
apparently has been done already, is listed under "Future work". Shouldn't
you rather update the body of the paper in the appropriate places?
This has been fixed. Many of the points addressed in this section were work in
progress at the moment of writing the first version of the manuscript.
14. Fig 1: The "data types" box in the diagram appears misplaced. All other
boxes repesent entities that either do something or are something (stored
data), while datatypes are an abstract concept.
Datatypes do contain/store data, representing "active data". We have added a
note to this effect to the caption.
15. Fig 2: The figure supposedly shows work*flow*. Unfortunately, I cannot
recognize any flow in the figure. The box "Operations board" is introduced
only in the caption and not explained elsewhere.
The caption has been reworded accordingly. This figure intends to represent the
working areas of TVB. As requested by another reviewer, the text describing
sub-working areas (e.g. the operations dashboard) has been expanded.
16. Fig 5: Why are some items boxed and others not?
The main idea was to highlight the difference between entities represented by
datatypes and entities which are in the scientific library (monitors, models,
simulator, noise, integrators). The figure has been improved.
17. Fig 5: You write that "[s]ignal propagation via local connectivity is
instantaneous (no time delays)". How do you implement this without violating
causality?
Finite propagation speed is possible on the cortical surface if a neural field
model (including a Laplace-Beltrami operator) is used to describe the dynamics,
and so in that way causality is automatically preserved. In the absence of
finite propagation speed the activity doesn't really propagate along the
cortical surface, it only has an influence (which is instantaneous) on the local
neighbourhood, which depends on the footprint of the imposed local
connectivity. Therefore the activity coming from neighboring nodes is
transformed by the local connectivity kernel and added (introduced in the local
model equations) in the next integration step.
A local connectivity kernel that would strictly preserve causality would be
achievable with a high resolution surface.
Hence, we considered the current implementation to be an acceptable
approximation, but this will be further tested in the future.
In the case of the time delays introduced by the long-range connectivity, a
history state propagates the system from one state to the next, and events or
activity coming from other regions are delivered to their target nodes. This
history array contains the state of the system as it was X time steps before
the current state, where X depends on the white matter tract lengths (given by
the structural connectivity matrix), the conduction speed and the time step
size.
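A minimal sketch of such a delayed-history lookup is given below, using
circular-buffer indexing; the numbers and names are illustrative, not TVB's
internals.

import numpy as np

dt, speed = 0.1, 3.0                                  # ms, mm/ms (illustrative values)
lengths = np.array([[ 0., 30., 45., 60.],
                    [30.,  0., 15., 75.],
                    [45., 15.,  0., 30.],
                    [60., 75., 30.,  0.]])            # tract lengths in mm
n_nodes = lengths.shape[0]

# X = tract length / (conduction speed * time step), per connection
delay_steps = np.rint(lengths / (speed * dt)).astype(int)
horizon = delay_steps.max() + 1
history = np.zeros((horizon, n_nodes))                # circular buffer of past states

def delayed_state(step):
    # Element [i, j] is the state of source node j as seen by target node i,
    # i.e. delay_steps[i, j] steps before the current step.
    idx = (step - delay_steps) % horizon
    return history[idx, np.arange(n_nodes)]

# after each integration step, the new state overwrites the oldest slot:
# history[step % horizon] = new_state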
18. Fig 7B: what is the purpose of shading the area between the curves?
It was mainly to represent the integration time steps between the lower and
upper boundaries for both connectivity matrices.
19. Fig 9A: Why do some lines seem turquoise, while others are blue?
Each line has indeed a different color since they represent distinct nodes
(regions) as given by the connectivity matrix.
20. Fig 9B: "color scale correspond to the global variance"---global
variance of what?
This has been explained more clearly in the text.
21. Fig 10: Again: "global variance" and "Variance of the Variance"---but of
what?
This has been explained more clearly in the text.
22. Fig 11: Something seems to have been mixed up towards the end of the
caption, there is no clear description of (B). It should probably also be
"damped", not "dumped oscillations".
Yes, this has been corrected.
23. Fig 12: You might want to spell out "MSE" in the caption.
It has been fixed.
Independent Review Report, Reviewer 1
Thank you for your kind comments and constructive criticism.
The package is freely available and open source.
Page 2, "In TVB, the tract-lengths matrix of the demonstration connectivity
dataset is symmetric ... On the other hand, the weights matrix is asymmetric
..." Is this essential to the design, or just currently the case? In
particular, while in most cases one would expect that connection distances
A->B are the same as B->A, this need not be the case.
This is just currently the case, as the methods used to measure such data do not
yet yield asymmetric tract length matrices.
Structural connectivity is given by both the adjacency matrix (or a weighted
connectivity matrix, in case a scalar map has been used to define the strength
of the connections) and the white matter fibre lengths. Since diffusion is a
symmetric process, the connectomes derived from diffusion tensor imaging are
indeed symmetric.
However, this clarification was made specifically for TVB's demonstration
connectome dataset, which is a fusion of DTI-derived data and the CoCoMac
database, the latter being a directed connectivity matrix (graph) thanks to the
tracing studies used to determine the connections between regions.
This is not a limitation or modelling constraint: the implementations of
weights and tract-lengths are full node-by-node matrices without any symmetry
restrictions.
Page 2, "Two types of brain connectivity are distinguished in TVB, that is
region-based and surface-based connectivity. ..." I take it that these
approaches are *exclusive* of each other, that is to say, either the
(computational) nodes represent brain regions, or they represent much
smaller units, but not both concurrently? If so, then this limitation should
be pointed out clearly. If not, then the authors need to explain how these
different scales are being interfaced.
Details of how these two levels interact in the presence of a surface
connectivity have been added to this section.
Page 2, "TVB has been principally built in the Python programming
language..." Can the authors comment on the speed limitations this may
introduce, as compared to using for example C? (Actually, on reading further
this is discussed towards the end. But a brief mention here would be in
order.)
Native code integrators yield a 40% improvement in simulation speed, because
the algorithms, especially in the case of surface-based simulations, are memory
bound. Note that very little of the original Python code is replaced in such a
scenario, and the integration routine is interfaced with the rest of the
package. Recent advances in Python numerical software (Continuum Analytics'
"numba" package) have brought just-in-time compilation to numerical code,
which, when applied to our simulator code base, will likely shrink the 40% gap
between native code and pure Python (an illustrative sketch is given below).
As the heavily numeric components make use of libraries which are built against
highly optimised math libraries (MKL on Intel), the performance hit for the
high-level language isn't as great as it might initially seem. With current
active development aimed at improving both Python's speed as well as specific
improvements for high-performance computing (e.g. PyPy, Blaze, Theano), the
difference between compiled languages and Python should continue to shrink.
High performance is desirable in a scientific software tool. However, the
choice of Python is also due to its flexibility for implementing modelling
paradigms, the possibility to expand the different analysis techniques, to read
different sources of data, and to connect other neuroscientific modules that
have also been written in Python. So, there is a trade-off between performance
(with respect to simulation execution times, memory usage, memory bandwidth)
and readability of the implementation structure (clean separation of the
modelling components, accessibility by non-programmers, etc.).
We have added appropriate notes to the text.
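The sketch below applies numba's public njit decorator to a toy Euler loop to
illustrate the kind of just-in-time compilation mentioned above; it is not TVB
code, and the actual speed-up depends on the workload.

import numpy as np
from numba import njit

@njit
def euler_run(x0, n_steps, dt, a):
    # Toy Euler integration of dx/dt = -a * x, compiled to native code on first call.
    x = x0.copy()
    for _ in range(n_steps):
        x += dt * (-a * x)
    return x

x = euler_run(np.ones(16384), 10_000, 0.01, 1.0)   # first call triggers compilation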
Page 3, "Based on the usage scenarios and user’s level of programming
knowledge, two user profiles are represented: a graphical user (G-user) and
scripting user (S-user)." Add a reference to Figure 1 somewhere here.
Provide a discussion to what extent G-user and S-user have the same
abilities to interact with the system, or not. Does a S-user have more
control, or just different means of control?
S-users have different extents of control over different parts of the system.
An S-user typically will not find it easy to manipulate the database, users or
projects, or to use the interactive visualizers designed for the graphical user
interface, though it is in principle possible.
G-users will find it difficult to perform intricate, repeated or algorithmic
manipulations of simulations and data, because of the nature of graphical
interfaces.
We have added some detail on this contrast to the text.
Page 3, "users are required to have a high definition monitor (at least 1600
x 1000 pixels)" Why is this necessary?
We need a high resolution for the web interface of TVB because some of the
pages contain a lot of information, and that is the resolution for which the
web designers have managed to make the TVB graphical user interface acceptable
with regard to displaying all the required information while maintaining an
acceptable size for the controls (e.g. the size of the buttons). It is normal
in the world of web design to impose a minimum resolution.
This is not a hard dependency of the software, just a recommendation; the text
has been updated to reflect this.
Page 4, "The following formats are supported: NIFTI-1 (volumetric time-
series), GIFTI (surfaces), CFF (connectome file), ZIP (connectivity,
sensors, surfaces), BZIP2 (region mapping) and text files." ZIP and BZIP2
are compression formats. It is hence meaningless to mention them, without
saying what they compress. Explain what the actual underlying standards for
connectivity, sensors, surfaces, and region mapping are (e.g., home-baked
XML?), then you can mention that they are (B)ZIPped.
They compress conventionally organized ASCII files representing data. We have
noted this, given an example, and pointed the reader towards the user guide
where said conventions are well-documented.
Page 4, "Then for each operation, one folder per operation is created
containing a set of .h5 files generated during that particular operation,
and one XML file describing the operation itself." How much data is a user
generating per command on average, and is there any kind of "garbage
collection". Will a user "playing around" with the system generate massive
data files which are largely superfluous (because they just track user moves
superseded by later ones)?
There is no average amount of data generated per operation; it largely depends
on the parameters of the simulation, and subsequent operations depend in turn
on the simulated output data (time series). The XML file attached to each
operation is extremely small (1-2 KB), containing mainly a map of the
user-selected input parameters for that operation, which helps us (in case of
import/export or data loss) to reconstruct the database indexes. There is no
automated garbage collection mechanism in TVB, but users can remove data they
no longer need using controls in the TVB interface; in that case any
link/file/XML related to the data is removed and the disk space freed.
- A default region-level simulation of length 1000 ms takes approximately 1 MB
of disk space.
- A surface-level simulation, with local connectivity pre-computed, a Raw
monitor and a length of 10 ms, takes 280 MB.
Typical per-operation data volumes are not high for region-based simulations
(generating GBs of data may take hours on a typical workstation).
Page 4/5, "2.3.2 Population Models A set of default mesoscopic neural models
are defined in TVB’s MODELS." Can the authors explain what is involved in
adding other models to their simulation software? In particular, what format
do the "model modules" take, and how is their availability registered in the
main software?
There is a template for adding a new model available in the scientific library,
as well as a collection of untested models. This template is available from the
public repository on GitHub.
A TVB Model mainly consists of:
- parameters, with default values and ranges, labels and descriptions;
- state variables and their ranges, which are used to set random initial
conditions;
- state variable equations;
- coupling variables (i.e. the variables describing the activity that will
propagate through the long-range connectivity, which are at the same time the
state variables to which input stimuli are added);
- variables of interest for the monitors (the state variables used in the
forward solutions of the biophysical monitors depend both on the model and on
the monitor); a default is given assuming that the neural activity will be
projected onto EEG space;
- derived parameter equations, i.e. parameters derived from the "configurable
parameters" of the model.
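A schematic sketch of what such a model definition can look like is given
below; the class name, attribute layout and the toy FitzHugh-Nagumo-like
equations are illustrative placeholders, not the actual template from the
repository.

import numpy as np

class ExamplePopulationModel:
    # parameters: default values with admissible ranges, as listed above
    parameters = {"a": (1.05, (0.5, 1.5)), "tau": (1.0, (0.1, 5.0))}
    # state variables and the ranges used to draw random initial conditions
    state_variable_ranges = {"V": (-2.0, 4.0), "W": (-6.0, 6.0)}
    # variable(s) fed to the long-range coupling and receiving stimuli
    coupling_variables = ("V",)
    # state variable(s) the monitors project (e.g. onto EEG space) by default
    variables_of_interest = ("V",)

    def __init__(self, a=1.05, tau=1.0):
        self.a, self.tau = a, tau
        self.b = 3.0 * a                      # example of a derived parameter

    def dfun(self, state, coupling, stimulus=0.0):
        # State variable equations: time derivatives of (V, W).
        V, W = state
        dV = self.tau * (V - V**3 / 3.0 + W) + coupling + stimulus
        dW = (self.a - V - self.b * W) / self.tau
        return np.array([dV, dW])

# model = ExamplePopulationModel()
# derivatives = model.dfun(np.array([0.1, 0.0]), coupling=0.0)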
Page 6, "The biophysical Monitors instantiate a physically realistic
measurement process on the simulation, such as EEG, MEG, or BOLD." What
aspects of the underlying models are considered to drive the signal
expression, i.e., what model state variables serve as inputs to the EEG/MEG
lead field projections and the BOLD haemodynamics, respectively? Is this
fixed or somehow specified?
This remains an open question, one which the user or modeler will need to
justify. However, most neural models have a variable modeling potential, which
serves as a biophysical basis for the monitors. So, which state variables
should be used depends both on the model and on the biophysical space onto
which the neural activity described by the model is projected (MEG, EEG, BOLD).
By default the ones specified in the model are used; however, they can be
overridden by users.
The underlying sources of BOLD haemodynamics are less well known, but given the
disparate time scale separation between the neural dynamics and the haemodynamic
response function, it may (depending on the goals of the user) be reasonable to
ignore such concerns.
In the current implementation:
- EEG: the state variables specified as variables of interest, and the modes
(when working with multimodal models), are summed to obtain a single source of
activity.
- MEG: the same applies here.
- BOLD: no underlying assumption; the model's specifications are used.
Page 6,"We make the following estimates: it takes in average 16 seconds to
compute one second of brain network dynamics" The detail here should not be
shoved into the Figure caption, but discussed in the main text. Linear
scaling is hardly surprising if this is a one-core job with integration
detail directly linked to the temporal scale (i.e., noise appearing at all
temporal levels). The scaling shown in Fig 7B apparently provides no "stress
test" of the computational framework at all. The interesting bit is of
course rather where performance runs into a wall as the number of nodes is
increased (256, 512, ...?). Furthermore, we have no indication here about
the performance for a truly large number of nodes potentially using a
parallel cluster. How does the system handle tens of thousands of nodes,
with potentially millions of connections?
Following the feedback of another reviewer, we have added basic benchmarks for
connectivity matrices of different sizes. For large numbers of nodes, the
algorithm is memory bound, and run time is determined entirely by the bandwidth
between the CPU and main memory, thanks to the use of libraries such as
'numexpr' which regularly attain the theoretical limits of the memory bus.
At the default, low-resolution cortical surface, with the default local
connectivity kernel, there are ~17k nodes with 2.5% nonzero connections, or
roughly 6.8 million local connections. In recent tests on a server-grade
16-core Intel Xeon CPU with Intel's Math Kernel Library (MKL), 1 second of
surface simulation requires ~150 s of wall time for the generic 2D oscillator.
Despite the 16 available cores, MKL, which chooses at runtime how many cores to
use, uses only 4-6. This may be due in part to our use of the sparse
matrix-vector multiply routine from SciPy, itself based on ARPACK, a Fortran 77
library (an illustrative sketch of this sparse product is given below).
For the moment, it is not clear that an MPI implementation on conventional
hardware will yield a significant improvement in runtime, and it would
introduce considerable complexity to the simulator architecture.
We have added notes thereof in the text where appropriate.
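The sketch below shows the sparse local-coupling product in question using
SciPy's generic sparse matrix-vector multiply on a random stand-in for the
local connectivity kernel; the problem is scaled down from the ~16k-vertex
surface so it runs quickly, and none of this is TVB's code.

import numpy as np
from scipy import sparse

n_vertices = 4096                           # scaled down from the ~16k surface (assumed)
density = 0.025                             # ~2.5% nonzero local connections

# Random sparse stand-in for the local connectivity kernel.
local_coupling = sparse.random(n_vertices, n_vertices, density=density,
                               format="csr", random_state=0)
state = np.random.rand(n_vertices)

# One such product per integration step; its cost is dominated by streaming
# the matrix entries from main memory, hence the memory-bound behaviour.
local_input = local_coupling @ state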
Page 7, "Such a dynamic approach leads toward an adaptive modeling scheme
where stimuli and other factors may be regulated by the ongoing activity."
This requires that the stopping, modifying and restarting the simulation can
be done automatically, rather than by user interference. Is that the case?
Are relevant methods of other authors mentioned? If not, please specify the
methods the author should incorporate.
The current implementation allows simulated data to be retrieved as it is
produced. When a data point becomes available after each integration time step
(or after a number of integration time steps for monitors other than the Raw
monitor), it can be immediately accessed, processed or stored, and potentially
fed back (in the form of a stimulus or any other modification modellers may
require) at each integration time step without actually stopping the
simulation, as long as the parameters being modified are not the ones
determining the spatiotemporal structure/dimensionality of the input and output
(integration time step, transmission speed, simulation length, number of nodes,
number of state variables, number of modes).
However, no prepackaged tools are currently provided for automating this
process, which we consider to be very dependent on the details of the user's
goals. A toy sketch of such a closed loop is given below.
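Purely illustrative, with toy dynamics and a user-chosen feedback rule; none of
the names below come from TVB.

import numpy as np

def step(state, stimulus, dt=0.1):
    # one toy integration step with an additive stimulus term
    return state + dt * (-state + stimulus)

state = np.zeros(8)
stimulus = np.zeros(8)
for i in range(1000):
    state = step(state, stimulus)
    # user-defined feedback rule applied between steps, e.g. drive node 0
    # whenever the mean activity drops below a threshold
    stimulus[:] = 0.0
    if state.mean() < 0.05:
        stimulus[0] = 1.0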
Page 1, "In particular, cognitive and clinical neuroscience employs...":
References are only provided for EEG, not for sEEG, MEG and fMRI. Add some
references providing overviews over these neuroimaging techniques.
We have added references for these modalities.
Page 1. The introduction lacks discussion and proper references to works in
this field which are more directly comparable than simply generic neural
field/mass approaches. At a minimum, the authors should mention and briefly
discuss the following works from five key groups:
We have reviewed this literature and integrated it into the introduction of this
work.
Page 2, "Mesoscopic dynamics describe..." This paragraph lacks proper
references to the vast array of previous work in this field, starting with
Beurle (1956). I would recommend here reference to some recent reviews, in
particular the following:
Deco GR, Jirsa VK, Robinson PA, Breakspear M, Friston KJ (2008) The dynamic
brain: From spiking neurons to neural masses and cortical fields. PLoS
Comput Biol 4:e1000092 Coombes S (2010) Large-scale neural dynamics: Simple
and complex. NeuroImage 52:731–739 Bressloff PC (2012) Spatiotemporal
Dynamics of Continuum Neural Fields. J Phys A 45:033001 Liley DTJ, Foster
BL, Bojak I (2012) Co-operative populations of neurons: mean field models of
mesoscopic brain activity. In: Computational Systems Neurobiology. Ed. N. Le
Novère. Dordrecht: Springer. pp. 315–362.
We have reviewed this literature as well, adding to the text where appropriate.
Page 3, "The data format used in TVB is based on the HDF5 format (The HDF
Group. Hierarchical data format version 5)" Provide a proper reference,
e.g., to some official description of the standard (possibly a webpage). Are
methods of other authors accurately presented and interpreted? No.
Page 7,
"TVB is the first neuroinformatics tool of this type and has been developed
by integrating concepts from theoretical, computational, cognitive and
clinical neuroscience." The authors need to be a bit more cautious. This is
perhaps the first tool of this type which aims at general accessibility to
the scientific community and hopes to support general analysis approaches.
Tools of this type have been built by several groups before, but targeted
more at their own research question with less ambition to be widely used.
Previous works needs to be mentioned here properly again, see comment above.
We have modified the text where appropriate.
Page 3, "The distribution packages for TVB ... There is an active Users
group of TVB hosted in GoogleGroups ..." This would be a convenient place to
introduce relevant links for downloading the package and discussing it,
respectively.
We have centralized the links to these resources in a table as suggested.
Figure 2: Bring the dashed double-sided curved arrow to the foreground, so
that its ends do not get covered.
The figure has been slightly modified. It represents the working areas of the
web interface. The user's manual is available and contains more details about
the usage of and relationships between these pages.
Page 4, "As an example of the flow of data through TVB using Datatypes."
Fragment of a sentence, please correct.
This has been fixed.
Figures 3,4 are not mentioned/explained in the text. Please provide at
least one paragraph in the main text per figure, appropriately placed.
Please resort the Figures according to their order of reference in the main
text (Fig. 8 appears early).
We have added appropriate texts and ordered the figures as requested.