
LWFA: GUI .cfg & Additional Parameters #2336

Merged

Conversation

Member

@ax3l ax3l commented Oct 26, 2017

Add to the Python input Parameter class (see the sketch after this description):

  • dtype: data type of the PIConGPU value
  • label: UI label instead of name
  • formatter: UI value label instead of name

New cfg for GUI: 0004gpus_gui.cfg

  • make more easily modifiable with tbg -o:
    • steps
    • grid size
    • devices
  • expose TBG_steps properly as integers.
  • all plugins dump in same interval: TBG_plugin_period
  • add histogram, second cut in PNG and phase space
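
For illustration, a minimal sketch of how such a parameter might be declared with the new fields. The constructor keywords (dtype, label, formatter), the numeric values, and the step-to-picoseconds conversion inside the formatter are assumptions for illustration, not code from this PR:

from picongpu.input.parameters import LinearScaledParameter

# Hedged sketch: keyword names and values below are illustrative assumptions.
steps = LinearScaledParameter(
    name="TBG_steps", ptype="run", unit="s",
    default=2000,
    dtype=int,                                      # data type of the PIConGPU value
    label="Simulated time",                         # UI label shown instead of the name
    formatter=lambda v: "%.1f ps" % (v * 1.39e-16 * 1e12),  # UI value label; assumes dt = 1.39e-16 s
    slider_min=0, slider_max=4000, slider_step=100,
    scale_factor=1.0)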

@ax3l ax3l added the component: tools scripts, python libs and CMake label Oct 26, 2017
@ax3l ax3l added this to the 0.4.0 / 1.0.0: Next Stable milestone Oct 26, 2017
@ax3l ax3l requested a review from codingS3b October 26, 2017 14:07
@ax3l ax3l force-pushed the topic-lwfaGUIcfg branch 2 times, most recently from 70e752f to 437398f on October 26, 2017 14:34
"""

members = super(UiParameter, self).as_dict()
del members["formatter"]
Contributor

Seems like we need to exclude the "dtype" member as well, since it can't be serialized either.

Contributor

@codingS3b codingS3b Oct 27, 2017

I just found out that we could store the dtype as a string representation using
members["dtype"] = members["dtype"].__name__. Maybe it would be cleaner to do this so that the parameter objects can later be restored correctly from the JSON representation.

Similarly, it also seems possible to store the formatter lambda as a string representation (see https://stackoverflow.com/a/30984012)
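
A rough sketch of that idea with a stand-in object (not the real UiParameter); note that inspect.getsource needs the defining source file to be available and returns the whole source line(s) containing the lambda:

import inspect
import json

class FakeParam(object):                      # stand-in for illustration only
    def __init__(self):
        self.name = "TBG_steps"
        self.dtype = int
        self.formatter = lambda v: "%d steps" % v

param = FakeParam()
members = dict(param.__dict__)                              # copy, leave the object untouched
members["dtype"] = members["dtype"].__name__                # <class 'int'> -> "int"
members["formatter"] = inspect.getsource(members["formatter"]).strip()
print(json.dumps(members))                                  # now JSON-serializable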

Member Author

OK, but be careful: a lambda is not necessarily just one source line ;)

Member Author

Shall we add the string representation of both in a follow-up? Do we need it now?
Otherwise, feel free to approve the PR :)

Contributor

I think for the moment we can leave them out, but maybe the class of each parameter object should be stored as a string representation.
This is because, as of right now, we still have issues in our ui-code when reading the parameter variables back from JSON: there, we implicitly have to cast to the base class "Parameter" (since we don't have any other information), losing all information about the dtype and formatter, which the program then tries to access and of course crashes.
So I still need to figure out a proper way to restore an "exact" copy of the parameter object on our side of the code. But the dtype and the formatter are handled with defaults when not given, so that should not be a problem.
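
For what it's worth, a hedged sketch of one way the concrete class could be remembered and used on read-back; the dict layout, the "class" key, and the registry argument are assumptions about the ui-code, not its actual API:

import json

# Illustrative only: assumes dtype/formatter were already stripped or stringified
# before serialization, as discussed above.

def serialize(param):
    members = dict(param.__dict__)
    members["class"] = type(param).__name__        # remember the concrete subclass
    return json.dumps(members)

def restore(serialized, classes):
    # classes: mapping from stored class name to the parameter class to rebuild
    members = json.loads(serialized)
    cls = classes[members.pop("class")]
    param = cls.__new__(cls)                       # bypass __init__, fill attributes directly
    param.__dict__.update(members)                 # absent dtype/formatter can be re-filled with defaults
    return param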

Contributor

Hold on, sorry... maybe our bug is somewhere else and we don't need the class representation. Don't change it just yet, I will get back to you :)

Contributor

@codingS3b codingS3b Nov 1, 2017

I found the critical section: calling del members["formatter"] seems to remove the entry not only from the members dictionary but, since it is a reference, also from the actual object. This should of course be avoided!
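
A tiny self-contained demonstration of the effect with a stand-in class, assuming as_dict() hands back the live attribute dictionary; copying before deleting avoids it:

class FakeParam(object):                    # stand-in; the real as_dict() behaviour is assumed
    def __init__(self):
        self.formatter = lambda v: str(v)
    def as_dict(self):
        return self.__dict__                # returns the live attribute dict, not a copy

p = FakeParam()
members = p.as_dict()
del members["formatter"]                    # BUG: also deletes the attribute from p itself
assert not hasattr(p, "formatter")

p = FakeParam()
members = dict(p.as_dict())                 # copy first ...
del members["formatter"]                    # ... so only the copy is changed
assert hasattr(p, "formatter")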

@@ -208,7 +241,8 @@ class LogScaledParameter(Parameter):
"""

def __init__(self, name, ptype, unit, default, slider_min, slider_max,
Contributor

LogScaledParameter should actually inherit from UiParameter. I committed that wrong and it needs to be fixed.

slider_step=1, scale_factor=1.e-15),

LinearScaledParameter(
name="TBG_steps", ptype="run", unit="s",
Contributor

It would probably be less confusing to use the units the user is accustomed to, like picoseconds, nanometers, etc. The formatted internal PIConGPU value of each parameter is, as discussed yesterday, indeed not relevant to the user. Sorry for switching back and forth on this issue.

self.value = self.base ** x
return self.value
self.value = self.dtype(self.base ** x)
return self.formatter(self.value)

def get_value(self):
Contributor

Before calling log10(self.value) we need to cast the value back to float: with self.dtype = int I got the error AttributeError: 'int' object has no attribute 'log10'.
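
A hedged sketch of such a get_value with the cast, using a stand-in class whose layout is only inferred from the diff snippet above, not copied from the PR:

import math

class FakeLogScaledParameter(object):       # illustrative stand-in, not the PR's class
    def __init__(self, default, base=10., dtype=float):
        self.base = base
        self.dtype = dtype
        self.value = dtype(default)

    def set_value(self, x):
        self.value = self.dtype(self.base ** x)
        return self.value

    def get_value(self):
        # cast back to float before taking the logarithm, as noted in the review
        return math.log10(float(self.value)) / math.log10(float(self.base))

p = FakeLogScaledParameter(default=100, dtype=int)
print(p.get_value())                        # 2.0, even though the value is stored as an int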

slider_step=1, scale_factor=1.e-15),

LinearScaledParameter(
name="TBG_steps", ptype="run", unit="s",
Contributor

@codingS3b codingS3b Oct 27, 2017

For the Wave_Length_SI parameter the default should be a float, or the dtype should be specified as float, since right now the dtype is taken as int, causing much trouble in the computations that follow.

The same goes for Base_Density_SI.
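
A hedged sketch of the corrected declarations; the numeric defaults, ptype values, slider ranges and the extra dtype keyword are placeholders, not the values from this PR:

from picongpu.input.parameters import LinearScaledParameter

# Illustrative placeholders only.
wave_length = LinearScaledParameter(
    name="Wave_Length_SI", ptype="compile", unit="m",
    default=0.8e-6, dtype=float,            # float default (or explicit dtype=float)
    slider_min=1, slider_max=10, slider_step=1, scale_factor=1.e-7)

base_density = LinearScaledParameter(
    name="Base_Density_SI", ptype="compile", unit="1/m^3",
    default=1.e25, dtype=float,
    slider_min=1, slider_max=100, slider_step=1, scale_factor=1.e24)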

ax3l added 3 commits October 27, 2017 14:15
- make more easily modifiable with `tbg -o`:
  - steps
  - grid size
  - devices
- all plugins dump in same interval
Add to python input parameter class:
- dtype: data type of the PIConGPU value
- label: UI label instead of name
- formatter: UI value label instead of name
Expose TBG_steps properly as integers.
@ax3l ax3l force-pushed the topic-lwfaGUIcfg branch from 437398f to 6ccbf5f on October 27, 2017 12:15
@ax3l
Member Author

ax3l commented Oct 27, 2017

@codingS3b thanks for the review, I addressed all inline comments.
Can you please take another look?

@ax3l ax3l requested a review from PrometheusPi October 27, 2017 14:03
@ax3l ax3l added the component: examples PIConGPU or PMacc examples label Oct 27, 2017
Member

@PrometheusPi PrometheusPi left a comment

very good code - just some remarks and an evil magic number


"""
Parameter.__init__(self, name, ptype, unit, default, value)
Parameter.__init__(self, name, ptype, unit, default, value, dtype)
Member

QUESTION:
dtype is (as far as I can tell) not used below, so why is it then used as an input parameter?
Add a TODO if you want to add functionality using dtype in the future.

Member Author

dtype is used to initialize a member of the parent class.

It is then used in all the set_value implementations.
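
A compact sketch of how dtype typically enters set_value; this stand-in mirrors the diff snippets above but is an assumption, not the exact source:

class FakeLinearScaledParameter(object):    # illustrative stand-in only
    def __init__(self, default, scale_factor=1.0, dtype=float):
        self.scale_factor = scale_factor
        self.dtype = dtype                  # stored via the parent __init__ in the real code
        self.value = dtype(default)

    def set_value(self, x):
        # the dtype cast turns the scaled slider value into the type PIConGPU
        # expects, e.g. an integer step count for TBG_steps
        self.value = self.dtype(x * self.scale_factor)
        return self.value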

Member

Sorry for the confusion - my fault - it is used right there

@@ -18,10 +18,13 @@

from picongpu.input.parameters import LogScaledParameter, LinearScaledParameter

pico = 1.e-12
dt_r = 1. / 1.39e-16 * pico
Member

REQUIRED:
Please explain this magic number (1.39e-16) in a comment, or give it a name that makes its meaning clear (as was done for pico, for example).

Member Author

@ax3l ax3l Oct 27, 2017

This is the inverse (historically suffixed _r) of dt, in inverse ps.

We might be able to just add it as a Parameter (but not a UiParameter) in the list below to make it more accessible.

Currently, we do not have a clean interface for it :(
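
A small sketch of one way to name the constant, taking at face value that 1.39e-16 s is the time step dt of this setup (the variable names are suggestions only):

pico = 1.e-12
dt_SI = 1.39e-16               # simulation time step of this setup, in seconds
dt_r = 1. / dt_SI * pico       # inverse of dt, expressed in 1/ps (historically suffixed _r)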

TBG_e_PSypy="--e_phaseSpace.period !TBG_plugin_period --e_phaseSpace.space y --e_phaseSpace.momentum py --e_phaseSpace.min 0.0 --e_phaseSpace.max 1.0"
TBG_e_PSzpz="--e_phaseSpace.period !TBG_plugin_period --e_phaseSpace.space z --e_phaseSpace.momentum pz --e_phaseSpace.min -2.0 --e_phaseSpace.max 2.0"

TBG_plugins="!TBG_pngYX \
Member

QUESTION:
Why is this so different from the other *.cfg files that it needs to be a separate file?
Couldn't we just use a combined *.cfg file to avoid duplicated code piling up?

Member Author

We want to control steps directly from the outside, but our tools do not understand -s number, only tbg variables such as !TBG_steps, which can be controlled with tbg -o.

We will probably also add more changes here, so it is best to make it a fully separate setup.

Member

Okay - that's a good idea 👍

@ax3l
Member Author

ax3l commented Nov 1, 2017

@PrometheusPi @codingS3b if you are OK with the intermediate state of this feature, we can already merge it so that @codingS3b can continue working on top of dev.

Member

@PrometheusPi PrometheusPi left a comment

accept for now - fixes will be added later

@PrometheusPi PrometheusPi merged commit b6925ca into ComputationalRadiationPhysics:dev Nov 1, 2017
@ax3l ax3l deleted the topic-lwfaGUIcfg branch November 1, 2017 16:47
psychocoderHPC pushed a commit to psychocoderHPC/picongpu that referenced this pull request Aug 7, 2024
106a4975f4 fix getFunctionAttributes for the SYCL backend
f36e1156af update CUDA version in CI
3f8456973e use inline for CUDA/HIP code when rdc is on, otherwise use static
8b9cc3c557 fix gh-pages jobA
89d5ce671c Ignore VI temporary files
4b7bd17493 Fix the device used by KernelExecutionFixture (ComputationalRadiationPhysics#2344)
2c386dc5e9 Make alpaka follow CMAKE_CUDA_RUNTIME_LIBRARY
2d652dd233 Add thread count to CPU blocks accelerators (ComputationalRadiationPhysics#2338)
dbc5ebe1e9 Fix complex math pow test (ComputationalRadiationPhysics#2336)
4995c5b22a Fix isValidWorkDivKernel to use the correct device
f571ce9197 Remove unnecessary include
a26cdbcd41 Re-enable the KernelNoTemplateGpu test
a9217fb780 Link libcudart even when libcurand is not used
9c8614143b Suppress GCC warning about casting a function to void*
ba169cdc52 Rewrite the getValidWorkDivForKernel tests
948eb757d4 Fix getValidWorkDivForKernel tests for the SYCL CPU backend
f6f94f13b5 Fix getValidWorkDivForKernel tests for the CUDA backend
f612f971a0 Reduce code duplications in matrixMulWithMdSpan (ComputationalRadiationPhysics#2326)
d1cc2e01c1 Add a matrix multiplication example using mdspan
536a183cce Add missing whitespace in enqueue log messages
81d4410eec Reduce code duplication in CUDA/HIP kernel launch
6fdec14904  add remove-restrict
5323600508 CI: improve script utils
01d123e605 fix missing C++20 STL for ICPX in the CI
d254bcd6a3 ctest: display only output of tests, which failed
c9b8c941af change documentation
b9ed742913 remove getValidWorkDiv itself
048ef8afca use getValidWorkDivForKernel in kernelfixture of tests
38805498f0 fix random strategies
4f175420f2 remove getValidWorkDiv first
7f08120428 CI_FILTER: ^linux_nvcc11.*
789344f019 ALPAKA_FN_HOST is not a type
4efdb9dc63 fix explicit instantiation issue
fe4106f88a CI_FILTER: ^linux_nvcc11.*gcc9
e6b4881b70 CI_FILTER: ^linux_nvcc11.*gcc9
e3e760ed9e make conv2dmdspan use kernelbundle
62efffe605 Add getValidWorkDivForKernel function and KernelBundle with tests
690da679bd Let the SYCL queue implement `ConceptCurrentThreadWaitFor`, `ConceptGetDev` and `ConceptQueue` (ComputationalRadiationPhysics#2314)
995c57b54b set alpaka_CXX_STANDARD in the job generator
6ad09baa38 remove nvcc11.0 and nvcc11.1 support (ComputationalRadiationPhysics#2310)
0775f7c066 clang-format and fix typo
18eeeb7b49 move complex declaration to internal namespace
3468d2f8ac add trait IsKernelTriviallyCopyable
3015eae06b update CI container to version 3.2
56c0e416bc Update Catch2 to v3.5.2 (ComputationalRadiationPhysics#2300)

git-subtree-dir: thirdParty/alpaka
git-subtree-split: 106a4975f48dc38cc34f6a2494a3d16774282951
psychocoderHPC pushed a commit to psychocoderHPC/picongpu that referenced this pull request Aug 9, 2024
ab3092357b Use shared CUDA libraries by default

git-subtree-dir: thirdParty/alpaka
git-subtree-split: ab3092357bb0b917b4cc396ce49e47f7ac1924e1