
Conductor shapes and particle scraping #1901

Closed
peterscherpelz opened this issue Apr 20, 2021 · 5 comments
@peterscherpelz
Member

The goal here is to support something analogous to the Assembly class in Warp. Assemblies can both modify the field solve (see #1675 and #1641 for existing WarpX work in this direction) and support scraping particles that impact them. A Python interface is also desired.

@peterscherpelz
Member Author

Other desired features are:

  • Record statistics for scraped particles: for each conductor and for each particle species, record at each timestep the number of particles that impacted and their summed attributes (e.g., total weight impacted, with an option to include summed additional particle attributes from Add additional particle attributes at runtime #1902)
  • Have an interface from scraping to reflection functions: these can remove particles from the list of particles to be scraped and modify the properties of those particles (mentioned in Expand python couplings #1900).
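A reflection hook along these lines could look like the following sketch. The function name, signature, and specular-reflection rule are purely illustrative assumptions, not an existing WarpX API:

```python
import numpy as np

def apply_reflection(x, ux, scraped_mask, reflect_prob=1.0, rng=None):
    """Hypothetical reflection hook: given positions, momenta, and a mask of
    particles flagged for scraping, decide which are reflected instead.

    Returns the updated momenta and the mask of particles still to be removed.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Particles chosen for reflection are removed from the scrape list...
    reflected = scraped_mask & (rng.random(x.size) < reflect_prob)
    remove_mask = scraped_mask & ~reflected
    # ...and have their normal momentum flipped (specular reflection).
    ux = np.where(reflected, -ux, ux)
    return ux, remove_mask

# With reflect_prob=1.0, every flagged particle is reflected, none removed.
x = np.array([0.1, 1.2, 0.5])
ux = np.array([1.0, 2.0, -1.0])
scraped = np.array([False, True, False])
ux_new, remove = apply_reflection(x, ux, scraped)
```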

@ax3l
Member

ax3l commented Jun 11, 2021

Meeting notes 06/11/2021:

Record particle statistics that were scraped

A possible way to implement this is using the particle buffers as also used in back-transformed diagnostics (#1898) to accumulate "lost" particles (negative ID) before they are removed in an AMReX redistribute. We could then use the "full" output diagnostics routines to periodically output these particles (plotfiles or openPMD).
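The accumulation idea can be sketched in plain numpy (the buffer and function names are illustrative, not WarpX internals): particles flagged as lost carry a negative ID, and a copy of them is stashed for diagnostics before the redistribute drops them.

```python
import numpy as np

# Accumulation buffer for lost particles, filled before redistribute
# would remove them (names illustrative only).
lost_buffer = []

def accumulate_lost(ids, weights):
    lost = ids < 0
    # Stash a copy of the lost particles for later diagnostics output.
    lost_buffer.append({'id': -ids[lost], 'w': weights[lost].copy()})
    # Return only the surviving particles, as a redistribute would.
    return ids[~lost], weights[~lost]

ids = np.array([1, -2, 3, -4])
w = np.array([0.5, 1.5, 2.5, 3.5])
ids, w = accumulate_lost(ids, w)
```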

Have an interface in scraping to reflection functions

We discussed several possible ways:

  • providing a C++ hook to interact with scraped particles: similar to PICSAR, this could then be implemented with routines in an external repo
  • moving (a few hundred) particles per step to the CPU, modifying them with regular Python, then copying them back to the GPU
  • exposing AMReX particle data structures & fields to cupy/numba (most work; draft currently paused because of developer deadlines)
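The second option above amounts to a small host round trip. A minimal sketch, with numpy standing in for device arrays (on a GPU these would be cupy `asnumpy`/`asarray` copies instead):

```python
import numpy as np

device_ux = np.array([1.0, -3.0, 2.0])   # pretend this lives on the GPU
scraped_idx = np.array([1])              # indices of the scraped particles

host_ux = device_ux[scraped_idx].copy()  # device -> host copy
host_ux *= -0.5                          # arbitrary Python-side modification
device_ux[scraped_idx] = host_ux         # host -> device copy back
```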

@peterscherpelz
Member Author

Thanks @ax3l ! I think the two other notes I had were:

  • Whether or not we use a particle buffer of lost particles, being able to hand those particles' coordinates/attributes to Python, to record in whatever transformed/abbreviated diagnostic, would be very helpful
  • We do need to know how to mark particles that hit Assemblies/Conductors/Embedded Boundaries as "lost". It seemed like there may already be a way implemented in related codes to do this, but I wasn't clear on that point.
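On the marking question: as noted in the earlier comment, AMReX's convention is that a particle with a negative ID is invalid and gets dropped at the next redistribute. A numpy stand-in for marking particles that ended up past a conductor surface (the `wall` position is purely illustrative):

```python
import numpy as np

ids = np.array([1, 2, 3], dtype=np.int64)
x = np.array([0.2, 0.9, 0.4])
wall = 0.8                   # hypothetical conductor surface position

inside = x > wall            # particles that crossed into the conductor
ids[inside] = -ids[inside]   # negate the ID to mark them as lost
valid = ids > 0              # a redistribute keeps only these
```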

@ax3l
Member

ax3l commented Jun 25, 2021

We discussed today that @atmyers will help us to scrape particles into a pinned memory buffer before our redistribute.
I will help with the hooking in of external C++ routines once we have a first prototype.

In parallel, the Modern Electron team might be interested in checking whether the Python routines for "regular" particles are sufficient to manipulate/analyze the scraped particles, or whether they need extensions (or fixes):

```python
def get_particle_arrays(species_number, comp, level):
    '''
    This returns a list of numpy arrays containing the particle array data
    on each tile for this process.

    The data for the numpy arrays are not copied, but share the underlying
    memory buffer with WarpX. The numpy arrays are fully writeable.

    Parameters
    ----------
    species_number : the species id that the data will be returned for
    comp           : the component of the array data that will be returned
    level          : the refinement level that the data will be returned for

    Returns
    -------
    A list of numpy arrays.
    '''
    particles_per_tile = _LP_c_int()
    num_tiles = ctypes.c_int(0)
    data = libwarpx.warpx_getParticleArrays(species_number, comp, level,
                                            ctypes.byref(num_tiles),
                                            ctypes.byref(particles_per_tile))
    particle_data = []
    for i in range(num_tiles.value):
        arr = np.ctypeslib.as_array(data[i], (particles_per_tile[i],))
        try:
            # This fails on some versions of numpy
            arr.setflags(write=1)
        except ValueError:
            pass
        particle_data.append(arr)
    _libc.free(particles_per_tile)
    _libc.free(data)
    return particle_data

def get_particle_x(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'x'
    positions on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    if geometry_dim == '3d' or geometry_dim == '2d':
        return [struct['x'] for struct in structs]
    elif geometry_dim == 'rz':
        return [struct['x']*np.cos(theta) for struct, theta in
                zip(structs, get_particle_theta(species_number))]

def get_particle_y(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'y'
    positions on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    if geometry_dim == '3d' or geometry_dim == '2d':
        return [struct['y'] for struct in structs]
    elif geometry_dim == 'rz':
        return [struct['x']*np.sin(theta) for struct, theta in
                zip(structs, get_particle_theta(species_number))]

def get_particle_r(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'r'
    positions on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    if geometry_dim == 'rz':
        return [struct['x'] for struct in structs]
    elif geometry_dim == '3d':
        return [np.sqrt(struct['x']**2 + struct['y']**2) for struct in structs]
    elif geometry_dim == '2d':
        raise Exception('get_particle_r: There is no r coordinate with 2D Cartesian')

def get_particle_z(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'z'
    positions on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    if geometry_dim == '3d':
        return [struct['z'] for struct in structs]
    elif geometry_dim == 'rz' or geometry_dim == '2d':
        return [struct['y'] for struct in structs]

def get_particle_id(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'id'
    numbers on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    return [struct['id'] for struct in structs]

def get_particle_cpu(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle 'cpu'
    numbers on each tile.
    '''
    structs = get_particle_structs(species_number, level)
    return [struct['cpu'] for struct in structs]

def get_particle_weight(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle
    weight on each tile.
    '''
    return get_particle_arrays(species_number, 0, level)

def get_particle_ux(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle
    x momentum on each tile.
    '''
    return get_particle_arrays(species_number, 1, level)

def get_particle_uy(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle
    y momentum on each tile.
    '''
    return get_particle_arrays(species_number, 2, level)

def get_particle_uz(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle
    z momentum on each tile.
    '''
    return get_particle_arrays(species_number, 3, level)

def get_particle_theta(species_number, level=0):
    '''
    Return a list of numpy arrays containing the particle
    theta on each tile.
    '''
    if geometry_dim == 'rz':
        return get_particle_arrays(species_number, 4, level)
    elif geometry_dim == '3d':
        structs = get_particle_structs(species_number, level)
        return [np.arctan2(struct['y'], struct['x']) for struct in structs]
    elif geometry_dim == '2d':
        raise Exception('get_particle_theta: There is no theta coordinate with 2D Cartesian')
```

Those routines should generally be useful to manipulate pinned memory particles (and should also work for managed memory particles, in case we want to try to run cupy kernels on them just to start investigating).
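As a quick sanity check of the RZ conversion logic above, with plain numpy arrays standing in for the tile data: in RZ geometry the struct stores r in the 'x' slot and theta separately, and Cartesian positions are recovered as x = r·cos(theta), y = r·sin(theta), matching `get_particle_x`/`get_particle_y`.

```python
import numpy as np

r = np.array([1.0, 2.0])            # stands in for struct['x'] in RZ
theta = np.array([0.0, np.pi / 2])  # stands in for the theta component

x = r * np.cos(theta)
y = r * np.sin(theta)
```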

@peterscherpelz
Member Author

I'm going to mark this issue complete. Future PRs may address items such as multiple conductors, but all the primary functionality described here has been implemented.
