
Picket-fence calibration interpretation appears incorrect? #36

Closed
bwmeyers opened this issue Dec 1, 2023 · 4 comments
Labels: bug (Something isn't working), question (Further information is requested)

Comments

@bwmeyers (Contributor) commented Dec 1, 2023:

@cplee1 has better examples of this than I do, but essentially, when dealing with picket-fence data, VCSBeam appears to interpret the calibration solutions as if the full 30.72 MHz bandwidth is present. It then infers the calibration-solution channel widths from that assumption and thus miscalculates them. It's unclear whether this is just a reporting issue or whether much-wider-than-necessary channel solutions are actually being applied to the data...

@bwmeyers added the bug and question labels on Dec 1, 2023
@bwmeyers self-assigned this on Dec 14, 2023
@bwmeyers (Contributor, Author) commented:

Status: fixed the calibration interpretation of channel widths, etc. The caveat now is that the number of REQUESTED coarse channels to process (== the MPI world size) must correspond to the number of coarse channels in your calibration solution. The fine-channelisation interpolation is figured out on the fly, but the underlying assumption is that if you want 5 coarse channels processed, the calibration solution file you provide contains solutions for ONLY those 5 channels.

The next issue, which seems related, is a GPU shared-memory alignment problem. I'm not sure exactly what's happening there; it may need someone with more CUDA experience to take a look at some point.

@bwmeyers (Contributor, Author) commented:

I believe I've now fixed the issue. There are strict memory allocation/access requirements for GPU shared memory (4-byte boundaries). It just so happens that previously our memory access patterns (with M*nant, where nant=128 or nant=144) did not violate this requirement: since we access complex-double types, an even nant guaranteed that every access fell within the 4-byte blocks.

For MWAX data, there is no guarantee that nant is even (e.g., in Dec 2023 the CRAM tile was included, so there are now 145 antennas in an observation). This means we have to actually consider how we access the shared-memory arrays, assigning offsets based on even indices (i.e., whole numbers of 4-byte chunks, since a complex double takes up 16 bytes).

@bwmeyers (Contributor, Author) commented:

For posterity, I changed these lines:

vcsbeam/src/form_beam.cu, lines 272 to 276 at commit 2a93cdf:

cuDoubleComplex *ex = (cuDoubleComplex *)(&arrays[0*nant]);
cuDoubleComplex *ey = (cuDoubleComplex *)(&arrays[2*nant]);
cuDoubleComplex *Nxx = (cuDoubleComplex *)(&arrays[4*nant]);
cuDoubleComplex *Nxy = (cuDoubleComplex *)(&arrays[6*nant]);
cuDoubleComplex *Nyy = (cuDoubleComplex *)(&arrays[8*nant]);

@bwmeyers (Contributor, Author) commented:

Nominally, this is corrected (via a combination of PRs #34, #42, #43, and #44) as long as the following condition is met:

  • The calibration solution provided must only include the coarse channels that are to be processed.
