
Update batch processing length normalization to match non-batch processing length normalization #441

Merged: 4 commits, May 16, 2024
4 changes: 2 additions & 2 deletions src/main.cpp
@@ -2190,7 +2190,7 @@ void usageTCCQuant(bool valid_input = true) {
<< " (default: equivalence classes are taken from the index)" << endl
<< "-f, --fragment-file=FILE File containing fragment length distribution" << endl
<< " (default: effective length normalization is not performed)" << endl
<< "--long Use version of EM for long reads " << endl

[Review thread on this line]
Reviewer: See above.
Collaborator (author): Could you clarify your question?
Reviewer: The same problem as in line 2075.

<< "--long Use version of EM for long reads " << endl
<< "-P, --platform. [PacBio or ONT] used for sequencing " << endl
<< "-l, --fragment-length=DOUBLE Estimated average fragment length" << endl
<< "-s, --sd=DOUBLE Estimated standard deviation of fragment length" << endl
@@ -2390,7 +2390,7 @@ int main(int argc, char *argv[]) {
if (fld_lr_c[i] > 0.5) {
//Good results with comment below.
//flensout_f << std::fabs((double)fld_lr[i] / (double)fld_lr_c[i] - index.k);//index.target_lens_[i] - (double)fld_lr[i] / (double)fld_lr_c[i] - k); // take mean of recorded uniquely aligning read lengths
flensout_f << std::fabs(index.target_lens_[i] - ((double)fld_lr[i] / (double)fld_lr_c[i]) - index.k);

[Review thread on this line]
Reviewer: Care to elaborate a bit? Ideally in a comment, else maybe at least in the commit message?
Collaborator (author): Hi! Based on our analysis of effective length normalization for long reads, the updated effective length provides better results.
Reviewer: I still don't follow what you are changing and why, but I am not familiar with the code. So if this makes sense to others without further explanation, feel free to ignore my comment.

flensout_f << std::fabs(((double)fld_lr[i] / (double)fld_lr_c[i]) - index.k);
} else {
flensout_f << std::fabs(index.target_lens_[i] - index.k);//index.target_lens_[i]);
}