
Add requirements for the Galaxy-ME tools #77

Open · wants to merge 3 commits into main
Conversation

@afgane commented Nov 26, 2024

This adds resource requirements for the tools used in the Galaxy-ME tutorial: https://training.galaxyproject.org/training-material/topics/imaging/tutorials/multiplex-tissue-imaging-TMA/tutorial.html

Values come from the cancer.usegalaxy.org instance.

tools.yml (outdated):

toolshed.g2.bx.psu.edu/repos/guerler/charts/charts/.*:
  mem: 10
toolshed.g2.bx.psu.edu/repos/hammock/hammock/hammock_1.0/.*:
  mem: 20
  env:
    _JAVA_OPTIONS: -Xmx{int(mem)}G -Xms1G
toolshed.g2.bx.psu.edu/repos/imgteam/bfconvert/ip_convertimage/.*:
  cores: 12
  mem: 128
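For readers outside the PR: assuming this file follows the TPV (Total Perspective Vortex) tools.yml schema — an assumption here, since the diff hunk omits the surrounding context — these entries would sit under a top-level tools: key, and the {int(mem)} template is filled in at dispatch time, so the hammock entry's Java options resolve to -Xmx20G -Xms1G. A minimal sketch:

    tools:
      toolshed.g2.bx.psu.edu/repos/hammock/hammock/hammock_1.0/.*:
        mem: 20
        env:
          # with mem: 20 this resolves to: _JAVA_OPTIONS=-Xmx20G -Xms1G
          _JAVA_OPTIONS: -Xmx{int(mem)}G -Xms1G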
Member

This seems like a lot; is it really needed? Does it depend on the input size?

Author

I guess you haven't seen mesmer yet...

As I mentioned in the initial comment, the values come from the cancer.usegalaxy.org instance, where the tools run correctly. So I figured we'd start there and bring the values down, if possible, once we have some runtime data?

Member

We do have this tool installed:

(venv) galaxy@sn06:~$ gxadmin tsvquery tool-metrics %ip_convertimage% memory.max_usage_in_bytes --like | awk '{print $1 / 1024 / 1024 / 1024}' | gxadmin filter histogram

(   0.114,   22.214) n=1433  **************************************************
[  22.214,   44.314) n=6     
[  44.314,   66.415) n=20    
[  66.415,   88.515) n=1     
[  88.515,  110.615) n=7     
[ 110.615,  132.716) n=1     
[ 132.716,  154.816) n=0     
[ 154.816,  176.916) n=0     
[ 176.916,  199.017) n=0     
[ 199.017,  221.117) n=1     
[ 221.117,  243.218) n=0     
[ 243.218,  265.318) n=0     
[ 265.318,  287.418) n=0     
[ 287.418,  309.519) n=0     
[ 309.519,  331.619) n=0     
[ 331.619,  353.719) n=0     
[ 353.719,  375.820) n=0     
[ 375.820,  397.920) n=0     
[ 397.920,  420.021) n=0     
[ 420.021,  442.121) n=0     
[ 442.121,  464.221) n=0     
[ 464.221,  486.322) n=0     
[ 486.322,  508.422) n=0     
[ 508.422,  530.522) n=1     
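An aside: if one wanted a single number rather than a histogram, the same gxadmin pipeline above can feed a rough percentile calculation. A sketch using only standard sort/awk on top of that query; the 0.95 cutoff is an arbitrary illustrative choice, not a project convention:

    # nearest-rank 95th percentile of max memory usage, in GB
    gxadmin tsvquery tool-metrics %ip_convertimage% memory.max_usage_in_bytes --like \
        | awk '{print $1 / 1024 / 1024 / 1024}' \
        | sort -n \
        | awk '{v[NR] = $1} END {i = int(NR * 0.95); if (i < 1) i = 1; print "p95 (GB):", v[i]}'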

Author

OK, nice. So do you think we should go with 64 GB, or bring it all the way down to 24 GB?

Member

At least that is what the data suggest. Maybe you have large inputs and it needs more memory? Or it scales with cores? In that case we need a nice rule.
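For what it's worth, a minimal sketch of such a rule, assuming this file follows the TPV schema, where input_size in rule expressions is the total input size in GB; the 20 GB threshold and the 128 GB bump are illustrative, not measured:

    tools:
      toolshed.g2.bx.psu.edu/repos/imgteam/bfconvert/ip_convertimage/.*:
        cores: 12
        mem: 30
        rules:
          - id: bfconvert_large_input_mem
            # assumption: input_size is the total size of job inputs in GB
            if: input_size >= 20
            mem: 128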

Author

I have no usage data at the moment, so I'll just drop it to 30 GB for the time being.

Author

Forgot that I hadn't actually updated the config after my reply... Updated now.
