refactor: use threading for concurrent requests #133
Conversation
dataset_jsons: list[dict] = await asyncio.gather(
    *[
with CatalogParserConnection() as connection:
this also helps to solve some of the warnings in here, no?
Maybe, let's check the warnings once we've merged everything.
tuple[
    tuple[str, str],
So we are now using the built-in tuple from Python?
Yeah, I like it better. I think it's easier than importing the typing hint.
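For reference: since Python 3.9 (PEP 585), the built-in tuple, list, and dict can be parameterised directly in annotations, so typing.Tuple and friends no longer need to be imported. A minimal sketch of the pattern; the function names below are illustrative, not from this PR:

import pathlib


def split_credentials(raw: str) -> tuple[str, str]:
    # Built-in generic instead of typing.Tuple[str, str].
    username, password = raw.split(":", maxsplit=1)
    return username, password


def downloaded_files(folder: pathlib.Path) -> list[pathlib.Path]:
    # Built-in generic instead of typing.List[pathlib.Path].
    return sorted(folder.glob("*.nc"))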
) -> list[pathlib.Path]:
(
    username,
def _original_files_file_download(
Weird name for the function, no? Or do you mean all the original files from the file requested?
@@ -20,7 +19,6 @@ dask = ">=2022"
netCDF4 = ">=1.5.4"
boto3 = ">=1.25"
semver = ">=3.0.2"
nest-asyncio = ">=1.5.8"
hehe <3
Have you noticed or checked the behaviour time-wise? Does the time used grow, or is it more or less similar? Maybe even less? If it increases, is it worth it to be able to get rid of nest-asyncio?
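For context on why dropping nest-asyncio matters: that dependency exists only to let asyncio.run() be called from inside an already-running event loop, which is the situation in a Jupyter notebook. With a thread-based implementation the question never comes up. A minimal illustration of the difference; these helpers are hypothetical, not code from this PR:

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor


async def _fetch_async(i: int) -> int:
    await asyncio.sleep(0.1)
    return i


async def _gather_all() -> list[int]:
    return await asyncio.gather(*[_fetch_async(i) for i in range(5)])


def get_results_async() -> list[int]:
    # From a Jupyter cell this raises "RuntimeError: asyncio.run() cannot be
    # called from a running event loop" unless nest_asyncio.apply() has
    # patched asyncio first.
    return asyncio.run(_gather_all())


def _fetch_blocking(i: int) -> int:
    time.sleep(0.1)
    return i


def get_results_threaded() -> list[int]:
    # Works identically from a script or a notebook: threads do not care
    # whether an event loop is already running, so nest-asyncio can go.
    with ThreadPoolExecutor(max_workers=5) as executor:
        return list(executor.map(_fetch_blocking, range(5)))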
def _retry_policy(self, info: RetryInfo) -> RetryPolicyStrategy:
Are we getting rid of the retries, or is it handled internally somewhere else? The errors seem to be handled in another place, but does this mean that we have less control over their behaviour?
We are using the requests error retry directly.
Yeah, I guess a bit less control 🤔
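For the record, "using the requests error retry directly" typically looks like the sketch below: the retry policy is declared once on the session's HTTP adapter and urllib3 applies it transparently. The status codes and backoff values here are assumptions, not this PR's actual configuration:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session() -> requests.Session:
    # Replaces a hand-rolled _retry_policy callback: urllib3 retries the
    # request itself on connection errors and the listed status codes.
    retry = Retry(
        total=5,
        backoff_factor=1,  # exponential backoff between attempts
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET", "HEAD"],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session = requests.Session()
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

The trade-off raised above is real: per-attempt hooks disappear, but the whole policy lives in one place.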
Well, not that I like it, eh, but I feel like it's easier then, so good!
Yeah, I have run some tests and the times were very similar :D
48360c3 to a3ccd84
Not using aiohttp anymore, nor nest-asyncio. Also changed from multiprocessing to multithreading for the get command.
Gets rid of aiohttp and nest-asyncio.
fix: CMT-88, CMT-73, CMT-94
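As a rough sketch of what "from multiprocessing to multithreading for the get command" implies, under the assumption that the per-file work is an ordinary blocking HTTP download (names and worker counts are illustrative, not taken from the toolbox code):

import pathlib
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests


def download_one(url: str, destination: pathlib.Path) -> pathlib.Path:
    # Downloads are I/O-bound, so threads give the same speed-up as processes
    # without the pickling and process start-up overhead.
    destination.parent.mkdir(parents=True, exist_ok=True)
    with requests.get(url, stream=True, timeout=60) as response:
        response.raise_for_status()
        with open(destination, "wb") as output:
            for chunk in response.iter_content(chunk_size=1024 * 1024):
                output.write(chunk)
    return destination


def download_all(jobs: list[tuple[str, pathlib.Path]]) -> list[pathlib.Path]:
    # Thread-based stand-in for a multiprocessing.Pool: same map/submit style,
    # but exceptions, logging, and shared state behave like normal
    # single-process code.
    with ThreadPoolExecutor(max_workers=10) as executor:
        futures = [executor.submit(download_one, url, path) for url, path in jobs]
        return [future.result() for future in as_completed(futures)]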