Enhancement: Performance Rules #28
I think that we could do something like this. There is a tool you might find interesting called perflint, which does (somewhat similar) optimization checks, but none of the specific ones that you mention, from what I can tell. Some questions that I have:
```python
for item in (*list1, *list2, *list3):
    # do something

# vs

for item in itertools.chain(list1, list2, list3):
    # do something
```

And, if for whatever reason your code depends on some side effect during the exhausting of the iterators (first example), lazily iterating (second example) could be a bug. This is unlikely, but something we would have to mention in the explainer for that check.
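The eager-vs-lazy difference described above can be made concrete with a small sketch (the `noisy` generator and names are illustrative, not from the linter):

```python
import itertools

log = []

def noisy(name, items):
    # A generator that records when each of its items is actually produced
    for x in items:
        log.append(name)
        yield x

# Tuple unpacking exhausts both generators up front, before any loop body runs:
log.clear()
eager = (*noisy("a", [1]), *noisy("b", [2]))
order_eager = list(log)  # both generators already ran

# itertools.chain is lazy: a generator is only advanced as iteration reaches it
log.clear()
lazy = itertools.chain(noisy("a", [1]), noisy("b", [2]))
first = next(lazy)       # only the first generator has produced anything so far
order_lazy = list(log)

print(order_eager, order_lazy)  # → ['a', 'b'] ['a']
```

So code that relies on all iterators being exhausted at the point of the `for` statement would observe different behavior after the rewrite, which is exactly why the explainer would need a caveat.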
```python
even_nums = filter(lambda x: x % 2 == 0, nums)

# vs

even_nums = [x for x in nums if x % 2 == 0]
```

I have yet to speed test:

```python
even_nums = []
for x in nums:
    if x % 2 == 0:
        even_nums.append(x)
```
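For reference, a quick benchmark sketch of the three variants (the `nums` size and iteration count are arbitrary choices; actual timings will vary by interpreter version, so no winner is asserted here):

```python
from timeit import timeit

nums = list(range(1000))

def use_filter():
    return list(filter(lambda x: x % 2 == 0, nums))

def use_comprehension():
    return [x for x in nums if x % 2 == 0]

def use_append_loop():
    result = []
    for x in nums:
        if x % 2 == 0:
            result.append(x)
    return result

# All three produce the same list; only the cost differs
assert use_filter() == use_comprehension() == use_append_loop()

print("filter       ", timeit(use_filter, number=1_000))
print("comprehension", timeit(use_comprehension, number=1_000))
print("append loop  ", timeit(use_append_loop, number=1_000))
```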
I also planned on writing a check which would look for code similar to the following:

```python
def all_truthy(it):
    for i in it:
        if not i:
            return False
    return True
```

The above code could be replaced with a single built-in call.
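The specific replacement isn't shown above, but a reasonable assumption is the built-in `all`, which performs the same short-circuiting truthiness scan:

```python
def all_truthy(it):
    for i in it:
        if not i:
            return False
    return True

# all() short-circuits on the first falsy element, just like the loop above,
# but runs in C rather than interpreted bytecode
for values in ([], [1, "x", True], [1, 0, 2], [None]):
    assert all_truthy(values) == all(values)
```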
---

So lemme go point by point:
---

All in all I agree that we should add these checks, but I will have to do some brainstorming to see how I want to go about detecting and emitting errors for them.
---

Cool. BTW, I just tried it.
---

#100 suggests configurability of this check. I think "enforcing consistency" is a better way to frame it.
---

Adding onto this:

```python
from itertools import chain
from timeit import timeit

big_nested_list = [["*"] * 9] * 9

def flatten_generator():
    return (
        item
        for flatten in big_nested_list
        for item in flatten
    )

def flatten_chain():
    return chain(*big_nested_list)

def flatten_chain_from_iterable():
    return chain.from_iterable(big_nested_list)

def flatten_list():
    return [
        item
        for flatten in big_nested_list
        for item in flatten
    ]

def flatten_chain_list():
    return list(chain(*big_nested_list))

def flatten_chain_from_iterable_list():
    return list(chain.from_iterable(big_nested_list))

print("flatten_generator          ", timeit(flatten_generator))
print("flatten_chain              ", timeit(flatten_chain))
print("flatten_chain_from_iterable", timeit(flatten_chain_from_iterable))
print("==================")
print("flatten_list                    ", timeit(flatten_list))
print("flatten_chain_list              ", timeit(flatten_chain_list))
print("flatten_chain_from_iterable_list", timeit(flatten_chain_from_iterable_list))
```

Python 3.9.13:

Python 3.11.6:
Interestingly, the results differ between the two Python versions.
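One practical difference worth noting between the variants above: `chain(*big_nested_list)` unpacks the outer list eagerly into an argument tuple, while `chain.from_iterable` consumes the outer iterable lazily. They still flatten to the same elements, which a quick sketch (reusing the same `big_nested_list`) confirms:

```python
from itertools import chain

big_nested_list = [["*"] * 9] * 9

flat_comprehension = [item for row in big_nested_list for item in row]
flat_chain = list(chain(*big_nested_list))
flat_from_iterable = list(chain.from_iterable(big_nested_list))

# All three flatten 9 rows of 9 items into the same 81-element list
assert flat_comprehension == flat_chain == flat_from_iterable
print(len(flat_chain))  # → 81
```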
---

Also on the note of performance rules.

Sources:

My own test:

```python
from contextlib import suppress
from timeit import timeit

def contextlib_suppress():
    with suppress(ZeroDivisionError):
        1 / 0

def try_except():
    try:
        1 / 0
    except:
        pass

print("contextlib_suppress", timeit(contextlib_suppress))
print("try_except", timeit(try_except))
```

Python 3.9.13:

Python 3.11.6:
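For the suppressed exception type the two forms are behaviorally equivalent, which a small sketch (function names are illustrative) can verify:

```python
from contextlib import suppress

def via_suppress():
    with suppress(ZeroDivisionError):
        return 1 / 0  # raises before returning; suppress swallows it
    return "suppressed"

def via_try_except():
    try:
        return 1 / 0
    except ZeroDivisionError:
        return "suppressed"

assert via_suppress() == via_try_except() == "suppressed"
```

One caveat about the benchmark above: its bare `except:` also catches `SystemExit` and `KeyboardInterrupt`, which `suppress(ZeroDivisionError)` does not, so a strictly fair comparison would name the exception in the `except` clause as well.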
---

Thank you @Avasam for bringing these up! Your second comment deserves its own issue/PR, since the docs should mention that it is slower than using a plain `try`/`except`.

As for your first comment, I tried it with:

```python
big_nested_list = [["*"] * 99] * 99
```

Currently using Python 3.11.5.
---

Here's another reason I found today to prefer it.
---

Hi again @dosisod,

One thing I'd like to suggest is adding performance checks:

- `itertools.chain`, and in general `itertools`, offers better performance in many cases.
- `filter`, `map` and `reduce` can be significantly faster than simple Python code because of Python's native optimizations.
- Hash-based lookup for `in` checks is dramatically faster than list/tuple lookup.
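As a quick illustration of the last point (sizes and names here are arbitrary choices for the demo): membership tests on a `set` are average-case O(1), while a `list` or `tuple` `in` check scans linearly, which the worst case makes obvious:

```python
from timeit import timeit

haystack_list = list(range(10_000))
haystack_set = set(haystack_list)
needle = 9_999  # worst case for the list: it must scan every element

# Both containers give the same answer; only the cost differs
assert (needle in haystack_list) == (needle in haystack_set) == True

list_time = timeit(lambda: needle in haystack_list, number=1_000)
set_time = timeit(lambda: needle in haystack_set, number=1_000)
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

On any CPython version the set lookup should win by orders of magnitude at this size, which is why a lint rule flagging `x in some_literal_list` is attractive.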