Speed up Comparison #497
Conversation
…before creating iter
…t nets each instance has
One of the changes I've made might dramatically slow down the first run, but it should speed up subsequent runs: the cached list of possible matches for each instance should be significantly shorter, so both the comparison and the caching will be faster... as long as I get it working.
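A minimal sketch of the kind of per-instance candidate cache described above, assuming candidates are stored as sets keyed by instance name. The file name and helper names here are illustrative, not the actual code in this PR:

```python
import pickle
from pathlib import Path

# Hypothetical cache of candidate matches, keyed by instance *name* so the
# pickled file stays small and unpacking it on later runs is cheap.
CACHE_FILE = Path("possible_matches.pkl")

def load_possible_matches() -> dict[str, set[str]]:
    """Return {named_instance_name: {candidate_reversed_instance_names}}."""
    if CACHE_FILE.exists():
        with CACHE_FILE.open("rb") as f:
            return pickle.load(f)
    return {}

def save_possible_matches(matches: dict[str, set[str]]) -> None:
    """Write the (hopefully pruned) candidate sets back out for the next run."""
    with CACHE_FILE.open("wb") as f:
        pickle.dump(matches, f)

def prune_matches(current: set[str], candidates_on_net: set[str]) -> set[str]:
    """Intersect cached candidates with the instances seen on a shared net.

    The first run pays the cost of building the full candidate sets; later
    runs start from the much smaller cached sets, so comparison is faster.
    """
    return current & candidates_on_net
```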
…n the list of possible matches quicker
…n most places when possible
While using the cache, the runtime of jpegencode through comparison is about 16 minutes.
I was thinking about what you said in the meeting about the dictionary lookup times.
Looking at the profile, I don't think it is actually taking as long as I thought. It was only a quarter of a second in the entire jpegencode run. Edit:
I'm not using the get method, just the built-in [] access, so I'm not sure which one of these it actually is. The only place get is used on dicts is a line that was already in the code prior to any of my changes.
Also, from what I'm seeing online, dict access is about O(1).
It is looking pretty good. Can you upload the latest profile results?
Dictionary `[]` access should show up as `__getitem__`. I looked, and I think the `get()` calls are coming from spydrnet.
Dictionary access is O(1), since the time it takes to hash the key doesn't change with the size of the dictionary. List index access would of course take fewer instructions. However, I don't think we are impacted by this enough to care, based on the `__getitem__` profile.
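For reference, a standalone timing sketch (not project code) of the point above: both dict `[]` access and list indexing are O(1), but the dict lookup pays for hashing the key, so the list index takes fewer instructions per access:

```python
import timeit

d = {i: i for i in range(1_000_000)}
lst = list(range(1_000_000))

# Each statement is executed one million times by default.
print("dict []  :", timeit.timeit("d[500_000]", globals=globals()))
print("dict get :", timeit.timeit("d.get(500_000)", globals=globals()))
print("list []  :", timeit.timeit("lst[500_000]", globals=globals()))
```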
Going to push a few minor netlist_cleanup changes as well: same logic, slightly different format.
Just added the flow argument `logging_level` to work with these tools: netlist_cleanup, error injector, phys netlist, and structural comparison. Example flow run: `python scripts/run.py VivadoPhysNetlistCmp --flow_arguments "{'logging_level': 'ERROR'}"`. This will only log messages at `logging.error` or above, so no `logging.debug`, `logging.info`, or `logging.warning` messages get added to the log file. The options are DEBUG, INFO, WARNING, ERROR, and CRITICAL.
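A hedged sketch of how a `logging_level` string like this can be applied with Python's standard logging module; the actual flow-argument plumbing in these tools may differ, and the function below is made up for illustration:

```python
import logging

def configure_logging(log_path: str, logging_level: str = "INFO") -> None:
    """Map a level name (DEBUG, INFO, WARNING, ERROR, CRITICAL) onto the
    logging module's constant and send everything at or above it to a file."""
    level = getattr(logging, logging_level.upper(), logging.INFO)
    logging.basicConfig(filename=log_path, level=level,
                        format="%(levelname)s %(message)s")

configure_logging("comparison.log", "ERROR")
logging.warning("suppressed")  # below ERROR, not written to the file
logging.error("recorded")      # ERROR and above end up in comparison.log
```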
…oing it every time you go to check pins
…nd which mapping function to use
…find a close match
This should be good to merge, @reillymck, or we can talk about it during the meeting if there's anything else to be added.
I'm starting this pull request now but it's going to be a draft for a while until I've figured out exactly what changes we want to make and how to make them.
Current changes:
Matches as sets instead of lists
Passing in instance names instead of the instances themselves, which makes unpacking the cache a lot faster
Narrowing down the list of possible matches by the number of const nets (partially working, but not completely because of the issue with the const generator LUTs)
Checking pins in an order based on whether they're a const pin or not. The thought here is that a non-const net connected to a pin can narrow the possibilities down a lot more than a const net can.
Breaking down some bigger functions into smaller functions
Flow argument logging_level, which decides which severity of logging messages gets written to the logs
named_netlist.instances_to_map now holds tuples containing the instance name and the mapping function to use, which cuts out later string comparisons (see the sketch after this list)
Reorganization of netlist cleaner
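For the instances_to_map item above, here is a hedged illustration of the idea: pick the mapping function once while building the list of tuples, so the hot loop dispatches directly instead of repeating cell-type string comparisons. The attribute names and cell-type checks below are hypothetical, not the actual bfasst code:

```python
def map_lut(instance_name):
    """Placeholder LUT mapping routine."""

def map_ff(instance_name):
    """Placeholder flip-flop mapping routine."""

def map_default(instance_name):
    """Placeholder fallback mapping routine."""

def build_instances_to_map(named_instances):
    """Decide each instance's mapping function once, up front, and store
    (instance_name, mapping_function) tuples."""
    instances_to_map = []
    for inst in named_instances:
        if "LUT" in inst.cell_type:
            fn = map_lut
        elif inst.cell_type.startswith("FD"):
            fn = map_ff
        else:
            fn = map_default
        instances_to_map.append((inst.name, fn))
    return instances_to_map

# Hot loop: dispatch through the stored function, with no repeated
# cell-type string comparisons.
# for name, mapping_fn in named_netlist.instances_to_map:
#     mapping_fn(name)
```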