Allow non-contiguous layers to be specified simultaneously in `force_layer` and `force_equal_layers` #43
Conversation
Ah, so the idea is to keep the empty layers until later, so it is more predictable as to what you are forcing where?
src/layering.jl (outdated)
```diff
@@ -3,7 +3,8 @@
 function layer_by_longest_path_to_source(graph, force_layer)
     dists = longest_paths(graph, sources(graph))
     force_layers!(graph, dists, force_layer)
-    layer_groups = collect.(IterTools.groupby(i->dists[i], sort(vertices(graph), by=i->dists[i])))
+    layer_groups = [findall(==(d), dists) for d in 1:maximum(dists)]
```
Note the different time complexity. This is O(n*d), whereas the previous code is O(n*log(n)). But it seems we can do better with

```julia
layer_groups = [Int[] for d in 1:maximum(dists)]
for (ii, d) in enumerate(dists)
    push!(layer_groups[d], ii)
end
```

which is only O(n + d).
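For concreteness, here is a small self-contained comparison of the two groupings on a toy `dists` vector (plain Julia, nothing package-specific assumed):

```julia
dists = [1, 3, 1, 2, 3, 3]               # toy data: layer index per vertex

# O(n*d): scans all of `dists` once per layer d
groups_scan = [findall(==(d), dists) for d in 1:maximum(dists)]

# O(n + d): counting-sort-style bucketing, a single pass over `dists`
groups_bucket = [Int[] for _ in 1:maximum(dists)]
for (ii, d) in enumerate(dists)
    push!(groups_bucket[d], ii)
end

@assert groups_scan == groups_bucket      # both give [[1, 3], [4], [2, 5, 6]]
```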
Ah, I hadn't thought of this. Your suggestion is very nice: I will add that now 👍!
Would it be more natural to simply make this a `Dict{Int, Vector{Int}}` instead? Then there could also be negative indices and we wouldn't have to potentially store a bunch of empty layers.

Maybe there's some assumption elsewhere in the code that iteration over `layer2nodes` happens in incrementing fashion?
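A rough sketch of the `Dict`-based alternative being floated here, just to illustrate the trade-off (not what the PR implements):

```julia
dists = [1, 3, 1, 2, 3, 3]                # toy data: layer index per vertex

# Group vertices by layer without materialising empty layers;
# negative or sparse layer indices would also be fine as keys.
layer2nodes = Dict{Int, Vector{Int}}()
for (ii, d) in enumerate(dists)
    push!(get!(layer2nodes, d, Int[]), ii)
end

# Dict iteration order is unspecified, so any code relying on visiting
# layers in increasing order would need an explicit sort of the keys.
for d in sort!(collect(keys(layer2nodes)))
    println(d, " => ", layer2nodes[d])
end
```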
> Would it be more natural to simply make this a `Dict{Int, Vector{Int}}` instead? Then there could also be negative indices and we wouldn't have to potentially store a bunch of empty layers.
My instinct is that, in practical cases, the extra cost from empty layers will be lower than the extra cost of needing to do a hash.

Where would we get negative layer indices? Would that just be from users specifying them by hand? So it would be a way of saying "put this before the natural root layer"? That sounds like a bigger change that would want to be in its own PR.
> Maybe there's some assumption elsewhere in the code that iteration over `layer2nodes` happens in incrementing fashion?
Good question. I don't think so, but I can't rule it out from memory.
Agreed, better to leave that for a potential follow-up, if ever relevant. Thanks for the expeditious merging 👍!
Exactly 👍. Also, for plotting purposes, one may know that it would be nice to have some separation between certain graph components; that can be achieved by "reserving" some unoccupied layers between connected components via `force_layer`.
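For example, something like the following could reserve an empty "spacer" layer between two components (a sketch assuming LayeredLayouts.jl's `solve_positions(Zarate(), g; force_layer=...)` interface and a Graphs.jl digraph; the vertex and layer numbers are made up for illustration):

```julia
using Graphs, LayeredLayouts

# Two disconnected components: 1 -> 2 and 3 -> 4.
g = SimpleDiGraph(4)
add_edge!(g, 1, 2)
add_edge!(g, 3, 4)

# Layer 3 is deliberately left unoccupied, visually separating the components:
# the first component sits in layers 1-2, the second in layers 4-5.
xs, ys, paths = solve_positions(Zarate(), g;
    force_layer = [1 => 1, 2 => 2, 3 => 4, 4 => 5])
```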
Before this PR, it is possible to specify vertices in layer positions that skip certain layers; then, internally, those skips are eventually removed. However, this makes it tricky to combine with `force_equal_layers`, because you cannot know ahead of time what the layer position will be. This PR goes to greater lengths to ensure that the requested layer position is respected, hopefully making it much easier to combine `force_layer` and `force_equal_layers` when `force_layer` is not a simple unit range. @oxinabox
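As a purely illustrative sketch of the behavioural difference described above (the helper names and data are invented, not the package's internals):

```julia
# Vertex => requested layer; layer 3 is deliberately skipped.
requested = [1 => 1, 2 => 2, 3 => 4, 4 => 5]

# Old behaviour (sketch): empty layers were compacted away, so a vertex asked
# to sit in layer 4 could end up in layer 3, making it unpredictable which
# layer index to reference in `force_equal_layers`.
compact(pairs) = Dict(v => count(<=(l), unique(last.(pairs))) for (v, l) in pairs)

# New behaviour (sketch): the requested layer index is respected as given,
# so references to layer 4 in `force_equal_layers` stay meaningful.
respect(pairs) = Dict(pairs)

compact(requested)   # Dict(1 => 1, 2 => 2, 3 => 3, 4 => 4)  -- layers renumbered
respect(requested)   # Dict(1 => 1, 2 => 2, 3 => 4, 4 => 5)  -- gaps preserved
```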