How did you find sizeAvg to work in practice? A little hard to say, I guess, since we don't have a lot of real data with more than 1e5 points to play with... You're right that we don't want to just use sizeMax: it's worth accepting a bit of clipping in order to generally have less wasted space, and much of the time the largest point won't be on any edge. I'm just wondering how that balance plays out in practice, and whether we'd be better off with something like halfway between the average and the max.
Anyway, perhaps we don't have a good way to answer that question right now. I'll just mention, in case we do want to try to do better later, that we could find heuristics that add only a little computation, like binning data points into top/middle/bottom thirds and only using the top third for the top padding... maybe even with a smooth weighting of the size based on how far the point is from the edge. That would still be far faster than the full calculation but could do a better job of reducing wasted space without too much clipping.
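The binning-with-smooth-weighting idea above could look something like this. This is just a hypothetical sketch of the heuristic, not plotly.js's actual `Axes.expand` code; `positions` (data coordinates) and `sizes` (marker sizes in px) are assumed inputs, and the taper within the top third is one arbitrary choice of smooth weighting:

```javascript
// Sketch of the top-padding heuristic: only points in the top third of the
// data range can clip at the top edge, and their size contribution is
// weighted by how close they sit to that edge.
function topPadding(positions, sizes) {
    var min = Math.min.apply(null, positions);
    var max = Math.max.apply(null, positions);
    var span = (max - min) || 1;
    var pad = 0;
    for(var i = 0; i < positions.length; i++) {
        // fraction of the way from the bottom edge to the top edge
        var frac = (positions[i] - min) / span;
        // skip the bottom two thirds entirely - cheap early out
        if(frac < 2 / 3) continue;
        // weight ramps linearly from 0 (bottom of the top third)
        // up to 1 (right at the top edge)
        var weight = Math.min(1, (frac - 2 / 3) * 3);
        pad = Math.max(pad, sizes[i] * weight);
    }
    return pad;
}
```

A mirrored version with the bottom third would give the bottom padding. Note this can still under-pad a large point sitting just below the two-thirds cutoff, which is the kind of clipping/wasted-space tradeoff being discussed.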
In comparison, at 1e5 - 1 pts (i.e. below the big-data threshold), we get:
@alexcjohnson I'll reference #2417 - but honestly feel free to close it if you think Axes.expand is satisfactory now.
Any improvement we make at this point will come with both a performance cost and a loss of generality - so let's close for now, and reopen if we hit a specific use case where the current behavior isn't good enough.
... most notably for `scattergl`, but also perhaps for all trace types that support data arrays with length greater than 1e5 in the future.

From @alexcjohnson in #2404 (comment):