Pandas interpolation enhancement request: specifying the maximum gap to interpolate. #12187
Comments
you can do something like this. Not sure how useful this is generally, as IIRC this is the first request for this type of filling.
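(The snippet that originally accompanied this comment didn't survive in this copy of the thread. Below is a minimal sketch of one way to do this kind of fill, using a hypothetical helper name `interpolate_maxgap`; it is not necessarily what was originally posted.)

```python
import numpy as np
import pandas as pd

def interpolate_maxgap(s, maxgap):
    """Interpolate s, but leave runs of more than `maxgap` consecutive NaNs untouched."""
    is_na = s.isna()
    # Every run of consecutive NaNs shares the cumulative count of the valid
    # value preceding it, so grouping on that label gives each gap's size.
    gap_label = (~is_na).cumsum()
    gap_size = is_na.groupby(gap_label).transform("sum")
    filled = s.interpolate()
    # Re-mask the positions that belong to gaps longer than maxgap.
    return filled.mask(is_na & (gap_size > maxgap))

# Hypothetical hourly series with a 2-value gap and a 4-value gap.
s = pd.Series(
    [1.0, np.nan, np.nan, 4.0, np.nan, np.nan, np.nan, np.nan, 9.0],
    index=pd.date_range("2016-01-01", periods=9, freq="h"),
)
# Fills the 2-NaN gap, leaves the 4-NaN gap as NaN.
print(interpolate_maxgap(s, maxgap=3))
```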
Based on the original values of "s" above, the results I am looking for when "interpolating gaps up to 3 hours" in an hourly time-series would be as follows: the first gap (2 values) is interpolated, but for the second gap (4 values) no interpolation is performed at all. I could write code to do what I want, but I would prefer using an existing tool in which the user can specify the maximum gap to interpolate (maxgap).
this is a bit like consolidating
I've actually had to do something like this before; see this answer for how it was achieved. I can say this is pretty useful. I know a number of quality-control techniques that use this kind of reasoning: if gap < maxgap do nothing, otherwise do something else (or vice versa). I would recommend something like passing an extra parameter for this.
yeah, iterative grouping is useful; see the last example here: http://pandas.pydata.org/pandas-docs/stable/cookbook.html#grouping I suppose someone could contribute something along these lines.
This seems like a very real issue; from my perspective, the whole point of having the "limit" argument is to avoid filling gaps that are too large, i.e. cases where the assumption behind filling does not hold. Is anyone still looking at this issue? Another very similar issue is #16457, and on stackoverflow: https://stackoverflow.com/questions/43077166/interpolate-only-if-single-nan/43079055#43079055
I've had to do this many, many times, and adding this capability would be a good improvement. The most common use case is when you can justify interpolating short gaps of missing data within a signal's temporal autocorrelation range, but cannot justify interpolating long gaps. Another use case is when data is being upsampled using pd.resample(), which inserts NaNs at the timestamps to be interpolated. Say you're upsampling by a factor of three. In this case, you can do the upsample by interpolating across at most 2 consecutive NaNs; three or more consecutive NaNs indicate missing data in the original time series, and you don't want to interpolate across that.
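A small sketch of that upsampling scenario with hypothetical data; it shows why `limit` alone is not enough, since it also partially fills the long gap left by a missing original observation:

```python
import numpy as np
import pandas as pd

# Hourly series with one missing original observation.
idx = pd.date_range("2024-01-01", periods=4, freq="h")
s = pd.Series([1.0, 2.0, np.nan, 4.0], index=idx)

# Upsample by a factor of three: 2 NaNs are inserted between original stamps,
# while the missing original hour becomes a run of 5 consecutive NaNs.
up = s.resample("20min").asfreq()

# limit=2 fills the inserted 2-NaN runs, but it also fills the first two
# values of the 5-NaN run coming from the missing hour, which is exactly
# the behaviour this issue wants to avoid.
print(up.interpolate(limit=2))
```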
Is someone still working on this problem?
I read this was already implemented, but as of today that still does not seem to be the case. It would be a valuable improvement, I think; would it be possible to make this available?
@veenstrajelmer this is an open issue and we would likely take a PR for implementation |
Hope this becomes possible soon... |
community contributions are how things get done |
Hi community, do we already have a solution for this issue? |
I don't think so. You can use xarray.DataArray.interpolate_na instead.
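For reference, xarray's interpolate_na accepts a max_gap argument. A minimal sketch, assuming a hypothetical hourly series; the gap size is measured on the time coordinate, i.e. as the span between the valid points on either side of the gap:

```python
import numpy as np
import pandas as pd
import xarray as xr

s = pd.Series(
    [1.0, np.nan, np.nan, 4.0, np.nan, np.nan, np.nan, np.nan, 9.0],
    index=pd.date_range("2020-01-01", periods=9, freq="h"),
)
da = xr.DataArray(s.to_numpy(), dims=["time"], coords={"time": s.index})

# Fill only gaps whose time span is at most 3 hours: the 2-NaN gap
# (spanning 3 hours between valid points) is filled, while the 4-NaN gap
# (spanning 5 hours) is left untouched.
filled = da.interpolate_na(dim="time", max_gap=pd.Timedelta("3h"))
print(filled.to_series())
```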
Currently, pandas interpolation fills all gaps regardless of their size, and the limit parameter only limits the number of replacements: if there is a gap of 3 values and limit=2, pandas replaces the first 2 values.
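A small illustration of that current behaviour, with hypothetical data; the comment shows the expected result:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, np.nan, np.nan, 5.0])

# limit=2 fills only the first two NaNs of the three-NaN gap,
# leaving a partially interpolated gap rather than skipping it entirely:
# 1.0, 2.0, 3.0, NaN, 5.0
print(s.interpolate(limit=2))
```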
I have difficulty understanding why someone would want to interpolate only a few of the missing items in a consecutive run of missing values.
Personally, depending on the length of the gap, I would like to decide whether to interpolate the whole gap or none of it. For example, in an hourly time-series, interpolating missing hours up to a maximum of 3 consecutive hours:
gaps <= 3 would be interpolated
gaps > 3 remain untouched.
I would appreciate an option for interpolation such as R's "na.approx", in which a "maxgap" parameter is available: maxgap is the maximum number of consecutive NAs to fill; any longer gaps will be left unchanged.