This repository has been archived by the owner on Feb 1, 2023. It is now read-only.
If we keep enabling/disabling "Ignore blank rows" and/or "Ignore duplicate rows", sometimes an invalid CSV file is reported as valid. It seems like there is a race condition somewhere.
How to reproduce
Here's an example of a job that displayed the errors correctly:
https://try.goodtables.io/?source=https%3A%2F%2Fraw.githubusercontent.com%2Ffrictionlessdata%2Fgoodtables-py%2Fbc6470a970aacf65f20a3ddb7f71eb05a2a31c70%2Fdata%2Finvalid-on-structure.csv&apiJobId=45cedf3e-1706-11e8-9203-0242ac110008
And here's another one that incorrectly reports the data as valid:
https://try.goodtables.io/?source=https%3A%2F%2Fraw.githubusercontent.com%2Ffrictionlessdata%2Fgoodtables-py%2Fbc6470a970aacf65f20a3ddb7f71eb05a2a31c70%2Fdata%2Finvalid-on-structure.csv&apiJobId=be4a592c-1706-11e8-b944-0242ac110008
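A likely mechanism for a race like this: each toggle of a setting fires a new validation request, and responses can come back out of order, so a slow response for an older request can overwrite the result of a newer one. A minimal sketch of a request-token guard that discards stale responses (all names here are illustrative, not the actual try.goodtables.io code):

```javascript
// Hypothetical sketch: every settings change starts a new validation request
// and bumps a token; a response is applied only if it belongs to the most
// recent request, so a late-arriving stale "valid" report cannot overwrite
// the current one.

let latestToken = 0;
let displayedReport = null;

function applyReport(token, report) {
  // Discard any response that is not from the most recent request.
  if (token !== latestToken) return false;
  displayedReport = report;
  return true;
}

function startValidation(runValidation) {
  // runValidation() stands in for the async call to the validation API.
  const token = ++latestToken;
  return runValidation().then((report) => applyReport(token, report));
}
```

Without such a guard, toggling the checkboxes quickly lets whichever request happens to finish last win, which matches the intermittent behaviour described above.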