JsonReader parse is easy; the json -> row conversion will print a hint on failure:
- put: a conversion failure reports the column name, type, and value
- deployment: a conversion failure reports the column name, type, and value
- query parameter: a conversion failure reports the column type and index

All of these hints are printed in the same place, cover just one row, and come back as a status message.
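A minimal sketch of that hint style, assuming Gson for the parsing; JsonRowHint and getIntCol are hypothetical names for illustration, not JsonReader's actual API:

```java
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

// Hypothetical sketch: convert one JSON field to a typed column and, on a
// conversion failure, report column name, type and value (the put/deployment
// hint style; a query parameter would report type and index instead).
public class JsonRowHint {
    static int getIntCol(JsonObject row, String col) {
        JsonElement v = row.get(col);
        try {
            return v.getAsInt();
        } catch (RuntimeException e) {
            throw new IllegalArgumentException(
                "cvt failed: col=" + col + ", type=int, value=" + v, e);
        }
    }

    public static void main(String[] args) {
        JsonObject row = JsonParser.parseString("{\"id\": \"oops\"}").getAsJsonObject();
        System.out.println(getIntCol(row, "id")); // throws: cvt failed: col=id, type=int, value="oops"
    }
}
```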
jdbc insert row (not recommended)
Error: [2000] Fail to get insert info--fail to parse row[1]: (2,invalid,,failed)
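For context, a hedged sketch of hitting this path through plain JDBC; the connection URL (zk, zkPath) and the table t1 are assumptions, not taken from this issue:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcInsertRow {
    public static void main(String[] args) throws SQLException {
        // Assumed connection URL; adjust zk/zkPath for your cluster.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:openmldb:///demo_db?zk=127.0.0.1:2181&zkPath=/openmldb");
             Statement stmt = conn.createStatement()) {
            // A raw SQL insert with a bad value: the server rejects the
            // statement, and the failure surfaces as the row-level parse
            // error above ("fail to parse row[1]: ...").
            stmt.execute("INSERT INTO t1 VALUES (2, 'invalid', NULL, 'failed')");
        } catch (SQLException e) {
            System.err.println(e.getMessage());
        }
    }
}
```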
load data cluster
csv load failures will generate a csv DataFrame with NULLs; they won't break the loading itself.
set row or put row failures will be reported as: Caused by: java.io.IOException: write row to openmldb failed on -1,0,19025,date has time,
The 19025 here is the date column (the number of days elapsed since the epoch); I don't convert it here, so use the other columns to find the offending row.
An internal error raises the exception set xx failed. pos is ...; if execute returns false, we just throw the exception below.
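The NULL-on-failure part comes from Spark's PERMISSIVE parse mode (its default): a field that can't be cast to the declared type becomes NULL, and the error only shows up later when the row is put to OpenMLDB. A minimal sketch with the Spark Java API; the schema here is a hypothetical one for the sample file:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ClusterCsvLoad {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("csv-null-demo").master("local[*]").getOrCreate();
        // PERMISSIVE mode turns malformed fields into NULL instead of
        // failing the read, so the csv load itself never breaks.
        Dataset<Row> df = spark.read()
            .option("header", "true")
            .option("mode", "PERMISSIVE")
            .schema("c1 INT, c2 STRING, c3 DATE, c4 STRING")
            .csv("/work/test/csv_data/insert_fail.csv");
        df.show(); // bad cells show up as null here, not as a load error
        spark.stop();
    }
}
```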
insert prepared stmt
set col on the row failed: exception set xx failed. pos is ...
execute (put) failed: the Java log shows execute insert failed on ...
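A hedged sketch of where those two failures sit, using the standard JDBC PreparedStatement API; the connection URL and table t1 are assumptions:

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class InsertPreparedStmt {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:openmldb:///demo_db?zk=127.0.0.1:2181&zkPath=/openmldb"); // assumed URL
             PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO t1 VALUES (?, ?, ?)")) {
            ps.setInt(1, 1);
            ps.setString(2, "ok");
            // A wrong type or position here is the first failure mode:
            // "set xx failed. pos is ...".
            ps.setDate(3, Date.valueOf("2022-02-09"));
            // A server-side rejection surfaces at execute() time, the second
            // failure mode: the Java log shows "execute insert failed on ...".
            ps.execute();
        } catch (SQLException e) {
            System.err.println(e.getMessage());
        }
    }
}
```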
load data local mode
Error: [2000] file [/work/test/csv_data/insert_fail.csv] line [lineno=0: 1, 11, "not date", "csv row"] insert failed, translate failed on column c3(2) with value "not date"
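A sketch of issuing the local-mode load through JDBC; load_mode and header follow the OpenMLDB LOAD DATA option names, but treat the exact spelling as an assumption for your version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class LoadDataLocal {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:openmldb:///demo_db?zk=127.0.0.1:2181&zkPath=/openmldb"); // assumed URL
             Statement stmt = conn.createStatement()) {
            // Local mode parses the file itself, so one bad cell fails the
            // whole statement with the file/line/column error shown above.
            stmt.execute("LOAD DATA INFILE '/work/test/csv_data/insert_fail.csv' "
                + "INTO TABLE t1 OPTIONS (load_mode='local', header=true)");
        } catch (SQLException e) {
            System.err.println(e.getMessage());
        }
    }
}
```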
All methods to load data
- One-row insertion: should it report column-level failures?
- Multi-row insertion: report the row index, if the user can get the row back easily.
- Spark insertion: print the failed row in readable form, because the user can't get the row easily the Spark way.
- TODO: openmldb-import should use a prepared statement instead of getInsertRow.
- Local mode uses a new csv library to support escaping, but the result may still differ from the cluster Spark style; see the sketch below.
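To make the escaping point concrete, a quote-aware parse with Apache Commons CSV; whether local mode uses this exact library is not stated here, this only shows the behavior a naive split (or a differently configured Spark reader) can disagree on:

```java
import java.io.StringReader;
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;

public class CsvEscapeDemo {
    public static void main(String[] args) throws Exception {
        // A quoted field containing the delimiter: a quote-aware parser
        // yields two columns ("1" and "a,b"), while a naive split(",")
        // would yield three -- the kind of style mismatch noted above.
        String line = "1,\"a,b\"";
        try (CSVParser parser = CSVParser.parse(new StringReader(line), CSVFormat.DEFAULT)) {
            for (CSVRecord rec : parser) {
                System.out.println(rec.size() + " cols: " + rec.get(0) + " | " + rec.get(1));
            }
        }
    }
}
```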