Slow performance on bulk insert #1145
Comments
Is this covered by #1133?
Yes!
This isn't a go-sqlite3 issue. I think it's the expected and normal behavior: the default SQLite3 settings are tuned to be safe, and slow. @josnyder-rh I had the exact same problem on my project (whose sole SQLite3 usage is bulk inserts). Google led me to this amazing C++ SQLite3 performance guide and, fortunately, the same techniques worked for me. Here's the 2-line change that fixed it for me. I wrote a standalone sqlite3.go benchmark file to help me understand this situation; it's the simplest way to explain what's going on. Basically, the idea is to open a dedicated connection and tune its settings before doing the bulk insert (see the sketch after this comment).
In production, what you want to do is create a new DB connection just for the bulk inserts, and change the settings on that connection only. For me, the result was going from importing 350,000 lines at 10 KB/s (13 min) to 3.2 MB/s (2 sec). I've fully detailed my whole resolution process on issue #1, should you want more information.
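For reference, here is a minimal sketch of that approach using `database/sql` with this driver. The file name, table, row count, and the specific `_journal_mode=WAL` / `_synchronous=OFF` DSN parameters are my own illustration, not necessarily the exact 2-line change referenced above:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// Dedicated connection for the bulk insert; the DSN parameters relax
	// durability (journal mode and synchronous level) for this import only.
	db, err := sql.Open("sqlite3", "bulk.db?_journal_mode=WAL&_synchronous=OFF")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		log.Fatal(err)
	}

	// One transaction and one prepared statement for all rows, so SQLite
	// parses the INSERT once and syncs to disk once at commit, not per row.
	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	stmt, err := tx.Prepare(`INSERT INTO items (name) VALUES (?)`)
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	for i := 0; i < 350000; i++ {
		if _, err := stmt.Exec(fmt.Sprintf("row-%d", i)); err != nil {
			log.Fatal(err)
		}
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```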
Bulk inserts appear to take quadratic time in the number of statements being processed. Using v1.14.16, I find that a bulk insert that finishes quickly in the sqlite3 shell does not finish at all when run from Go.
My supposition is that this is because the `sqlite3_prepare_v2` function returns a pointer to the remaining unprocessed SQL, which we then copy back into a Go string. On the next run of the loop in `exec()`, we do another round trip of copying strings Go->C->Go. By contrast, the sqlite3 shell performs the same task without copying the underlying buffer, and finishes significantly faster. A representative stack trace shows time spent in `runtime.memmove()`.
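For illustration, here is a minimal, self-contained way to reproduce the pattern being described: a single `Exec` call on a multi-statement script, where (per the supposition above) the driver carries the unprocessed tail of the script between Go and C on every statement. The file name, table, and statement count are arbitrary choices of mine:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"strings"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", "repro.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)`); err != nil {
		log.Fatal(err)
	}

	// Build one big script of N INSERT statements and hand it to a single
	// Exec call. The driver prepares one statement at a time and keeps the
	// remaining SQL for the next iteration, which is where the string
	// copying described above would add up as N grows.
	var b strings.Builder
	b.WriteString("BEGIN;")
	for i := 0; i < 50000; i++ {
		fmt.Fprintf(&b, "INSERT INTO items (name) VALUES ('row-%d');", i)
	}
	b.WriteString("COMMIT;")

	if _, err := db.Exec(b.String()); err != nil {
		log.Fatal(err)
	}
}
```

Timing this against the per-row prepared-statement version sketched earlier in the thread should show the gap described here.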