planner: clean up prepare cache when client sends deallocate #8332
Conversation
/run-all-tests
LGTM
@lysu Do we need to delete these obsolete plans from the plan cache? By virtue of LRU, the plans will be destroyed eventually.
planner/core/cache.go
Outdated
func (key *pstmtPlanCacheKey) SetPstmtIDSchemaVersion(pstmtID uint32, schemaVersion int64) {
	key.pstmtID = pstmtID
	key.schemaVersion = schemaVersion
	key.hash = nil
It's better to set key.hash = key.hash[:0], given the check in https://github.com/pingcap/tidb/pull/8332/files#diff-76a70a17b419c3333a3ff060d8f7c330R73:
if len(key.hash) == 0 {
// calculate hash value
}
key.hash = key.hash[:0] will not release the memory, which is a kind of leak, while key.hash = nil does.
@tiancaiamao I reuse this key because it can be reused when there is more than one key (emm, we could also allocate a new one every time). If we keep the reuse approach, [:0] is more suitable here.
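The trade-off being debated can be seen in a small standalone sketch (the cacheKey type below is a hypothetical stand-in, not the actual pstmtPlanCacheKey struct): truncating with [:0] keeps the backing array so a reused key can recompute its hash without reallocating, while assigning nil releases the array to the GC.

```go
package main

import "fmt"

// cacheKey is a hypothetical stand-in for a plan-cache key with a
// memoized hash.
type cacheKey struct {
	pstmtID       uint32
	schemaVersion int64
	hash          []byte
}

// reset mutates the key in place and invalidates the memoized hash.
// hash = hash[:0] keeps the allocated backing array so the next hash
// computation can append into it; hash = nil would drop the array
// and force a fresh allocation.
func (k *cacheKey) reset(pstmtID uint32, schemaVersion int64) {
	k.pstmtID = pstmtID
	k.schemaVersion = schemaVersion
	k.hash = k.hash[:0]
}

func main() {
	k := &cacheKey{hash: make([]byte, 8, 64)}
	before := cap(k.hash)
	k.reset(2, 101)
	// After reset: length is 0 (hash must be recomputed), capacity is kept.
	fmt.Println(len(k.hash), cap(k.hash) == before)
}
```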
session/session.go
Outdated
if planCacheEnabled {
	if i > 0 {
		cacheKey.(plannercore.PstmtCacheKeyMutator).SetPstmtIDSchemaVersion(
			stmtID, s.sessionVars.PreparedStmts[stmtID].SchemaVersion,
We only need to reset stmtID?
In the normal situation, the session env is the same as at the last exec time, and here we can only get the current session vars, so we can only reset stmtID here. It works well most of the time, although it does nothing in some corner cases (e.g. prepare, exec, then change sql_mode, then dealloc).
Hi @dbjoa, I think it's better to delete them, because the same stmtID will never be used again on this connection, so we should remove the cached plans just like we remove the prepared statements. IMHO, LRU capacity should be a protection mechanism that bounds maximum memory usage, not the way memory gets freed: LRU can only ensure the cache never overflows its capacity, it cannot ensure that an item is eventually destroyed. In a real production environment it is hard to choose the right lowest capacity value, so some memory will still be held. The current plan cache is at the connection level, so for an application with a big connection pool, or one deployed across 100+ instances, the total memory is large even if only a few items per connection cannot be freed. I think it is better to release resources once they are no longer needed; for this case the delete isn't expensive, and freeing the memory is done asynchronously by the GC.
In order to fix #8330, we can change
+1
@dbjoa, I agree a byte-size limit is better than an element count and can solve the OOM, but it still keeps n bytes of unused data per connection and wastes resources. Again, I think LRU capacity should be a protection mechanism.
@dbjoa It's hard to calculate the memory consumed by a cached plan. The present plan cache is at the session level; once there are hundreds of connections, TiDB can still run into OOM even with a low cache capacity.
On this point I agree with @lysu. I think maybe we can use a global plan cache to fix #8330.
Would a global plan cache introduce contention? IIRC, Oracle used to have this problem.
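One common way to bound the contention a shared cache introduces is sharding: each shard has its own lock, so concurrent sessions mostly hit different locks. A minimal sketch under that assumption (hypothetical code, not a TiDB design):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numShards = 16

type shard struct {
	mu    sync.Mutex
	items map[string]string
}

// shardedCache is a hypothetical global plan cache: per-shard locks
// keep sessions from serializing on a single mutex.
type shardedCache struct {
	shards [numShards]*shard
}

func newShardedCache() *shardedCache {
	c := &shardedCache{}
	for i := range c.shards {
		c.shards[i] = &shard{items: map[string]string{}}
	}
	return c
}

// shardFor picks a shard by hashing the key, so a given key always
// maps to the same lock.
func (c *shardedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()%numShards]
}

func (c *shardedCache) Put(key, v string) {
	s := c.shardFor(key)
	s.mu.Lock()
	s.items[key] = v
	s.mu.Unlock()
}

func (c *shardedCache) Get(key string) (string, bool) {
	s := c.shardFor(key)
	s.mu.Lock()
	v, ok := s.items[key]
	s.mu.Unlock()
	return v, ok
}

func main() {
	c := newShardedCache()
	c.Put("stmt-1", "plan")
	v, ok := c.Get("stmt-1")
	fmt.Println(v, ok)
}
```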
This PR cannot fix this situation: the capacity of the session-level plan cache is reasonable, but a lot of connections execute prepared statements without ever deallocating them. In this situation, the unused cache cannot be cleaned by the method introduced in this PR.
@zz-jason Yes, that case, where the user forgot to close the statement on a long-lived connection, is not fixed by this PR. Maybe we need a TTL? But I think not closing is the user's intention, so maybe we'd better not evict as long as the total number doesn't exceed the capacity.
}

// SetPstmtIDSchemaVersion implements the PstmtCacheKeyMutator interface to change pstmtID and
// schemaVersion of the cache key, so we can reuse the key instead of allocating a new one every time.
Why do we need to reuse the Key?
session/session.go
Outdated
for _, stmtID := range retryInfo.DroppedPreparedStmtIDs {
	delete(s.sessionVars.PreparedStmts, stmtID)

if len(retryInfo.DroppedPreparedStmtIDs) > 0 {
	planCacheEnabled := plannercore.PreparedPlanCacheEnabled()
Will deallocate prepare stmt1 be in the retry history?
I did some digging. The result is that deallocate prepare stmt1 will be added into the history, while the protocol-level stmtClose is not.
Just as #2473 (comment) said, we only add exec into the history, and retry reuses exec, so that PR fixed stmtClose by delaying statement destruction until the retry finished...
But it seems deallocate prepare stmt1 was forgotten; there seems to be a bug in deallocate prepare stmt1 with retry in the master code.
@tiancaiamao I retried:
- For prepare stmt1 from 'xx' and deallocate prepare stmt1: both are added to the history, and will be re-prepared and closed on every retry, so no lazy cleanup is needed. It's unusual for people to use prepare in the text protocol, so this is probably OK.
- For the binary stmtPrepare and stmtClose: these are NOT added to the history, so lazy cleanup is needed here.

Summary: it seems there is no problem.
Retrying prepare stmt1 from 'xx' will prepare twice, and retrying deallocate prepare stmt1 will dealloc twice? What will happen in the plan cache?
After this PR it will add to the plan cache twice and remove from the plan cache twice...
I suddenly realized there may be a problem when a "prepare stmt from xx" happens in a transaction without a dealloc: retry will create many useless statements in the server. Maybe handleStmtPrepare's way is better; we should use a unified way to handle these two entrances.
@@ -70,12 +70,14 @@ type pstmtPlanCacheKey struct {

// Hash implements Key interface.
func (key *pstmtPlanCacheKey) Hash() []byte {
By the way, how about refactoring kv.SimpleCache to use []byte as the key? (not in this PR)
I don't see any benefit in its Key definition. @lysu
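The suggestion can be sketched as follows (hypothetical byteKeyCache, not the real kv.SimpleCache API): with raw []byte hashes there is no need for a Key interface, since a Go map can key directly on the string conversion of the hash bytes.

```go
package main

import "fmt"

// byteKeyCache is a hypothetical cache keyed by raw hash bytes.
// Go maps cannot use []byte as a key type, so the bytes are
// converted to string; for lookups of the form m[string(b)] the
// compiler avoids the copy.
type byteKeyCache struct {
	items map[string]interface{}
}

func newByteKeyCache() *byteKeyCache {
	return &byteKeyCache{items: map[string]interface{}{}}
}

func (c *byteKeyCache) Put(hash []byte, v interface{}) {
	c.items[string(hash)] = v
}

func (c *byteKeyCache) Get(hash []byte) (interface{}, bool) {
	v, ok := c.items[string(hash)]
	return v, ok
}

func main() {
	c := newByteKeyCache()
	c.Put([]byte{0xde, 0xad}, "cached plan")
	v, ok := c.Get([]byte{0xde, 0xad})
	fmt.Println(v, ok)
}
```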
LGTM
LGTM
/run-all-tests
What problem does this PR solve?
Ref #8330: TiDB's current prepared-statement deallocate handler doesn't release plan-cache memory; this PR fixes it.
What is changed and how it works?
Delete the cached plan when the client deallocates a prepared statement.
Check List
Tests
Running the code in the issue no longer shows OOM.
Code changes
Side effects
Related changes
Remaining Question
The current implementation simply uses the current sessionVars; if sql_mode or schema_version are changed before the deallocate, then we cannot free the previous item.
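The remaining question can be illustrated with a toy sketch (the makeKey function is hypothetical, not TiDB's real key derivation): because session state such as sql_mode is mixed into the cache key, deallocating with the current session vars can compute a key that no longer matches the entry cached under the old vars, leaving the stale plan resident.

```go
package main

import "fmt"

// makeKey is a hypothetical stand-in for a plan-cache key: it mixes
// session state (sqlMode) into the key, which is why deallocating
// with *current* session vars can miss an entry cached under old vars.
func makeKey(stmtID uint32, schemaVersion int64, sqlMode string) string {
	return fmt.Sprintf("%d-%d-%s", stmtID, schemaVersion, sqlMode)
}

func main() {
	cache := map[string]string{}
	cache[makeKey(1, 100, "STRICT_TRANS_TABLES")] = "plan" // cached at exec time

	// sql_mode changes before the deallocate: the delete key no longer matches.
	delete(cache, makeKey(1, 100, ""))
	fmt.Println(len(cache)) // the stale plan is still resident
}
```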