
Update the item only if it exists in the cache #4117

Merged: 8 commits merged into master from update-stdlib on Oct 12, 2023

Conversation

@pingsutw (Member) commented on Oct 2, 2023

TL;DR

I found that Propeller keeps sending requests to the agent even after the workflow is done.

The work queue only keeps unique items, so we add the itemID to it instead of the whole object. The workers then won't process an item again after its task is done (see the sketch after the tables below).

Before:

       enqueueBatches   workqueue / sync (10 workers)
t1     obj(1)           obj(1)
t2
t3     obj(2)           obj(1), obj(2)
t4
t5     obj(1)           obj(1), obj(2), obj(1)
t6                      obj(1), obj(2), obj(1)

After:

       enqueueBatches   workqueue / sync (10 workers)
t1     1                1
t2
t3     2                1, 2
t4
t5     1                1, 2
t6                      1, 2
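For context, the work queue behind the auto-refresh cache is a client-go workqueue, which keeps at most one pending copy of items that compare equal. A minimal standalone sketch of that de-duplication behavior (the item IDs are illustrative; this is not the PR's code):

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// The workqueue drops an Add for an item that is already pending, so
	// enqueuing plain, comparable item IDs de-duplicates repeated
	// enqueueBatches calls for the same object.
	q := workqueue.New()
	q.Add("1")
	q.Add("2")
	q.Add("1") // already pending: dropped, not queued a second time

	fmt.Println("pending items:", q.Len()) // pending items: 2
}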

Load test: 100 workflows, each with 100 tasks.

Before: (graph omitted)

After: (graph omitted)

Type

  • Bug Fix
  • Feature
  • Plugin

Are all requirements met?

  • Code completed
  • Smoke tested
  • Unit tests added
  • Code documentation added
  • Any pending items have an associated Issue

Complete description

^^^

Tracking Issue

NA

Follow-up issue

NA

@codecov bot commented on Oct 2, 2023

Codecov Report

Attention: 10 lines in your changes are missing coverage. Please review.

Comparison: base (b35cc95) 58.98% vs. head (aedaf76) 59.05%. The report is 31 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #4117      +/-   ##
==========================================
+ Coverage   58.98%   59.05%   +0.07%     
==========================================
  Files         618      621       +3     
  Lines       52708    53091     +383     
==========================================
+ Hits        31088    31353     +265     
- Misses      19140    19239      +99     
- Partials     2480     2499      +19     
Flag        Coverage Δ
unittests   59.05% <72.97%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.

Files                               Coverage Δ
flytestdlib/cache/auto_refresh.go   78.28% <72.97%> (+2.62%) ⬆️

... and 37 files with indirect coverage changes


@pingsutw changed the title from "Add item ID to the workqueue instead" to "Update the item only if it exists in the cache" on Oct 3, 2023
@pingsutw (Member, Author) commented on Oct 3, 2023

Updates:

  • Still add the item to the workqueue.
  • The worker only updates the item if it still exists in the cache.
  • To prevent items from being added back to the cache after deletion, the update is done under a lock (see the sketch below).

If the item is added back to the cache after deletion, the worker will loop forever:

  1. The item is added to the workqueue (it has no status, since plugin.status was never called in the sync function).
  2. The LRU cache is updated.
  3. Repeat from step 1.
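A minimal, self-contained sketch of that idea, assuming the hashicorp/golang-lru cache that backs the auto-refresh cache; the type and method names (itemCache, updateIfPresent) are illustrative, not the PR's exact code:

package main

import (
	"fmt"
	"sync"

	lru "github.com/hashicorp/golang-lru"
)

// itemCache is a stripped-down stand-in for the auto-refresh cache.
type itemCache struct {
	lock   sync.Mutex
	lruMap *lru.Cache
}

// updateIfPresent writes a synced item back only if it is still cached. The
// Contains/Add pair runs under the same lock as delete, so a concurrent
// deletion cannot interleave between the check and the write.
func (c *itemCache) updateIfPresent(id string, item interface{}) bool {
	c.lock.Lock()
	defer c.lock.Unlock()
	if c.lruMap.Contains(id) {
		c.lruMap.Add(id, item)
		return true
	}
	return false
}

func (c *itemCache) delete(id string) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.lruMap.Remove(id)
}

func main() {
	l, _ := lru.New(10)
	c := &itemCache{lruMap: l}

	l.Add("task-1", "running")
	fmt.Println(c.updateIfPresent("task-1", "succeeded")) // true: still cached, updated
	c.delete("task-1")
	fmt.Println(c.updateIfPresent("task-1", "succeeded")) // false: deleted item is not resurrected
}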

@pingsutw requested a review from @EngHabu on October 3, 2023
flytestdlib/cache/auto_refresh.go: 3 review threads (outdated, resolved)
item, ok := w.lruMap.Get(itemID)
if !ok {
	logger.Debugf(ctx, "item with id [%v] not found in cache", itemID)
	t.Stop()
Contributor

We shouldn't stop the timer here, should we? just log and continue? the stop should happen after sync is called

Member Author

updated it

	newBatch = append(newBatch, b)
}

updatedBatch, err := w.syncCb(ctx, newBatch)
Contributor

We should add a check for if the len(newBatch) == 0, we don't need to sync at that point...

Member Author

updated it
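A minimal sketch of the early exit suggested above; batch, syncFn, and syncIfNonEmpty are hypothetical stand-ins for the real types around syncCb, just to make the guard concrete:

package main

import (
	"context"
	"fmt"
)

// batch and syncFn approximate the filtered batch and the sync callback
// from the quoted diff.
type batch []string
type syncFn func(ctx context.Context, b batch) (batch, error)

// syncIfNonEmpty only invokes the sync callback when the filtered batch still
// has items; an empty batch means nothing is left worth syncing.
func syncIfNonEmpty(ctx context.Context, newBatch batch, syncCb syncFn) (batch, error) {
	if len(newBatch) == 0 {
		return nil, nil
	}
	return syncCb(ctx, newBatch)
}

func main() {
	cb := func(ctx context.Context, b batch) (batch, error) {
		fmt.Println("syncing", b)
		return b, nil
	}
	_, _ = syncIfNonEmpty(context.Background(), batch{}, cb)         // skipped, no callback call
	_, _ = syncIfNonEmpty(context.Background(), batch{"task-1"}, cb) // prints: syncing [task-1]
}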

Comment on lines +191 to +192
w.lock.Lock()
defer w.lock.Unlock()
Contributor

I wanted to avoid locking the entire dataset for these operations... any reason to?

Comment on lines +182 to +184

ok = w.lruMap.Contains(id)
if ok {
	w.lruMap.Add(id, item)
Contributor

Why only update if it's already there, what if it has been evicted since?

Member Author

If we don't check whether it exists in the cache, we might get this scenario:

worker 1 removes the item from the cache -> worker 2 adds the item back to the cache (the workqueue can contain duplicate items).

If we don't use a lock, we might get this interleaving (replayed deterministically in the sketch below):

  1. w.toDelete.Remove (worker 1)
  2. w.lruMap.Contains(id) (worker 2)
  3. w.lruMap.Remove(key) (worker 1)
  4. w.lruMap.Add(id, item) (worker 2) # the deleted item is added back to the cache
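A deterministic replay of steps 2-4, assuming an unguarded version in which worker 2's Contains/Add pair does not share a lock with worker 1's Remove (hashicorp/golang-lru assumed, as in the earlier sketch):

package main

import (
	"fmt"

	lru "github.com/hashicorp/golang-lru"
)

func main() {
	l, _ := lru.New(10)
	l.Add("id", "running")

	// Manually interleave the two workers' steps from the list above.
	stillCached := l.Contains("id") // step 2: worker 2's check passes
	l.Remove("id")                  // step 3: worker 1 deletes the item
	if stillCached {
		l.Add("id", "done") // step 4: worker 2 resurrects the deleted item
	}

	fmt.Println(l.Contains("id")) // true: exactly what holding one lock across Contains and Add prevents
}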

@EngHabu merged commit da77476 into master on Oct 12, 2023 (36 checks passed)
@EngHabu deleted the update-stdlib branch on October 12, 2023