Fix MongoDB client to use GridFS API for files exceeding 16MB #1077

Draft · wants to merge 3 commits into main

Conversation

@xx10222 commented Nov 24, 2024

What this PR does / why we need it:
Modified snapshot creation so that snapshots larger than 16MB are uploaded to MongoDB via GridFS.

Which issue(s) this PR fixes:

Fixes #267

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


Additional documentation:


Checklist:

  • Added relevant tests or not required
  • Addressed and resolved all CodeRabbit review comments
  • Didn't break anything

Summary by CodeRabbit

  • New Features

    • Enhanced snapshot handling for large documents, allowing uploads to GridFS when size exceeds 16MB.
    • Improved error handling and logging for snapshot creation processes.
  • Tests

    • Introduced new test functions to validate the handling of large snapshots in the database, ensuring error-free processing.

Modified snapshot creation so that snapshots larger than 16MB are uploaded to MongoDB via GridFS.
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


jiwon.yum does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.


coderabbitai bot commented Nov 24, 2024

Walkthrough

The changes in this pull request enhance the MongoDB client implementation by modifying the CreateSnapshotInfo method to handle document snapshots exceeding 16MB using GridFS. A constant for the maximum snapshot size is introduced, and the method now logs warnings and stores large snapshots in GridFS while updating the method signature to return an error. Additionally, new test functions have been added to validate the handling of large snapshots in the database.
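
For orientation, here is a minimal, self-contained sketch of the branch described above, using the mongo-driver GridFS API. The helper name, collection name, and document fields are illustrative, not the PR's actual code:

package sketch

import (
	"bytes"
	"context"
	"fmt"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/gridfs"
)

// maxSnapshotSize mirrors MongoDB's 16MB BSON document size limit.
const maxSnapshotSize = 16 * 1024 * 1024

// storeSnapshot stores small snapshots inline and routes large ones
// through GridFS, keeping only the file ID in the snapshots collection.
func storeSnapshot(ctx context.Context, db *mongo.Database, docID string, snapshot []byte) error {
	if len(snapshot) <= maxSnapshotSize {
		_, err := db.Collection("snapshots").InsertOne(ctx, bson.M{
			"doc_id":   docID,
			"snapshot": snapshot, // small enough to store inline
		})
		return err
	}

	bucket, err := gridfs.NewBucket(db)
	if err != nil {
		return fmt.Errorf("create GridFS bucket: %w", err)
	}
	// UploadFromStream writes the payload and closes the stream internally.
	fileID, err := bucket.UploadFromStream(fmt.Sprintf("%s_snapshot", docID), bytes.NewReader(snapshot))
	if err != nil {
		return fmt.Errorf("upload snapshot to GridFS: %w", err)
	}

	_, err = db.Collection("snapshots").InsertOne(ctx, bson.M{
		"doc_id":           docID,
		"snapshot_file_id": fileID, // GridFS file ID instead of the payload
	})
	return err
}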

Changes

File | Change Summary
server/backend/database/mongo/client.go | Enhanced CreateSnapshotInfo to handle snapshots > 16MB using GridFS; added error handling; updated the method signature.
server/backend/database/testcases/testcases.go | Introduced RunCreateLargeSnapshotTest to validate large snapshot handling.
test/complex/mongo_client_test.go | Added CreateLargeSnapshotTest to the existing test suite, using the new test function.

Assessment against linked issues

Objective | Addressed | Explanation
Implement GridFS for large snapshots (#267) | ✅ |
Ensure no errors occur for large snapshots (#267) | ✅ |


@coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (5)
server/backend/database/testcases/testcases.go (3)

1623-1623: Add documentation for exported function.

Following Go's documentation standards and the pattern used in other test functions, add a comment explaining the purpose of this test function.

+// CreateLargeSnapshotTest runs the large snapshot test for the given db.
 func CreateLargeSnapshotTest(t *testing.T, db database.Database, projectID types.ID) {
🧰 Tools
🪛 GitHub Check: build

[failure] 1623-1623:
exported: exported function CreateLargeSnapshotTest should have comment or be unexported (revive)


1636-1639: Consider using random data for more robust testing.

The current test uses repeating A-Z characters. Consider using crypto/rand to generate random data for more thorough testing.

-largeData := make([]byte, 16*1024*1024+1) // 16MB + 1 byte
-for i := range largeData {
-    largeData[i] = byte('A' + (i % 26)) // A-Z 반복
-}
+largeData := make([]byte, 16*1024*1024+1) // 16MB + 1 byte
+if _, err := rand.Read(largeData); err != nil {
+    t.Fatal(err)
+}
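
Note that this suggestion relies on the crypto/rand package; a minimal sketch of a test helper built around it follows (the helper name is hypothetical, not part of the PR):

package testcases

import (
	"crypto/rand"
	"testing"
)

// newLargeData returns n bytes of cryptographically random data,
// failing the test if the entropy source errors.
func newLargeData(t *testing.T, n int) []byte {
	t.Helper()
	data := make([]byte, n)
	if _, err := rand.Read(data); err != nil {
		t.Fatal(err)
	}
	return data
}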

1638-1638: Translate Korean comments to English.

For consistency and maintainability, translate the Korean comments to English.

-    largeData[i] = byte('A' + (i % 26)) // A-Z 반복
+    largeData[i] = byte('A' + (i % 26)) // Repeating A-Z characters

-    // 스냅샷 생성 및 오류 확인
+    // Create snapshot and verify no errors

Also applies to: 1648-1648

server/backend/database/mongo/client.go (2)

1073-1122: Translate code comments to English for consistency

Several comments between lines 1073 and 1122 are in Korean, while the rest of the code comments are in English. For consistency and maintainability, please translate all comments to English.

🧰 Tools
🪛 GitHub Check: build

[failure] 1097-1097:
Error return value of uploadStream.Close is not checked (errcheck)


1082-1082: Improve log message for clarity

The log message "16MB over!!!" is not very descriptive. Consider providing a clearer message to help with debugging and maintenance.

Apply this diff to improve the log message:

-    		log.Println("16MB over!!!")
+    		log.Printf("Snapshot size exceeds %d bytes; using GridFS for storage", maxSnapshotSize)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e863b62 and d84d31d.

📒 Files selected for processing (3)
  • server/backend/database/mongo/client.go (2 hunks)
  • server/backend/database/testcases/testcases.go (1 hunks)
  • test/complex/mongo_client_test.go (1 hunks)
🧰 Additional context used
🪛 GitHub Check: build
server/backend/database/mongo/client.go

[failure] 1097-1097:
Error return value of uploadStream.Close is not checked (errcheck)

server/backend/database/testcases/testcases.go

[failure] 1623-1623:
exported: exported function CreateLargeSnapshotTest should have comment or be unexported (revive)

🔇 Additional comments (1)
test/complex/mongo_client_test.go (1)

175-177: LGTM! Verify the test implementation.

The test case follows the established patterns and is appropriately placed within the sharded DB test suite.

Let's verify the actual test implementation:

✅ Verification successful

Test implementation properly validates GridFS storage for large snapshots

The test implementation is well-designed and correctly validates the core functionality:

  • Creates a document with data larger than 16MB (16MB + 1 byte)
  • Verifies the snapshot creation succeeds without errors
  • Matches the GridFS implementation in the MongoDB client which handles documents > 16MB

The test case effectively covers the PR's objective of fixing snapshot overflow using GridFS.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the implementation of CreateLargeSnapshotTest

# Check the implementation of CreateLargeSnapshotTest
rg -A 10 "func CreateLargeSnapshotTest" server/backend/database/testcases/

# Verify if the test actually creates a large document
rg -B 2 -A 5 "16.*MB|GridFS" server/backend/database/testcases/

Length of output: 1795


Script:

#!/bin/bash
# Let's verify the complete test implementation and assertions

# Get the full test implementation
rg -B 5 -A 30 "func CreateLargeSnapshotTest" server/backend/database/testcases/testcases.go

# Check if there are any GridFS related implementations in the database package
rg -A 5 "GridFS|gridfs" server/backend/database/

Length of output: 4533

Comment on lines 1648 to 1650
// 스냅샷 생성 및 오류 확인
err := db.CreateSnapshotInfo(ctx, docRefKey, doc.InternalDocument())
assert.NoError(t, err)

🛠️ Refactor suggestion

Add validation for the stored snapshot.

The test only verifies that the snapshot was created without error. Consider adding validation to ensure the stored snapshot matches the original document.

 err := db.CreateSnapshotInfo(ctx, docRefKey, doc.InternalDocument())
 assert.NoError(t, err)
+
+// Retrieve and validate the stored snapshot
+snapshot, err := db.FindClosestSnapshotInfo(ctx, docRefKey, doc.Checkpoint().ServerSeq, true)
+assert.NoError(t, err)
+assert.NotNil(t, snapshot)
+
+// Verify the snapshot contains the large field
+var storedDoc document.InternalDocument
+err = snapshot.LoadSnapshot(&storedDoc)
+assert.NoError(t, err)
+assert.Equal(t, largeData, storedDoc.Root().GetBytes("largeField"))

Committable suggestion skipped: line range outside the PR's diff.

	if err != nil {
		return fmt.Errorf("failed to open GridFS upload stream: %w", err)
	}
	defer uploadStream.Close()

⚠️ Potential issue

Handle error returned by uploadStream.Close()

The deferred call to uploadStream.Close() does not check for errors. Since Close() can return an error, it's important to handle it to ensure resources are properly released and any potential errors are captured.

Apply this diff to handle the error:

-    		defer uploadStream.Close()
+    		defer func() {
+    			if err := uploadStream.Close(); err != nil {
+    				log.Printf("Failed to close GridFS upload stream: %v", err)
+    			}
+    		}()
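
An alternative, sketched here under the assumption that the GridFS write lives in a small helper with a named error return, is to propagate the Close error rather than only logging it:

package sketch

import (
	"fmt"

	"go.mongodb.org/mongo-driver/mongo/gridfs"
)

// writeSnapshotToGridFS is a hypothetical helper; the named return err
// lets the deferred Close surface a failure that Write did not.
func writeSnapshotToGridFS(bucket *gridfs.Bucket, name string, data []byte) (err error) {
	uploadStream, err := bucket.OpenUploadStream(name)
	if err != nil {
		return fmt.Errorf("open GridFS upload stream: %w", err)
	}
	defer func() {
		if cerr := uploadStream.Close(); cerr != nil && err == nil {
			err = fmt.Errorf("close GridFS upload stream: %w", cerr)
		}
	}()
	if _, err = uploadStream.Write(data); err != nil {
		return fmt.Errorf("write snapshot to GridFS: %w", err)
	}
	return nil
}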
🧰 Tools
🪛 GitHub Check: build

[failure] 1097-1097:
Error return value of uploadStream.Close is not checked (errcheck)

@xx10222 changed the title from "Fix Snapshot Overflow" to "Fix MongoDB client to use GridFS API for files exceeding 16MB" Nov 24, 2024
@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (5)
server/backend/database/mongo/client.go (3)

1078-1081: Improve constant documentation and logging

  1. Add documentation for the constant to explain its purpose.
  2. Enhance the log message to be more descriptive and include the actual size.

Apply this diff:

-const maxSnapshotSize = 16 * 1024 * 1024 // 16MB
+// maxSnapshotSize is MongoDB's maximum document size limit (16MB)
+const maxSnapshotSize = 16 * 1024 * 1024
 
 if len(snapshot) > maxSnapshotSize {
-    log.Println("16MB over!!!")
+    log.Printf("Snapshot size %d bytes exceeds MongoDB's limit of %d bytes, using GridFS", len(snapshot), maxSnapshotSize)

1090-1090: Improve GridFS filename format

The current filename format could be enhanced to include more metadata for better organization and debugging.

Apply this diff:

-uploadStream, err := bucket.OpenUploadStream(fmt.Sprintf("%s_snapshot", docRefKey.DocID))
+uploadStream, err := bucket.OpenUploadStream(fmt.Sprintf("%s/%s_snapshot_%d", docRefKey.ProjectID, docRefKey.DocID, doc.Checkpoint().ServerSeq))
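
Note that GridFS identifies files by ObjectID and allows duplicate filenames, so successive snapshots would not overwrite each other either way; the richer filename mainly helps when inspecting or debugging the bucket.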

1115-1126: Consider using transactions for snapshot operations

For data consistency, consider wrapping the snapshot creation (both GridFS and regular) in a transaction to ensure atomicity.

Example transaction implementation:

session, err := c.client.StartSession()
if err != nil {
    return fmt.Errorf("failed to start session: %w", err)
}
defer session.EndSession(ctx)

_, err = session.WithTransaction(ctx, func(sessCtx mongo.SessionContext) (interface{}, error) {
    // Your snapshot creation code here
    return nil, nil
})
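
One caveat: multi-document transactions require MongoDB to run as a replica set or sharded cluster; they are not available on a standalone server.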
server/backend/database/testcases/testcases.go (2)

1623-1623: Add documentation for exported function.

The exported function RunCreateLargeSnapshotTest should have a documentation comment explaining its purpose and behavior, following Go best practices and maintaining consistency with other test functions in this file.

Add this documentation:

+// RunCreateLargeSnapshotTest runs the CreateSnapshotInfo test for large documents for the given db.
 func RunCreateLargeSnapshotTest(t *testing.T, db database.Database, projectID types.ID) {

1636-1639: Define constants and optimize memory usage.

The test uses magic numbers for size calculation and could be optimized:

  1. Define constants for clarity
  2. Consider using a more memory-efficient way to generate test data

Refactor the data generation:

+const (
+    megabyte = 1024 * 1024
+    maxDocumentSize = 16 * megabyte
+)
+
-largeData := make([]byte, 16*1024*1024+1)
-for i := range largeData {
-    largeData[i] = byte('A' + (i % 26))
-}
+// Create a pattern of 26 bytes (A-Z)
+pattern := make([]byte, 26)
+for i := range pattern {
+    pattern[i] = byte('A' + i)
+}
+
+// Create large data by repeating the pattern
+largeData := make([]byte, maxDocumentSize+1)
+for i := 0; i < len(largeData); i += len(pattern) {
+    copy(largeData[i:], pattern)
+}
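
If the extra constants and copy loop feel heavy, the standard library's bytes.Repeat produces the same deterministic payload more directly (a sketch, reusing the maxDocumentSize constant proposed above):

package sketch

import "bytes"

const maxDocumentSize = 16 * 1024 * 1024

// largeTestData builds a payload of repeating A-Z bytes one byte over
// the 16MB limit, without a hand-rolled copy loop.
func largeTestData() []byte {
	pattern := []byte("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
	repeats := maxDocumentSize/len(pattern) + 2 // comfortably past 16MB+1
	return bytes.Repeat(pattern, repeats)[:maxDocumentSize+1]
}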
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d84d31d and e99ab4c.

📒 Files selected for processing (2)
  • server/backend/database/mongo/client.go (2 hunks)
  • server/backend/database/testcases/testcases.go (1 hunks)
🧰 Additional context used
🪛 golangci-lint (1.61.0)
server/backend/database/mongo/client.go

1094-1094: Error return value of uploadStream.Close is not checked

(errcheck)

🔇 Additional comments (2)
server/backend/database/mongo/client.go (2)

1082-1095: ⚠️ Potential issue

Handle error from uploadStream.Close()

The error returned by uploadStream.Close() should be handled as it could indicate issues with resource cleanup.

🧰 Tools
🪛 golangci-lint (1.61.0)

1094-1094: Error return value of uploadStream.Close is not checked

(errcheck)


1096-1114: Verify GridFS file cleanup strategy

Consider implementing a cleanup strategy for GridFS files when snapshots are no longer needed to prevent storage bloat.

Run this script to check for potential orphaned GridFS files:

✅ Verification successful

Let me search for any cleanup or deletion related code to verify if there's a strategy in place.


Let me search for garbage collection related code since I found a reference to SnapshotDisableGC in the config.


GridFS cleanup is properly managed through configuration and GC

The codebase has a comprehensive cleanup strategy for snapshots and their associated GridFS files:

  1. Garbage Collection (GC) is enabled by default (DefaultSnapshotDisableGC = false) and can be controlled via the --backend-snapshot-disable-gc flag
  2. When GC runs during snapshot creation, it removes unnecessary data using doc.GarbageCollect()
  3. Additional cleanup is available through SnapshotWithPurgingChanges configuration which can delete previous changes when snapshots are created

The implementation follows proper retention practices by:

  • Running GC during snapshot creation (as seen in server/packs/snapshots.go)
  • Providing configuration options to control cleanup behavior
  • Ensuring synchronized changes are properly tracked before deletion
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for GridFS files without corresponding snapshot records

# List all GridFS files and their IDs
ast-grep --pattern 'bucket.OpenUploadStream($$$)' 

# Check snapshot collection references
rg -A 5 'snapshot_file_id'

Length of output: 482


Script:

#!/bin/bash
# Search for GridFS deletion/cleanup related code
rg -i "delete.*snapshot|remove.*snapshot|cleanup.*snapshot" -A 5

# Search for any GridFS bucket deletion operations
ast-grep --pattern 'bucket.Delete($$$)'

# Search for snapshot collection deletion operations
rg "collection\(ColSnapshots\).*Delete"

Length of output: 4936


Script:

#!/bin/bash
# Search for snapshot garbage collection implementation
rg "SnapshotDisableGC" -B 5 -A 10

# Search for any GC related functions for snapshots
ast-grep --pattern 'func $_($$$) {
  $$$
  snapshot
  $$$
}'

# Look for time-based cleanup or retention policies
rg -i "retention.*snapshot|gc.*snapshot|cleanup.*snapshot" -A 5

Length of output: 10481

Comment on lines +1624 to +1650
t.Run("store and validate large snapshot test", func(t *testing.T) {
ctx := context.Background()
docKey := key.Key(fmt.Sprintf("tests$%s", t.Name()))

clientInfo, _ := db.ActivateClient(ctx, projectID, t.Name())
bytesID, _ := clientInfo.ID.Bytes()
actorID, _ := time.ActorIDFromBytes(bytesID)
docInfo, _ := db.FindDocInfoByKeyAndOwner(ctx, clientInfo.RefKey(), docKey, true)

doc := document.New(docKey)
doc.SetActor(actorID)

largeData := make([]byte, 16*1024*1024+1)
for i := range largeData {
largeData[i] = byte('A' + (i % 26))
}

assert.NoError(t, doc.Update(func(root *json.Object, p *presence.Presence) error {
root.SetBytes("largeField", largeData)
return nil
}))

docRefKey := docInfo.RefKey()

err := db.CreateSnapshotInfo(ctx, docRefKey, doc.InternalDocument())
assert.NoError(t, err)
})

🛠️ Refactor suggestion

Enhance test coverage and validation.

The test only verifies that the snapshot was created without error. Consider:

  1. Validating the stored snapshot matches the original document
  2. Testing error cases (e.g., even larger documents)
  3. Adding cleanup of test data

Add validation and error cases:

 func RunCreateLargeSnapshotTest(t *testing.T, db database.Database, projectID types.ID) {
-    t.Run("store and validate large snapshot test", func(t *testing.T) {
+    t.Run("store large snapshot test", func(t *testing.T) {
         // ... existing setup code ...
         
         err := db.CreateSnapshotInfo(ctx, docRefKey, doc.InternalDocument())
         assert.NoError(t, err)
+        
+        // Validate stored snapshot
+        snapshot, err := db.FindClosestSnapshotInfo(ctx, docRefKey, doc.Checkpoint().ServerSeq, true)
+        assert.NoError(t, err)
+        
+        var storedDoc document.InternalDocument
+        err = snapshot.LoadSnapshot(&storedDoc)
+        assert.NoError(t, err)
+        assert.Equal(t, largeData, storedDoc.Root().GetBytes("largeField"))
     })
+    
+    t.Run("store extremely large snapshot test", func(t *testing.T) {
+        // Test with even larger document (e.g., 32MB)
+        // Verify appropriate error handling
+    })
+    
+    t.Cleanup(func() {
+        // Add cleanup code
+    })
 }

Committable suggestion skipped: line range outside the PR's diff.

@injae-kim injae-kim mentioned this pull request Nov 25, 2024
Extract BSON max snapshot size variable as static
Remove unnecessary log
@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
server/backend/database/mongo/client.go (1)

1112-1123: Maintain consistent error handling patterns

The error handling in the else branch could be more consistent with the GridFS branch. Consider wrapping the error with more context.

Apply this change:

-            return fmt.Errorf("insert snapshot: %w", err)
+            return fmt.Errorf("failed to insert snapshot into collection: %w", err)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e99ab4c and cc026bb.

📒 Files selected for processing (1)
  • server/backend/database/mongo/client.go (3 hunks)
🧰 Additional context used
🪛 golangci-lint (1.61.0)
server/backend/database/mongo/client.go

1091-1091: Error return value of uploadStream.Close is not checked

(errcheck)

🔇 Additional comments (2)
server/backend/database/mongo/client.go (2)

47-48: LGTM: Well-defined constant for MongoDB's document size limit.

The constant BSONMaxSnapshotSize is appropriately named and correctly set to MongoDB's 16MB document size limit.


1078-1123: LGTM: Clean implementation of GridFS for large snapshots

The implementation successfully:

  • Handles snapshots exceeding MongoDB's 16MB limit using GridFS
  • Maintains backward compatibility for smaller snapshots
  • Properly stores metadata in the snapshots collection
🧰 Tools
🪛 golangci-lint (1.61.0)

1091-1091: Error return value of uploadStream.Close is not checked

(errcheck)

Comment on lines +1078 to +1111
	if len(snapshot) > BSONMaxSnapshotSize {
		db := c.client.Database(c.config.YorkieDatabase)

		// create GridFS bucket
		bucket, err := gridfs.NewBucket(db)
		if err != nil {
			return fmt.Errorf("failed to create GridFS bucket: %w", err)
		}

		uploadStream, err := bucket.OpenUploadStream(fmt.Sprintf("%s_snapshot", docRefKey.DocID))
		if err != nil {
			return fmt.Errorf("failed to open GridFS upload stream: %w", err)
		}
		defer uploadStream.Close()

		_, err = uploadStream.Write(snapshot)
		if err != nil {
			return fmt.Errorf("failed to write to GridFS: %w", err)
		}

		fileID := uploadStream.FileID

		if _, err := c.collection(ColSnapshots).InsertOne(ctx, bson.M{
			"project_id":       docRefKey.ProjectID,
			"doc_id":           docRefKey.DocID,
			"server_seq":       doc.Checkpoint().ServerSeq,
			"lamport":          doc.Lamport(),
			"version_vector":   doc.VersionVector(),
			"snapshot_file_id": fileID, // GridFS file ID
			"created_at":       gotime.Now(),
		}); err != nil {
			return fmt.Errorf("insert snapshot info: %w", err)
		}


⚠️ Potential issue

Handle error from uploadStream.Close()

While the GridFS implementation is solid, the deferred Close() call should handle potential errors.

Apply this fix:

-defer uploadStream.Close()
+defer func() {
+    if err := uploadStream.Close(); err != nil {
+        log.Printf("Failed to close GridFS upload stream: %v", err)
+    }
+}()

Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 golangci-lint (1.61.0)

1091-1091: Error return value of uploadStream.Close is not checked

(errcheck)

@hackerwins
Member

hackerwins commented Nov 26, 2024

@xx10222

Thank you for your contribution.

As this appears to be your first contribution, I recommend reviewing our contribution guidelines: https://github.com/yorkie-team/yorkie/blob/main/CONTRIBUTING.md

Key focus areas:

  • Passing CI and addressing CodeRabbit comments
  • Ensuring the proposed changes meet the project's standards

I also noticed your PR introduces Snapshot storage using GridFS. We'll need to carefully verify:

  • The snapshot can be successfully read and retrieved
  • Performance and storage efficiency of the GridFS implementation

	FindSnapshotInfoByRefKey(
		ctx context.Context,
		refKey types.SnapshotRefKey,
	) (*SnapshotInfo, error)

	// FindClosestSnapshotInfo finds the closest snapshot info in a given serverSeq.
	FindClosestSnapshotInfo(
		ctx context.Context,
		docRefKey types.DocRefKey,
		serverSeq int64,
		includeSnapshot bool,
	) (*SnapshotInfo, error)
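
On the first verification point, a minimal sketch of what the read path could look like once a snapshot lives in GridFS, assuming (as in this PR) that the snapshot document stores a snapshot_file_id; the helper name is illustrative:

package sketch

import (
	"bytes"
	"fmt"

	"go.mongodb.org/mongo-driver/bson/primitive"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/gridfs"
)

// loadSnapshotFromGridFS resolves the GridFS file ID stored in the
// snapshot document and downloads the snapshot bytes.
func loadSnapshotFromGridFS(db *mongo.Database, fileID primitive.ObjectID) ([]byte, error) {
	bucket, err := gridfs.NewBucket(db)
	if err != nil {
		return nil, fmt.Errorf("create GridFS bucket: %w", err)
	}
	var buf bytes.Buffer
	if _, err := bucket.DownloadToStream(fileID, &buf); err != nil {
		return nil, fmt.Errorf("download snapshot from GridFS: %w", err)
	}
	return buf.Bytes(), nil
}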

Please update the PR addressing these points. Looking forward to your revisions.
Thank you.

@hackerwins marked this pull request as draft November 27, 2024 00:35
@hackerwins removed their request for review December 2, 2024 03:37
Labels: none yet
Projects: none yet
Development: successfully merging this pull request may close issue "Snapshot overflow"
4 participants