fix: blocking migrations #604
Conversation
Warning: Rate limit exceeded

@gfyrag has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 4 minutes and 48 seconds before requesting another review.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

Walkthrough

The changes involve modifications to SQL migration scripts related to index management in the database. The first script removes several index creation commands and the trigger/function associated with enforcing reference uniqueness. The second change deletes an existing SQL file that dropped two indices from the schema. Additionally, various request models in the client documentation have been updated to remove the Query field.
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (1)
internal/storage/bucket/migrations/15-create-ledger-indexes/up.sql (1)
Line range hint 1-7: Ensure migration is reversible

This migration makes significant schema changes. Please ensure there's a corresponding down.sql migration that can restore the previous state if needed.

Also, consider the following recommendations:
- Add a comment explaining why these constraints and indexes are being removed
- Consider executing this migration during low-traffic periods
- Monitor query performance after deployment
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- internal/storage/bucket/migrations/23-delete-orphan-indices/notes.yaml is excluded by !**/*.yaml

📒 Files selected for processing (2)
- internal/storage/bucket/migrations/15-create-ledger-indexes/up.sql (1 hunks)
- internal/storage/bucket/migrations/23-delete-orphan-indices/up.sql (0 hunks)
💤 Files with no reviewable changes (1)
- internal/storage/bucket/migrations/23-delete-orphan-indices/up.sql
🔇 Additional comments (2)
internal/storage/bucket/migrations/15-create-ledger-indexes/up.sql (2)
6-7: Verify index usage before modification

The index renaming operation (transactions_reference2 to transactions_reference) suggests a consolidation of indexes. However, we should verify that this won't negatively impact query performance.
✅ Verification successful
Index modification appears safe to proceed
Based on the migration history and usage patterns found:
- The original transactions_reference index was created in migration 6
- A new unique index transactions_reference2 was created concurrently in migration 14 with an improved definition (ledger, reference)
- The current migration (15) safely replaces the old index with the new one
- No critical queries were found that would be negatively impacted by this change
The index modification follows best practices:
- Creating the new index concurrently before dropping the old one
- Maintaining the same index name for compatibility
- Enhancing the index definition with the ledger column for better query performance (a sketch of this sequence follows below)
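For illustration, here is a minimal sketch of the create-concurrently / drop / rename sequence described above, assuming a plain *sql.DB handle. The table and index names are taken from this review; the helper itself is hypothetical and is not the migration shipped in this PR (in the PR, the concurrent build lives in migration 14 and the drop/rename in migration 15).

package migrations

import (
    "context"
    "database/sql"
)

// swapReferenceIndex builds the replacement index without blocking writes,
// removes the old index, then renames the new one so existing queries keep
// using the familiar name.
func swapReferenceIndex(ctx context.Context, db *sql.DB) error {
    // CREATE INDEX CONCURRENTLY cannot run inside a transaction block,
    // so each statement is executed as its own call.
    stmts := []string{
        `create unique index concurrently if not exists transactions_reference2 on transactions (ledger, reference)`,
        `drop index if exists transactions_reference`,
        `alter index transactions_reference2 rename to transactions_reference`,
    }
    for _, stmt := range stmts {
        if _, err := db.ExecContext(ctx, stmt); err != nil {
            return err
        }
    }
    return nil
}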
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Look for queries that might be affected by the index changes
# Search for SQL queries involving transactions table
rg -A 5 'SELECT.*FROM.*transactions' --type go
# Look for repository methods that might use these indexes
ast-grep --pattern 'func $_(.*Transaction.*|.*tx.*) {
$$$
}'
Length of output: 113
Script:
#!/bin/bash
# Let's try a different approach to find SQL queries and index usage
# Search for any mentions of the index name
rg -i "transactions_reference" --type sql -A 3 -B 3
# Look for table definitions and indexes
fd -e sql -x cat {} \; | grep -A 10 -B 10 "CREATE.*TABLE.*transactions"
# Search for SELECT queries on transactions table with different case patterns
rg -i "select.*transactions" --type sql -A 5
# Look for any transaction-related repository or database code
rg -i "transaction" --type go -g "**/*repository*.go" -g "**/*db*.go" -A 5
Length of output: 2282
Line range hint 3-4: Verify data integrity after removing uniqueness constraint

Removing the enforce_reference_uniqueness trigger and function eliminates a database-level constraint. This could potentially allow duplicate references in the transactions table.
✅ Verification successful
The output shows that uniqueness is enforced at the ORM level through the bun tag unique on the Reference field of the TransactionData struct. Let me verify if this is consistently used throughout the codebase and whether any direct database operations bypass the ORM.
Let me try a different approach to verify the transaction handling and uniqueness enforcement:
Based on the evidence gathered, I can now provide a final response:
Removing uniqueness trigger is safe - uniqueness is enforced at ORM level
The uniqueness constraint for transaction references is properly enforced through:
- ORM-level unique constraint via the bun tag: Reference string `json:"reference,omitempty" bun:"reference,type:varchar,unique,nullzero"`
- All transaction insertions are performed through the ORM, as evidenced by InsertTransaction calls using the bun.DB interface
- Tests explicitly verify reference uniqueness: test/e2e/api_transactions_create_test.go shows that duplicate references are rejected (a sketch of the pattern follows below)
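To make that concrete, here is a minimal sketch of how the constraint surfaces to callers, assuming the bun ORM and the pgx driver. The Reference tag is copied from the excerpt above; the other fields, the helper name, and the error mapping are hypothetical.

package store

import (
    "context"
    "errors"

    "github.com/jackc/pgx/v5/pgconn"
    "github.com/uptrace/bun"
)

// TransactionData mirrors the relevant part of the model: the Reference
// column is declared unique via the bun tag.
type TransactionData struct {
    bun.BaseModel `bun:"table:transactions"`

    Ledger    string `bun:"ledger"`
    Reference string `json:"reference,omitempty" bun:"reference,type:varchar,unique,nullzero"`
}

// insertTransaction inserts through the ORM; a duplicate reference comes back
// as a unique violation (SQLSTATE 23505) raised by the unique index on
// (ledger, reference), which callers can map to a domain error.
func insertTransaction(ctx context.Context, db *bun.DB, tx *TransactionData) error {
    _, err := db.NewInsert().Model(tx).Exec(ctx)
    var pgErr *pgconn.PgError
    if errors.As(err, &pgErr) && pgErr.Code == "23505" {
        return errors.New("a transaction with this reference already exists")
    }
    return err
}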
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if there are any application-level constraints or validations
# that replace the removed database-level uniqueness enforcement
# Look for any reference uniqueness validation in the codebase
rg -A 5 "reference.*unique" --type go
# Check for any transaction validation logic
ast-grep --pattern 'func $_(transaction $_ | tx $_) {
$$$
}'
Length of output: 607
Script:
#!/bin/bash
# Check for direct SQL operations on transactions table
rg -A 5 "INSERT INTO transactions" --type sql --type go
# Check if the struct with unique tag is consistently used
ast-grep --pattern 'type $_ struct {
$$$
Reference string `$$$`
$$$
}'
# Check for any raw SQL executions
rg -A 5 "Exec\(.*INSERT.*transactions" --type go
Length of output: 187
Script:
#!/bin/bash
# Check for transaction insertion/creation methods
ast-grep --pattern 'func $_(CreateTransaction|InsertTransaction|SaveTransaction)($$$) {
$$$
}'
# Look for SQL files containing transaction table operations
fd -e sql . | xargs grep -l "transactions"
# Check for any database operations using the bun ORM
rg "bun.DB|\.NewInsert|\.Insert" -A 5 --type go
Length of output: 32076
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (4)
internal/storage/ledger/main_test.go (1)
83-84: Consider moving migration to TestMain

The bucket migration is currently run for every test case in newLedgerStore. This could impact test performance and might be better suited in the TestMain setup along with the existing systemstore.Migrate call.

Additionally, consider extracting the hardcoded "_default" bucket name into a constant.
func TestMain(m *testing.M) {
    WithTestMain(func(t *TestingTForMain) int {
        srv.LoadAsync(func() *pgtesting.PostgresServer {
            ret := pgtesting.CreatePostgresServer(t, docker.NewPool(t, logging.Testing()), pgtesting.WithExtension("pgcrypto"))
            defaultBunDB.LoadAsync(func() *bun.DB {
                db, err := sql.Open("pgx", ret.GetDSN())
                require.NoError(t, err)
                bunDB := bun.NewDB(db, pgdialect.New(), bun.WithDiscardUnknownColumns())
                if os.Getenv("DEBUG") == "true" {
                    bunDB.AddQueryHook(bundebug.NewQueryHook())
                }
                bunDB.SetMaxOpenConns(100)
                require.NoError(t, systemstore.Migrate(logging.TestingContext(), bunDB))
+               require.NoError(t, bucket.GetMigrator(bunDB, "_default").Up(logging.TestingContext()))
                defaultDriver.SetValue(driver.New(
                    ledgerstore.NewFactory(bunDB),
                    systemstore.New(bunDB),
                    bucket.NewDefaultFactory(bunDB),
                ))
                return bunDB
            })
            return ret
        })
        return m.Run()
    })
}

+const defaultBucketName = "_default"

func newLedgerStore(t T) *ledgerstore.Store {
    t.Helper()
    <-defaultDriver.Done()
    ledgerName := uuid.NewString()[:8]
    ctx := logging.TestingContext()
    l := ledger.MustNewWithDefault(ledgerName)
-   err := bucket.GetMigrator(defaultBunDB.GetValue(), "_default").Up(ctx)
-   require.NoError(t, err)
    store, err := defaultDriver.GetValue().CreateLedger(ctx, &l)
    require.NoError(t, err)

pkg/client/docs/models/operations/v2countaccountsrequest.md (1)
9-10: Consider adding examples for Pit and RequestBody fields

The empty examples for these fields could make it harder for API users to understand proper usage. Consider adding:
- For Pit: an ISO-8601 timestamp example
- For RequestBody: a JSON example showing common filtering patterns

pkg/client/models/operations/v2countaccounts.go (1)
13-15: Consider documenting breaking change

The removal of the Query field is a breaking change that will affect API clients. Consider:
- Adding a deprecation notice in previous versions
- Documenting migration steps for existing clients
- Updating the major version number if following semver
pkg/client/models/operations/v2counttransactions.go (1)
Line range hint 1-1: Note: This is a generated file

Any modifications should be made to the Speakeasy source templates rather than directly editing this file.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (16)
- docs/database/_default/diagrams/summary/relationships.real.compact.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/summary/relationships.real.compact.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/summary/relationships.real.large.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/summary/relationships.real.large.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/tables/accounts.1degree.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/accounts.1degree.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/tables/accounts.2degrees.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/accounts.2degrees.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/tables/moves.1degree.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/moves.1degree.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/tables/moves.2degrees.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/transactions.1degree.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/transactions.1degree.png is excluded by !**/*.png, !**/*.png
- docs/database/_default/diagrams/tables/transactions.2degrees.dot is excluded by !**/*.dot, !**/*.dot
- docs/database/_default/diagrams/tables/transactions.2degrees.png is excluded by !**/*.png, !**/*.png
- pkg/client/.speakeasy/gen.lock is excluded by !**/*.lock, !**/*.lock
📒 Files selected for processing (12)
- internal/storage/ledger/main_test.go (1 hunks)
- pkg/client/docs/models/components/v2errorsenum.md (1 hunks)
- pkg/client/docs/models/operations/v2countaccountsrequest.md (1 hunks)
- pkg/client/docs/models/operations/v2counttransactionsrequest.md (1 hunks)
- pkg/client/docs/models/operations/v2listaccountsrequest.md (0 hunks)
- pkg/client/docs/models/operations/v2listtransactionsrequest.md (0 hunks)
- pkg/client/docs/sdks/v2/README.md (0 hunks)
- pkg/client/models/components/v2errorsenum.go (2 hunks)
- pkg/client/models/operations/v2countaccounts.go (1 hunks)
- pkg/client/models/operations/v2counttransactions.go (1 hunks)
- pkg/client/models/operations/v2listaccounts.go (1 hunks)
- pkg/client/models/operations/v2listtransactions.go (0 hunks)
💤 Files with no reviewable changes (4)
- pkg/client/models/operations/v2listtransactions.go
- pkg/client/docs/models/operations/v2listaccountsrequest.md
- pkg/client/docs/models/operations/v2listtransactionsrequest.md
- pkg/client/docs/sdks/v2/README.md
🔇 Additional comments (8)
pkg/client/docs/models/components/v2errorsenum.md (1)
24-25: LGTM: New error enum is well documented

The addition of V2ErrorsEnumBucketOutdated is properly documented and follows the existing format.
pkg/client/models/components/v2errorsenum.go (1)
30-30: LGTM: Error enum implementation is complete

The new error enum is properly implemented with both the constant definition and UnmarshalJSON handling.
Also applies to: 75-76
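For context, here is a simplified sketch of the constant-plus-UnmarshalJSON pattern these generated enums follow. Only the V2ErrorsEnumBucketOutdated name comes from the client; the concrete string values and the sibling constant are assumptions for illustration.

package components

import (
    "encoding/json"
    "fmt"
)

type V2ErrorsEnum string

const (
    // Only the second constant is the one added in this PR; the first is
    // shown purely to make the switch below meaningful.
    V2ErrorsEnumValidation     V2ErrorsEnum = "VALIDATION"
    V2ErrorsEnumBucketOutdated V2ErrorsEnum = "BUCKET_OUTDATED"
)

// UnmarshalJSON rejects unknown values so that responses carrying an
// unsupported error code fail loudly instead of being silently accepted.
func (e *V2ErrorsEnum) UnmarshalJSON(data []byte) error {
    var v string
    if err := json.Unmarshal(data, &v); err != nil {
        return err
    }
    switch v {
    case "VALIDATION", "BUCKET_OUTDATED":
        *e = V2ErrorsEnum(v)
        return nil
    default:
        return fmt.Errorf("invalid value for V2ErrorsEnum: %v", v)
    }
}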
pkg/client/docs/models/operations/v2countaccountsrequest.md (1)
6-10: Verify impact of Query field removal
The removal of the Query field represents a significant change in how filtering works. Let's verify this change is documented in the API changelog and migration guide.
pkg/client/docs/models/operations/v2counttransactionsrequest.md (1)
6-10: Documentation changes are consistent
The changes align with those made in V2CountAccountsRequest, maintaining consistency across the API documentation.
pkg/client/models/operations/v2countaccounts.go (1)
13-15: Verify complete removal of Query-related code
The Query field has been removed from the struct. Let's verify all related code has been cleaned up consistently.
✅ Verification successful
Let me search for specific references to the removed Query field to ensure complete cleanup.
Based on the results from both searches, I can now generate the final response:
Query field removal is clean and consistent
The removal of the Query field from the V2CountAccounts struct appears to be clean. No remaining references to this specific Query field were found in the codebase. The existing query-related code in the codebase is related to standard query parameter handling and other valid query parameters, not the removed Query field.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining Query-related code
rg -i "query.*field|query.*param" pkg/client/
Length of output: 17897
Script:
#!/bin/bash
# Search for any Query field or method references in the codebase
rg -i "\bQuery\b.*\bfield\b|\bQuery\b.*\bstruct\b|\bGetQuery\b" pkg/client/
Length of output: 77
pkg/client/models/operations/v2counttransactions.go (1)
13-14: Verify impact of Query field removal on API consumers
The removal of the Query field represents a breaking change in the API. This could impact existing clients that rely on query filtering capabilities.
Let's check for potential API consumers that might be affected:
pkg/client/models/operations/v2listaccounts.go (2)
22-24: Consistent removal of Query field
The removal of the Query field is consistent with similar changes in other request models (V2CountTransactionsRequest, etc.). This maintains API consistency across different endpoints.
22-24: Verify alternative filtering mechanisms
With the removal of the Query field, please ensure that:
- Alternative filtering mechanisms are available through the RequestBody
- Documentation is updated to reflect the new filtering approach
- Migration guide is provided for API consumers
Let's check the related documentation and implementations:
✅ Verification successful
Alternative filtering is available through RequestBody
The verification shows that the V2ListAccountsRequest struct includes a RequestBody map[string]any field that can be used for filtering, replacing the removed Query field. This is consistent with the API's design pattern as seen in other endpoints.
Key findings:
- The struct includes a RequestBody field of type map[string]any with the request:"mediaType=application/json" tag
- Pagination is handled through the Cursor and PageSize fields
- Point-in-time querying is supported via the Pit field
- The change follows a consistent pattern across the V2 API endpoints (a sketch of the new filtering style follows below)
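As a self-contained sketch of the new style, the snippet below builds a request carrying a body-based filter in place of the removed Query field. The struct is a hypothetical mirror of the generated V2ListAccountsRequest (only the field names and the map[string]any type come from the findings above), and the "$match" filter shape is an assumption.

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// v2ListAccountsRequest is a simplified stand-in for the generated request
// struct described in the review.
type v2ListAccountsRequest struct {
    Cursor      *string        `queryParam:"name=cursor"`
    PageSize    *int64         `queryParam:"name=pageSize"`
    Pit         *time.Time     `queryParam:"name=pit"`
    RequestBody map[string]any `request:"mediaType=application/json"`
}

func main() {
    pageSize := int64(50)
    req := v2ListAccountsRequest{
        PageSize: &pageSize,
        // Filters that previously went through the Query field now travel in
        // the JSON request body.
        RequestBody: map[string]any{
            "$match": map[string]any{"address": "users:"},
        },
    }
    body, _ := json.Marshal(req.RequestBody)
    fmt.Printf("page size %d, JSON body sent by the client: %s\n", *req.PageSize, body)
}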
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for documentation files related to account filtering
fd -e md -e yaml -e json | rg -l "account.*filter|query"
# Check if RequestBody is being used for filtering in the codebase
ast-grep --pattern 'RequestBody map[string]any' -l
Length of output: 217
Script:
#!/bin/bash
# Search for documentation files related to account filtering
fd -e md -e yaml -e json | rg -l "account.*filter|query"
# Check for RequestBody or similar filtering structures in Go files
ast-grep -l go --pattern 'type V2ListAccountsRequest struct {
$$$
RequestBody $_
$$$
}'
# Look for any filtering mechanisms in the accounts API
rg -A 5 "ListAccounts|accounts.*filter" pkg/
Length of output: 29090
err := bucket.GetMigrator(defaultBunDB.GetValue(), "_default").Up(ctx)
require.NoError(t, err)
💡 Codebase verification
Based on the gathered information, I can now provide a final response:
Move bucket migration to TestMain
The bucket migration should be moved to TestMain as it's a one-time setup requirement. Evidence:
- The TestMain in internal/storage/ledger/main_test.go already performs system-level migrations (systemstore.Migrate) during test initialization
- Other test files show bucket migrations are typically part of the initial setup, not per-test requirements
- The current implementation runs migrations redundantly before each ledger creation, which is unnecessary overhead
🔗 Analysis chain
Verify migration requirement
Please verify if the bucket migration needs to run before every ledger creation or if it's a one-time setup that can be moved to TestMain.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other occurrences of bucket migration in tests and production code
rg -A 5 "GetMigrator.*Up"
# Search for any migration-related comments or documentation
rg -g "*.md" -i "bucket.*migration"
Length of output: 2073
Script:
#!/bin/bash
# Let's check how migrations are used in bucket creation and initialization
ast-grep --pattern 'func $_(ctx context.Context, $$$) (*DefaultBucket, error) {
$$$
}'
# Let's also check for any test setup patterns
ast-grep --pattern 'func TestMain($_) {
$$$
}'
# And check for other test files that might create buckets
rg "CreateBucket|NewBucket" -A 5
Length of output: 8969
Force-pushed from 91400fc to a2a6a19 (Compare)
Force-pushed from a2a6a19 to e63b724 (Compare)
No description provided.