backup: cut user and table counts in cluster backup test
This test was creating 1k users and 100 tables, altering 100 tables and dropping 100 tables
in both the system and tenant branches. This many schema change jobs made the test very, very slow
and sensitive to schema changer delays adding up. In one recently observed CI run, creating the 1000
users took upwards of nine minutes -- for a single test's setup, before it even started testing the
actual backup feature it is supposed to test.

This change makes the user and table counts much smaller, hopefully reducing the number of
schema changes, and waits on schema changes, involved in setting up this test.

Release note: none.
Epic: none.
dt committed Dec 30, 2024
1 parent e5ef95e commit 496be22
Showing 1 changed file with 4 additions and 3 deletions.
pkg/backup/full_cluster_backup_restore_test.go
@@ -142,11 +142,12 @@ CREATE TABLE data2.foo (a int);
 
 	// Setup the system systemTablesToVerify to ensure that they are copied to the new cluster.
 	// Populate system.users.
-	numBatches := 100
+	numBatches := 5
+	usersPerBatch := 20
 	if util.RaceEnabled {
 		numBatches = 1
+		usersPerBatch = 5
 	}
-	usersPerBatch := 10
 	userID := 0
 	for b := 0; b < numBatches; b++ {
 		sqlDB.RunWithRetriableTxn(t, func(txn *gosql.Tx) error {
@@ -193,7 +194,7 @@ CREATE TABLE data2.foo (a int);
 
 	// Create a bunch of user tables on the restoring cluster that we're going
 	// to delete.
-	numTables := 50
+	numTables := 10
 	if util.RaceEnabled {
 		numTables = 2
 	}
