Incremental logical backup and point in time recovery (#11097)
* Unexplode Backup() function, pass BackupRequest as argument
* use tabletmanagerdata.BackupRequest
* make proto
* removed duplicate tabletmanagerdata imports
* tabletmanagerdatapb
* vschemapb
* require.NoError
* proto: BackupRequest.incremental_from_pos
* pass IncrementalFromPos
* populate incremental_from_pos
* Storing ServerUUID and TabletAlias as part of backup MANIFEST
* use IncrementalFromPos in BackupParams, populate
* executeIncrementalBackup
* add unit tests for GTID 'Contains'
* Add binlog related functions in MySQLDaemon
* More functionality in incremental backup
* add backupBinlogDir as a valid backup directory
* include FromPosition in the backup manifest. Find binlog files to back up
* complete incremental backup
* clarify the difference between the user's requested position and the FromPosition in the manifest. Add 'Incremental' (bool) to the manifest
* make vtadmin_web_proto_types
* Add Keyspace, Shard to backup manifest
* for good order, keyspace comes first
* take into account purged GTIDs. Fix value of 'incrementalBackupToGTID'
* endtoend tests for incremental backup. No restore validation as yet. Tests do not have a GitHub workflow yet.
* Adding CI shard: 'backup_pitr'
* cleanup
* backup_pitr tested via mysql80
* insert data with hint
* refactor
* FindPITRPath: find a shortest path to recover a GTID position, based on one full backup and zero or more incremental backups
* more validation
* more test cases
* RestoreFromBackupRequest: RestoreToPos
* vtctl Restore: '--restore_to_pos'
* make vtadmin_web_proto_types
* Unexplode: RestoreFromBackup() receives req *tabletmanagerdatapb.RestoreFromBackupRequest
* make vtadmin_web_proto_types
* populate restoreParams.RestoreToPos
* simplifying the logic of finding the relevant backup
* fix switch/break logic
* towards a restore path
* golang version
* fix workflows ubuntu version
* skip nil manifests
* FindBackupToRestore() returns a RestorePath, which is an ordered sequence of backup manifests/handles
* linter suggestion
* fix backup time comparison logic
* vtctl Restore supports --dry_run flag
* flag --incremental-from-pos accepts the value 'auto', which takes the next incremental backup from the last good backup
* make vtadmin_web_proto_types
* endtoend: validate --incremental_from_pos=auto
* towards applying binary logs: extracting onto a temporary directory
* apply binary log file
* do not restore replication at end of PITR
* take dryrun into consideration
* testing restore to pos
* testing restore to pos: wait for replication, avoid bogus writes
* validating PITR path when binary logs are missing history
* full backup manifest now includes 'PurgedPosition', which is necessary to build a restore path. Now evaluated in IsValidIncrementalBackup
* more recovery paths tests
* restructure tests
* log restore path
* generate CI workflows
* code comments
* CI 57 and 80
* flags test
* copyright year
* go version
* removed legacy mysql80 test
* PITR: stop search for a possible restore path at the first valid path, even if it's not the optimal one
* support incrementally union-izing previous-GTIDs when iterating binary logs
* removed local metadata info
* merged main, regenerated workflows
* go mod tidy
* rename conflicting variable
* refactor: const value
* dry run restore now returns with 0 exit code, no error
* release notes

Signed-off-by: Shlomi Noach <[email protected]>
shlomi-noach authored Nov 29, 2022
1 parent f43bc2b commit bd0a5b8
Showing 40 changed files with 3,864 additions and 1,258 deletions.
134 changes: 134 additions & 0 deletions .github/workflows/cluster_endtoend_backup_pitr.yml
@@ -0,0 +1,134 @@
# DO NOT MODIFY: THIS FILE IS GENERATED USING "make generate_ci_workflows"

name: Cluster (backup_pitr)
on: [push, pull_request]
concurrency:
  group: format('{0}-{1}', ${{ github.ref }}, 'Cluster (backup_pitr)')
  cancel-in-progress: true

env:
  LAUNCHABLE_ORGANIZATION: "vitess"
  LAUNCHABLE_WORKSPACE: "vitess-app"
  GITHUB_PR_HEAD_SHA: "${{ github.event.pull_request.head.sha }}"

jobs:
  build:
    name: Run endtoend tests on Cluster (backup_pitr)
    runs-on: ubuntu-20.04

    steps:
      - name: Skip CI
        run: |
          if [[ "${{contains( github.event.pull_request.labels.*.name, 'Skip CI')}}" == "true" ]]; then
            echo "skipping CI due to the 'Skip CI' label"
            exit 1
          fi

      - name: Check if workflow needs to be skipped
        id: skip-workflow
        run: |
          skip='false'
          if [[ "${{github.event.pull_request}}" == "" ]] && [[ "${{github.ref}}" != "refs/heads/main" ]] && [[ ! "${{github.ref}}" =~ ^refs/heads/release-[0-9]+\.[0-9]$ ]] && [[ ! "${{github.ref}}" =~ "refs/tags/.*" ]]; then
            skip='true'
          fi
          echo Skip ${skip}
          echo "::set-output name=skip-workflow::${skip}"

      - name: Check out code
        if: steps.skip-workflow.outputs.skip-workflow == 'false'
        uses: actions/checkout@v3

      - name: Check for changes in relevant files
        if: steps.skip-workflow.outputs.skip-workflow == 'false'
        uses: frouioui/paths-filter@main
        id: changes
        with:
          token: ''
          filters: |
            end_to_end:
              - 'go/**/*.go'
              - 'test.go'
              - 'Makefile'
              - 'build.env'
              - 'go.[sumod]'
              - 'proto/*.proto'
              - 'tools/**'
              - 'config/**'
              - 'bootstrap.sh'
              - '.github/workflows/cluster_endtoend_backup_pitr.yml'

      - name: Set up Go
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        uses: actions/setup-go@v3
        with:
          go-version: 1.19.3

      - name: Set up python
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        uses: actions/setup-python@v4

      - name: Tune the OS
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          echo '1024 65535' | sudo tee -a /proc/sys/net/ipv4/ip_local_port_range
          # Increase the asynchronous non-blocking I/O. More information at https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_use_native_aio
          echo "fs.aio-max-nr = 1048576" | sudo tee -a /etc/sysctl.conf
          sudo sysctl -p /etc/sysctl.conf

      - name: Get dependencies
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          # Get key to latest MySQL repo
          sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 467B942D3A79BD29
          # Setup MySQL 8.0
          wget -c https://dev.mysql.com/get/mysql-apt-config_0.8.20-1_all.deb
          echo mysql-apt-config mysql-apt-config/select-server select mysql-8.0 | sudo debconf-set-selections
          sudo DEBIAN_FRONTEND="noninteractive" dpkg -i mysql-apt-config*
          sudo apt-get update
          # Install everything else we need, and configure
          sudo apt-get install -y mysql-server mysql-client make unzip g++ etcd curl git wget eatmydata xz-utils
          sudo service mysql stop
          sudo service etcd stop
          sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
          sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
          go mod download
          # install JUnit report formatter
          go install github.com/vitessio/go-junit-report@HEAD

      - name: Setup launchable dependencies
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          # Get Launchable CLI installed. If you can, make it a part of the builder image to speed things up
          pip3 install --user launchable~=1.0 > /dev/null
          # verify that launchable setup is all correct.
          launchable verify || true
          # Tell Launchable about the build you are producing and testing
          launchable record build --name "$GITHUB_RUN_ID" --source .

      - name: Run cluster endtoend test
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        timeout-minutes: 45
        run: |
          # We set the VTDATAROOT to the /tmp folder to reduce the file path of the mysql.sock file,
          # which mustn't be more than 107 characters long.
          export VTDATAROOT="/tmp/"
          source build.env
          set -x
          # run the tests however you normally do, then produce a JUnit XML file
          eatmydata -- go run test.go -docker=false -follow -shard backup_pitr | tee -a output.txt | go-junit-report -set-exit-code > report.xml

      - name: Print test output and Record test result in launchable
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true' && always()
        run: |
          # send recorded tests to launchable
          launchable record tests --build "$GITHUB_RUN_ID" go-test . || true
          # print test output
          cat output.txt
147 changes: 147 additions & 0 deletions .github/workflows/cluster_endtoend_backup_pitr_mysql57.yml
@@ -0,0 +1,147 @@
# DO NOT MODIFY: THIS FILE IS GENERATED USING "make generate_ci_workflows"

name: Cluster (backup_pitr) mysql57
on: [push, pull_request]
concurrency:
  group: format('{0}-{1}', ${{ github.ref }}, 'Cluster (backup_pitr) mysql57')
  cancel-in-progress: true

env:
  LAUNCHABLE_ORGANIZATION: "vitess"
  LAUNCHABLE_WORKSPACE: "vitess-app"
  GITHUB_PR_HEAD_SHA: "${{ github.event.pull_request.head.sha }}"

jobs:
  build:
    name: Run endtoend tests on Cluster (backup_pitr) mysql57
    runs-on: ubuntu-20.04

    steps:
      - name: Skip CI
        run: |
          if [[ "${{contains( github.event.pull_request.labels.*.name, 'Skip CI')}}" == "true" ]]; then
            echo "skipping CI due to the 'Skip CI' label"
            exit 1
          fi

      - name: Check if workflow needs to be skipped
        id: skip-workflow
        run: |
          skip='false'
          if [[ "${{github.event.pull_request}}" == "" ]] && [[ "${{github.ref}}" != "refs/heads/main" ]] && [[ ! "${{github.ref}}" =~ ^refs/heads/release-[0-9]+\.[0-9]$ ]] && [[ ! "${{github.ref}}" =~ "refs/tags/.*" ]]; then
            skip='true'
          fi
          echo Skip ${skip}
          echo "::set-output name=skip-workflow::${skip}"

      - name: Check out code
        if: steps.skip-workflow.outputs.skip-workflow == 'false'
        uses: actions/checkout@v3

      - name: Check for changes in relevant files
        if: steps.skip-workflow.outputs.skip-workflow == 'false'
        uses: frouioui/paths-filter@main
        id: changes
        with:
          token: ''
          filters: |
            end_to_end:
              - 'go/**/*.go'
              - 'test.go'
              - 'Makefile'
              - 'build.env'
              - 'go.[sumod]'
              - 'proto/*.proto'
              - 'tools/**'
              - 'config/**'
              - 'bootstrap.sh'
              - '.github/workflows/cluster_endtoend_backup_pitr_mysql57.yml'

      - name: Set up Go
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        uses: actions/setup-go@v3
        with:
          go-version: 1.19.3

      - name: Set up python
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        uses: actions/setup-python@v4

      - name: Tune the OS
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          echo '1024 65535' | sudo tee -a /proc/sys/net/ipv4/ip_local_port_range
          # Increase the asynchronous non-blocking I/O. More information at https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_use_native_aio
          echo "fs.aio-max-nr = 1048576" | sudo tee -a /etc/sysctl.conf
          sudo sysctl -p /etc/sysctl.conf

      - name: Get dependencies
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          sudo apt-get update
          # Uninstall any previously installed MySQL first
          sudo ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
          sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
          sudo systemctl stop apparmor
          sudo DEBIAN_FRONTEND="noninteractive" apt-get remove -y --purge mysql-server mysql-client mysql-common
          sudo apt-get -y autoremove
          sudo apt-get -y autoclean
          sudo deluser mysql
          sudo rm -rf /var/lib/mysql
          sudo rm -rf /etc/mysql
          # Get key to latest MySQL repo
          sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 467B942D3A79BD29
          wget -c https://dev.mysql.com/get/mysql-apt-config_0.8.14-1_all.deb
          # Bionic packages are still compatible for Focal since there's no MySQL 5.7
          # packages for Focal.
          echo mysql-apt-config mysql-apt-config/repo-codename select bionic | sudo debconf-set-selections
          echo mysql-apt-config mysql-apt-config/select-server select mysql-5.7 | sudo debconf-set-selections
          sudo DEBIAN_FRONTEND="noninteractive" dpkg -i mysql-apt-config*
          sudo apt-get update
          sudo DEBIAN_FRONTEND="noninteractive" apt-get install -y mysql-client=5.7* mysql-community-server=5.7* mysql-server=5.7*
          sudo apt-get install -y make unzip g++ etcd curl git wget eatmydata
          sudo service mysql stop
          sudo service etcd stop
          # install JUnit report formatter
          go install github.com/vitessio/go-junit-report@HEAD

      - name: Setup launchable dependencies
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        run: |
          # Get Launchable CLI installed. If you can, make it a part of the builder image to speed things up
          pip3 install --user launchable~=1.0 > /dev/null
          # verify that launchable setup is all correct.
          launchable verify || true
          # Tell Launchable about the build you are producing and testing
          launchable record build --name "$GITHUB_RUN_ID" --source .

      - name: Run cluster endtoend test
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true'
        timeout-minutes: 45
        run: |
          # We set the VTDATAROOT to the /tmp folder to reduce the file path of the mysql.sock file,
          # which mustn't be more than 107 characters long.
          export VTDATAROOT="/tmp/"
          source build.env
          set -x
          # run the tests however you normally do, then produce a JUnit XML file
          eatmydata -- go run test.go -docker=false -follow -shard backup_pitr | tee -a output.txt | go-junit-report -set-exit-code > report.xml

      - name: Print test output and Record test result in launchable
        if: steps.skip-workflow.outputs.skip-workflow == 'false' && steps.changes.outputs.end_to_end == 'true' && always()
        run: |
          # send recorded tests to launchable
          launchable record tests --build "$GITHUB_RUN_ID" go-test . || true
          # print test output
          cat output.txt
63 changes: 61 additions & 2 deletions doc/releasenotes/16_0_0_summary.md
@@ -23,6 +23,16 @@
It is possible to enable/disable, to change throttling threshold as well as the

See https://github.com/vitessio/vitess/pull/11604

### Incremental backup and point in time recovery

In [PR #11097](https://github.com/vitessio/vitess/pull/11097) we introduced native incremental backup and point in time recovery:

- It is possible to take an incremental backup, starting at the last known (full or incremental) backup position, and up to either a specified GTID position or the current ("auto") position.
- The backup is done by copying binary logs. The binary logs are rotated as needed.
- It is then possible to restore a backup up to a given point in time (GTID position). This involves finding a restore path consisting of a full backup and zero or more incremental backups, applied up to the given point in time.
- A server restored to a point in time remains a `DRAINED` tablet type, and does not join the replication stream (it is thus "frozen" in time).
- It is possible to take incremental backups from different tablets. It is OK to have overlaps in incremental backup contents. The restore process chooses a valid path, which is valid as long as there are no gaps in the backed-up binary log content.
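The restore-path search described above can be sketched in Go. This is a simplified model, not the PR's actual implementation (which revolves around `FindPITRPath`/`FindBackupToRestore` and operates on full GTID sets): positions are collapsed to single sequence numbers, and the names below are illustrative. Like the PR, the search stops at the first valid path rather than searching for a globally optimal one:

```go
package main

import "fmt"

// Backup models a manifest in simplified form: GTID sets are collapsed
// to single sequence numbers. (Illustrative only; real manifests carry
// full GTID sets.)
type Backup struct {
	Name        string
	FromPos     int64 // position the incremental backup starts from
	Position    int64 // position the backup reaches (inclusive)
	Incremental bool  // false means a full backup
}

// FindRestorePath returns a full backup followed by zero or more
// incremental backups that reach restoreToPos, or nil if no contiguous
// path exists. Overlaps between incrementals are allowed; gaps are not.
func FindRestorePath(backups []Backup, restoreToPos int64) []Backup {
	for _, full := range backups {
		if full.Incremental || full.Position > restoreToPos {
			continue // need a full backup that does not exceed the target
		}
		path := []Backup{full}
		pos := full.Position
		for pos < restoreToPos {
			advanced := false
			for _, inc := range backups {
				// An incremental applies if it starts at or before our
				// current position and extends it toward the target.
				if inc.Incremental && inc.FromPos <= pos && inc.Position > pos {
					path = append(path, inc)
					pos = inc.Position
					advanced = true
					break
				}
			}
			if !advanced { // gap in binlog history: this path is dead
				path = nil
				break
			}
		}
		if path != nil {
			return path // first valid path wins
		}
	}
	return nil
}

func main() {
	backups := []Backup{
		{Name: "full-1", Position: 100},
		{Name: "inc-1", FromPos: 100, Position: 200, Incremental: true},
		{Name: "inc-2", FromPos: 180, Position: 300, Incremental: true}, // overlap is OK
	}
	fmt.Println(len(FindRestorePath(backups, 250))) // 3
	fmt.Println(FindRestorePath(backups, 400) == nil) // true: gap beyond 300
}
```

Note that the last incremental backup in the path may extend past the requested position; during an actual restore, binary-log application stops at the requested position and duplicate or excess events are discarded.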

### Breaking Changes

#### Orchestrator Integration Deletion
@@ -54,11 +64,11 @@
Other aspects of the VReplication copy-phase logic are preserved:
#### VTTablet: --queryserver-config-pool-conn-max-lifetime
`--queryserver-config-pool-conn-max-lifetime=[integer]` allows you to set a timeout on each connection in the query server connection pool. It chooses a random value between its value and twice its value, and when a connection has lived longer than the chosen value, it'll be removed from the pool the next time it's returned to the pool.

### vttablet --throttler-config-via-topo
#### vttablet --throttler-config-via-topo

The flag `--throttler-config-via-topo` switches throttler configuration from `vttablet`-flags to the topo service. This flag is `false` by default, for backwards compatibility. It will default to `true` in future versions.

### vtctldclient UpdateThrottlerConfig
#### vtctldclient UpdateThrottlerConfig

Tablet throttler configuration is now supported in `topo`. Updating the throttler configuration is done via `vtctldclient UpdateThrottlerConfig` and applies to all tablets in all cells for a given keyspace.

@@ -85,6 +95,55 @@
$ vtctldclient UpdateThrottlerConfig --custom_query "" --check_as_check_shard --

See https://github.com/vitessio/vitess/pull/11604

#### vtctldclient Backup --incremental_from_pos

The `Backup` command now supports the `--incremental_from_pos` flag, which can receive a valid position or the value `auto`. For example:

```shell
$ vtctlclient -- Backup --incremental_from_pos "MySQL56/16b1039f-22b6-11ed-b765-0a43f95f28a3:1-615" zone1-0000000102
$ vtctlclient -- Backup --incremental_from_pos "auto" zone1-0000000102
```

When the value is `auto`, the position is evaluated as the last successful backup's `Position`. The idea with incremental backups is to create a contiguous (overlaps allowed) sequence of backups that store all changes since the last full backup.

The incremental backup copies complete binary log files. It does not take MySQL down, does not place any locks, and does not interrupt traffic on the MySQL server. It first rotates the binary logs, then copies everything from the requested position up to the last completed binary log.

The backup thus does not necessarily start _exactly_ at the requested position. It starts with the first binary log that has entries newer than the requested position. It is OK if the binary logs include transactions prior to the requested position; the restore process will discard any duplicates.

Normally, you can expect the backups to be precisely contiguous. Consider an `auto` value: because logs are rotated and complete binlog files are copied, the next incremental backup will start with the first binary log not covered by the previous backup (which itself copied its last binlog file in full). Again, it is completely valid to supply any valid position.

The incremental backup fails if it is unable to obtain binary logs from the given position (i.e., the binary logs have been purged).
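Binlog-file selection for an incremental backup can be sketched as follows, assuming each binary log records a Previous-GTIDs value at its start (positions are again simplified to sequence numbers; the names and signatures here are illustrative, not Vitess's API):

```go
package main

import "fmt"

// binlogsToCopy returns the names of the binary-log files to include in
// an incremental backup starting at fromPos. prevGTIDs[i] is the
// Previous-GTIDs value recorded at the start of files[i]; file i thus
// holds events in (prevGTIDs[i], prevGTIDs[i+1]]. Files are ordered
// oldest first, and the newest file is assumed already rotated.
func binlogsToCopy(files []string, prevGTIDs []int64, fromPos int64) ([]string, error) {
	if len(files) == 0 || prevGTIDs[0] > fromPos {
		// The oldest retained binlog already starts past fromPos:
		// earlier logs were purged, so the backup cannot be contiguous.
		return nil, fmt.Errorf("binary logs from position %d have been purged", fromPos)
	}
	var out []string
	for i, f := range files {
		// Include file i if it ends after fromPos (has new events).
		// The last file is always included.
		if i+1 == len(files) || prevGTIDs[i+1] > fromPos {
			out = append(out, f)
		}
	}
	return out, nil
}

func main() {
	files, err := binlogsToCopy(
		[]string{"binlog.000001", "binlog.000002", "binlog.000003"},
		[]int64{0, 100, 200}, 150)
	fmt.Println(files, err) // [binlog.000002 binlog.000003] <nil>
}
```

The purged-logs check mirrors why the full-backup manifest now records `PurgedPosition`: the restore-path search needs to know which history a server can no longer serve.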

The manifest of an incremental backup has a non-empty `FromPosition` value and `Incremental: true`.
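For illustration, a manifest for such an incremental backup might look roughly like this (field names such as `FromPosition`, `Incremental`, `Keyspace`, `Shard`, `ServerUUID`, and `TabletAlias` are the ones added in this PR; the exact layout and the sample values are illustrative only):

```json
{
  "Incremental": true,
  "FromPosition": "MySQL56/16b1039f-22b6-11ed-b765-0a43f95f28a3:1-615",
  "Position": "MySQL56/16b1039f-22b6-11ed-b765-0a43f95f28a3:1-900",
  "Keyspace": "commerce",
  "Shard": "0",
  "ServerUUID": "16b1039f-22b6-11ed-b765-0a43f95f28a3",
  "TabletAlias": "zone1-0000000102"
}
```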

#### vtctldclient RestoreFromBackup --restore_to_pos

- `--restore_to_pos`: request to restore the server up to the given position (inclusive) and not one step further.
- `--dry_run`: when `true`, evaluate the restore path, if one exists, but exit without actually making any changes to the server.

Examples:

```shell
$ vtctlclient -- RestoreFromBackup --restore_to_pos "MySQL56/16b1039f-22b6-11ed-b765-0a43f95f28a3:1-220" zone1-0000000102
```

The restore process seeks a restore _path_: a sequence of backups (handles/manifests) consisting of one full backup followed by zero or more incremental backups, that can bring the server up to the requested position, inclusive.

The command fails if it cannot evaluate a restore path. Possible reasons:

- there are gaps in the incremental backups
- the existing backups don't reach as far as the requested position
- all full backups exceed the requested position (so there's no way to get to an earlier position)

The command outputs the restore path.

There may be multiple restore paths; the command prefers a path with the fewest backups. This says nothing about the number and size of the binary logs involved.

`RestoreFromBackup --restore_to_pos` ends with:

- the restored server in an intentionally broken replication setup
- tablet type `DRAINED`

### Important bug fixes

#### Corrupted results for non-full-group-by queries with JOINs