This repository has been archived by the owner on Dec 16, 2022. It is now read-only.

Ignoring Python tests which are moved to GO (#40)
Signed-off-by: Ajeet jain <[email protected]>

* readme for go endtoend test cases

Signed-off-by: Ajeet jain <[email protected]>

* Update README.md

Signed-off-by: Arindam Nayak <[email protected]>
ajeetj authored and arindamnayak committed Dec 10, 2019
1 parent f09f3be commit 8d20666
Showing 14 changed files with 171 additions and 209 deletions.
50 changes: 50 additions & 0 deletions go/test/endtoend/README.md
@@ -0,0 +1,50 @@
This document describes the testing strategy we use for all Vitess components, and the progression in scope and complexity.

As Vitess developers, our goal is to have great end-to-end test coverage. In the past, these tests were mostly written in Python; since Python 2.7 is reaching end of life, we are moving all of them to Go.


## End to End Tests

These tests are meant to test end-to-end behaviors of the Vitess ecosystem, and complement the unit tests. For instance, we test each RPC interaction independently (client to vtgate, vtgate to vttablet, vttablet to MySQL, see previous sections). But it is also good to have an end-to-end test that validates that everything works together.

These tests almost always launch a topology service, a few mysqld instances, a few vttablets, a vtctld process, a few vtgates, ... They use the real production processes, and real replication. This setup is mandatory for properly testing re-sharding, cluster operations, ... However, they all run on the same machine, so they might be limited by the environment.


## Strategy

All the end-to-end tests are placed under go/test/endtoend.
The main purpose of grouping them together is to have a single place to look for them, and to combine similar tests so they run in the same cluster and save test running time.

### Setup
Each test should launch a real cluster, just like a production setup, execute the tests against it, and then tear down all the services.

The cluster launch functions are provided under go/test/endtoend/cluster. This is still a work in progress, so feel free to add new functions as required or to update the existing ones.

In general, the cluster is built in the following order:
- Define Keyspace
- Define Shards
- Start topology service [default etcd]
- Start vtctld client
- Start required mysqld instances
- Start corresponding vttablets (at least 1 master and 1 replica)
- Start Vtgate

A good example to refer to is go/test/endtoend/clustertest.
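The setup order and teardown described above can be sketched with stubbed process launchers. This is illustrative only: the real helpers under go/test/endtoend/cluster have their own names and signatures, and the `step` type and `launchCluster` function below are hypothetical.

```go
package main

import "fmt"

// step is one service to bring up (topo, vtctld, mysqld/vttablets, vtgate).
type step struct {
	name  string
	start func() error
	stop  func()
}

// launchCluster starts each service in order; if any start fails, the
// services already running are stopped in reverse order, mirroring the
// teardown every end-to-end test must perform.
func launchCluster(steps []step) (func(), error) {
	var started []step
	teardown := func() {
		for i := len(started) - 1; i >= 0; i-- {
			started[i].stop()
		}
	}
	for _, s := range steps {
		if err := s.start(); err != nil {
			teardown()
			return nil, fmt.Errorf("starting %s: %v", s.name, err)
		}
		started = append(started, s)
	}
	return teardown, nil
}

func main() {
	ok := func() error { return nil }
	steps := []step{
		{"topo (etcd)", ok, func() { fmt.Println("stop topo") }},
		{"vtctld", ok, func() { fmt.Println("stop vtctld") }},
		{"mysqld + vttablets", ok, func() { fmt.Println("stop tablets") }},
		{"vtgate", ok, func() { fmt.Println("stop vtgate") }},
	}
	teardown, err := launchCluster(steps)
	if err != nil {
		fmt.Println("setup failed:", err)
		return
	}
	fmt.Println("cluster up; run tests here")
	teardown() // services come down in reverse order
}
```

The reverse-order teardown matters because later services (vtgate, vttablets) depend on earlier ones (topo) still being up while they shut down.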

## Progress
So far we have converted the following Python end-to-end test cases:
- Keyspace tests
- mysqlctl tests
- sharded tests
- tabletmanager tests
- vtgate v3 tests

### In-progress
- Initial sharding
- resharding
- vsplit


After a Python test is migrated to Go, it is removed from the end-to-end CI test run by updating its shard value to 5 in `test/config.json`.
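The schema of `test/config.json` is not shown in this commit, so the fragment below is an assumption made for illustration only (the key names `Tests`, `File`, and `Shard` are guesses at the layout, not confirmed by the source):

```json
{
  "Tests": {
    "keyspace": {
      "File": "keyspace_test.py",
      "Shard": 5
    }
  }
}
```

The idea is simply that CI runs shards 0–4, so bumping a test's shard value to 5 takes it out of the Python end-to-end run.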


14 changes: 5 additions & 9 deletions go/test/endtoend/clustertest/vtgate_test.go
@@ -27,6 +27,8 @@ import (
"strings"
"testing"

"github.com/stretchr/testify/require"

"vitess.io/vitess/go/mysql"
"vitess.io/vitess/go/sqltypes"
)
@@ -35,9 +37,7 @@ func TestVtgateProcess(t *testing.T) {
verifyVtgateVariables(t, clusterInstance.VtgateProcess.VerifyURL)
ctx := context.Background()
conn, err := mysql.Connect(ctx, &vtParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer conn.Close()

exec(t, conn, "insert into customer(id, email) values(1,'email1')")
@@ -54,9 +54,7 @@ func verifyVtgateVariables(t *testing.T, url string) {
resultMap := make(map[string]interface{})
respByte, _ := ioutil.ReadAll(resp.Body)
err := json.Unmarshal(respByte, &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
if resultMap["VtgateVSchemaCounts"] == nil {
t.Error("Vschema count should be present in variables")
}
@@ -113,8 +111,6 @@ func isMasterTabletPresent(tablets map[string]interface{}) bool {
func exec(t *testing.T, conn *mysql.Conn, query string) *sqltypes.Result {
t.Helper()
qr, err := conn.ExecuteFetch(query, 1000, true)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
return qr
}
30 changes: 9 additions & 21 deletions go/test/endtoend/tabletmanager/commands_test.go
@@ -24,6 +24,8 @@ import (
"testing"
"time"

"github.com/stretchr/testify/require"

"github.com/stretchr/testify/assert"
"vitess.io/vitess/go/mysql"
)
@@ -33,15 +35,11 @@ func TestTabletCommands(t *testing.T) {
ctx := context.Background()

masterConn, err := mysql.Connect(ctx, &masterTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer masterConn.Close()

replicaConn, err := mysql.Connect(ctx, &replicaTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer replicaConn.Close()

// Sanity Check
@@ -115,9 +113,7 @@
func assertExcludeFields(t *testing.T, qr string) {
resultMap := make(map[string]interface{})
err := json.Unmarshal([]byte(qr), &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

rowsAffected := resultMap["rows_affected"]
want := float64(2)
@@ -130,9 +126,7 @@ func assertExcludeFields(t *testing.T, qr string) {
func assertExecuteFetch(t *testing.T, qr string) {
resultMap := make(map[string]interface{})
err := json.Unmarshal([]byte(qr), &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

rows := reflect.ValueOf(resultMap["rows"])
got := rows.Len()
@@ -185,15 +179,11 @@ func runHookAndAssert(t *testing.T, params []string, expectedStatus string, expe
if expectedError {
assert.Error(t, err, "Expected error")
} else {
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

resultMap := make(map[string]interface{})
err = json.Unmarshal([]byte(hr), &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

exitStatus := reflect.ValueOf(resultMap["ExitStatus"]).Float()
status := fmt.Sprintf("%.0f", exitStatus)
@@ -229,9 +219,7 @@ func TestShardReplicationFix(t *testing.T) {
func assertNodeCount(t *testing.T, result string, want int) {
resultMap := make(map[string]interface{})
err := json.Unmarshal([]byte(result), &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

nodes := reflect.ValueOf(resultMap["nodes"])
got := nodes.Len()
23 changes: 8 additions & 15 deletions go/test/endtoend/tabletmanager/custom_rule_topo_test.go
@@ -22,6 +22,8 @@ import (
"testing"
"time"

"github.com/stretchr/testify/require"

"github.com/stretchr/testify/assert"
"vitess.io/vitess/go/mysql"
"vitess.io/vitess/go/test/endtoend/cluster"
@@ -31,14 +33,10 @@ func TestTopoCustomRule(t *testing.T) {

ctx := context.Background()
masterConn, err := mysql.Connect(ctx, &masterTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer masterConn.Close()
replicaConn, err := mysql.Connect(ctx, &replicaTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer replicaConn.Close()

// Insert data for sanity checks
@@ -51,9 +49,7 @@
topoCustomRulePath := "/keyspaces/ks/configs/CustomRules"
data := []byte("[]\n")
err = ioutil.WriteFile(topoCustomRuleFile, data, 0777)
if err != nil {
t.Error("Write to file failed")
}
require.NoError(t, err)

// Copy config file into topo.
err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "-to_topo", topoCustomRuleFile, topoCustomRulePath)
@@ -86,9 +82,8 @@ func TestTopoCustomRule(t *testing.T) {
result, err := vtctlExec("select id, value from t1", rTablet.Alias)
resultMap := make(map[string]interface{})
err = json.Unmarshal([]byte(result), &resultMap)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

rowsAffected := resultMap["rows_affected"]
want := float64(2)
assert.Equal(t, want, rowsAffected)
@@ -101,9 +96,7 @@
"Query" : "(select)|(SELECT)"
}]`)
err = ioutil.WriteFile(topoCustomRuleFile, data, 0777)
if err != nil {
t.Error("Write to file failed")
}
require.NoError(t, err)

err = clusterInstance.VtctlclientProcess.ExecuteCommand("TopoCp", "-to_topo", topoCustomRuleFile, topoCustomRulePath)
assert.Nil(t, err, "error should be Nil")
70 changes: 19 additions & 51 deletions go/test/endtoend/tabletmanager/lock_unlock_test.go
@@ -23,6 +23,8 @@ import (
"testing"
"time"

"github.com/stretchr/testify/require"

"github.com/stretchr/testify/assert"
"vitess.io/vitess/go/mysql"
)
@@ -32,15 +34,11 @@ func TestLockAndUnlock(t *testing.T) {
ctx := context.Background()

masterConn, err := mysql.Connect(ctx, &masterTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer masterConn.Close()

replicaConn, err := mysql.Connect(ctx, &replicaTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer replicaConn.Close()

// first make sure that our writes to the master make it to the replica
@@ -50,18 +48,14 @@

// now lock the replica
err = tmcLockTables(ctx, replicaTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
// make sure that writing to the master does not show up on the replica while locked
exec(t, masterConn, "insert into t1(id, value) values(3,'c')")
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")]]`)

// finally, make sure that unlocking the replica leads to the previous write showing up
err = tmcUnlockTables(ctx, replicaTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")] [VARCHAR("c")]]`)

// Unlocking when we do not have a valid lock should lead to an exception being raised
@@ -80,70 +74,50 @@ func TestStartSlaveUntilAfter(t *testing.T) {
ctx := context.Background()

masterConn, err := mysql.Connect(ctx, &masterTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer masterConn.Close()

replicaConn, err := mysql.Connect(ctx, &replicaTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer replicaConn.Close()

// first we stop replication to the replica, so we can move forward step by step.
err = tmcStopSlave(ctx, replicaTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

exec(t, masterConn, "insert into t1(id, value) values(1,'a')")
pos1, err := tmcMasterPosition(ctx, masterTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

exec(t, masterConn, "insert into t1(id, value) values(2,'b')")
pos2, err := tmcMasterPosition(ctx, masterTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

exec(t, masterConn, "insert into t1(id, value) values(3,'c')")
pos3, err := tmcMasterPosition(ctx, masterTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

// Now, we'll resume stepwise position by position and make sure that we see the expected data
checkDataOnReplica(t, replicaConn, `[]`)

// starts the mysql replication until
timeout := 10 * time.Second
err = tmcStartSlaveUntilAfter(ctx, replicaTablet.GrpcPort, pos1, timeout)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
// first row should be visible
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")]]`)

err = tmcStartSlaveUntilAfter(ctx, replicaTablet.GrpcPort, pos2, timeout)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")]]`)

err = tmcStartSlaveUntilAfter(ctx, replicaTablet.GrpcPort, pos3, timeout)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
checkDataOnReplica(t, replicaConn, `[[VARCHAR("a")] [VARCHAR("b")] [VARCHAR("c")]]`)

// Start replication to the replica
err = tmcStartSlave(ctx, replicaTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
// Clean the table for further testing
exec(t, masterConn, "delete from t1")
}
@@ -153,15 +127,11 @@ func TestLockAndTimeout(t *testing.T) {
ctx := context.Background()

masterConn, err := mysql.Connect(ctx, &masterTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer masterConn.Close()

replicaConn, err := mysql.Connect(ctx, &replicaTabletParams)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)
defer replicaConn.Close()

// first make sure that our writes to the master make it to the replica
@@ -170,9 +140,7 @@

// now lock the replica
err = tmcLockTables(ctx, replicaTablet.GrpcPort)
if err != nil {
t.Fatal(err)
}
require.NoError(t, err)

// make sure that writing to the master does not show up on the replica while locked
exec(t, masterConn, "insert into t1(id, value) values(2,'b')")
