diff --git a/.github/workflows/cd-website_oss.yaml b/.github/workflows/cd-website_oss.yaml deleted file mode 100644 index 00bcf95fd69..00000000000 --- a/.github/workflows/cd-website_oss.yaml +++ /dev/null @@ -1,41 +0,0 @@ -name: CD - Documentation Website -on: - push: - branches: - - master - paths: - - 'doc/**' - - '.github/workflows/cd-website_oss.yaml' -jobs: - docs: - name: deploy atlasgo.io - runs-on: ubuntu-latest - defaults: - run: - working-directory: ./doc/website - steps: - - uses: actions/checkout@v4 - - uses: actions/setup-node@v4 - with: - node-version: 18 - - name: Cache dependencies - uses: actions/cache@v4 - with: - path: ~/.npm - key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} - restore-keys: | - ${{ runner.os }}-node- - - name: Install Dependencies - run: npm install - - name: Build website - run: npm run build - - name: Configure AWS credentials - uses: aws-actions/configure-aws-credentials@v4 - with: - aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} - aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} - aws-region: us-east-1 - - name: Deploy Website - run: npm run upload - - name: Invalidate Cache - run: npm run invalidate-cdn diff --git a/.github/workflows/ci-website_oss.yaml b/.github/workflows/ci-website_oss.yaml deleted file mode 100644 index 69328f2f377..00000000000 --- a/.github/workflows/ci-website_oss.yaml +++ /dev/null @@ -1,28 +0,0 @@ -name: CI - Documentation Website -on: - push: - branches-ignore: - - master -jobs: - docsite_ci: - name: verify build doesn't crash - runs-on: ubuntu-latest - defaults: - run: - working-directory: ./doc/website - steps: - - uses: actions/checkout@v4 - - uses: actions/setup-node@v4 - with: - node-version: 18 - - name: Cache dependencies - uses: actions/cache@v4 - with: - path: ~/.npm - key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }} - restore-keys: | - ${{ runner.os }}-node- - - name: Install Dependencies - run: npm install - - name: Build website - run: npm run build diff --git a/doc/md/about.md b/doc/md/about.md deleted file mode 100644 index f4d666c0c4c..00000000000 --- a/doc/md/about.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: About -id: about -slug: /about ---- -# About Atlas - - - -Atlas is an open-source project maintained by [Ariga Technologies Ltd](https://ariga.io), -a tech company founded by the maintainers of [Ent](https://entgo.io). - -### Reaching Out - -If you have a question or want to chat with the Atlas community, -join our [Discord Server](https://discord.com/invite/QhsmBAWzrC) or -[open an issue](https://github.com/ariga/atlas/issues) on our [GitHub Repo](https://github.com/ariga/atlas). \ No newline at end of file diff --git a/doc/md/atlas-schema/external.mdx b/doc/md/atlas-schema/external.mdx deleted file mode 100644 index ffa973250d1..00000000000 --- a/doc/md/atlas-schema/external.mdx +++ /dev/null @@ -1,155 +0,0 @@ ---- -id: external-schema -title: External Integrations -slug: /atlas-schema/external ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -Atlas allows loading the desired state of the database schema from external programs or ORMs, regardless of the -programing language they are written in. Once the schema is loaded, it can be used by the various Atlas commands -such as `atlas schema` and `atlas migrate`. 
- -## Loading an External Schema - -In order to load an external schema, you need first to create an `atlas.hcl` [config file](projects.mdx), if you don't -already have one and declare a new data source of type [`external_schema`](projects.mdx#data-source-external_schema) that -can be used later as the desired state. Let's explain this with an example. - -Given the following `atlas.hcl` file: - - - - -```hcl title="atlas.hcl" -data "external_schema" "orm" { - # The first argument is the command to run, - # and the rest are optional arguments. - program = [ - "npm", - "run", - "generate-schema" - ] -} - -env "orm" { - src = data.external_schema.orm.url - dev = "docker://mysql/8/dev" -} -``` - - - - -```hcl title="atlas.hcl" -data "external_schema" "orm" { - # The first argument is the command to run, - # and the rest are optional arguments. - program = [ - "npm", - "run", - "generate-schema" - ] -} - -env "orm" { - src = data.external_schema.orm.url - dev = "docker:/maria/latest/dev" -} -``` - - - - -```hcl title="atlas.hcl" -data "external_schema" "orm" { - # The first argument is the command to run, - # and the rest are optional arguments. - program = [ - "npm", - "run", - "generate-schema" - ] -} - -env "orm" { - src = data.external_schema.orm.url - dev = "docker://postgres/15/dev?search_path=public" -} -``` - - - - -```hcl title="atlas.hcl" -data "external_schema" "orm" { - # The first argument is the command to run, - # and the rest are optional arguments. - program = [ - "npm", - "run", - "generate-schema" - ] -} - -env "orm" { - src = data.external_schema.orm.url - dev = "sqlite://dev?mode=memory" -} -``` - - - - - -Let's explain what is happening when running `atlas` with the `--env orm` command: - -1. The `external_schema.orm` data source is loaded, by running the command `npm run generate-schema` and - capturing its output as the desired state of the schema. -2. The program output should be defined as a list of SQL DDL statements separated by semicolon (`;`) or a - [custom delimiter](../versioned/new.mdx#custom-statements-delimiter). More info about the format can be found in - the [SQL schema page](sql.mdx#schema-definition). For example: - ```sql {1,3} - CREATE TABLE users (id int PRIMARY KEY, name text NOT NULL); - - CREATE TABLE posts (id int PRIMARY KEY, content text NOT NULL, author_id int NOT NULL REFERENCES users(id)); - ``` -3. After the schema is loaded, Atlas utilizes the [dev-database](sql.mdx#dev-database) to parse and validate the - SQL definition and converts them into its internal graph representation. -4. The loaded schema can be used by the various Atlas commands. For example: - ```bash - # Generating a new migration. - atlas migrate diff --env orm - # Applying the schema to the database. - atlas schema apply --env orm - ``` - -## Supported ORMs - -Atlas supports loading the desired schema from popular ORMs in various languages. Developers who connect their ORM to -Atlas can use it to automatically plan schema migrations based on the desired state defined in the ORM, rather than -crafting them by hand. 
The supported ORMs are: - -| **Language** | **ORMs** | **Supported Databases** | -|-----------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Python | [SQLAlchemy](/guides/orms/sqlalchemy), [Django](/guides/orms/django) | MySQL MariaDB PostgreSQL SQLite SQL Server | -| Go | [GORM](/guides/orms/gorm) | MySQL MariaDB PostgreSQL SQLite SQL Server | -| Go | [Ent](https://entgo.io/docs/versioned-migrations), [Beego](/guides/orms/beego) | MySQL MariaDB PostgreSQL SQLite | -| Java | [Hibernate](/guides/orms/hibernate) | MySQL MariaDB PostgreSQL SQLite | -| JavaScript TypeScript | [Sequelize](/guides/orms/sequelize), [TypeORM](/guides/orms/typeorm) | MySQL MariaDB PostgreSQL SQLite SQL Server | -| PHP | [Doctrine](/guides/orms/doctrine) | MySQL MariaDB PostgreSQL SQLite SQL Server | -| C# | [Entity Framework Core](/guides/orms/efcore) | MySQL MariaDB PostgreSQL SQLite SQL Server | - -Coming soon are Laravel and Prisma. If you are using an ORM that is not listed here and would like to see -it supported, [let us know!](https://github.com/ariga/atlas/issues/new) - -## Write an external loader - -Most ORMs offer a way to generate a series of DDL statements from model definitions. For example, Java Hibernate enables -"schema exporting" using the `hbm2ddl` option, and Microsoft EF supplies a helper method called `GenerateCreateScript` -that lets users craft a small script to produce DDLs from their EF models. In a similar way, TypeORM users can use -the `createSchemaBuilder().log()` API, and so on. - -A fully working implementation can be found in the [`atlas-provider-gorm`](https://github.com/ariga/atlas-provider-gorm) -repository, which is an external loader for the [GORM](https://gorm.io/) ORM. diff --git a/doc/md/atlas-schema/hcl-input.md b/doc/md/atlas-schema/hcl-input.md deleted file mode 100644 index ec7ac10952e..00000000000 --- a/doc/md/atlas-schema/hcl-input.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -id: hcl-variables -slug: /atlas-schema/input-variables -title: HCL Input Variables ---- -In some cases, it is desirable to reuse an Atlas HCL document in different contexts. -For example, many organizations manage a multi-tenant architecture where the same database -schema is replicated per tenant. For this reason, the Atlas DDL supports input variables. - -Input variables are defined using the `variable` block: - -```hcl -variable "comment" { - type = string // | int | bool | list(string) | etc. - default = "default value" -} -``` - -Once defined, their value can be referenced using `var.`: - -```hcl -schema "main" { - comment = var.comment -} -``` - -Finally, input variables are passed to Atlas in the `schema apply` command using the -`--var` flag: - -```shell -atlas schema apply -u ... -f atlas.hcl --var comment="hello" -``` - -If a variable is not set from the command line, Atlas tries to use its default value. 
-If no default value is set, an error is returned: - -```text -schemahcl: failed decoding: input value "tenant" expected but missing -``` - -### Variable schema names - -Returning to the use case we described above, let's see how we can use input variables -to manage a multi-tenant architecture. - -First, we define our schema in a file named `multi.hcl`: - -```hcl title="multi.hcl" -// Define the input variable that contains the tenant name. -variable "tenant" { - type = string - description = "The name of the tenant (schema) to create" -} - -// Define the schema, "tenant" here is a placeholder for the final -// schema name that will be defined at runtime. -schema "tenant" { - // Reference to the input variable. - name = var.tenant -} -table "users" { - // Refer to the "tenant" schema. It's actual name will be - // defined at runtime. - schema = schema.tenant - column "id" { - type = int - } -} -``` - -Now suppose we have two tenants, `jerry` and `george`. We can apply the same schema twice: - -Once for Jerry: -```text -atlas schema apply -u mysql://user:pass@localhost:3306/ --schema jerry --var tenant=jerry -``` -Observe the generated queries apply to the `jerry` schema: -```text --- Planned Changes: --- Add new schema named "jerry" -CREATE DATABASE `jerry` --- Create "users" table -CREATE TABLE `jerry`.`users` (`id` int NOT NULL) -✔ Apply -``` -And again for George: -```text -atlas schema apply -u mysql://user:pass@localhost:3306/ --schema george --var tenant=george -``` -The generated queries create the `george` schema: -```text --- Planned Changes: --- Add new schema named "george" -CREATE DATABASE `george` --- Create "users" table -CREATE TABLE `george`.`users` (`id` int NOT NULL) -✔ Apply -``` diff --git a/doc/md/atlas-schema/hcl-types.md b/doc/md/atlas-schema/hcl-types.md deleted file mode 100644 index 9f4f62e3fff..00000000000 --- a/doc/md/atlas-schema/hcl-types.md +++ /dev/null @@ -1,1874 +0,0 @@ ---- -id: hcl-types -title: HCL Column Types -slug: /atlas-schema/hcl-types ---- - -The following guide describes the column types supported by Atlas HCL, and how to use them. - -## MySQL - -### Bit - -The `bit` type allows creating [BIT](https://dev.mysql.com/doc/refman/8.0/en/bit-type.html) columns. -An optional size attribute allows controlling the number of bits stored in a column, ranging from 1 to 64. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = bit - } - column "c2" { - type = bit(4) - } -} -``` - -### Binary - -The `varbinary` and `binary` types allow storing binary byte strings. - -```hcl -table "t" { - schema = schema.test - column "c1" { - // Equals to binary(1). - type = binary - } - column "c2" { - type = binary(10) - } - column "c3" { - type = varbinary(255) - } -} -``` - -### Blob - -The `tinyblob`, `mediumblob`, `blob` and `longblob` types allow storing binary large objects. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = tinyblob - } - column "c2" { - type = mediumblob - } - column "c3" { - type = blob - } - column "c4" { - type = longblob - } -} -``` - -### Boolean - -The `bool` and `boolean` types are mapped to `tinyint(1)` in MySQL. Still, Atlas allows maintaining columns of type `bool` -in the schema for simplicity reasons. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = bool - } - column "c2" { - type = boolean - } -} -``` - -Learn more about the motivation for these types in the -[MySQL website](https://dev.mysql.com/doc/refman/8.0/en/other-vendor-data-types.html). 
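-
-Since `bool` is only an alias, the sketch below (the table name `t2` is illustrative) shows the same
-column declared with the underlying MySQL type directly; per the mapping described above, both forms
-produce an identical `tinyint(1)` column:
-
-```hcl
-table "t2" {
-  schema = schema.test
-  column "c1" {
-    // Same column as `type = bool` above; stored by MySQL as tinyint(1).
-    type = tinyint(1)
-  }
-}
-```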
- -### Date and Time - -Atlas supports the standard MySQL types for storing date and time values: `time`, `timestamp`, `date`, `datetime`, -and `year`. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = time - } - column "c2" { - type = timestamp - } - column "c3" { - type = date - } - column "c4" { - type = datetime - } - column "c5" { - type = year - } - column "c6" { - type = time(1) - } - column "c7" { - type = timestamp(2) - } - column "c8" { - type = datetime(4) - } -} -``` - -### Fixed Point (Decimal) - -The `decimal` and `numeric` types are supported for storing exact numeric values. Note that in MySQL the two types -are identical. - -```hcl -table "t" { - schema = schema.test - column "c1" { - // Equals to decimal(10) as the - // default precision is 10. - type = decimal - } - column "c2" { - // Equals to decimal(5,0). - type = decimal(5) - } - column "c3" { - type = decimal(5,2) - } - column "c4" { - type = numeric - unsigned = true - } -} -``` - -### Floating Point (Float) - -The `float` and `double` types are supported for storing approximate numeric values. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = float - } - column "c2" { - type = double - } - column "c3" { - type = float - unsigned = true - } - column "c4" { - type = double - unsigned = true - } -} -``` - -### Enum - -The `enum` type allows storing a set of enumerated values. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = enum("a", "b") - } - column "c2" { - type = enum( - "c", - "d", - ) - } -} -``` - -### Integer - -The `tinyint`, `smallint`, `int`, `mediumint`, `bigint` integer types are support by Atlas. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = int - } - column "c2" { - type = tinyint - } - column "c3" { - type = smallint - } - column "c4" { - type = mediumint - } - column "c5" { - type = bigint - } -} -``` - -#### Integer Attributes - -The [`auto_increment`](https://dev.mysql.com/doc/refman/8.0/en/numeric-type-attributes.html), and `unsigned` attributes -are also supported by integer types. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = tinyint - unsigned = true - } - column "c2" { - type = smallint - auto_increment = true - } - primary_key { - columns = [column.c2] - } -} -``` - -### JSON - -The `json` type allows storing JSON objects. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = json - } -} -``` - -### Set - -The `set` type allows storing a set of values. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = set("a", "b") - } - column "c2" { - type = set( - "c", - "d", - ) - } -} -``` - -### String - -Atlas supports the standard MySQL types for storing string values. `varchar`, `char`, `tinytext`, `mediumtext`, `text` -and `longtext`. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = varchar(255) - } - column "c2" { - type = char(1) - } - column "c3" { - type = tinytext - } - column "c4" { - type = mediumtext - } - column "c5" { - type = text - } - column "c6" { - type = longtext - } -} -``` - -### Spatial - -The `geometry`, `point`, `multipoint`, `linestring` and the rest of the -[MySQL spatial types](https://dev.mysql.com/doc/refman/8.0/en/spatial-type-overview.html) are supported by Atlas. 
- -```hcl -table "t" { - schema = schema.test - column "c1" { - type = geometry - } - column "c2" { - type = point - } - column "c3" { - type = multipoint - } - column "c4" { - type = linestring - } -} -``` - -## PostgreSQL - -### Array - -Atlas supports defining PostgreSQL array types using the `sql` function. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = sql("int[]") - } - column "c2" { - type = sql("text[]") - } - column "c3" { - type = sql("int ARRAY") - } - column "c4" { - type = sql("varchar(255)[]") - } - column "c5" { - // The current PostgreSQL implementation - // ignores any supplied array size limits. - type = sql("point[4][4]") - } -} -``` - -### Bit - -The `bit` and `bit varying` types allow creating -[bit string](https://www.postgresql.org/docs/current/datatype-bit.html) columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - // Equals to bit(1). - type = bit - } - column "c2" { - type = bit(2) - } - column "c3" { - // Unlimited length. - type = bit_varying - } - column "c4" { - type = bit_varying(1) - } -} - -``` - -### Boolean - -The `boolean` type allows creating standard SQL boolean columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = boolean - } - column "c2" { - type = boolean - default = true - } -} -``` - -### Binary - -The `bytea` type allows creating binary string columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = bytea - } -} -``` - -### Date, Time and Interval - -Atlas supports the standard PostgreSQL types for creating date, time and interval columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = date - } - column "c2" { - // Equals to "time without time zone". - type = time - } - column "c3" { - // Equals to "time with time zone". - type = timetz - } - column "c4" { - // Equals "timestamp without time zone". - type = timestamp - } - column "c5" { - // Equals "timestamp with time zone". - type = timestamptz - } - column "c6" { - type = timestamp(4) - } - column "c7" { - type = interval - } -} -``` - -### Domain - -The `domain` type is a user-defined data type that is based on an existing data type but with optional constraints -and default values. Learn more about it in the [PostgreSQL website](https://www.postgresql.org/docs/current/domains.html). - -```hcl -domain "us_postal_code" { - schema = schema.public - type = text - null = true - check "us_postal_code_check" { - expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))" - } -} - -domain "username" { - schema = schema.public - type = text - null = false - default = "anonymous" - check "username_length" { - expr = "(length(VALUE) > 3)" - } -} - -table "users" { - schema = schema.public - column "name" { - type = domain.username - } - column "zip" { - type = domain.us_postal_code - } -} -``` - -### Enum - -The `enum` type allows storing a set of enumerated values. Learn more about it in the [PostgreSQL website](https://www.postgresql.org/docs/current/datatype-enum.html). - -```hcl -enum "status" { - schema = schema.test - values = ["on", "off"] -} - -table "t1" { - schema = schema.test - column "c1" { - type = enum.status - } -} - -table "t2" { - schema = schema.test - column "c1" { - type = enum.status - } -} -``` - -### Fixed Point (Decimal) - -The `decimal` and `numeric` types are supported for storing exact numeric values. Note that in PostgreSQL the two types -are identical. - -```hcl -table "t" { - schema = schema.test - column "c1" { - // Equals to decimal. 
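-    // With no precision or scale, values of any size are accepted (an unconstrained numeric).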
- type = numeric - } - column "c2" { - // Equals to decimal(5). - type = numeric(5) - } - column "c3" { - // Equals to decimal(5,2). - type = numeric(5,2) - } -} -``` - - -### Floating Point (Float) - -The `real` and `double_precision` types are supported for storing -[approximate numeric values](https://www.postgresql.org/docs/current/datatype-numeric.html#DATATYPE-FLOAT). - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = real - } - column "c2" { - type = double_precision - } - column "c3" { - // Equals to real when precision is between 1 to 24. - type = float(10) - } - column "c2" { - // Equals to double_precision when precision is between 1 to 24. - type = float(30) - } -} -``` - -### Geometric - -Atlas supports the standard PostgreSQL types for creating geometric columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = circle - } - column "c2" { - type = line - } - column "c3" { - type = lseg - } - column "c4" { - type = box - } - column "c5" { - type = path - } - column "c6" { - type = polygon - } - column "c7" { - type = point - } -} -``` - -### Integer - -The `smallint`, `integer` / `int`, `bigint` types allow creating integer types. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = smallint - } - column "c2" { - type = integer - } - column "c3" { - type = int - } - column "c4" { - type = bigint - default = 1 - } -} -``` - -### JSON - -The `json` and `jsonb` types allow creating columns for storing JSON objects. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = json - } - column "c2" { - type = jsonb - } -} -``` - -### Money - -The `money` data type allows creating columns for storing currency amount with a fixed fractional precision. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = money - } -} -``` - -### Network Address - -The `inet`, `cidr`, `macaddr` and `macaddr8` types allow creating network address columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = inet - } - column "c2" { - type = cidr - } - column "c3" { - type = macaddr - } - column "c4" { - type = macaddr8 - } -} -``` - -### Range - -PostgreSQL supports the creation of range types for storing range of values of some element type. -Learn more about them in the [PostgreSQL website](https://www.postgresql.org/docs/current/rangetypes.html). - - -```hcl -table "t" { - schema = schema.test - column "r1" { - type = int4range - } - column "r2" { - type = int8range - } - column "r3" { - type = numrange - } - column "r4" { - type = tsrange - } - column "r5" { - type = tstzrange - } - column "r6" { - type = daterange - } - column "r7" { - type = int4multirange - } - column "r8" { - type = int8multirange - } - column "r9" { - type = nummultirange - } - column "r10" { - type = tsmultirange - } - column "r11" { - type = tstzmultirange - } - column "r12" { - type = datemultirange - } -} -``` - -### Serial - -PostgreSQL supports creating columns of types `smallserial`, `serial`, and `bigserial`. Note that these types are not -_actual_ types, but more like "macros" for creating non-nullable integer columns with sequences attached. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = smallserial - } - column "c2" { - type = serial - } - column "c3" { - type = bigserial - } -} -``` - -### String - -The `varchar`, `char`, `character_varying`, `character` and `text` types allow creating string columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - // Unlimited length. 
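-    // Functionally equivalent to the text type; a length limit can be added as shown below.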
- type = varchar - } - column "c2" { - // Alias to character_varying(255). - type = varchar(255) - } - column "c3" { - // Equals to char(1). - type = char - } - column "c4" { - // Alias to character(5). - type = char(5) - } - column "c5" { - type = text - } -} -``` - -### Text Search - -The `tsvector` and `tsquery` data types are designed to store and query full text search. Learn more about them in the -[PostgreSQL website](https://www.postgresql.org/docs/current/datatype-textsearch.html). - -```hcl -table "t" { - schema = schema.test - column "tsv" { - type = tsvector - } - column "tsq" { - type = tsquery - } -} -``` - -### UUID - -The `uuid` data type allows creating columns for storing Universally Unique Identifiers (UUID). - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = uuid - } - column "c2" { - type = uuid - default = sql("gen_random_uuid()") - } -} -``` - -### XML - -The `xml` data type allows creating columns for storing XML data. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = xml - } -} -``` - -## SQLite - -Values in SQLite are stored in one of the four native types: `BLOB`, `INTEGER`, `NULL`, `TEXT` and `REAL`. Still, Atlas -supports variety of data types that are commonly used by ORMs. These types are mapped to column affinities based on -the rules described in [SQLite website](https://www.sqlite.org/datatype3.html#type_affinity). - -### Blob - -The `blob` data type allows creating columns with `BLOB` type affinity. - -```hcl -table "t" { - schema = schema.main - column "c" { - type = blob - } -} -``` - -### Integer - -The `int` and `integer` data types allow creating columns with `INTEGER` type affinity. - -```hcl -table "t" { - schema = schema.main - column "c" { - type = int - } -} -``` - -### Numeric - -The `numeric` and `decimal` data types allow creating columns with `NUMERIC` type affinity. - -```hcl -table "t" { - schema = schema.main - column "c" { - type = decimal - } -} -``` - -### Text - -The `text`, `varchar`, `clob`, `character` and `varying_character` data types allow creating columns with `text` type -affinity. i.e. stored as text strings. - -```hcl -table "t" { - schema = schema.main - column "c" { - type = text - } -} -``` - -### Real - -The `real`, `double`, `double_precision`, and `float` data types allow creating columns with `real` type -affinity. - -```hcl -table "t" { - schema = schema.main - column "c" { - type = real - } -} -``` - -### Additional Types - -As mentioned above, Atlas supports variety of data types that are commonly used by ORMs. e.g. [Ent](https://entgo.io). - -```hcl -table "t" { - schema = schema.main - column "c1" { - type = bool - } - column "c2" { - type = date - } - column "c3" { - type = datetime - } - column "c4" { - type = uuid - } - column "c5" { - type = json - } -} -``` - -## SQL Server - -### Bit - -The `bit` type allows creating [BIT](https://learn.microsoft.com/en-us/sql/t-sql/data-types/bit-transact-sql) columns. - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = bit - } -} -``` - -### Binary strings - -The `varbinary` and `binary` types allow storing binary byte strings. - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - // Equals to binary(1). - type = binary - } - column "c2" { - type = binary(10) - } - column "c3" { - type = varbinary(255) - } - column "c4" { - // Max length: 8,000 bytes. 
- type = varbinary(MAX) - } -} -``` - -### Date and Time - -Atlas supports the standard SQL Server types for storing date and time values: `date`, `datetime`, `datetime2`, `datetimeoffset`, `smalldatetime` and `time`. -The document on Microsoft website has more information on [date and time types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-types-transact-sql#date-and-time). - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = date - } - column "c2" { - type = datetime - } - column "c3" { - type = datetime2 - } - column "c4" { - type = datetimeoffset - } - column "c5" { - type = smalldatetime - } - column "c6" { - // Equals to time(7). - type = time - } - column "c7" { - type = time(1) - } - column "c8" { - type = time(2) - } - column "c9" { - type = time(3) - } - column "c10" { - type = time(4) - } - column "c11" { - type = time(5) - } - column "c12" { - type = time(6) - } -} -``` - -### Integer - -The `int`, `bigint`, `smallint`, and `tinyint` integer types are support by Atlas. -See document on Microsoft website for more information on [integer types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql). - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = int - } - column "c2" { - type = tinyint - } - column "c3" { - type = smallint - } - column "c4" { - type = bigint - } -} -``` - -#### Integer Blocks - -The [`identity`](https://learn.microsoft.com/en-us/sql/t-sql/functions/identity-function-transact-sql) block can be used to create an identity column. - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = tinyint - } - column "c2" { - type = bigint - identity { - seed = 701 - increment = 1000 - } - } - primary_key { - columns = [column.c2] - } -} -``` - -### Fixed Point (Decimal) - -The [`decimal` and `numeric`](https://learn.microsoft.com/en-us/sql/t-sql/data-types/decimal-and-numeric-transact-sql) types are supported for storing exact numeric values. Note that in SQL Server the two types are identical. - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - // Equals to decimal(18, 0) as the - // default precision is 18. - type = decimal - } - column "c2" { - // Equals to decimal(5,0). - type = decimal(5) - } - column "c3" { - type = decimal(5,2) - } - column "c4" { - type = numeric - } -} -``` - -### Floating Point (Float) - -The `float` and `real` types are supported for storing approximate numeric values. -The document on Microsoft website has more information on [float types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/float-and-real-transact-sql). - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - // Equals to float(53). - type = float - } - column "c2" { - // float(n) is between 1 and 53. - type = float(12) - } - column "c3" { - // The ISO synonym for real is `float(24)`. - type = real - } -} -``` - -### Money - -The [`money` and `smallmoney`](https://learn.microsoft.com/en-us/sql/t-sql/data-types/money-and-smallmoney-transact-sql) data types allows creating columns for storing currency amount with a fixed fractional precision. - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = money - } - column "c2" { - type = smallmoney - } -} -``` - -### Character strings - -The `char`, and `varchar` types allow creating string columns. The document on Microsoft website has more information on [string types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/char-and-varchar-transact-sql). 
- -```hcl -table "t" { - schema = schema.dbo - column "c1" { - // Equals to varchar(1). - type = varchar - } - column "c2" { - type = varchar(255) - } - column "c3" { - type = varchar(MAX) - } - column "c4" { - // Equals to char(1). - type = char - } - column "c5" { - type = char(5) - } -} -``` - -### Unicode character strings - -The `nchar`, and `nvarchar` types allow creating string columns. The document on Microsoft website has more information on [unicode string types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/nchar-and-nvarchar-transact-sql). - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - // Equals to nvarchar(1). - type = nvarchar - } - column "c2" { - type = nvarchar(255) - } - column "c3" { - type = nvarchar(MAX) - } - column "c4" { - // Equals to nchar(1). - type = nchar - } - column "c5" { - type = nchar(5) - } -} -``` - -### `ntext`, `text` and `image` - -Atlas supports some deprecated types for backward compatibility. The document on Microsoft website has more information on [ntext, text and image types](https://learn.microsoft.com/en-us/sql/t-sql/data-types/ntext-text-and-image-transact-sql). - -```hcl -table "t" { - schema = schema.dbo - column "c1" { - type = ntext - } - column "c2" { - type = text - } - column "c3" { - type = image - } -} -``` - -### User-defined types - -There are two types of [user-defined types](https://learn.microsoft.com/en-us/sql/t-sql/statements/create-type-transact-sql) are supported by Atlas: Alias Types and Table Types. - -The CLR user-defined types are not supported by Atlas. - -#### Alias Types - -The `type_alias` type allows creating columns with user-defined types. - -```hcl -type_alias "ssn" { - schema = schema.dbo - type = varchar(11) - null = false -} -type_alias "age" { - schema = schema.dbo - type = smallint - null = false -} -table "t" { - schema = schema.dbo - column "ssn" { - type = type_alias.ssn - } - column "age" { - type = type_alias.age - } -} -``` - -#### Table Types - -The `type_table` type allows the creation of columns with user-defined table types. The User-Defined table type only allows to use of functions/procedures arguments and not on table columns. - -```hcl -type_table "address" { - schema = schema.dbo - column "ssn" { - type = type_alias.ssn - } - column "street" { - type = varchar(255) - } - column "city" { - type = varchar(255) - } - column "state" { - type = varchar(2) - } - column "zip" { - type = type_alias.zip - } - index { - unique = true - columns = [column.ssn] - } - check "zip_check" { - expr = "len(zip) = 5" - } -} -function "fn1" { - schema = schema.dbo - lang = SQL - arg "@a1" { - type = type_table.address - readonly = true // The table type is readonly argument. - } - arg "@zip" { - type = type_alias.zip - } - return = int - as = <<-SQL - BEGIN - RETURN (SELECT COUNT(1) FROM @a1 WHERE zip = @zip); - END - SQL -} -type_alias "ssn" { - schema = schema.foo - type = varchar(11) - null = false -} -type_alias "zip" { - schema = schema.foo - type = varchar(5) - null = false -} -``` - -:::note -SQL Server doesn't support creating a named unique constraint on a user-defined table type. Atlas was unable to handle duplicate unique constraints (the unique constraints on the same columns) on table types. The below example will cause schema diff for every time it applies schema. 
- -```sql -CREATE TYPE [typ1] AS TABLE ( - [c1] int NOT NULL UNIQUE ([c1] DESC), - UNIQUE ([c1] ASC) -); -``` -::: - -## ClickHouse - -### Array -Atlas supports defining ClickHouse array types using the `sql` function. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - type = sql("Array(Int32)") - } - column "c2" { - type = sql("Array(String)") - } - column "c3" { - type = sql("Array(Array(Int32))") - } -} -``` -### Boolean -The `Bool` type allows creating standard SQL boolean columns. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - type = Bool - } - column "c2" { - type = Bool - default = true - } -} -``` - -### Date and Time -Atlas supports the standard ClickHouse types for creating date and time columns: `Date`, `DateTime`, `DateTime32` `DateTime64`. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Date - } - column "c2" { - null = false - type = Date32 - } - column "c3" { - null = false - type = DateTime - } - column "c4" { - null = false - type = DateTime("America/New_York") - } - column "c5" { - null = false - type = DateTime - } - column "c6" { - null = false - type = DateTime32("America/New_York") - } - column "c7" { - null = false - type = DateTime64(3) - } - column "c8" { - null = false - type = DateTime64(3, "America/New_York") - } -} -``` - -### Fixed Point (Decimal) -The `Decimal` type allows creating columns for storing exact numeric values. -The precision and scale are specified as below. -- `Decimal` Precision: 9, Scale: 0 -- `Decimal32(Scale)` Precision: 9, Scale: Scale -- `Decimal64(Scale)` Precision: 18, Scale: Scale -- `Decimal128(Scale)` Precision: 38, Scale: Scale -- `Decimal256(Scale)` Precision: 76, Scale: Scale -- `Decimal(Precision, Scale)` Precision: Precision, Scale: Scale - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Decimal - } - column "c2" { - null = false - type = Decimal32(2) - } - column "c3" { - null = false - type = Decimal64(2) - } - column "c4" { - null = false - type = Decimal128(2) - } - column "c5" { - null = false - type = Decimal256(2) - } - column "c6" { - null = false - type = Decimal(11, 2) - } -} -``` - -### Enum -The `Enum` type allows storing a set of enumerated values and supports defining ClickHouse enum types using the `sql` function. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Enum("a", "b") - } - column "c2" { - null = false - type = Enum8("a", "b") - } - column "c3" { - null = false - type = Enum16("a", "b") - } -} -``` - -### Fixed String -The `FixedString` type allows creating columns for storing fixed-length string values. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = FixedString(10) - } -} -``` - -### Floating Point (Float) -The `Float32` and `Float64` types are supported for storing approximate numeric values. -The aliases for these types are `Float` and `Double`. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Float - } - column "c2" { - null = false - type = Double - } -} -``` - -### Integer -The `Int8`, `Int16`, `Int32`, `Int64`, `Int128`, `Int256` types allow creating integer types. -The aliases for these types are `Tinyint`, `Smallint`, `Int`, `Bigint`. 
- -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Tinyint - } - column "c2" { - null = false - type = Smallint - } - column "c3" { - null = false - type = Int - } - column "c4" { - null = false - type = Bigint - } - column "c5" { - null = false - type = Int128 - } - column "c6" { - null = false - type = Int256 - } -} -``` - -#### Integer Attributes - -The `Unsigned` attribute is also supported by integer types. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Int - unsigned = true - } -} -``` - -### IPv4 and IPv6 -The `IPv4` and `IPv6` types allow creating columns for storing IPv4 and IPv6 addresses. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = IPv4 - } - column "c2" { - null = false - type = IPv6 - } -} -``` - -### Spatial -Atlas supports the standard ClickHouse types for creating spatial columns. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Point - } - column "c2" { - null = false - type = Polygon - } - column "c3" { - null = false - type = MultiPolygon - } -} -``` - -### Ring -The `Ring` type allows creating columns for storing ring values. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = Ring - } -} -``` - -### String -The `String` type allows creating columns for storing string values. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = String - } -} -``` - -### UUID -The `UUID` type allows creating columns for storing Universally Unique Identifiers (UUID). - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = UUID - } -} -``` - -### Tuple -Atlas supports defining ClickHouse tuple types using the `sql` function. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = sql("Tuple(Int32, String)") - } -} -``` - -### LowCardinality -Atlas supports defining ClickHouse low cardinality types using the `sql` function. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = sql("LowCardinality(String)") - } -} -``` - -### Nullable -Atlas supports defining ClickHouse nullable types using the `sql` function. -`Null` attribute is needed to be set to `true` for nullable types. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = true - type = sql("Nullable(String)") - } -} -``` - -### JSON -The `JSON` type allows creating columns for storing JSON objects. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - type = JSON - } -} -``` - -### AggregateFunction -Atlas supports defining ClickHouse aggregate data types by using the `sql` function. - -```hcl -table "t" { - schema = schema.test - engine = Memory - column "c1" { - null = false - type = sql("AggregateFunction(uniq, UInt64)") - } - column "c2" { - null = false - type = sql("SimpleAggregateFunction(sum, Int32)") - } -} -``` - -:::info -The `AggregateFunction` and `SimpleAggregateFunction` are complex data types. Therefore, we recommend using a [Dev Database](../../md/concepts//dev.mdx) to normalize these types. -::: - -## Redshift - -### Boolean -The `boolean` and `bool` types allow creating standard SQL boolean columns. 
- -```hcl -table "t" { - schema = schema.test - column "c1" { - type = boolean - } - column "c2" { - # Alias to boolean. - type = bool - } -} -``` - -### Binary -The `binary_varying`, `varbinary` and `varbyte` types allow creating binary string columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = binary_varying(255) - } - column "c2" { - # Alias to binary_varying - type = varbinary(255) - } - column "c3" { - # Alias to binary_varying - type = varbyte(255) - } -} -``` - -### Date, Time and Interval -Atlas supports the standard Redshift types for creating date, time and interval columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = date - } - column "c2" { - # Equals to "time without time zone". - type = time - } - column "c3" { - # Equals to "time with time zone". - type = timetz - } - column "c4" { - # Equals "timestamp without time zone". - type = timestamp - } - column "c5" { - # Equals "timestamp with time zone". - type = timestamptz - } - column "c6" { - type = sql("interval year to month") - } -} -``` - -### Fixed Point (Decimal) -The `decimal` and `numeric` types are supported for storing exact numeric values. Note that in Redshift the two types are identical. - -```hcl -table "t" { - schema = schema.test - column "c1" { - # Equals to numeric. - type = decimal - } - column "c2" { - # Equals to numeric(5). - type = decimal(5) - } - column "c3" { - # Equals to numeric(5,2). - type = decimal(5,2) - } -} -``` - -### Floating Point (Float) -The `real` and `double_precision` types are supported for storing approximate numeric values. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = real - } - column "c2" { - type = double_precision - } - column "c3" { - type = float(10) - } - column "c4" { - type = float(30) - } - column "c5" { - # Alias to real. - type = float4 - } - column "c6" { - # Alias to double_precision. - type = float8 - } -} -``` - -### Integer -The `smallint`, `integer` / `int`, `bigint` types allow creating integer types. - -```hcl -table "t" { - schema = schema.test - column "c1" { - type = smallint - } - column "c2" { - type = integer - } - column "c3" { - type = int - } - column "c4" { - type = bigint - } - column "c5" { - # Alias to smallint. - type = int2 - } - column "c6" { - # Alias to integer. - type = int4 - } - column "c7" { - # Alias to bigint. - type = int8 - } -} -``` - -### String -The `varchar`, `nvarchar`, `char`, `nchar`, `bpchar`, `character_varying`, `character` and `text` types allow creating string columns. - -```hcl -table "t" { - schema = schema.test - column "c1" { - # Equals character_varying(256). - type = varchar - } - column "c2" { - # Alias to character_varying(255). - type = varchar(255) - } - column "c3" { - # Equals to character_varying(255). - type = nvarchar(255) - } - column "c4" { - # Equals to char(1). - type = char - } - column "c5" { - # Equals to char(5). - type = nchar(5) - } - column "c6" { - # Alias to character(5). - type = char(5) - } - column "c7" { - # Alias to character(5). - type = bpchar(5) - } - column "c8" { - # Equals to character_varying(256). - type = text - } -} -``` - -### Other Types - -The `hllsketch`, `super`, `geometry` and `geography` types are supported by Atlas. 
- -```hcl -table "t" { - schema = schema.test - column "c1" { - type = hllsketch - } - column "c2" { - type = super - } - column "c3" { - type = geometry - } - column "c4" { - type = geography - } -} -``` \ No newline at end of file diff --git a/doc/md/atlas-schema/hcl.mdx b/doc/md/atlas-schema/hcl.mdx deleted file mode 100644 index a340f6e2ad7..00000000000 --- a/doc/md/atlas-schema/hcl.mdx +++ /dev/null @@ -1,2357 +0,0 @@ ---- -id: hcl-schema -title: HCL Schema -slug: /atlas-schema/hcl ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -Atlas schemas can be defined in SQL, external ORMs and programs, or by using the Atlas HCL language. -The HCL-based language allows developers to describe database schemas in a declarative manner, and it supports all SQL -features supported by Atlas. The main advantages of using HCL are that it enables developers to manage their database -schemas like regular code, facilitates sharing and reusing files between projects, allows variable injection, and -provides the ability to attach annotations to objects, such as PII or sensitive data. - -## Schema - -The `schema` object describes a database schema. A `DATABASE` in MySQL and SQLite, or a `SCHEMA` in PostgreSQL. -An HCL file can contain 1 or more schema objects. - - - - -In MySQL and MariaDB, the `schema` resource can contain the `charset` and `collate` attributes. Read more about them -in [MySQL](https://dev.mysql.com/doc/refman/8.0/en/charset.html) or -[MariaDB](https://mariadb.com/kb/en/setting-character-sets-and-collations/) websites. - -```hcl -# Schema with attributes. -schema "market" { - charset = "utf8mb4" - collate = "utf8mb4_0900_ai_ci" - comment = "A schema comment" -} - -# Schema without attributes. -schema "orders" {} -``` - - - - -```hcl -schema "public" { - comment = "A schema comment" -} - -schema "private" {} -``` - - - - -Atlas does not support [attached databases](https://www.sqlite.org/lang_attach.html), and support only the default -database (i.e. `main`). - -```hcl -schema "main" {} -``` - - - - -```hcl -schema "dbo" { - comment = "A schema comment" -} - -schema "private" {} -``` - - - - -In Clickhouse, the `schema` resource can contain the `engine` attribute. If not specified, the default engine depends on ClickHouse settings. -Use `sql()` to specify the engine in advanced cases. - -Read more about database engines in [ClickHouse documentation](https://clickhouse.com/docs/en/engines/database-engines). - -```hcl -schema "default" { - engine = sql("Replicated('/clickhouse/databases/default', '{shard}', '{replica}')") -} - -schema "atomic" { - engine = Atomic -} -``` - - - - -```hcl -schema "public" { - comment = "A schema comment" -} - -schema "private" {} -``` - - - - -## Table - -A `table` describes a table in a SQL database. A table hold its columns, indexes, constraints, and additional attributes -that are supported by the different drivers. - -```hcl -table "users" { - schema = schema.public - column "id" { - type = int - } - column "name" { - type = varchar(255) - } - column "manager_id" { - type = int - } - primary_key { - columns = [ - column.id - ] - } - index "idx_name" { - columns = [ - column.name - ] - unique = true - } - foreign_key "manager_fk" { - columns = [column.manager_id] - ref_columns = [column.id] - on_delete = CASCADE - on_update = NO_ACTION - } -} -``` - -### Check - -A `check` is a child resource of a `table` that describes a `CHECK` constraint. 
- -```hcl -table "products" { - column "price" { - type = float - } - check "positive price" { - expr = "price > 0" - } -} -``` - -### Partitions - -The `partition` option is a PostgreSQL-specific option that allows defining table partitioning. Table partitioning refers -to splitting logical large tables into smaller physical ones. - -```hcl -table "logs" { - schema = schema.public - column "date" { - type = date - } - column "text" { - type = integer - } - // highlight-start - partition { - type = RANGE - columns = [column.date] - } - // highlight-end -} - -table "metrics" { - schema = schema.public - column "x" { - type = integer - } - column "y" { - type = integer - } - // highlight-start - partition { - type = RANGE - by { - column = column.x - } - by { - expr = "floor(y)" - } - } - // highlight-end -} -``` - -### Row Level Security - -The `row_security` option is a PostgreSQL-specific option that allows enabling row-level security policies for a table. - -```hcl -table "users" { - schema = schema.public - column "id" { - type = int - } - // highlight-start - row_security { - enabled = true // ENABLE ROW LEVEL SECURITY - enforced = true // FORCE ROW LEVEL SECURITY - } - // highlight-end -} -``` - -:::note Defining Policies -To define row-level security policies for a table, refer to the [policy](#row-level-security-policy) example. -::: - -### Table Qualification - -In some cases, an Atlas DDL document may contain multiple tables of the same name. This usually happens -when the same table name appears in two different schemas. In these cases, the table names must be -disambiguated by using resource qualifiers. The following document describes a -database that contains two schemas named `a` and `b`, and both of them contain a table named `users`. - -```hcl -schema "a" {} -schema "b" {} - -table "a" "users" { - schema = schema.a - // .. columns -} -table "b" "users" { - schema = schema.b - // .. columns -} -``` - -#### Storage Engine - -The `engine` attribute allows for overriding the default storage engine of the table. Supported by MySQL and MariaDB. - -```hcl -table "users" { - schema = schema.public - // highlight-next-line - engine = MyISAM -} - -table "posts" { - schema = schema.public - // highlight-next-line - engine = InnoDB -} - -table "orders" { - schema = schema.public - // highlight-next-line - engine = "MyRocks" -} -``` - -#### System-Versioned Tables - -The `system_versioned` attribute allows marking a table as a [system-versioned table](https://mariadb.com/kb/en/system-versioned-tables/). -Supported by MariaDB. - -:::info [Atlas Pro Feature](/features#pro) -System-Versioned tables are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -```hcl -table "f" { - schema = schema.public - column "id" { - type = int - } - // ... - //highlight-next-line - system_versioned = true -} -``` - -#### Distribution - -The `distribution` block is a Redshift-specific option that allows specifying the distribution method of the table. - -```hcl -table "users" { - schema = schema.public - column "id" { - type = int - } - // highlight-start - distribution { - style = KEY // EVEN | ALL | AUTO - key = column.id // only for KEY style - } - // highlight-end -} -``` - - -#### Sorting - -The `sort` block is a Redshift-specific option that allows specifying the sorting method of the table. 
- -```hcl -table "users" { - schema = schema.public - column "id" { - type = int - } - // highlight-start - sort { - style = COMPOUND // INTERLEAVED | COMPOUND | AUTO - columns = [column.id] - } - // highlight-end -} - -``` - -:::info Sorting Style `AUTO` -Redshift restricts user access to certain external tables which are used to inspect the sort style. -Therefore, Atlas will ignore differences when changing the style to `AUTO`. -You will need to manually adjust the sort style on your target Redshift database after modifying it in the Atlas schema. - -To change the sort style to `AUTO`, run the following SQL command: -```sql -ALTER TABLE "my_table" ALTER SORTKEY AUTO; -``` -::: - -## View - -A `view` is a virtual table in the database, defined by a statement that queries rows from one or more existing -tables or views. - -:::info [Atlas Pro Feature](/features#pro) -Views are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -```hcl -view "clean_users" { - schema = schema.public - column "id" { - type = int - } - column "name" { - type = text - } - as = <<-SQL - SELECT u.id, u.name - FROM ${table.users.name} AS u - JOIN ${view.active_users.name} AS au USING (id) - SQL - depends_on = [table.users, view.t1] - comment = "A view to active users without sensitive data" -} - -view "comedies" { - schema = schema.public - column "id" { - type = int - } - column "name" { - type = text - } - as = "SELECT id, name FROM films WHERE kind = 'Comedy'" - depends_on = [table.films] - check_option = CASCADED - security = INVOKER // DEFINER | INVOKER (MySQL/MariaDB only). -} -``` - -:::info Testing Views - -Atlas's testing framework allows you to write unit tests for your views. The following example demonstrates how -to write tests for the `clean_users` view defined above. For more detail, read the [schema testing docs](/testing/schema) -or see the [full example](/guides/testing/views). - - - - -```hcl title="schema.test.hcl" -test "schema" "view" { - # Seeding to test view. - exec { - sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');" - } - log { - message = "Seeded the database" - } - # Expected exec to pass. - exec { - sql = < - - -```hcl title="schema.test.hcl" -test "schema" "view" { - for_each = [ - {id = 1, name = "Alice"}, - {id = 2, name = "Bob"}, - {id = 3, name = "Charlie"} - ] - # Seed the `users` table. - exec { - sql = "INSERT INTO users (id, name) VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie');" - } - # Query the `clean_users` view. - exec { - sql = "SELECT id, name FROM clean_users WHERE id IN (1, 2, 3);" - } - # Check each ID returns the right user. - log { - message = "Testing ${each.value.id} -> ${each.value.name}" - } -} -``` - - - - -::: - -## Materialized View - -A `materialized` view is a table-like structure that holds the results of a query. Unlike a regular view, the results of -a materialized view are stored in the database and can be refreshed periodically to reflect changes in the underlying data. - -:::info [Atlas Pro Feature](/features#pro) -Materialized views are currently available only to [Atlas Pro users](/features#pro). 
To use this feature, run: -``` -atlas login -``` -::: - - - - -```hcl -materialized "mat_view" { - schema = schema.public - column "total" { - null = true - type = numeric - } - index "idx_expr" { - unique = true - on { - expr = "((total > (0)::numeric))" - } - } - index "idx_pred" { - unique = true - columns = [column.total] - where = "(total < (0)::numeric)" - } - as = <<-SQL - SELECT sum(total) AS total - FROM m1; - SQL - depends_on = [materialized.m1] -} -``` - - - - -When creating materialized views with `TO [db.]table`, -the view will be created with the same structure as the table or view specified in the `TO` clause. -```hcl -materialized "mat_view" { - schema = schema.public - to = table.dest - as = "SELECT * FROM table.src" - depends_on = [table.src] -} -``` - -The `engine` and `primary_key` attributes are required if the `TO` clause is not specified. -In this syntax, `populate` can be used for the first time to populate the materialized view. - -```hcl -materialized "mat_view" { - schema = schema.public - engine = MergeTree - column "id" { - type = UInt32 - } - column "name" { - type = String - } - primary_key { - columns = [column.id] - } - as = "SELECT * FROM table.src" - populate = true - depends_on = [table.src] -} -``` - -:::info -Note that modifying the materialized view structure -after the initial creation is not supported by Atlas currently. -::: - - - - -```hcl -materialized "mat_view" { - schema = schema.public - column "c1" { - null = true - type = smallint - } - as = "SELECT * FROM t1;" - comment = "example materialized view" - distribution { - style = EVEN - } - sort { - style = AUTO - } - depends_on = [table.t1] -} -``` - - - - -## Column - -A `column` is a child resource of a `table`. - -```hcl -column "name" { - type = text - null = false -} - -column "age" { - type = integer - default = 42 -} - -column "active" { - type = tinyint(1) - default = true -} -``` - -#### Properties - -| Name | Kind | Type | Description | -|---------|-----------|-------------------------|------------------------------------------------------------| -| null | attribute | bool | Defines whether the column is nullable. | -| type | attribute | *schemahcl.Type | Defines the type of data that can be stored in the column. | -| default | attribute | *schemahcl.LiteralValue | Defines the default value of the column. | - -### Generated Columns - -Generated columns are columns whose their values are computed using other columns or by deterministic expressions. - - - - -```hcl -table "users" { - schema = schema.test - column "a" { - type = int - } - column "b" { - type = int - # In MySQL, generated columns are VIRTUAL by default. - as = "a * 2" - } - column "c" { - type = int - as { - expr = "a * b" - type = STORED - } - } -} -``` - - - - -```hcl -table "users" { - schema = schema.test - column "a" { - type = int - } - column "b" { - type = int - # In PostgreSQL, generated columns are STORED by default. - as = "a * 2" - } - column "c" { - type = int - as { - expr = "a * b" - type = STORED - } - } -} -``` - - - - -```hcl -table "users" { - schema = schema.test - column "a" { - type = int - } - column "b" { - type = int - # In SQLite, generated columns are VIRTUAL by default. 
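-    # VIRTUAL values are computed when read; STORED columns (like "c" below) are written as part of the row.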
- as = "a * 2" - } - column "c" { - type = int - as { - expr = "a * b" - type = STORED - } - } -} -``` - - - - -```hcl -table "users" { - schema = schema.test - column "a" { - type = int - } - column "b" { - type = int - as = "a * 2" - } - column "c" { - type = int - as { - expr = "a * b" - # In SQLServer, computed columns are non-PERSISTED by default. - type = PERSISTED - } - } -} -``` - - - - -:::info -Note, it is recommended to use the [`--dev-url`](../concepts/dev-database) option when generated columns are used. -::: - -### Encodings - -Encodings are used to define the compression algorithm for the column data. Supported by ClickHouse and Redshift. - - - - -```hcl -table "users" { - schema = schema.public - column "name" { - type = text - encode = LZ4 // AZ64 | RAW | LZ4 | ZSTD - } -} -``` - - - -## Column Types - -The SQL dialects supported by Atlas (Postgres, MySQL, MariaDB, and SQLite) vary in -the types they support. At this point, the Atlas DDL does not attempt to abstract -away the differences between various databases. This means that the schema documents -are tied to a specific database engine and version. This may change in a future version -of Atlas as we plan to add "Virtual Types" support. This section lists the various -types that are supported in each database. - -For a full list of supported column types, [click here](hcl-types.md). - -## Primary Key - -A `primary_key` is a child resource of a `table`, and it defines the table's -primary key. - -#### Example - - - - -```hcl -primary_key { - columns = [column.id] -} -``` - -#### Properties - -| Name | Kind | Type | Description | -|---------|-----------|--------------------------|----------------------------------------------------------------| -| columns | attribute | reference (list) | A list of references to columns that comprise the primary key. | - - - - -:::info -Note, primary key expressions are supported by ClickHouse. -::: - -```hcl -primary_key { - on { - column = column.id - } - on { - expr = "c1 + c2" - } -} -``` - -#### Properties - -| Name | Kind | Type | Description | -|---------|-----------|--------------------------|----------------------------------------------------------------| -| on | resource | schema.IndexPart (list) | The index parts that comprise the index | - - - - - - -## Foreign Key - -Foreign keys are child resources of a `table`, and it defines some columns in the table -as references to columns in other tables. - -#### Example - -```hcl title="schema.hcl" {18-19} -table "users" { - schema = schema.public - column "id" { - type = integer - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.market - // ... - column "owner_id" { - type = integer - } - foreign_key "owner_id" { - columns = [column.owner_id] - ref_columns = [table.users.column.id] - on_update = NO_ACTION - on_delete = NO_ACTION - } -} -``` - -#### Referencing Qualified Tables - -If a foreign key references a column in a [qualified](#table-qualification) table, it is referenced -using `table...column.`: - -```hcl title="schema.hcl" {18-19} -table "public" "users" { - schema = schema.public - column "id" { - type = integer - } - primary_key { - columns = [column.id] - } -} - -table "admin" "users" { - schema = schema.admin - // ... 
- column "external_id" { - type = integer - } - foreign_key "external_id" { - columns = [column.external_id] - ref_columns = [table.admin.users.column.id] - on_update = NO_ACTION - on_delete = NO_ACTION - } -} -``` - -#### Properties - -| Name | Kind | Type | Description | -|-------------|-----------|------------------------|-------------------------------------------| -| columns | attribute | reference (list) | The columns that reference other columns. | -| ref_columns | attribute | reference (list) | The referenced columns. | -| on_update | attribute | schema.ReferenceOption | Defines what to do on update. | -| on_delete | attribute | schema.ReferenceOption | Defines what to do on delete. | - -## Index - -Indexes are child resources of a `table`, and it defines an index on the table. - -#### Example - -```hcl -# Columns only. -index "idx_name" { - unique = true - columns = [column.name] -} - -# Columns and order. -index "idx_name" { - unique = true - on { - column = column.rank - } - on { - column = column.score - desc = true - } -} - -# Custom index type. -index "idx_name" { - type = GIN - columns = [column.data] -} - -# Control storage options. -index "idx_range" { - type = BRIN - columns = [column.range] - page_per_range = 128 -} - -# Include non-key columns. -index "idx_include" { - columns = [column.range] - include = [column.version] -} - -# Define operator class. -index "idx_operator_class" { - type = GIN - on { - column = column.j - ops = jsonb_path_ops - } -} - -# Full-text index with ngram parser. -index "index_parser" { - type = FULLTEXT - columns = [column.text] - parser = ngram -} - -# Postgres-specific NULLS [NOT] DISTINCT option. -index "index_nulls_not_distinct" { - unique = true - columns = [column.text] - nulls_distinct = false -} -``` - -#### Properties - -| Name | Kind | Type | Description | -|-----------|-----------|-------------------------|----------------------------------------------------------------| -| unique | attribute | boolean | Defines whether a uniqueness constraint is set on the index. | -| type | attribute | IndexType (enum) | Defines the index type. e.g. `HASH`, `GIN`, `FULLTEXT`. | -| columns | attribute | reference (list) | The columns that comprise the index. | -| on | resource | schema.IndexPart (list) | The index parts that comprise the index. | -| options | attribute | schema.Attr | Additional driver specific attributes. e.g. `page_per_range` | - -### Index Expressions - -Index expressions allow setting indexes over functions or computed expressions. Supported by PostgreSQL, SQLite and -MySQL8. - -```hcl {9-16} -table "t" { - schema = schema.test - column "c1" { - type = int - } - column "c2" { - type = int - } - index "i" { - on { - expr = "c1 - c2" - } - on { - expr = "c2 - c1" - } - } -} -``` - -:::info -Note, it is recommended to use the [`--dev-url`](../concepts/dev-database) option when index expressions are used. -::: - -### Partial Indexes - -[Partial indexes](https://www.postgresql.org/docs/current/indexes-partial.html) allow setting indexes over subset of -the table. Supported by PostgreSQL and SQLite. - -```hcl {11} -table "t" { - schema = schema.public - column "b" { - type = bool - } - column "c" { - type = int - } - index "i" { - columns = [column.c] - where = "b AND c > 0" - } -} -``` - -:::info -Note, it is recommended to use the [`--dev-url`](../concepts/dev-database) option when partial indexes are used. 
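-For example, a rough invocation might look like this (the schema and dev-database URLs below are placeholders):
-
-```shell
-atlas migrate diff --to "file://schema.hcl" --dev-url "docker://postgres/15/dev?search_path=public"
-```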
-::: - -### Index Prefixes - -[Index prefixes](https://dev.mysql.com/doc/refman/8.0/en/column-indexes.html#column-indexes-prefix) allow setting an index -on the first `N` characters of string columns. Supported by MySQL and MariaDB. - -```hcl {9} -table "users" { - schema = schema.test - column "name" { - type = varchar(255) - } - index "user_name" { - on { - column = column.name - prefix = 128 - } - } -} -``` - -### Unique Constraints - -The `unique` block allows defining a [unique constraint](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS) -supported by PostgreSQL: - -```hcl -# Columns only. -unique "name" { - columns = [column.name] -} - -# Include non-key columns. -unique "name_include_version" { - columns = [column.name] - include = [column.version] -} -``` - -:::info Adding unique constraints concurrently -In order to add a unique constraint in non-blocking mode, the index supporting the constraint needs to be created -concurrently first and then converted to a unique constraint. To achieve this, follow the steps below: -1. Define a unique `index` block on the desired table. -2. Ensure a [Diff Policy](/versioned/diff#diff-policy) is used to instruct Atlas to create the index concurrently. -3. Apply the migration and ensure the index was created. -4. Replace the `index` block with a `unique` block to create a new unique constraint using the existing index. -::: - -### Exclude Constraints - -The `exclude` block allows defining a [exclusion constraint](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-EXCLUSION) -supported by PostgreSQL: - -```hcl -exclude "excl_speaker_during" { - type = GIST - on { - column = column.speaker - op = "=" - } - on { - column = column.during - op = "&&" - } -} - -# Include non-key columns. -exclude "excl_speaker_during" { - type = GIST - on { - column = column.speaker - op = "=" - } - on { - column = column.during - op = "&&" - } - include = [column.another] -} -``` - -## Trigger - -:::info [Atlas Pro Feature](/features#pro) -Triggers are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `trigger` block allows defining SQL triggers in HCL format. 
- - - - - -```hcl -function "audit_orders" { - schema = schema.public - lang = PLpgSQL - return = trigger - as = <<-SQL - BEGIN - INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, TG_OP); - RETURN NEW; - END; - SQL -} -trigger "trigger_orders_audit" { - on = table.orders - after { - insert = true - update_of = [table.orders.column.amount] - } - execute { - function = function.audit_orders - } -} -``` - - - - -```hcl -trigger "after_orders_insert" { - on = table.orders - after { - insert = true - } - as = <<-SQL - BEGIN - INSERT INTO orders_audit(order_id, changed_at, operation) - VALUES (NEW.order_id, NOW(), 'INSERT'); - END - SQL -} -trigger "after_orders_update" { - on = table.orders - after { - update = true - } - as = <<-SQL - BEGIN - INSERT INTO orders_audit(order_id, changed_at, operation) - VALUES (NEW.order_id, NOW(), 'UPDATE'); - END - SQL -} -``` - - - - -```hcl -trigger "after_orders_insert" { - on = table.orders - after { - insert = true - } - as = <<-SQL - BEGIN - INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'INSERT'); - END - SQL -} -trigger "after_orders_update" { - on = table.orders - after { - update_of = [table.orders.column.amount] - } - as = <<-SQL - BEGIN - INSERT INTO orders_audit(order_id, operation) VALUES (NEW.order_id, 'UPDATE'); - END - SQL -} -``` - - - - -```hcl -trigger "t1_trg" { - on = table.orders - after { - insert = true - update = true - delete = true - } - as = <<-SQL - BEGIN - SET NOCOUNT ON; - DECLARE @c INT; - SELECT @c = COUNT(*) FROM [dbo].[orders]; - IF @c > 1000 - RAISERROR('Too many rows in orders', 16, 1); - END - SQL -} -trigger "t2_trg" { - on = table.customers - instead_of { - insert = true - } - as = <<-SQL - BEGIN - SET NOCOUNT ON; - INSERT INTO [dbo].[customers] ([name]) - SELECT [ins].[name] - FROM [inserted] [ins] - WHERE [ins].[name] NOT IN ( - SELECT [name] FROM [dbo].[blacklist_customers] - ); - END - SQL -} -``` - - - - -### Computed Triggers - -To configure the same trigger for multiple tables/views, users can utilize the `for_each` meta-argument. By setting it -up, a `trigger` block will be computed for each item in the provided value. Note that `for_each` accepts either a `map` -or a `set` of references. - -```hcl title="schema.pg.hcl" {2-3} -trigger "audit_log_trigger" { - for_each = [table.users, table.orders, table.payments] - on = each.value - after { - insert = true - update = true - delete = true - } - execute { - function = function.audit_log_table - } -} -``` - -## Event Trigger - -:::info [Atlas Pro Feature](/features#pro) -Event Triggers are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `event_trigger` block allows defining [PostgreSQL event trigger](https://www.postgresql.org/docs/current/event-trigger-definition.html) functions -that automatically execute in response to specific events within the database system, like table creation or schema modifications. - -```hcl -# Block table rewrites. -event_trigger "block_table_rewrite" { - on = table_rewrite - execute = function.no_rewrite_allowed -} - -# Filter specific events. -event_trigger "record_table_creation" { - on = ddl_command_start - tags = ["CREATE TABLE"] - execute = function.record_table_creation -} -``` - -## Function - -:::info [Atlas Pro Feature](/features#pro) -Functions are currently available only to [Atlas Pro users](/features#pro). 
To use this feature, run: -``` -atlas login -``` -::: - -The `function` block allows defining functions in HCL format. The `lang` attribute specifies the language of the -function. For example, `PLpgSQL`, `SQL`, `CRL`, etc. - - - - -```hcl -function "positive" { - schema = schema.public - lang = SQL - arg "v" { - type = integer - } - return = boolean - as = "SELECT v > 0" -} - -function "sql_body1" { - schema = schema.public - lang = SQL - arg "v" { - type = integer - } - return = integer - as = <<-SQL - BEGIN ATOMIC - SELECT v; - END - SQL -} - -function "sql_body2" { - schema = schema.public - lang = SQL - arg { - type = integer - } - return = integer - as = "RETURN $1" - volatility = IMMUTABLE // STABLE | VOLATILE - leakproof = true // NOT LEAKPROOF | LEAKPROOF - strict = true // (CALLED | RETURNS NULL) ON NULL INPUT - security = INVOKER // DEFINER | INVOKER -} -``` - - - - -```hcl -function "add2" { - schema = schema.public - arg "a" { - type = int - } - arg "b" { - type = int - } - return = int - as = "return a + b" - deterministic = true // NOT DETERMINISTIC | DETERMINISTIC - data_access = NO_SQL // CONTAINS_SQL | NO_SQL | READS_SQL_DATA | MODIFIES_SQL_DATA - security = INVOKER // DEFINER | INVOKER -} - -function "f1" { - schema = schema.public - arg "x" { - type = int - } - return = int - as = <<-SQL - BEGIN - INSERT INTO t1 VALUES (RAND(x)); - RETURN x+2; - END - SQL -} -``` - - - - -```hcl -function "fn_return_scalar" { - schema = schema.dbo - lang = SQL - arg "@a" { - type = int - } - arg "@b" { - type = int - default = 1 - } - return = int - as = <<-SQL - BEGIN - RETURN @a * @a + @b * @b - END - SQL - schema_bound = true // SCHEMABINDING - null_call = RETURNS_NULL // (RETURNS NULL | CALLED) ON NULL INPUT - inline = true // INLINE = { (OFF | ON) } -} - -function "fn_return_inline" { - schema = schema.dbo - lang = SQL - arg "@a" { - type = int - } - arg "@b" { - type = int - default = 1 - } - return = sql("table") - as = "RETURN SELECT @a as [a], @b as [b], (@a+@b)*2 as [p], @a*@b as [s]" -} - -function "fn_return_table" { - schema = schema.dbo - lang = SQL - arg "@a" { - type = int - } - arg "@b" { - type = int - default = 1 - } - return_table "@t1" { - column "c1" { - null = false - type = int - } - column "c2" { - null = false - type = nvarchar(255) - } - column "c3" { - null = true - type = nvarchar(255) - default = sql("N'G'") - } - column "c4" { - null = false - type = int - } - primary_key { - columns = [column.c1] - } - index { - unique = true - nonclustered = true - on { - desc = true - column = column.c3 - } - on { - column = column.c4 - } - } - index { - unique = true - nonclustered = true - on { - column = column.c2 - } - on { - desc = true - column = column.c3 - } - } - index "idx" { - columns = [column.c2] - nonclustered = true - } - check { - expr = "([c4]>(0))" - } - } - as = <<-SQL - BEGIN - INSERT @t1 - SELECT 1 AS [c1], 'A' AS [c2], NULL AS [c3], @a * @a + @b AS [c4]; - RETURN - END - SQL -} -``` - - - -:::info Testing Functions - -Atlas's testing framework allows you to write unit tests for your functions. The following example demonstrates how -to write tests for the `positive` function defined above. For more detail, read the [schema testing docs](/testing/schema) -or see the [full example](/guides/testing/functions). 
- - - - - -```hcl title="schema.test.hcl" -test "schema" "simple_test" { - parallel = true - assert { - sql = "SELECT positive(1)" - } - log { - message = "First assertion passed" - } - assert { - sql = < - - -```hcl title="schema.test.hcl" -test "schema" "simple_test" { - parallel = true - for_each = [ - {input: 1, expected: "t"}, - {input: 0, expected: "f"}, - {input: -1, expected: "f"}, - ] - exec { - sql = "SELECT positive(${each.value.input})" - output = each.value.expected - } -} -``` - - - - -::: - -#### Aggregate Functions - -The `aggregate` block defines a function that computes a single result from a set of values. Supported by -[PostgreSQL](https://www.postgresql.org/docs/current/sql-createaggregate.html). - -```hcl {1-8} -aggregate "sum_of_squares" { - schema = schema.public - arg { - type = double_precision - } - state_type = double_precision - state_func = function.sum_squares_sfunc -} - -function "sum_squares_sfunc" { - schema = schema.public - lang = PLpgSQL - arg "state" { - type = double_precision - } - arg "value" { - type = double_precision - } - return = double_precision - as = <<-SQL - BEGIN - RETURN state + value * value; - END; - SQL -} -``` - -## Procedure - -:::info [Atlas Pro Feature](/features#pro) -Procedures are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `procedure` block allows defining SQL procedure in HCL format. - - - - -```hcl -procedure "proc" { - schema = schema.public - lang = SQL - arg "a" { - type = integer - } - arg "b" { - type = text - } - arg "c" { - type = integer - default = 100 - } - as = <<-SQL - INSERT INTO t1 VALUES(a, b); - INSERT INTO t2 VALUES(c, b); - SQL -} -``` - - - - -```hcl -procedure "p1" { - schema = schema.public - arg "x" { - type = varchar(10) - } - as = "INSERT INTO t1 VALUES(x)" - comment = "A procedure comment" - deterministic = true -} - -procedure "p2" { - schema = schema.public - arg "x" { - type = char(10) - mode = INOUT - charset = "latin1" - } - arg "y" { - type = char(10) - mode = OUT - } - as = <<-SQL - BEGIN - DECLARE перем1 CHAR(10) CHARACTER SET utf8; - // ... 
- END - SQL -} -``` - - - - -```hcl -procedure "p1" { - schema = schema.dbo - as = <<-SQL - SET NOCOUNT ON; - SELECT [c1], [c2], [c3] - FROM [dbo].[t1]; - SQL -} -procedure "p2" { - schema = schema.dbo - as = <<-SQL - BEGIN - SELECT TOP(10) [c1], [c2], [c3] FROM [dbo].[t1]; - SELECT TOP(10) [c1], [c4] FROM [dbo].[t2]; END - SQL -} -procedure "p3" { - schema = schema.dbo - arg "@c2" { - type = nvarchar(50) - } - arg "@c3" { - type = nvarchar(50) - } - as = <<-SQL - SET NOCOUNT ON; - SELECT [c1], [c2], [c3] - FROM [dbo].[t1] - WHERE [c2] = @c2 AND [c3] = @c3; - SQL -} -procedure "p4" { - schema = schema.dbo - arg "@c2" { - type = nvarchar(50) - default = "D%" - } - arg "@c3" { - type = nvarchar(50) - default = "%" - } - as = <<-SQL - BEGIN - SET NOCOUNT ON; - SELECT [c1] as [c1], [c2], [c3] - FROM [dbo].[t1] - WHERE [c2] LIKE @c2 AND [c3] LIKE @c3; - END - SQL -} -procedure "p5" { - schema = schema.dbo - arg "@a" { - type = int - } - arg "@b" { - type = int - } - arg "@s" { - type = int - mode = OUT - } - arg "@p" { - type = int - mode = OUT - } - as = <<-SQL - SET NOCOUNT ON; - SET @s = @a * @b; - SET @p = (@a + @b) * 2; - SQL -} -procedure "p7" { - schema = schema.dbo - as = "TRUNCATE TABLE [dbo].[t1];" -} -procedure "p8" { - schema = schema.dbo - arg "@c" { - type = cursor - mode = OUT - } - as = <<-SQL - SET NOCOUNT ON; - SET @c = CURSOR - FORWARD_ONLY STATIC FOR - SELECT [c1], [c2] - FROM [dbo].[t1]; - OPEN @c; - SQL -} -``` - - - - -:::info Testing Procedures - -Atlas's testing framework allows you to write unit tests for your procedures. The following example demonstrates how -to write tests for a stored procedure, `archive_old_sales`, that moves old sales from the `sales` table to the `archive_sales` table according to a specified cutoff date. -For more detail, read the [schema testing docs](/testing/schema) -or see the [full example](/guides/testing/procedures). 
- - - - -```hcl title="schema.test.hcl" -test "schema" "procedure" { - # Seed data - exec { - sql = <<-SQL - INSERT INTO sales (id, sale_amount, sale_date) VALUES - (1, 150.00, '2024-07-18'), - (2, 200.00, '2024-06-20'), - (1, 350.00, '2024-07-10'); - SQL - } - # Execute the procedure with a specific cutoff date - exec { - sql = "CALL archive_old_sales('2024-07-18')" # Archive sales before this date - } - # Verify data in archive_sales table - exec { - sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '2024-07-18'" - output = "2" # Expect 2 archived sales - } - # Verify data in sales table - exec { - sql = "SELECT COUNT(*) FROM sales" - output = "1" # Expect 1 sale remaining in the sales table after cutoff date - } -} -``` - - - - -```hcl title="schema.test.hcl" -test "schema" "procedure" { - # Seed data - exec { - sql = <<-SQL - INSERT INTO sales (id, sale_amount, sale_date) VALUES - (1, 150.00, '2024-07-18'), - (2, 200.00, '2024-06-20'), - (1, 350.00, '2024-07-10'); - SQL - } - for_each = [ - {cutoff_date: "2024-07-18", expected_archived: "2", expected_remaining: "1"}, - {cutoff_date: "2024-08-01", expected_archived: "3", expected_remaining: "0"} - ] - # Execute the procedure with the current test case cutoff date - exec { - sql = "CALL archive_old_sales('${each.value.cutoff_date}')" - } - # Verify data in archive_sales table - exec { - sql = "SELECT COUNT(*) FROM archive_sales WHERE sale_date < '${each.value.cutoff_date}'" - output = each.value.expected_archived - } - # Verify data in sales table - exec { - sql = "SELECT COUNT(*) FROM sales" - output = each.value.expected_remaining - } - log { - message = "Testing cutoff date: ${each.value.cutoff_date} - Expected Archived: ${each.value.expected_archived}, Expected Remaining: ${each.value.expected_remaining}" - } -} -``` - - - - -::: - - -## Domain - -:::info [Atlas Pro Feature](/features#pro) -Domains are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `domain` type is a user-defined data type that is based on an existing data type but with optional constraints -and default values. Supported by PostgreSQL. - -```hcl -domain "us_postal_code" { - schema = schema.public - type = text - null = true - check "us_postal_code_check" { - expr = "((VALUE ~ '^\\d{5}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text))" - } -} - -domain "username" { - schema = schema.public - type = text - null = false - default = "anonymous" - check "username_length" { - expr = "(length(VALUE) > 3)" - } -} - -table "users" { - schema = schema.public - column "name" { - type = domain.username - } - column "zip" { - type = domain.us_postal_code - } -} - -schema "public" { - comment = "standard public schema" -} -``` - - -:::info Testing Domains - -Atlas's testing framework allows you to write unit tests for your domains. The following example demonstrates how -to write tests for the `us_postal_code` domain defined above. For more detail, read the [schema testing docs](/testing/schema) -or see the [full example](/guides/testing/domains). 
- - - - - -```hcl title="schema.test.hcl" -test "schema" "postal" { - parallel = true - exec { - sql = "select '12345'::us_postal_code" - } - catch { - sql = "select 'hello'::us_postal_code" - } -} -``` - - - - -```hcl title="schema.hcl" -test "schema" "us_postal_code_check_valid" { - parallel = true - for_each = [ - {input = "12345", expected = "valid"}, - {input = "12345-6789", expected = "valid"}, - ] - log { - message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}" - } - exec { - sql = "SELECT '${each.value.input}'::us_postal_code" - } -} - -test "schema" "us_postal_code_check_invalid" { - parallel = true - for_each = [ - {input = "hello", expected = "invalid"}, - {input = "123", expected = "invalid"}, - ] - log { - message = "Testing postal code: ${each.value.input} -> Expected: ${each.value.expected}" - } - catch { - sql = "SELECT '${each.value.input}'::us_postal_code" - } -} -``` - - - - -::: - -## Composite Type - -:::info [Atlas Pro Feature](/features#pro) -Composite types are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `composite` type is a user-defined data type that represents the structure of a row or record. Supported by PostgreSQL. - -```hcl -composite "address" { - schema = schema.public - field "street" { - type = text - } - field "city" { - type = text - } -} - -table "users" { - schema = schema.public - column "address" { - type = composite.address - } -} - -schema "public" { - comment = "standard public schema" -} -``` - -## Policies {#row-level-security-policy} - -:::info [Atlas Pro Feature](/features#pro) -Policies are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `policy` block allows defining [row-level security policies](https://www.postgresql.org/docs/current/ddl-rowsecurity.html). Supported by PostgreSQL. - -```hcl title="schema.pg.hcl" -policy "sales_rep_access" { - on = table.orders - for = SELECT - to = [PUBLIC] - using = "(sales_rep_id = (CURRENT_USER)::integer)" -} - -policy "restrict_sales_rep_updates" { - on = table.orders - as = RESTRICTIVE - for = UPDATE - to = ["custom_role"] - check = "(sales_rep_id = (CURRENT_USER)::integer)" - comment = "This is a restrictive policy" -} -``` - -:::note Enabling Row-Level Security -To enable and force row-level security on a table, refer to the [table row-level security](#row-level-security) example. -::: - -### Computed Policies - -To configure the same policy for multiple tables, users can utilize the `for_each` meta-argument. By setting it -up, a `policy` block will be computed for each item in the provided value. Note that `for_each` accepts either a `map` -or a `set` of references. - -```hcl title="schema.pg.hcl" {2-3} -policy "tenant_access_policy" { - for_each = [table.users, table.orders, table.payments] - on = each.value - as = RESTRICTIVE - using = "tenant_isolation_policy()" -} -``` - -## Sequence - -:::info [Atlas Pro Feature](/features#pro) -Sequences are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `sequence` block allows defining sequence number generator. Supported by PostgreSQL and SQL Server. - - - - -Note, a `sequence` block is printed by Atlas on inspection, or it may be manually defined in the schema only if it -represents a PostgreSQL sequence that is not implicitly created by the database for identity or `serial` columns. 
- -```hcl -#highlight-next-line -# Simple sequence with default values. -sequence "s1" { - schema = schema.public -} - -#highlight-next-line -# Sequence with custom configuration. -sequence "s2" { - schema = schema.public - type = smallint - start = 100 - increment = 2 - min_value = 100 - max_value = 1000 -} - -#highlight-next-line -# Sequence that is owned by a column. -sequence "s3" { - schema = schema.public - owner = table.t2.column.id - comment = "Sequence with column owner" -} - -#highlight-next-line -# The sequences created by this table are not printed on inspection. -table "users" { - schema = schema.public - column "id" { - type = int - identity { - generated = ALWAYS - start = 10000 - } - } - column "serial" { - type = serial - } - primary_key { - columns = [column.id] - } -} - -table "t2" { - schema = schema.public - column "id" { - type = int - } -} - -schema "public" { - comment = "standard public schema" -} -``` - - - -Atlas support define sequence in SQL Server by using `sequence` block. See more about [SQL Server sequence](https://learn.microsoft.com/en-us/sql/relational-databases/sequence-numbers/sequence-numbers). - -```hcl -#highlight-next-line -# Simple sequence with default values. -sequence "s1" { - schema = schema.dbo -} - -#highlight-next-line -# Sequence with custom configuration. -sequence "s2" { - schema = schema.dbo - type = decimal(18, 0) - start = 100000000000000000 - increment = 1 - min_value = 100000000000000000 - max_value = 999999999999999999 - cycle = true -} - -#highlight-next-line -# The sequences with alias-type. -sequence "s3" { - schema = schema.dbo - type = type_alias.ssn - start = 111111111 - increment = 1 - min_value = 111111111 -} -type_alias "ssn" { - schema = schema.dbo - type = dec(9, 0) - null = false -} -``` - - - -## Enum - -The `enum` type allows storing a set of enumerated values. Supported by PostgreSQL. - -```hcl -enum "status" { - schema = schema.test - values = ["on", "off"] -} - -table "t1" { - schema = schema.test - column "c1" { - type = enum.status - } -} - -table "t2" { - schema = schema.test - column "c1" { - type = enum.status - } -} -``` - -## Extension - -:::info [Atlas Pro Feature](/features#pro) -Extensions are currently available only to [Atlas Pro users](/features#pro). To use this feature, run: -``` -atlas login -``` -::: - -The `extension` block allows the definition of PostgreSQL extensions to be loaded into the database. The following -arguments are supported: -- `schema` (Optional) - The schema in which to install the extension's objects, given that the extension allows its contents to be relocated. -- `version` (Optional) - The version of the extension to install. Defaults to the version specified in the extension's control file. -- `comment` (Read-only) - The description of the extension. This field is populated in `atlas inspect` output. - -```hcl -extension "adminpack" { - version = "2.1" - comment = "administrative functions for PostgreSQL" -} -extension "postgis" { - schema = schema.public - version = "3.4.1" - comment = "PostGIS geometry and geography spatial types and functions" -} -extension "pgcrypto" { - schema = schema.public - version = "1.3" - comment = "cryptographic functions" -} -schema "public" { - comment = "standard public schema" -} -``` - -:::info Extensions work in [database-scope](/concepts/url#scope) only -Although the `schema` argument is supported, it only indicates where the extension's objects will be installed. 
However, -the extension itself is installed at the database level and cannot be loaded multiple times into different schemas. - -Therefore, to avoid conflicts with other schemas, when working with extensions, the scope of the migration should be -set to the database, where objects are qualified with the schema name. To learn more about the difference between -database and schema scopes, visit [this doc](/concepts/url#scope). -::: - -## Comment - -The `comment` attribute is an attribute of `schema`, `table`, `column`, and `index`. - -```hcl -schema "public" { - comment = "A schema comment" -} - -table "users" { - schema = schema.public - column "name" { - type = text - comment = "A column comment" - } - index "name_idx" { - columns = [column.name] - } - comment = "A table comment" -} -``` - -## Charset and Collation - -The `charset` and `collate` are attributes of `schema`, `table` and `column` and supported by MySQL, MariaDB and PostgreSQL. -Read more about them in [MySQL](https://dev.mysql.com/doc/refman/8.0/en/charset.html), -[MariaDB](https://mariadb.com/kb/en/setting-character-sets-and-collations/) and -[PostgreSQL](https://www.postgresql.org/docs/current/collation.html) websites. - - - - -```hcl -schema "public" { - charset = "utf8mb4" - collate = "utf8mb4_0900_ai_ci" -} - -table "products" { - column "name" { - type = text - collate = "binary" - } - collate = "utf8_general_ci" -} -``` - - - - -```hcl -schema "public" {} - -table "products" { - column "name" { - type = text - collate = "es_ES" - } -} -``` - - - - -SQLServer only support `collate` attribute on columns. -```hcl -schema "dbo" {} - -table "users" { - schema = schema.dbo - column "name" { - type = varchar(255) - collate = "Vietnamese_CI_AS" - } -} -``` - - - - -## Auto Increment - -`AUTO_INCREMENT` and `IDENTITY` columns are attributes of the `column` and `table` resource, and can be used to -generate a unique identity for new rows. - - - - - -In MySQL/MariaDB the `auto_increment` attribute can be set on columns and tables. - -```hcl -table "users" { - schema = schema.public - column "id" { - null = false - type = bigint - auto_increment = true - } - primary_key { - columns = [column.id] - } -} -``` - -The `auto_increment` column can be set on the table to configure a start value other than 1. - -```hcl -table "users" { - schema = schema.public - column "id" { - null = false - type = bigint - auto_increment = true - } - primary_key { - columns = [column.id] - } - auto_increment = 100 -} -``` - - - - -PostgreSQL supports `serial` columns and the `generated as identity` syntax for versions >= 10. - -```hcl -table "users" { - schema = schema.public - column "id" { - null = false - type = int - identity { - generated = ALWAYS - start = 10 - increment = 10 - } - } - primary_key { - columns = [column.id] - } -} -``` - - - - -SQLite allows configuring [`AUTOINCREMENT`](https://www.sqlite.org/autoinc.html) columns using the `auto_increment` -attribute. 
- -```hcl -table "users" { - schema = schema.main - column "id" { - null = false - type = integer - auto_increment = true - } - primary_key { - columns = [column.id] - } -} -``` - - - - -```hcl -table "users" { - schema = schema.dbo - column "id" { - null = false - type = bigint - identity { - seed = 701 - increment = 1000 - } - } - primary_key { - columns = [column.id] - } -} -``` - - - diff --git a/doc/md/atlas-schema/projects.mdx b/doc/md/atlas-schema/projects.mdx deleted file mode 100644 index ac08a4b1a46..00000000000 --- a/doc/md/atlas-schema/projects.mdx +++ /dev/null @@ -1,1335 +0,0 @@ ---- -title: Project Configuration -id: projects -slug: /atlas-schema/projects ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -import LoginRequired from '../components/login-required.mdx' - -### Project Files - -Project files provide a convenient way to describe and interact with multiple -environments when working with Atlas. A project file is a file named -`atlas.hcl` and contains one or more `env` blocks. For example: - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "mysql://user:pass@localhost:3306/schema" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "docker://mysql/8/dev" -} - -env "dev" { - // ... a different env -} -``` - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "maria://user:pass@localhost:3306/schema" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "docker://maria/latest/dev" -} - -env "dev" { - // ... a different env -} -``` - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "docker://postgres/15/dev?search_path=public" -} - -env "dev" { - // ... a different env -} -``` - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "sqlite://file.db?_fk=1" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "sqlite://file?mode=memory&_fk=1" -} - -env "dev" { - // ... a different env -} -``` - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. 
- src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "sqlserver://sa:pass@remote:1433?database=master" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "docker://sqlserver/2022-latest" -} - -env "dev" { - // ... a different env -} -``` - - - - -```hcl -// Define an environment named "local" -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "clickhouse://user:pass@remote:9000/default" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "docker://clickhouse/23.11/default" -} - -env "dev" { - // ... a different env -} -``` - - - -// Define an environment named "local" - -```hcl -env "local" { - // Declare where the schema definition resides. - // Also supported: ["file://multi.hcl", "file://schema.hcl"]. - src = "file://project/schema.hcl" - - // Define the URL of the database which is managed - // in this environment. - url = "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" - - // Define the URL of the Dev Database for this environment - // See: https://atlasgo.io/concepts/dev-database - dev = "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -} - -env "dev" { - // ... a different env -} -``` - - - - -### Flags - -Once the project configuration has been defined, you can interact with it using one of the following options: - - - - -To run the `schema apply` command using the `prod` configuration defined in `atlas.hcl` file located in your working directory: - -```shell -atlas schema apply --env prod -``` - - - - -To run the `schema apply` command using the `prod` configuration defined in `atlas.hcl` in arbitrary location: - -```shell {2} -atlas schema apply \ - -c file://path/to/atlas.hcl \ - --env prod -``` - - - - -Some commands accept global configuration blocks such as [`lint`](#configure-migration-linting) and -[`diff`](#configure-diff-policy) policies. If no `env` is defined, you can instruct Atlas to explicitly use the config -file using the `-c` (or `--config`) flag: - -```shell {2} -atlas migrate lint \ - -c file://path/to/atlas.hcl \ - --dir "file://path/to/migrations" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -Will run the `schema apply` command against the database that is defined for the `local` -environment. - -:::info Unlabeled `env` blocks -It is possible to define an `env` block whose name is dynamically set during command execution using the `--env` flag. -This is useful when multiple environments share the same configuration and the arguments are dynamically set during -execution: - -```hcl {2} -env { - name = atlas.env - url = var.url - format { - migrate { - apply = format( - "{{ json . | json_merge %q }}", - jsonencode({ - EnvName : atlas.Env - }) - ) - } - } -} -``` - -::: - -### Projects with Versioned Migrations - -Environments may declare a `migration` block to configure how versioned migrations -work in the specific environment: - -```hcl -env "local" { - // .. - migration { - // URL where the migration directory resides. 
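-    // Cloud-hosted directories are also supported via "atlas://" URLs (see the remote_dir data source below).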
- dir = "file://migrations" - } -} -``` - -Once defined, `migrate` commands can use this configuration, for example: -```shell -atlas migrate validate --env local -``` -Will run the `migrate validate` command against the Dev Database defined in the -`local` environment. - -### Passing Input Values - -Project files may pass [input values](hcl-input.md) to variables defined in Atlas HCL schemas. To do this, -define an [`hcl_schema`](#data-source-hcl_schema) data source, pass it the input values, and then designate it as the -desired schema within the `env` block: - - - - -```hcl -data "hcl_schema" "app" { - path = "schema.hcl" - vars = { - // Variables are passed as input values to "schema.hcl". - tenant = "ariga" - } -} - -env "local" { - src = data.hcl_schema.app.url - url = "sqlite://test?mode=memory&_fk=1" -} -``` - - - - -```hcl title="schema.hcl" -// This variable is passed as an input value from "atlas.hcl". -variable "tenant" { - type = string -} - -schema "main" { - name = var.tenant -} -``` - - - - -## Builtin Functions - -### `file` - -The `file` function reads the content of a file and returns it as a string. The file path is relative to the project -directory or an absolute path. - -```hcl -variable "cloud_token" { - type = string - default = file("/var/run/secrets/atlas_token") -} -``` - -### `fileset` - -The `fileset` function returns the list of files that match the given pattern. The pattern is relative to the project -directory. - -```hcl -data "hcl_schema" "app" { - paths = fileset("schema/*.pg.hcl") -} -``` - -### Project Input Variables - -Project files may also declare [input variables](hcl-input.md) that can be supplied to the CLI -at runtime. For example: - -```hcl title="atlas.hcl" -variable "tenant" { - type = string -} - -data "hcl_schema" "app" { - path = "schema.hcl" - vars = { - // Variables are passed as input values to "schema.hcl". - tenant = var.tenant - } -} - -env "local" { - src = data.hcl_schema.app.url - url = "sqlite://test?mode=memory&_fk=1" -} -``` -To set the value for this variable at runtime, use the `--var` flag: - -```shell -atlas schema apply --env local --var tenant=rotemtam -``` - -It is worth mentioning that when running Atlas commands within a project using -the `--env` flag, all input values supplied at the command-line are passed only -to the project file, and not propagated automatically to children schema files. -This is done with the purpose of creating an explicit contract between the environment -and the schema file. - -## Schema Arguments and Attributes - -Project configuration files support different types of blocks. - -### Input Variables - -Project files support defining input variables that can be injected through the CLI, [read more here](hcl-input.md). - -- `type` - The type constraint of a variable. -- `default` - Define if the variable is optional by setting its default value. - -```hcl -variable "tenants" { - type = list(string) -} - -variable "url" { - type = string - default = "mysql://root:pass@localhost:3306/" -} - -variable "cloud_token" { - type = string - default = getenv("ATLAS_TOKEN") -} - -env "local" { - // Reference an input variable. - url = var.url -} -``` - -### Local Values - -The `locals` block allows defining a list of local variables that can be reused multiple times in the project. - -```hcl -locals { - tenants = ["tenant_1", "tenant_2"] - base_url = "mysql://${var.user}:${var.pass}@${var.addr}" - - // Reference local values. 
- db1_url = "${local.base_url}/db1" - db2_url = "${local.base_url}/db2" -} -``` - -### Data Sources - -Data sources enable users to retrieve information stored in an external service or database. The currently supported -data sources are: -- [`sql`](#data-source-sql) -- [`external`](#data-source-external) -- [`runtimevar`](#data-source-runtimevar) -- [`hcl_schema`](#data-source-hcl_schema) -- [`external_schema`](#data-source-external_schema) -- [`composite_schema`](#data-source-composite_schema) -- [`remote_dir`](#data-source-remote_dir) -- [`template_dir`](#data-source-template_dir) -- [`aws_rds_token`](#data-source-aws_rds_token) -- [`gcp_cloudsql_token`](#data-source-gcp_cloudsql_token) - -:::note -Data sources are evaluated only if they are referenced by top-level blocks like `locals` or `variables`, or by the -selected environment, for instance, `atlas schema apply --env dev`. -::: - -#### Data source: `sql` - -The `sql` data source allows executing SQL queries on a database and using the results in the project. - -##### Arguments {#data-source-sql-arguments} - -- `url` - The [URL](../concepts/url.mdx) of the target database. -- `query` - Query to execute. -- `args` - Optional arguments for any placeholder parameters in the query. - -##### Attributes {#data-source-sql-attributes} - -- `count` - The number of returned rows. -- `values` - The returned values. e.g. `list(string)`. -- `value` - The first value in the list, or `nil`. - -```hcl -data "sql" "tenants" { - url = var.url - query = < - - -The data source uses [Application Default Credentials](https://cloud.google.com/docs/authentication/production) by default; -if you have authenticated via [`gcloud auth application-default login`](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login), -it will use those credentials. - -```hcl title="atlas.hcl" -data "runtimevar" "db" { - url = "gcpruntimeconfig://projects//configs//variables/?decoder=string" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:pass@host:3306/${data.runtimevar.db}" -} -``` - -#### Usage example - -```shell -gcloud auth application-default login -atlas schema apply --env dev -``` - -```shell -GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" atlas schema apply --env dev -``` - - - - -The data source uses [Application Default Credentials](https://cloud.google.com/docs/authentication/production) by default; -if you have authenticated via [`gcloud auth application-default login`](https://cloud.google.com/sdk/gcloud/reference/auth/application-default/login), -it will use those credentials. - -```hcl title="atlas.hcl" -data "runtimevar" "pass" { - url = "gcpsecretmanager://projects//secrets/" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" -} -``` - -#### Usage example - -```shell -gcloud auth application-default login -atlas schema apply --env dev -``` - -```shell -GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json" atlas schema apply --env dev -``` - - - - -The data source provides two ways to work with AWS Parameter Store: -- If the `awssdk` query parameter is not set or is set to `v1`, a default AWS Session will be created with the - SharedConfigEnable option enabled; if you have authenticated with the AWS CLI, it will use those credentials. -- If the `awssdk` query parameter is set to `v2`, the data source will create an AWS Config based on the AWS SDK V2. - -Using local AWS Profiles: -* Using a local AWS Credentials profile is supported in the `v2` mode. 
The `profile` query parameter is used - to specify the profile name. To use profiles set `awssdk=v2` and `profile=` in the URL. - -```hcl title="atlas.hcl" -data "runtimevar" "db" { - url = "awsparamstore://?region=&decoder=string" -} - -data "runtimevar" "from_profile" { - url = "awsparamstore://?region=&decoder=string&awssdk=v2&profile=" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:pass@host:3306/${data.runtimevar.db}" -} -``` - -:::note -It's common case when you use [the hierarchies format](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-hierarchies.html) for you parameters in AWS Parameter Store. So the `url` should -contain the path to the hierarchy, for example `awsparamstore:///production/tenant_a/password?region=&decoder=string` - there are three slashes after the protocol. -::: - - -#### Usage example - -```shell -# Default credentials reside in ~/.aws/credentials. -atlas schema apply --env dev -``` - -```shell -AWS_ACCESS_KEY_ID="ACCESS_ID" AWS_SECRET_ACCESS_KEY="SECRET_KEY" atlas schema apply --env dev -``` - - - - -The data source provides two ways to work with AWS Secrets Manager: -- If the `awssdk` query parameter is not set or is set to `v1`, a default AWS Session will be created with the - SharedConfigEnable option enabled; if you have authenticated with the AWS CLI, it will use those credentials. -- If the `awssdk` query parameter is set to `v2`, the data source will create an AWS Config based on the AWS SDK V2. - -Using local AWS Profiles: -* Using a local AWS Credentials profile is supported in the `v2` mode. The `profile` query parameter is used - to specify the profile name. To use profiles set `awssdk=v2` and `profile=` in the URL. - -```hcl title="atlas.hcl" -data "runtimevar" "pass" { - url = "awssecretsmanager://?region=" -} - -data "runtimevar" "pass_from_profile" { - url = "awssecretsmanager://?region=&awssdk=v2&profile=" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" -} -``` - -#### Usage example - -```shell -# Default credentials reside in ~/.aws/credentials. -atlas schema apply --env dev -``` - -```shell -AWS_ACCESS_KEY_ID="ACCESS_ID" AWS_SECRET_ACCESS_KEY="SECRET_KEY" atlas schema apply --env dev -``` - - - - -```hcl title="atlas.hcl" -data "runtimevar" "pass" { - url = "http://service.com/foo.txt" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" -} -``` - - - - -```hcl title="atlas.hcl" -data "runtimevar" "pass" { - url = "file:///path/to/config.txt" -} - -env "dev" { - src = "schema.hcl" - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" -} -``` - - - - -#### Data source: `hcl_schema` - -The `hcl_schema` data source allows the loading of an Atlas HCL schema from a file or directory, with optional variables. - -##### Arguments {#data-source-hcl-schema-arguments} - -- `path` - The path to the HCL file or directory (cannot be used with `paths`). -- `paths` - List of paths to HCL files or directories (cannot be used with `path`). -- `vars` - A map of variables to pass to the HCL schema. - -##### Attributes {#data-source-hcl-schema-attributes} - -- `url` - The [URL](../concepts/url.mdx) of the loaded schema. 
- - - - - -```hcl {5-10} -variable "tenant" { - type = string -} - -data "hcl_schema" "app" { - path = "schema.hcl" - vars = { - tenant = var.tenant - } -} - - -env "local" { - src = data.hcl_schema.app.url - url = "sqlite://test?mode=memory&_fk=1" -} -``` - - - - -```hcl title="schema.hcl" -// This variable is passed as an input value from "atlas.hcl". -variable "tenant" { - type = string -} - -schema "main" { - name = var.tenant -} -``` - - - - -#### Data source: `external_schema` - -The `external_schema` data source enables the import of an SQL schema from an external program into Atlas' desired state. -With this data source, users have the flexibility to represent the desired state of the database schema in any language. - -##### Arguments {#data-source-external-schema-arguments} - -- `program` - The first element of the string is the program to run. The remaining elements are optional command line arguments. -- `working_dir` - The working directory to run the program from. Defaults to the current working directory. - -##### Attributes {#data-source-external-schema-attributes} - -- `url` - The [URL](../concepts/url.mdx) of the loaded schema. - -#### Usage example - -By running `atlas migrate diff` with the given configuration, the external program will be executed and its loaded state -will be compared against the current state of the migration directory. In case of a difference between the two states, -a new migration file will be created with the necessary SQL statements. - -```hcl title="atlas.hcl" -data "external_schema" "graph" { - program = [ - "npm", - "run", - "generate-schema" - ] -} - -env "local" { - src = data.external_schema.graph.url - dev = "docker://mysql/8/dev" - migration { - dir = "file://migrations" - } -} -``` - - -#### Data source: `composite_schema` {#data-source-composite_schema} - -The `composite_schema` data source allows the composition of multiple Atlas schemas into a unified schema graph. This -functionality is useful when projects schemas are split across various sources such as HCL, SQL, or application ORMs. -For example, each service have its own database schema, or an ORM schema is extended or relies on other database schemas. - -Referring to the `url` returned by this data source allows reading the entire project schemas as a single unit by any of -the Atlas commands, such as `migrate diff`, `schema apply`, or `schema inspect`. - -##### Arguments {#data-source-composite-schema-arguments} - -`schema` - one or more blocks containing the [URL](../concepts/url.mdx) to read the schema from. - -
-Usage Details - -##### Mapping to Database Schemas - -The name of the `schema` block represents the database schema to be created in the composed graph. For example, the -following schemas refer to the `public` and `private` schemas within a PostgreSQL database: - -```hcl -data "composite_schema" "project" { - schema "public" { - url = ... - } - schema "private" { - url = ... - } -} -``` - -##### Schema Dependencies - -The order of the `schema` blocks defines the order in which Atlas will load the schemas to compose the entire database -graph. This is useful in the case of dependencies between the schemas. For example, the following schemas refer to the -`inventory` and `auth` schemas, where the `auth` schema depends on the `inventory` schema and therefore should be loaded -after it: - -```hcl -data "composite_schema" "project" { - schema "inventory" { - url = ... - } - schema "auth" { - url = ... - } -} -``` - -##### Schema Composition - -Defining multiple `schema` blocks with the same name enables extending the same database schema from multiple sources. -For example, the following configuration shows how an ORM schema, which relies on database types that cannot be defined -within the ORM itself, can load them separately from another schema source that supports it: - -```hcl -data "composite_schema" "project" { - schema "public" { - url = "file://types.pg.hcl" - } - schema "public" { - url = "ent://ent/schema" - } -} -``` - -##### Labeled vs. Unlabeled Schema Blocks - -Note, if the `schema` block is labeled (e.g., `schema "public"`), the schema will be created if it does not exist, -and the computation for loading the state from the URL will be done within the scope of this schema. - -If the `schema` block is unlabeled (e.g., `schema { ... }`), no schema will be created, and the computation for loading -the state from the URL will be done within the scope of the database. Read more about this in [Schema vs. Database Scope](/concepts/url#scope) -doc. - -
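-
-As a minimal sketch, both forms can be mixed in a single data source (the file URLs below are placeholders):
-
-```hcl
-data "composite_schema" "project" {
-  # Labeled: the "public" schema is created if needed, and the URL is loaded in its scope.
-  schema "public" {
-    url = "file://types.public.hcl"
-  }
-  # Unlabeled: no schema is created; the URL is loaded in database scope.
-  schema {
-    url = "file://rest.sql"
-  }
-}
-```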
- -##### Attributes {#data-source-composite-schema-attributes} - -- `url` - The [URL](../concepts/url.mdx) of the composite schema. - -#### Usage example - -By running `atlas migrate diff` with the given configuration, Atlas loads the `inventory` schema from the [SQLAlchemy schema](/guides/orms/sqlalchemy), -the `graph` schema from [ent/schema](https://entgo.io), and the `auth` and `internal` schemas from HCL and SQL schemas defined in -Atlas format. Then, the composite schema, which represents these four schemas combined, will be compared against the -current state of the migration directory. In case of a difference between the two states, a new migration file will be -created with the necessary SQL statements. - -```hcl title="atlas.hcl" {1-14} -data "composite_schema" "project" { - schema "inventory" { - url = data.external_schema.sqlalchemy.url - } - schema "graph" { - url = "ent://ent/schema" - } - schema "auth" { - url = "file://path/to/schema.hcl" - } - schema "internal" { - url = "file://path/to/schema.sql" - } -} - -env "dev" { - src = data.composite_schema.project.url - dev = "docker://postgres/15/dev" - migration { - dir = "file://migrations" - } -} -``` - -#### Data source: `remote_dir` - -The `remote_dir` data source reads the state of a migration directory from [Atlas Cloud](https://atlasgo.cloud). For -instructions on how to connect a migration directory to Atlas Cloud, please refer to the [cloud documentation](../cloud/directory.mdx). - -##### Arguments {#data-source-remote_dir-arguments} - -- `name` - The slug of the migration directory, as defined in Atlas Cloud. -- `tag` (optional) - The tag of the migration directory, such as Git commit. If not specified, the latest -tag (e.g., `master` branch) will be used. - -##### Attributes {#data-source-remote_dir-attributes} - -- `url` - A [URL](../concepts/url.mdx) to the loaded migration directory. - -:::note -The `remote_dir` data source predates the `atlas://` URL scheme. The example below is equivalent to executing Atlas with -`--dir "atlas://myapp"`. -::: - -```hcl title="atlas.hcl" {6-10} -variable "database_url" { - type = string - default = getenv("DATABASE_URL") -} - -data "remote_dir" "migrations" { - // The slug of the migration directory in Atlas Cloud. - // In this example, the directory is named "myapp". - name = "myapp" -} - -env { - // Set environment name dynamically based on --env value. - name = atlas.env - url = var.database_url - migration { - dir = data.remote_dir.migrations.url - } -} -``` - -#### Usage example - -```shell -ATLAS_TOKEN="" \ - atlas migrate apply \ - --url "" \ - -c file://path/to/atlas.hcl \ - --env prod -``` - -```shell -DATABASE_URL="" ATLAS_TOKEN="" \ - atlas migrate apply \ - -c file://path/to/atlas.hcl \ - --env prod -``` - -:::info Reporting Cloud Deployments -In case the `cloud` block was activated with a valid token, Atlas logs migration runs in your cloud account -to facilitate the monitoring and troubleshooting of executed migrations. The following is a demonstration of how it -appears in action: - -
-Screenshot example - -![](https://atlasgo.io/uploads/cloud/images/full-deployment-example.png) - -
- -::: - -#### Data source: `template_dir` - -The `template_dir` data source renders a migration directory from a template directory. It does this by parsing the -entire directory as [Go templates](https://golang.org/pkg/text/template), executing top-level (template) files that -have the `.sql` file extension, and generating an in-memory migration directory from them. - -##### Arguments {#data-source-template_dir-arguments} - -- `path` - A path to the template directory. -- `vars` - A map of variables to pass to the template. - -##### Attributes {#data-source-template_dir-attributes} - -- `url` - A [URL](../concepts/url.mdx) to the generated migration directory. - - - - -```hcl title="atlas.hcl" {6-14,19} -variable "path" { - type = string - description = "A path to the template directory" -} - -data "template_dir" "migrations" { - path = var.path - vars = { - Key1 = "value1" - Key2 = "value2" - // Pass the --env value as a template variable. - Env = atlas.env - } -} - -env "dev" { - url = var.url - migration { - dir = data.template_dir.migrations.url - } -} -``` - - - - -```hcl title="atlas.hcl" {13-18,24} -variable "schema_name" { - type = string - default = "Database schema name injected to both migrations directory and HCL schema" -} - -data "hcl_schema" "app" { - path = "path/to/schema.hcl" - vars = { - schema_name = var.schema_name - } -} - -data "template_dir" "migrations" { - path = "path/to/directory" - vars = { - schema_name = var.schema_name - } -} - -env "local" { - src = data.hcl_schema.app.url - dev = "sqlite://file?mode=memory&_fk=1" - migration { - dir = data.template_dir.migrations.url - } -} -``` - - - - -#### Data source: `aws_rds_token` - -The `aws_rds_token` data source generates a short-lived token for an [AWS RDS](https://aws.amazon.com/rds/) database -using [IAM Authentication](https://aws.github.io/aws-sdk-go-v2/docs/sdk-utilities/rds/#iam-authentication). - -To use this data source: -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the AWS documentation](https://aws.github.io/aws-sdk-go-v2/docs/sdk-utilities/rds/#iam-authentication). -2. Create a database user and grant it permission to authenticate using IAM, see - [the AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html) - for instructions. -3. Create an IAM role with the `rds-db:connect` permission for the specific database and user. For instructions on how to do this, - [see the AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html). - -##### Arguments {#data-source-aws_rds_token-arguments} - -- `region` - The AWS region of the database (Optional). -- `endpoint` - The endpoint of the database (hostname:port). -- `username` - The database user to authenticate as. -- `profile` - The AWS profile to use for authentication (Optional). - -##### Attributes {#data-source-aws_rds_token-attributes} - -- The loaded variable is a `string` type with no attributes. Notice that the token contains special characters that - need to be escaped when used in a URL. To escape the token, use the `urlescape` function. 
- -##### Example - -```hcl title="atlas.hcl" -locals { - user = "iamuser" - endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:5432" -} - -data "aws_rds_token" "db" { - region = "us-east-1" - endpoint = local.endpoint - username = local.user -} - -env "rds" { - url = "postgres://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres" -} -``` - -#### Data source: `gcp_cloudsql_token` - -The `gcp_cloudsql_token` data source generates a short-lived token for an [GCP CloudSQL](https://cloud.google.com/sql) database -using [IAM Authentication](https://cloud.google.com/sql/docs/mysql/authentication#manual). - -To use this data source: -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the GCP documentation](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances). -2. Create a database user and grant it permission to authenticate using IAM, see - [the GCP documentation](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users) - for instructions. - -##### Attributes {#data-source-gcp_cloudsql_token-attributes} - -- The loaded variable is a `string` type with no attributes. Notice that the token contains special characters that - need to be escaped when used in a URL. To escape the token, use the `urlescape` function. - -##### Example - -```hcl title="atlas.hcl" -locals { - user = "iamuser" - endpoint = "34.143.100.1:3306" -} - -data "gcp_cloudsql_token" "db" {} - -env "rds" { - url = "mysql://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/?allowCleartextPasswords=1&tls=skip-verify&parseTime=true" -} -``` -:::note -The `allowCleartextPasswords` and `tls` parameters are required for the MySQL driver to connect to CloudSQL. For PostgreSQL, use `sslmode=require` to connect to the database. -::: - -### Environments - -The `env` block defines an environment block that can be selected by using the `--env` flag. - -##### Arguments {#environment-arguments} - -- `for_each` - A meta-argument that accepts a map or a set of strings and is used to compute an `env` instance for each -set or map item. See the example [below](#multi-environment-example). - -- `src` - The [URL](../concepts/url.mdx) of or reference to for the desired schema of this environment. For example: - - `file://schema.hcl` - - `file://schema.sql` - - `file://relative/path/to/file.hcl` - - Directories are also accepted: `file://schema/` - - Lists are accepted as well: - ``` - env "local" { - src = [ - "file://a.hcl", - "file://b.hcl" - ] - } - ``` - - As mentioned, references to data sources such as [`external_schema`](#data-source-external_schema) - or [`composite_schema`](#data-source-composite_schema) are a valid value for the `src` attribute. - -- `url` - The [URL](../concepts/url.mdx) of the target database. - -- `dev` - The [URL](../concepts/url.mdx) of the [Dev Database](../concepts/dev.mdx). - -- `schemas` - A list of strings defines the schemas that Atlas manages. - -- `exclude` - A list of strings defines glob patterns used to filter resources on inspection. - -- `migration` - A block defines the migration configuration of the env. - - `dir` - The [URL](../concepts/url.mdx) to the migration directory. - - `baseline` - An optional version to start the migration history from. Read more [here](../versioned/apply.mdx#existing-databases). - - `exec_order` - Set the file execution order [`LINEAR` (default), `LINEAR_SKIP`, `NON_LINEAR`]. Read more [here](../versioned/apply.mdx#execution-order). 
- - `lock_timeout` - An optional timeout to wait for a database lock to be released. Defaults to `10s`. - - `revisions_schema` - An optional name to control the schema that the revisions table resides in. - -- `format` - A block defines the formatting configuration of the env per command (previously named `log`). - - `migrate` - - `apply` - Set custom formatting for `migrate apply`. - - `diff` - Set custom formatting for `migrate diff`. - - `lint` - Set custom formatting for `migrate lint`. - - `status` - Set custom formatting for `migrate status`. - - `schema` - - `inspect` - Set custom formatting for `schema inspect`. - - `apply` - Set custom formatting for `schema apply`. - - `diff` - Set custom formatting for `schema diff`. - -- `lint` - A block defines the migration linting configuration of the env. - - `format` - Override the `--format` flag by setting a custom logging for `migrate lint` (previously named `log`). - - `latest` - A number configures the `--latest` option. - - `git.base` - A run analysis against the base Git branch. - - `git.dir` - A path to the repository working directory. - - `review` - The policy to use when deciding whether the user should be prompted to review and approve the changes. - Currently works with declarative migrations and requires the user to log in. Supported options: - - `ALWAYS` - Always prompt the user to review and approve the changes. - - `WARNING` - Prompt if any diagnostics are found. - - `ERROR` - Prompt if any severe diagnostics (errors) are found. By default this will happen on destructive changes only. - -- `diff` - A block defines the schema diffing policy. - -##### Multi Environment Example - -Atlas adopts the `for_each` meta-argument that [Terraform uses](https://www.terraform.io/language/meta-arguments/for_each) -for `env` blocks. Setting the `for_each` argument will compute an `env` block for each item in the provided value. Note -that `for_each` accepts a map or a set of strings. - - - - -```hcl title="atlas.hcl" -env "prod" { - for_each = toset(data.sql.tenants.values) - url = urlsetpath(var.url, each.value) - migration { - dir = "file://migrations" - } - format { - migrate { - apply = format( - "{{ json . | json_merge %q }}", - jsonencode({ - Tenant : each.value - }) - ) - } - } -} -``` - - - - -```hcl title="atlas.hcl" -env "prod" { - for_each = toset(data.sql.tenants.values) - url = urlsetpath(var.url, each.value) - src = "schema.hcl" - format { - schema { - apply = format( - "{{ json . | json_merge %q }}", - jsonencode({ - Tenant : each.value - }) - ) - } - } - // Inject custom variables to the schema.hcl defined below. - tenant = each.value -} -``` - -```hcl title="schema.hcl" -variable "tenant" { - type = string - description = "The schema we operate on" -} - -schema "tenant" { - name = var.tenant -} - -table "users" { - schema = schema.tenant - // ... -} -``` - - - - -## Configure Migration Linting - -Project files may declare `lint` blocks to configure how migration linting runs in a specific environment or globally. - -```hcl -lint { - destructive { - // By default, destructive changes cause migration linting to error - // on exit (code 1). Setting `error` to false disables this behavior. - error = false - } - // Custom logging can be enabled using the `format` attribute (previously named `log`). - format = < - - -```shell -# When working on a single database schema. ---dev-url "docker://mysql/8/schema" - -# When working on multiple database schemas. 
---dev-url "docker://mysql/8" -``` - - - -```shell -# When working on a single database schema. ---dev-url "docker://maria/latest/schema" - -# When working on multiple database schemas. ---dev-url "docker://maria/latest" -``` - - - - -```shell -# When working on a single database schema, use the auth-created -# "public" schema as the search path. ---dev-url "docker://postgres/15/dev?search_path=public" - -# When working on multiple database schemas. ---dev-url "docker://postgres/15/dev" -``` - - - - -```shell -# Atlas supports working with an in-memory database in SQLite. ---dev-url "sqlite://dev?mode=memory" -``` - - - - -```shell -# When working on a single database schema. ---dev-url "docker://sqlserver/2022-latest/dev?mode=schema" - -# When working on multiple database schemas. ---dev-url "docker://sqlserver/2022-latest/dev?mode=database" -``` - - - - - -```shell -# When working on a single database schema. ---dev-url "docker://clickhouse/23.11/schema" - -# When working on multiple database schemas. ---dev-url "docker://clickhouse/23.11" -``` - - - - - -```shell -# When working on a single database schema, use the auth-created -# "public" schema as the search path. ---dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public" - -# When working on multiple database schemas. ---dev-url "redshift://user:pass@redshift-cluster:5439/dev" -``` - - - - -## Schema Definition - -Once the dev-database is set, Atlas utilizes it to convert the provided raw SQL files and statements into the Atlas -_"schema graph"_, that then can be used by various layers of the engine to _diff_, _plan_, and _apply_ changes onto the -target database. It's important to note that Atlas loads the raw SQL schema by executing the statements defined in the files -one by one. As such, it is expected that files and statements are ordered according to their dependencies. For example, -if a `VIEW` named `v1` depends on `TABLE` named `t1`, `v1` must be defined after `t1`, either in the same file or in a -separate one. - -:::info Ensuring the dev-database is clean -As mentioned above, Atlas uses the dev database to compute the desired state of the database schema. Therefore, before -starting its work, Atlas ensures the dev database is clean and there are no leftovers from previous runs. Once done, -Atlas cleans up after itself and the dev database is ready for future runs. -::: - -## Schema File - -An SQL schema defined in a single file is typically named `schema.sql` and composed of multiple DDL statements separated -by a semicolon (`;`) or a [custom delimiter](../versioned/new.mdx#custom-statements-delimiter), which can be validly -executed onto a database one after the other. - -```sql title="schema.sql" --- create "users" table -CREATE TABLE `users` ( - `id` int NOT NULL, - `name` varchar(100) NULL, - PRIMARY KEY (`id`) -); - --- create "blog_posts" table -CREATE TABLE `blog_posts` ( - `id` int NOT NULL, - `title` varchar(100) NULL, - `body` text NULL, - `author_id` int NULL, - PRIMARY KEY (`id`), - CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`) -); -``` - -In order to use an SQL schema file as an Atlas state, use the following format: `file://path/to/schema.sql`. - -## Schema Directory - -An SQL schema directory includes one or more schema files, ordered lexicographically according to their dependencies. 
-For example, a table with foreign keys must be defined after the other tables it references, and a view should also -be defined after the other tables and views it depends on. - -In order to use a schema directory as an Atlas state, use the following format: `file://path/to/dir`. - -## Template Directory - -Atlas supports computing the desired schemas dynamically using [Go templates](https://pkg.go.dev/text/template) and -injected variables. To set it up for a project, create an `atlas.hcl` [config file](projects.mdx), if you don't already -have one. Then, declare a new data source of type [`template_dir`](projects.mdx#data-source-template_dir) that can be -used later as an Atlas schema. - -```hcl title="atlas.hcl" -variable "path" { - type = string - description = "A path to the template directory" -} - -data "template_dir" "schema" { - path = var.path - vars = { - key = "value" - // Pass the --env value as a template variable. - env = atlas.env - } -} - -env "dev" { - url = var.url - src = data.template_dir.schema.url -} -``` - -```sql title="example/schema.tmpl.sql" -{{- if eq .env "dev" }} - create table dev2 (c text); - {{ template "shared/users" "dev2" }} -{{- else }} - create table prod2 (c text); - {{ template "shared/users" "prod2" }} -{{- end }} -``` diff --git a/doc/md/atlas-vs-others.mdx b/doc/md/atlas-vs-others.mdx deleted file mode 100644 index 7d4540ec156..00000000000 --- a/doc/md/atlas-vs-others.mdx +++ /dev/null @@ -1,332 +0,0 @@ ---- -title: Atlas vs Classic Migration Tools -id: atlas-vs-others -slug: atlas-vs-others ---- - -## Introduction - -### What are schema migrations? - -Most server-side applications are backed by a database. This database usually has a schema that reflects the -application's data model. Over time, the application's data model evolves and the database schema must follow suit. - -Schema migrations are a common approach in our industry to automate schema changes. Following this approach, the -user creates a versioned *migration script*, which is the set of commands that should be executed on the database -to upgrade the schema from one version to the next, thereby *migrating* the database to the next version. - -### What are migration tools? - -Over the years, hundreds of tools have been created to facilitate the process of schema migrations. Migration -tools typically provide a structured way of defining migration scripts, versioning them, and a program to execute -them on a target database. - -Because most database schema changes are not idempotent (they cannot be re-run successfully once applied), -migration tools commonly maintain a "Changelog Table" on the target database to keep track of which migrations -have already been applied. - -### Which migration tool should you use? - -Choosing the right migration tool for your project can be daunting due to the sheer number of options available. In this document, we compare Atlas, a modern, database schema-as-code tool to more "classic" options that have been developed over the years. 
-
-We will take a high-level look at multiple candidates:
-
-- **Liquibase and Flyway** - established JVM-based migration tools
-- **ORM-based solutions** - that ship with the most popular ORM and backend frameworks
-- **Atlas** - a modern, database schema-as-code tool
-
-
-### The candidates
-
-| | Atlas | Liquibase | Flyway | ORMs |
-|------------------------------|-------------------------|-------------------|-------------------------------|---------------|
-| Tagline | Database schema as code | Database DevOps | Database migrations made easy | Varies |
-| Open-sourced | 2021 | 2006 | 2010 | Varies |
-| Written in | Go | Java | Java | Varies |
-| Docker Image Size | 22 MB | 198 MB | 242 MB | Varies |
-| Migration Format | SQL | XML/JSON/SQL/Java | SQL | Native Code * |
-| Commercial Support Available | ✅ | ✅ | ✅ | ☒ * |
-
-
-**Liquibase and Flyway -** are well-established projects that have been around since 2006 and 2010 respectively. Both are
-written in Java and require a JVM to execute. Both tools have an open-source distribution and are backed by commercial companies.
-
-Liquibase and Flyway alike operate by letting the user define migration scripts (Flyway in plain SQL, Liquibase in
-XML, SQL, or Java) and execute them on behalf of the user.
-
-**ORM-based Solutions** - many backend developers use application development frameworks or ORM libraries to
-interact with their underlying database. Virtually all of them provide some support for schema management, which
-is crucial for supporting developers in any realistic production environment.
-
-ORM-based migration tools vary greatly in quality and sophistication, which makes it a bit difficult to treat them
-as a single group. In relevant categories where certain tools especially stand out, we will try to mention them specifically.
-ORMs tend to provide native, programming-language-based DSLs for defining migrations (such as
-[Sequelize Migrations](https://sequelize.org/docs/v6/other-topics/migrations/)), with a few exceptions that use plain
-SQL.
-
-ORMs tend to be community-based projects without a commercial entity backing them (with the exception of
-Prisma). This means that the level of support and maintenance can vary greatly between different
-options.
-
-**Atlas** - is a database schema-as-code tool that applies modern DevOps principles to the ancient art of database
-schema management. Many of its users call it a "Terraform for Databases" because of its schema-first,
-declarative approach.
-
-Atlas is an [open-core](https://github.com/ariga/atlas) project developed by [Ariga](https://ariga.io) and is available
-under both a commercial and an open-source license.
-
-:::info Full Disclosure
-
-In case you missed it, this document is written and maintained by the team behind Atlas 😊
-
-:::
-
-## The Comparison
-
-### Summary (tl;dr)
-
-To assist you in deciding which migration tool is right for you, we will compare the different possibilities along these categories:
-
-| | Description | Atlas | Liquibase | Flyway | ORMs |
-|-------------------------|----------------------------|-------|-----------|-----------|---------|
-| [Schema-as-code](#schema-as-code) | Schema-first approach? | ✅ | Revisions | Revisions | Mix |
-| [Auto migration planning](#planning-migrations) | Are changes planned by the tool? | ✅ | Manual | Manual | Manual* |
-| [Auto code review](#quality-checks) | Are automated quality checks available? | ✅ | Some | Some | Some |
-| CLI Support | Is CLI support available?
| ✅ | ✅ | ✅ | ✅ | -| Kubernetes Operator | Can users natively deploy from k8s? | ✅ | ☒ | ☒ | ☒ | -| Terraform Provider | Does the tool integrate with IaC? | ✅ | ☒ | ☒ | ☒ | -| [Database Support](#database-support) | Database driver count | 9 | 60 | 31 | Varies | - - -\* Exceptions exist - -### Schema-as-code - -What is the source-of-truth for your database schema? - -| | Atlas | Liquibase | Flyway | ORMs | -|------------|-------|-----------|--------|------| -| ORM Models | ✅ | ☒ | ☒ | Mix | -| Migrations | ✅ | ✅ | ✅ | Mix | -| SQL Schema | ✅ | ☒ | ☒ | ☒ | -| HCL | ✅ | ☒ | ☒ | ☒ | - - -One of the most important principles that came from the DevOps movement is the idea that to achieve effective automation, -you need to be able to build everything, deterministically, from a single source of truth. - -This is especially important when it comes to databases! The database schema is a critical part of our application and -we better have a way to ensure it is compatible with the code we are deploying. - -Let's compare the approach to defining the source of truth schema between different migration tools: - -**Flyway and Liquibase** are "version-based", which means that to obtain the current source of truth schema, -one needs to replay all migrations on an empty database and inspect the result. - -**ORMs and frameworks** are more difficult to classify. On one hand, ORMs and frameworks revolve around a -code-first definition of the different entities in the application data model. - -On the other hand, migrations are defined as revisions, in a version-based approach. Aside from Prisma -and [Django](https://www.djangoproject.com/), most frameworks do not supply a mechanism to ensure that the planned -migrations are consistent with the application data model as it is defined in the ORM. - -**Atlas** is a database _schema-as-code_ tool that takes the "schema-based" approach. Developers provide Atlas with the -source-of-truth schema in various ways (SQL, HCL, and many other programming languages), and Atlas uses that to -automatically plan changes. This way, the source of truth schema is always readily available and legible to users. - -:::info Using Atlas with ORMs - -Atlas provides a set of plugins called "schema loaders" for integrating with ORMs such as -[SQLAlchemy](https://www.sqlalchemy.org/), [GORM](https://gorm.io) and [Hibernate](https://hibernate.org/orm/). - -By using schema loaders, developers can keep defining the application data model using their favorite -ORM and programming language while offloading schema management to Atlas. - -::: - -### Planning Migrations - -Who is responsible for planning schema changes? - -| | Atlas | Liquibase | Flyway | ORMs | -|-------------------------------------------------|-------|-----------|--------|--------------| -| Auto migrations from code | ✅ | ☒ | ☒ | Some | -| Declarative Migrations | ✅ | ☒ | ☒ | Dev-mode | -| Advanced DB Objects (triggers, functions, etc) | ✅ | ☒ | ☒ | Some plugins | - -The next category in our comparison is migration planning. People deliberating which migration tool to use should ask -themselves, who is going to plan the schema changes? If our database is in some version N, and we want to get to -version N+1, we need to calculate the plan of going from one version to the next. - -**Flyway and Liquibase** rely on the user to plan schema changes. If a developer wishes to evolve the database -schema to a new state, it is on them to look into their target database's documentation and find out the correct -way to do so. 
Sometimes these changes are trivial, but many caveats and unpleasant surprises exist. - -**ORMs and frameworks** most ORMs and frameworks provide a basic "auto migration" functionality that can -automatically plan changes for you. This is used primarily for local development on disposable databases. - -However, when it comes to planning production-grade changes, most frameworks leave planning to the user. - -As we mentioned above, some tools (such as Django) have stood out in this perspective, by providing -automatic migration planning capabilities for many years. Even so, these tools tend to focus on a pretty -narrow set of database capabilities often neglecting power tools such as functions, triggers, and stored procedures. - -**Atlas** includes a powerful automatic migration planning engine that supports over 95% of all database -features. Atlas migration flows come in two flavors: - -1. Fully declarative - Similar to Terraform, Atlas takes the desired state of the database (defined "as-code"), -compares it to a live database, and suggests a migration plan at runtime. - -2. Versioned - Atlas compares the desired state of the database ("as-code") with the current state of your migration -directory and produces a migration file to reconcile between the two. - -Both flows support "diffing policies" that provide Atlas with additional context and considerations about how -to plan changes (such as: "always create indexes concurrently" or "verify columns are empty before dropping"). - -Additionally, Atlas detects ambiguous change types such as resource renames (which can also be interpreted as -drop-and-add), and prompts the user to disambiguate. - -### Automated Quality Checks - -| | Atlas | Liquibase | Flyway | ORMs | -|----------------------------------------|-------|-----------|--------|--------------| -| Enforce Naming Conventions | ✅ | Paid | Paid | ☒ | -| Detect Destructive Changes | ✅ | Paid | ☒ | ☒ | -| Detect Table Locks | ✅ | ☒ | ☒ | ☒ | -| Detect Breaking Changes | ✅ | ☒ | ☒ | ☒ | -| Detect Potential Constraint Violations | ✅ | ☒ | ☒ | ☒ | -| Pre-migration checks | ✅ | ✅ | ☒ | ☒ | -| Post-migration checks | 🔜 | ☒ | ☒ | ☒ | -| Migration conflict detection | ✅ | ☒ | ☒ | ☒ | - -Schema migrations are a risky business, if you are not careful you can accidentally introduce destructive changes, break -a data contract with your server or a downstream consumer, lock a table for writes causing downtime, or cause a -deployment to fail because of a constraint violation. - -Traditionally, the responsibility for reviewing proposed migrations was placed on humans, specifically those with -database expertise. With the growing complexity of modern tech stacks, people with deep database knowledge are becoming -increasingly rare, exposing teams to potential risks. - -The modern approach to de-risking such operations is of course by automating quality checks during the CI phase of the -software development life cycle. Let's see how the different options compare in this respect: - -**Flyway and Liquibase** - provide a limited set of automated quality checks as part of their commercial offering but -not in the free/open source tier. - -**ORMs and frameworks** - generally do not provide automated quality checks for migrations. Over the years some -notable exceptions have emerged, such as the `ankane/strong_migration` Ruby Gem that provides automated migration -quality checks for Rails, and the Shadow Database functionality from "Prisma Migrate". 
- -**Atlas -** Atlas natively provides a feature called "Migration Linting" which is available both as a CLI tool and -as a CI integration (for example with GitHub Actions). Migration Linting is similar to static code analysis in that -it is based on a semantic analysis of changes and their examination through configurable policies. - -Atlas ships with over 30 automated checks to detect critical migration risks such as: - -* Destructive change detection -* Data-dependent changes that may fail due to constraint violations -* Table-locking operations that can cause database downtime -* Breaking change detection -* .. many more database-specific quirks and edge cases - -In addition, Atlas supports pre-migration checks that can be used to enforce certain predicates before a migration is -applied. For example, you can enforce that a column is empty before dropping it, or that a column is not null before -adding a constraint. - -To assist teams in preventing such negative outcomes, these checks can also be automatically generated by Atlas -during the automatic migration planning phase. These checks are respected by Atlas in the migration safety -verification phase (i.e a migration dropping a table can be automatically approved if it contains no data). - -#### Migration conflict detection - -When a project succeeds to the point that it has many developers working on it, it's common to see developers working -on different features that require different changes to the database schema. This can lead to conflicts when developers -try to merge their changes together. - -Classic migration tools don't provide a good way to detect and handle this situation. Because each migration script -is created in its own file, common conflict detection tools like git can't help you. Your source control system can't -tell you if two developers are working on the same table, or if they are adding columns with the same name. - -For this reason, it's common to see teams surprised in production when conflicting migrations are applied. -Even worse, in some cases migrations may be applied out of order or skipped entirely, leading to an inconsistent -and unknown state of the database. - -Atlas supports this by enforcing a linear history and [Migration Directory Integrity](/concepts/migration-directory-integrity) -during local development and in CI. - -### Deploying Migrations - -| | Atlas | Liquibase | Flyway | ORMs | -|---------------------|--------|-----------|----------|------| -| CLI | ✅ | ✅ | ✅ | ✅ | -| Kubernetes Operator | ✅ | ☒ | ☒ | ☒ | -| Terraform Provider | ✅ | ☒ | ☒ | ☒ | -| GitHub Action | ✅ | ✅ | ☒ | ☒ | -| GitLab CI Component | 🔜 | ☒ | ☒ | ☒ | -| CircleCI Orb | 🔜 | Community | ☒ | ☒ | -| Native Code | Go SDK | Java SDK | Java SDK | ✅ | - -After planning and verifying the safety of migrations are considered, engineers comparing different migration tools -should evaluate the provided deployment strategies. - -As you can see, virtually all tools provide a CLI tool that can be used to deploy migrations, making it possible -to integrate schema management into CI/CD pipelines. - -However, modern deployment solutions often provide a modular plugin system that allows tool creators to develop -simple integrations to streamline the deployment process. - -For instance, organizations that manage their source code on GitHub often use -[GitHub Actions](https://github.com/features/actions) to automate their workflows. 
Tools that provide a -native GitHub Actions integration make it easier for these organizations to integrate schema management into -their CI/CD pipelines. As of writing this document, Atlas and Liquibase are the only tools that provide a -comprehensive [GitHub Actions](https://github.com/ariga/atlas-action) integration. - -Similarly, organizations that deploy to [Kubernetes](https://kubernetes.io) benefit from using Operators to -extend the Kubernetes API with custom resources. By providing a Kubernetes Operator, a schema management -makes it straightforward to include schemas as native Kubernetes resources. Atlas is the only tool that -provides a Kubernetes Operator for schema management. - -Finally, organizations that manage their infrastructure using [Terraform](https://terraform.io) rely on Terraform -Providers to manage all of their resources. By providing a Terraform Provider, a schema management tool enables -organizations to include schema management in their infrastructure-as-code workflows. Atlas is the only tool -under comparison here that provides a Terraform Provider. - -### Database Support - -Naturally, when comparing migration tools, support for various database engines is a differentiating factor. - -While all tools under comparison in this document support popular open-source RDBMSs such as PostgreSQL, -MySQL and SQLite, support for other databases varies. - -The following table summarizes the support matrix for different databases between the different tools: - -| | Atlas | Liquibase | Flyway | ORMs | -|--------------|-------|-----------|--------|------| -| PostgreSQL | ✅ | ✅ | ✅ | ✅ | -| MySQL | ✅ | ✅ | ✅ | ✅ | -| MariaDB | ✅ | ✅ | ✅ | ✅ | -| SQLite | ✅ | ✅ | ✅ | ✅ | -| LibSQL | ✅ | ☒ | ☒ | Some | -| SQL Server | ✅ | ✅ | ✅ | Some | -| ClickHouse | ✅ | ✅ | ✅ | ☒ | -| Redshift | ✅ | ✅ | ✅ | ☒ | -| Oracle | 🔜 | ✅ | ✅ | Some | -| TiDB | ✅ | ☒ | ✅ | Some | -| CockroachDB | ✅ | ✅ | ✅ | Some | -| MongoDB | ☒ | ✅ | ✅ | Some | -| Apache Derby | ☒ | ✅ | ✅ | ☒ | -| Full List | [Here](https://atlasgo.io/features#database-support) | [47 more..](https://www.liquibase.com/supported-databases) | [18 more..](https://documentation.red-gate.com/flyway/flyway-cli-and-api/supported-databases) | - | - -As you can see from the table, Liquibase and Flyway, having been around for a longer time support the largest -amount of databases. Atlas, being a newer tool, supports fewer databases but is rapidly expanding its support. - -## Conclusion - -We created Atlas to provide a modern, schema-as-code approach to database schema management. Atlas by no means -is the best solution in every case and for every team but we hope this document has provided you with some -insight into how Atlas compares to other tools and where it may be a good fit for your project. - -If you are interested in learning more about Atlas, we recommend you check out the [Quickstart Guide](/getting-started) -and continue exploring the [documentation](/). - -If you have any questions about Atlas or want to engage with a community of Atlas users, you are welcome to join the discussion on the [Atlas Discord server](https://discord.gg/zZ6sWVg6NT). diff --git a/doc/md/cli-data-privacy.mdx b/doc/md/cli-data-privacy.mdx deleted file mode 100644 index f9323cb0008..00000000000 --- a/doc/md/cli-data-privacy.mdx +++ /dev/null @@ -1,91 +0,0 @@ ---- -title: Data Privacy and the CLI -slug: /cli/data-privacy -id: cli-data-privacy ---- - -## What data does the Atlas CLI collect? 
-
-This document explains what data the Atlas CLI collects and how we use it.
-
-### Anonymous Telemetry
-
-When you run the Atlas CLI, we may collect anonymous telemetry data to help us understand how users are using the
-CLI and provide better support. This includes things like the commands you run, general statistics and information
-about the runtime environment. Under no circumstances do we collect any personally identifiable information
-or sensitive data. This is further detailed in our [Privacy Policy](https://ariga.io/legal/privacy) and the
-[Atlas EULA](https://ariga.io/legal/atlas/eula).
-
-If you wish to opt out of telemetry data collection, you can do so by setting the `ATLAS_NO_ANON_TELEMETRY` environment
-variable to `true`. This will disable all anonymous telemetry collection.
-
-### Identified Users
-
-:::info Logged-in Users
-
-The following section is only relevant to users who have created an [Atlas Pro](/pricing) account and are logged
-in to the CLI.
-
-:::
-
-As a database management tool, Atlas has access to a lot of sensitive data. We take privacy and security very seriously,
-and we want to be transparent about what data we collect and how we use it.
-
-If you want the full details in formal and legal language, you can read our [Privacy Policy](https://ariga.io/legal/privacy)
-and [SaaS Agreement](https://ariga.io/legal/saas), which you sign when you create an account, but here is a summary of
-the data we collect in layman's terms.
-
-### Data we DON'T collect
-
-* **Your Data**: We don't collect any data from your databases. We don't know what data you have in your databases,
-  and we don't have access to it.
-* **Your Database Passwords**: We don't collect, transmit or store your database passwords.
-* **Your database schema or migrations**: Unless you explicitly use a command or flag that sends us your schema or
-  migrations (as described below), we don't collect, transmit or store your database schema, migrations or execution logs.
-
-### Data we DO collect
-
-* **Errors and Crashes** - We collect error and crash reports to help us improve the CLI. This includes things like the
-  stack trace of the error, the command you were running, and the version of the CLI you were using. We are very careful
-  not to collect and transmit any sensitive data in these reports.
-* **Data you choose to report** - Some Atlas commands, such as `migrate push` (which is used to upload your migration
-  directory to the Atlas Schema Registry) or flags (like running `schema inspect` with the `--web` or `-w` flag), naturally
-  involve sending us data. See below for more details on these commands and flags.
-* **Basic Telemetry** - We collect basic telemetry data to help us understand how users are using the CLI and to enforce
-  license restrictions.
-
-  This includes things like the commands you run, general information about the runtime environment (like OS version
-  or if you're running on GitHub Actions), your local hostname, the type of database you're using, and the version
-  of the CLI you're using. We are very careful not to collect and transmit any sensitive data in these reports.
-
-### We only collect what you send us
-
-In principle, aside from basic telemetry and error reporting (with no sensitive data), we only collect data that you
-explicitly choose to send us. Consider the following list of commands and flags that send us data:
-
-* `schema inspect` when used with the `--web` or `-w` flag: This flag opens a web browser to view the schema of your
-  database. 
This sends the schema of your database to Atlas Cloud for processing.
-* `migrate push`: This command uploads your migration directory to the Atlas Schema Registry. This includes your schema
-  and migrations.
-* `migrate apply` when used with a directory that is stored in the schema registry (e.g., the URL has an `atlas://` prefix)
-  AND the `--env` flag is used: This flag applies a migration from the Atlas Schema Registry to your database and uploads
-  the execution logs to Atlas Cloud, updating the status of the target database in your account. We also store the
-  hostname of the target database to help you identify which database the logs belong to.
-* `schema apply` when used with the interactive `Lint and Edit` mode: This mode allows you to edit your schema in a
-  visual editor and then apply the changes to your database. This sends your schema to Atlas Cloud for processing.
-* `migrate lint` when used with the `--web` or `-w` flag: This flag opens a web browser to view the linting results of
-  your migrations. This sends your migrations to Atlas Cloud for processing.
-
-## How do we use the data we collect?
-
-We use the data we collect to improve the CLI and provide better support to our users. This includes things like:
-
-* Fixing bugs and crashes
-* Improving the performance of the CLI
-* Understanding how users are using the CLI
-* Providing a better experience to users
-
-We don't sell your data to third parties, and use it strictly in compliance with our [Privacy Policy](https://ariga.io/legal/privacy)
-as well as applicable laws and regulations. We take data privacy and security very seriously and hold ourselves accountable
-to the highest standards. To ensure this commitment is upheld, we perform an annual SOC 2 audit, which you can read more
-[about here](https://ariga.io/announcing-soc2-compliance-for-atlas-cloud/).
\ No newline at end of file
diff --git a/doc/md/cloud/agents.mdx b/doc/md/cloud/agents.mdx
deleted file mode 100644
index 8cda8d446df..00000000000
--- a/doc/md/cloud/agents.mdx
+++ /dev/null
@@ -1,227 +0,0 @@
----
-title: Cloud Agents
-id: agents
-slug: /cloud/agents
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-### Background
-
-Some features require [Atlas Cloud](https://atlasgo.cloud/) to connect to your database. As it is uncommon for databases
-to be directly accessible from the internet, you can run Atlas Agents in your database's network to facilitate this
-communication. Agents register themselves via credentials against your Atlas Cloud account and continuously poll it for
-work.
-
-The following features require an Atlas Agent installed in your database's network:
-* Drift Detection
-* Cloud-mediated deployments (coming soon)
-* Schema monitoring and auditing (coming soon)
-
-:::info PAID FEATURE
-Drift Detection is currently only available in a [paid subscription](https://atlasgo.cloud/pricing).
-:::
-
-### Create an Agent
-
-To create an Agent, head over to the general settings and click on the "Agents" tab.
-
-:::info
-Only admins can create Agents. Ensure you have sufficient permissions before proceeding.
-:::
-
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/1.png) -
- -Pick a name for your Agent and hit **Create Agent**. - - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/2.png) -
- -### Connect the Agent - -Now that we have created an Agent in your Atlas Cloud account, we can run an Agent process. This is typically done by -running it in the same network as our target database. To authenticate against your Atlas Cloud account, a token will -be created which you will need to provide the Agent with. Store this token in a secure place, as you won't be able to -see it again. If needed, you can always create a new one later. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/3.png) -
- -To start the Agent and let it poll work from Atlas Cloud, simply follow the instructions shown. Currently, Ariga -provides two builds, a docker image and a linux amd64 binary. All you need to provide the Agent with is a token to -authenticate against Atlas Cloud. - - - - -If you want to use the Docker image, run the following: - -```shell -docker run -e ATLAS_TOKEN arigaio/atlas-agent -``` - - - - -If you want to use the linux build, run the following: - -```shell -curl -L -o atlas-agent 'https://release.ariga.io/atlas-agent/atlas-agent-amd64-latest' -chmod +x atlas-agent -atlas-agent --token arigaio/atlas-agent -``` - - - - -If the Agent has access to the internet and can reach Atlas, you should see a success message. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/4.png) -
- -Now the Agent is running and can connect to Atlas. However, in order to connect to your databases, it needs to know -how to obtain valid credentials and how to use them. - -### Database Credentials - -Since we want the Agent to connect to our database on behalf of Atlas, it needs to know how to access it. -For this we can assign an Agent multiple database connections. Either click the **Set up Database Connection** or -select the **Database Connections** tab and hit **Create Connection**. - -:::info -You can only create a connection with an actively running Agent. If there is no Agent selectable in the dropdown, -ensure the Agent binary is still running and has access to the cloud. -::: - -Fill out the form with the connection details to your database. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/5.png) -
- - - - -AWS RDS databases offer to obtain a short-lived token using an IAM role to authenticate against an RDS instance. - -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the AWS documentation](https://aws.github.io/aws-sdk-go-v2/docs/sdk-utilities/rds/#iam-authentication). - -2. Create an IAM role with the "rds-db:connect" permission for the specific database and user. For instructions on how - to do this, [see the AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html). - -All other info required to obtain a token is derived from the RDS endpoint you define in the connection modal. - - - - -AWS Secrets Manager is a secret store you can use to retrieve a secret from. This is somewhat similar to IAM -authentication, since in this case you need to have access to the secret's store, usually using IAM authentication. -Find the secret name and region for your secret. If you need more info, -[read this guide](/guides/deploying/secrets##using-iam-authentication). If your secret is stored as JSON, e.g. when you -choose to let RDS handle your database password in the Secrets Manager, you can provide the path to the actual token -using a dot-notation. For example, for a secret like `{"password":"my_passw0rd!"}` you'd provide `.password` as the path -to the token. - - - - -[GCP CloudSQL](https://cloud.google.com/sql) offers using -[IAM Authentication](https://cloud.google.com/sql/docs/mysql/authentication#manual) to generate a short-lived token to -use for authentication against a GCP CloudSQL database. - -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the GCP documentation](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances). - -2. Create a database user and grant it permission to authenticate using IAM, see - [the GCP documentation](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users) - for instructions. - -All other info required to obtain a token is derived from the CloudSQL endpoint you define in the connection modal. - - - - -GCP Secrets Manager is a secret store you can use to retrieve a secret from. Find the project ID and secret name of your -secret. If you need more info, [read this guide](/guides/deploying/secrets##using-iam-authentication). If your secret is -stored as a JSON, you can provide the path to the actual token using a dot-notation. For example, for a secret like -`{"password":"my_passw0rd!"}` you'd provide `.password` as the path to the token. - - - - -You can tell the Agent to look for the password in an environment variable. For example, if your password lives in an -environment variable like `DATABASE_PASSWORD=passw0rd` you'd provide `DATABASE_PASSWORD` to the Agent. - -:::info -We advise to use either IAM authentication or a Secrets Manager to obtain a database password. -::: - - - - -To ensure the credentials are correct, Atlas will check if the credentials are working before we can save them. Hit -the **Test Connection** button and wait. It can take a few seconds before the Agent will pick up the job and check -the connection to the database. - -If all goes well, we should see a message telling us that Atlas was able to connect to our database through the -Agent. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/6.png) -
- -### Drift Detection - -Once Atlas can connect to your database, it can start monitoring your schema and warn you if it detects a drift -between your migration directory and its deployment. In the migration directory overview, click on -**Enable Drift Detection**. You'll be asked which database connection the deployments are reachable on. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/7.png) -
- -Once enabled, Atlas will run drift detection jobs twice a day. If there is a drift, Atlas will provide you with detailed -information about the drift, including an ERD, HCL diff and SQL statements required to fix the drift. - -:::caution -Do not apply the SQL blindly to fix the drift. It is potentially destructive. -::: - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/8.png) -
- -### Notifications - -You can instruct Atlas to notify you if there is a drift. Atlas supports various channels, such as email, Slack, -Workplace or by a plain webhook. - -
-Screenshot Example - -![](https://atlasgo.io/uploads/cloud/drift-detection/9.png) -
diff --git a/doc/md/cloud/beta-drivers.mdx b/doc/md/cloud/beta-drivers.mdx deleted file mode 100644 index b9266e671a4..00000000000 --- a/doc/md/cloud/beta-drivers.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -id: beta-drivers -title: Get access to beta drivers ---- - -Atlas Cloud users can get early access to beta drivers by joining the Atlas Cloud Beta Program. To join the program, -you first need to create an account on Atlas Cloud, enable it in your account settings, and then login to the CLI -using your Atlas identity. - -### Sign up to Atlas Cloud - -The first step is to sign up to [Atlas Cloud](https://auth.atlasgo.cloud/signup). If you already have an account, you can skip this step. - -### Opt-in to the beta program - -Go to the account settings by clicking your avatar. Then, select the driver you want to get access to. For example, -Microsoft SQL Server. - -![](https://atlasgo.io/uploads/cloud/images/beta-drivers.png) - -### Login to the CLI - -Then, run `atlas login` to login or update your account permissions: - -```text -$ atlas login a8m -//highlight-next-line-info -You are now connected to "a8m" on Atlas Cloud. -``` - -### Use the beta driver - -```bash -atlas schema apply \ - --url "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \ - --to "file://schema.hcl" -``` - diff --git a/doc/md/cloud/bot.mdx b/doc/md/cloud/bot.mdx deleted file mode 100644 index dcc6fc1c31a..00000000000 --- a/doc/md/cloud/bot.mdx +++ /dev/null @@ -1,32 +0,0 @@ ---- -title: Creating Bots -id: bots -slug: /cloud/bots ---- - -Bots are "headless" users that can be used to perform certain actions against the Atlas Cloud API, such as pushing and -reading schema changes from the Registry, or reporting CI and migration statuses. However, bots **cannot be used for local -development** or to access the Atlas Cloud UI. - -:::info Token Usage For Local Development -Organizations on the Community (Free) plan cannot use bot tokens for local development, and must use their personal -tokens instead. Personal tokens are issued by running `atlas login` and are limited to the users who created them. -::: - -### Creating a Bot User - -To create a bot user, make sure you are logged in as an administrator to you Atlas Cloud account. - -Follow these steps to create a new bot user: -1. To get to the bot management screen, click on _**Settings > Bots**_ in the left navigation. - In this screen, click on the _**Create Bot**_ button. - - ![](https://atlasgo.io/uploads/bot-list.png) - -2. Give your bot a name and click _**Create**_. - - ![](https://atlasgo.io/uploads/create-bot.png) - -3. Copy the token and store it in a safe place. You will not be able to see it again. - - ![](https://atlasgo.io/uploads/copy-bot.png) diff --git a/doc/md/cloud/deployment.mdx b/doc/md/cloud/deployment.mdx deleted file mode 100644 index b08f274cc9f..00000000000 --- a/doc/md/cloud/deployment.mdx +++ /dev/null @@ -1,81 +0,0 @@ ---- -id: deployment -title: Deploying migrations from Atlas Cloud ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -### Deploying without Atlas Cloud - -A common way to deploy migrations using Atlas (or any other migration tool) is similar to this: - -* When changes are merged to the `main` branch, a CI/CD pipeline is triggered. -* The pipeline builds an artifact (usually a Docker image) that includes the migration directory content - and Atlas itself. -* The artifact is pushed to a registry. 
-* The deployment process is configured to use this newly created image to run the migrations - against the production database. - -This process is a common practice, but it requires setting up a CI/CD pipeline (including storage, permissions, -and other glue) for each service, adding another layer of complexity. - -### Why deploy from Atlas Cloud? - -Atlas Cloud streamlines deploying migrations by providing a single place to manage migrations for all your services. After connecting -your migration directory to Atlas Cloud, it is automatically synced to a central location on every commit to your main branch. -Once this setup (which takes less than one minute) is complete, you can deploy migrations from Atlas Cloud to any environment -with a single command (or using popular CD tools such as Kubernetes and Terraform). - -## Deploying migrations from Atlas Cloud - -To read the migration directory from the [Schema Registry](https://atlasgo.io/cloud/features/registry), -use the `atlas://` scheme in the migration URL as follows: - -```hcl title="atlas.hcl" -env { - // Set environment name dynamically based on --env value. - name = atlas.env - migration { - // In this example, the directory is named "myapp". - dir = "atlas://myapp" - } -} -``` - -Now you can deploy migrations from Atlas Cloud using the following command: - -```shell -export ATLAS_TOKEN="" -atlas migrate apply \ - --url "" \ - --config file://path/to/atlas.hcl \ - --env prod -``` - -## Visualizing Migration Runs - -Schema migrations are an integral part of application deployments, yet the setup might vary between different applications and teams. -Some teams may prefer using init-containers, while others run migrations from a structured CD pipeline. There are also those who opt for Helm -upgrade hooks or use our Kubernetes operator. The differences also apply to databases. Some applications work with one database, -while others manage multiple databases, often seen in [multi-tenant applications](/blog/2022/10/27/multi-tenant-support). - -However, across all these scenarios, there's a shared need for a single place to view and track the progress of executed -schema migrations. This includes triggering alerts and providing the means to troubleshoot and manage recovery if -problems arise. - -When you use the configuration above with a valid token, Atlas will log migration -runs in your cloud account. Here's a demonstration of how it looks in action: - - - - -[![](https://atlasgo.io/uploads/cloud/images/deployment-2.png)](https://gh.atlasgo.cloud/dirs/4294967383/deployments/51539607593) - - - - -[![](https://atlasgo.io/uploads/cloud/images/deployment-set-2.png)](https://gh.atlasgo.cloud/dirs/4294967347/deployments/sets/94489280524) - - - diff --git a/doc/md/cloud/directory.mdx b/doc/md/cloud/directory.mdx deleted file mode 100644 index 6f3660a4fa4..00000000000 --- a/doc/md/cloud/directory.mdx +++ /dev/null @@ -1,90 +0,0 @@ ---- -title: Connect Migration Directories -id: directories -slug: /cloud/directories ---- - -import MigratePushCommand from '../components/migrate-push-command.mdx' - -The following guide explains how to connect a migration directory to Atlas Cloud. Once connected, Atlas provides a list -of features enabling users to seamlessly audit their migration history, run continuous migration linting on their PRs, -monitor executed migrations and receive webhook notifications, among other capabilities. 
- -To get started, you need to sync your migration directory to the [Schema Registry](/cloud/features/registry) in Atlas Cloud. - -### Login to Atlas Cloud - -Firstly, you'll need to [log in to Atlas](https://auth.atlasgo.cloud/signup). If it's your first time, you'll be -prompted to create both an account and a workspace (organization): - -```bash -atlas login -``` - -### Push the migration directory - -After logging in, we can push the migration directory to Atlas Cloud to persist its state. This allows it to be read by -Atlas CLI later on in our CI/CD pipelines or by other developers on our team. - - - -:::note -For the purpose of this example, we have named our migration directory `app`, but you can choose a -different name containing only alphanumeric characters, dashes, and underscores. -::: - -
- - atlas migrate push - -

Migration Directory created with atlas migrate push

-
- -### Setup CI - -After setting up your migration directory, the next step is to add Atlas to your CI pipeline. This ensures that the -migration files are checked and reviewed before being merged into the main branch. Below are the supported CI platforms -Atlas currently supports: - -- [GitHub Actions](/cloud/setup-ci#github-action-workflow) -- [GitLab CI](/guides/ci-platforms/gitlab) -- Using other CI platforms? [Let us know!](https://github.com/ariga/atlas/issues/new) - - -### Setup CD - -Atlas Cloud streamlines deploying migrations by providing a single place to manage migrations for all your services. -After connecting your migration directory to Atlas Cloud, it is automatically synced to a central location on every -commit to your main branch. Once this setup (which takes less than one minute) is complete, you can deploy migrations -from Atlas Cloud to any environment with a single command (or using popular CD tools such as Kubernetes and Terraform). - -For more information, visit the _Introduction to schema migrations deployments_ in the [guides section](/guides/deploying/intro). - -### Slack Integration - -You can use Slack Webhooks to receive updates in your Slack channel about schema changes, CI runs and deployment statuses of your migration directory. -Make sure you have a Slack incoming webhook configured -(you can create one [here](https://my.slack.com/services/new/incoming-webhook)). - -Select your migration directory, go to "Webhooks" and click on **Add Webhook**. -
-Screenshot example - -![](https://atlasgo.io/uploads/cloud/images/webhooks.png) - -
-
-
-Select the Slack integration, choose the events you want to be notified about, and insert your webhook URL.
-Click on **Add Webhook** to confirm.
-
-Once the webhook is created, you will be redirected to the Webhooks page.
-Select a webhook to edit its configuration and to view its delivery history.
-
-Screenshot example - -![](https://atlasgo.io/uploads/cloud/images/manage-slack-webhook.png) - -
- -You can also test the webhook by clicking on `Send Ping`. diff --git a/doc/md/cloud/features/pre-migration-checks.mdx b/doc/md/cloud/features/pre-migration-checks.mdx deleted file mode 100644 index 75f544f3be3..00000000000 --- a/doc/md/cloud/features/pre-migration-checks.mdx +++ /dev/null @@ -1,102 +0,0 @@ ---- -title: Pre-migration checks -id: pre-migration-checks -slug: /cloud/features/pre-migration-checks ---- - -## Overview - -With pre-migration checks, you can provide a list of assertions to make on the target database -before starting a migration. This can greatly reduce the failure rate of migrations by catching -issues early. - -## Why use pre-migration checks? - -Atlas provides a [comprehensive set of safety checks](/lint/analyzers) that can be made against migrations during the -Continuous Integration (CI) process. - -However, some issues can only be classified as "might fail" during the CI process because they depend on the -actual state of the target database. For example, a migration might fail when you're adding a uniqueness -constraint to a column that already contains duplicate values. Alternatively, you might decide that you only -want to delete a column if it doesn't contain data. - -## How it works - -To add these checks, Atlas supports a text-based file archive to describe "migration plans". Unlike regular migration files, -which mainly contain a list of DDL statements (with optional directives), Atlas `txtar` files (currently) support two file -types: migration files and pre-execution check files. - -The code below presents a simple example of a pre-migration check. The default checks file is named `checks.sql`, -and the `migration.sql` file contains the actual DDLs to be executed on the database in case the assertions are passed. - -```sql title="20240201131900_drop_users.sql" --- atlas:txtar - -// highlight-next-line --- checks.sql -- --- The assertion below must be evaluated to true. Hence, the "users" table must not contain any rows. -SELECT NOT EXISTS(SELECT * FROM users); - -// highlight-next-line --- migration.sql -- --- The statement below will be executed only if the assertion above evaluates to true. -DROP TABLE users; -``` - -If the pre-execution checks pass, the migration will be applied and Atlas will report the results. - -```bash -atlas migrate --dir atlas://app --env prod -``` - -![Check passed](https://atlasgo.io/uploads/cloud/deploy/check-passed-v1.png) - -```applylog title="Output" -Migrating to version 20240201131900 from 20240201131800 (1 migrations in total): - -// highlight-start - -- checks before migrating version 20240201131900 - -> SELECT NOT EXISTS(SELECT * FROM users); - -- ok (624.004µs) -// highlight-end - - -- migrating version 20240201131900 - -> DROP TABLE users; - -- ok (5.412737ms) - - ------------------------- - -- 22.138088ms - -- 1 migration - -- 1 check - -- 1 sql statement -``` - -If the pre-execution checks fail, the migration will not be applied, and Atlas will exit with an error. 
- -```bash -atlas migrate --dir atlas://app --env prod -``` - -![Check failed](https://atlasgo.io/uploads/cloud/deploy/check-failed-v1.png) - -```applylog title="Output" -Migrating to version 20240201131900 from 20240201131800 (1 migrations in total): - - -- checks before migrating version 20240201131900 - -> SELECT NOT EXISTS(SELECT * FROM internal_users); - -> SELECT NOT EXISTS(SELECT * FROM external_users); - -- ok (1.322842ms) - - -- checks before migrating version 20240201131900 - -> SELECT NOT EXISTS(SELECT * FROM roles); - -> SELECT NOT EXISTS(SELECT * FROM user_roles); - 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false - - ------------------------- - -- 19.396779ms - -- 1 migration with errors - -- 2 checks ok, 2 failures -Error: 2 of 2 assertions failed: check assertion "SELECT NOT EXISTS(SELECT * FROM user_roles);" returned false -``` - - diff --git a/doc/md/cloud/features/registry.mdx b/doc/md/cloud/features/registry.mdx deleted file mode 100644 index b3ba6095284..00000000000 --- a/doc/md/cloud/features/registry.mdx +++ /dev/null @@ -1,169 +0,0 @@ ---- -title: Atlas Registry -id: registry -slug: /cloud/features/registry ---- - -Atlas Registry allows you to store, version, and maintain a single source of truth for your database schemas, -database migrations, and additional assets related to schema changes. It is like Docker Hub for Atlas schemas and -migrations. - -## Getting Started - -### Pushing a Schema - -The `atlas schema push` command allows you to push your schema definition to the Atlas Registry. If no repository exists -in the registry for the schema, a new one is created. Otherwise, a new version is generated. - -```bash {4} -atlas schema push myapp \ - --env local - -https://example.atlasgo.cloud/schemas/141733920769 -``` - -Once pushed, the schema is available to others in your organization and can be referenced in your deployment pipelines. - -
- atlas schema push -

Schema created with atlas schema push

-
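-
-Because the registry is addressable through the `atlas://` URL scheme, a pushed schema can later be used as the
-desired state in other commands. The sketch below is illustrative only and assumes the repository created above is
-named `myapp` and that `DATABASE_URL` points at the target database:
-
-```bash
-atlas schema apply \
-  --url "$DATABASE_URL" \
-  --to "atlas://myapp" \
-  --dev-url "sqlite://dev?mode=memory"
-```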
- -#### Registry Config for Schemas - -The example below demonstrates how to configure the registry for your schema within an `env` block. All commands -executed in this environment will automatically use the configured repository: - -```hcl title="atlas.hcl" {6-13} -env "local" { - # URL to the underlying database. - url = env("DATABASE_URL") - # URL to the dev-database. - dev = "sqlite://dev?mode=memory" - schema { - # Desired schema state. - src = "file://schema.sql" - # Atlas Registry config. - repo { - name = "myapp" - } - } -} -``` - -### Pushing a Migration Directory - -The `atlas migrate push` command allows you to push your migration directory to the Atlas Registry. If no repository -exists in the registry for the pushed directory, a new one is created. Otherwise, the directory state will be updated. - -```bash {4} -atlas migrate push myapp \ - --env local - -https://example.atlasgo.cloud/dirs/4294967370 -``` - -Once pushed, the migration directory is available to others in your organization and can be referenced in your deployment -pipelines. - -Note that Atlas Registry is schema-aware. In addition to storing and versioning your migrations, it provides ER diagrams, -SQL diffing, [schema docs](/cloud/features/schema-docs), and other capabilities that make working with your -schema easier. - -
- atlas migrate push -

Migration Directory created with atlas migrate push

-
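-
-Like a Docker image, a pushed directory can also carry a tag, which is useful for pinning deployments to a specific
-state. A sketch, assuming the command runs inside a Git checkout and uses the current commit hash as the tag:
-
-```bash
-atlas migrate push "myapp:$(git rev-parse HEAD)" \
-  --env local
-```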
- -#### Registry Config for Migration Directories - -The example below demonstrates how to configure the registry for your migration directory within an `env` block. -All commands executed in this environment will automatically use the configured repository: - -```hcl title="atlas.hcl" {8-13} -env "local" { - # URL to the underlying database. - url = env("DATABASE_URL") - # URL to the dev-database. - dev = "sqlite://dev?mode=memory" - # URL to the desired schema state. - src = "file://schema.sql" - migration { - # Atlas Registry config. - repo { - name = "myapp" - } - } -} -``` - -## Simplifying Deployments - -When you push a migration directory to the Schema Registry, you can give it a tag in the same way that you would tag a -Docker image. This makes any version of your schema effectively _addressable_ which dramatically simplifies deployments. - -References to a schema version can then be used in your deployment pipelines with an expression similar to: - -```bash -# Reference by version -atlas migrate apply \ - --dir "atlas://myapp?version=20231201182011" \ - --url "postgres://postgres:pass@:5432/example?search_path=public&sslmode=disable" - -# Reference by tag -atlas migrate apply \ - --dir "atlas://app?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5" \ - --url "postgres://postgres:pass@:5432/example?search_path=public&sslmode=disable" -``` - -Using the Schema Registry completely eliminates the need to create custom Docker images for migrations or grant your runtime -environments access to your source code repository. - -### Example: Kubernetes Deployment - -Let's see how we can use tagged schema versions in a Kubernetes deployment: - -```yaml -apiVersion: db.atlasgo.io/v1alpha1 -kind: AtlasMigration -metadata: - name: migration -spec: - urlFrom: - secretKeyRef: - key: url - name: postgres-credentials - cloud: - project: "atlasdemo" # Atlas Cloud project name - tokenFrom: - secretKeyRef: - name: atlas-credentials - key: token - dir: - // highlight-start - remote: - name: "myapp" # Migration directory name in your atlas cloud project - tag: "39e7e4e35fce7409bd26d25d8140061695d4ffd5" # See below on how to obtain this value for your project. - // highlight-end -``` - -### Example: Terraform - -You can also use references to the Schema Registry in your Terraform modules: - -```hcl -resource "atlas_migration" "myapp" { - url = "postgres://postgres:pass@:5432/example?search_path=public&sslmode=disable" - // highlight-start - remote_dir { - name = "myapp" - tag = "39e7e4e35fce7409bd26d25d8140061695d4ffd5" - } - // highlight-end -} -``` - -## Additional Benefits - -* [Schema Docs](schema-docs.mdx) - Get automatically generated documentation for your schemas when you - push them to the Schema Registry. \ No newline at end of file diff --git a/doc/md/cloud/features/schema-docs.mdx b/doc/md/cloud/features/schema-docs.mdx deleted file mode 100644 index 4a6514f467a..00000000000 --- a/doc/md/cloud/features/schema-docs.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -title: Schema Docs -id: schema-docs -slug: /cloud/features/schema-docs ---- - - -Schema Docs - -### Overview - -Atlas Cloud maintains always up-to-date, automatically generated documentation for your database schema. - -### Why Schema Docs? - -When working in a larger organization, having up-to-date documentation of the database schema -becomes increasingly important. 
Whether it's for onboarding new team members or for streamlining
-collaboration between different functions in the organization, having a single source of truth for the database schema
-in a format that is easy to understand and navigate is crucial.
-
-In addition, as a company matures, it is common to see some databases grow in complexity to a point where it becomes
-difficult to keep track of all the tables, columns, and relationships. After many years of development and changes,
-it is not uncommon to see databases with hundreds of tables and thousands of columns whose authors are no longer with
-the company. In such cases, having a tool that can automatically generate and maintain up-to-date documentation of the database
-can bring the clarity and understanding necessary for the organization to continue to grow and evolve.
-
-### How it works
-
-Schema Docs are regenerated whenever you push a new version of your schema to Atlas Cloud. It is therefore
-recommended to set up your CI pipeline in such a way that it pushes the schema to Atlas Cloud whenever changes are made to the
-main branch of your project.
-
-[Learn how to connect a directory to Atlas Cloud](/cloud/directories)
-
-Schema Docs are available from within the project overview page. You can access them by clicking on the "Docs" tab
-at the top of the page.
diff --git a/doc/md/cloud/features/troubleshooting.mdx b/doc/md/cloud/features/troubleshooting.mdx
deleted file mode 100644
index d545429c828..00000000000
--- a/doc/md/cloud/features/troubleshooting.mdx
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Troubleshooting Migrations
-slug: /cloud/features/troubleshooting
----
-
-Troubleshooting and triaging failures during database migrations can be especially difficult. Errors often stem from data
-and schema changes, making it hard to identify the exact problem.
-
-When an error or migration failure occurs, it is crucial to understand what went wrong and assess the current state of
-the database.
-
-## Drill Down Analysis
-
-When reporting migration runs to Atlas Cloud, the detailed logs allow you to quickly drill down and troubleshoot
-any schema migration failures.
-
-The report shows what happened in the migration, what caused the failure, and the current state of the
-database post-deployment.
-
-For example, in the image below we can see a migration that failed due to a constraint, `number_length`, which
-was violated. This caused the migration to fail, and only one of the three intended migration files was executed on
-the database.
-
-[![](https://atlasgo.io/uploads/cloud/images/gh-failed-deployment.png)](https://gh.atlasgo.cloud/dirs/4294967329/deployments/51539607581)
-
-### Database-per-Tenant Migrations
-
-In a database-per-tenant architecture, the same migration is executed on multiple databases. If a migration fails,
-the root cause of the error often involves tenant-specific data and schema changes, making it even more challenging to
-pinpoint issues.
-
-In this scenario, identifying which databases were affected and which remained unaffected is crucial to assess the impact
-and plan the next steps effectively.
-
-In the image below we can see the deployment intended to run on four different databases. The migration failed once it reached
-the third tenant, stopping the deployment entirely before reaching the last tenant's database.
- -[![](https://atlasgo.io/uploads/cloud/images/multitenant-failed-deployment.png)](https://gh.atlasgo.cloud/dirs/4294967347/deployments/sets/94489280523) - -When examining the specific tenant that failed (as shown in the image below), we can see that the failure was caused by -an attempt to create a table that already existed in the database. - -[![](https://atlasgo.io/uploads/cloud/images/tenant-failed-deployment.png)](https://gh.atlasgo.cloud/dirs/4294967347/deployments/51539607566) - -The detailed deployment reports provide clarity on migration failures, allowing for efficient resolution and minimizing -downtime. diff --git a/doc/md/cloud/getting-started.mdx b/doc/md/cloud/getting-started.mdx deleted file mode 100644 index 2f7ac6b8d54..00000000000 --- a/doc/md/cloud/getting-started.mdx +++ /dev/null @@ -1,64 +0,0 @@ ---- -id: getting-started -title: Getting Started with Atlas Cloud -sidebar_label: Getting Started ---- - -## What is Atlas Cloud? - -[Atlas Cloud](https://atlasgo.cloud) is an online platform that supplements the Atlas CLI -to allow teams to build robust, end-to-end schema management workflows. We built it to help -teams make the most out of Atlas with as little friction and effort as possible. - -## What can I do with Atlas Cloud? - -### For Individuals: Unlock the full power of Atlas CLI - -By logging in to Atlas Cloud, get access to enhanced schema management, such as views, triggers, functions and stored procedures. -You will also get access to various database drivers, private visualizations, and more. For a full list of the CLI features -that require login access, click [here](/features#feature-availability). - -### For Teams: Take database schema management to the next level -* **[Schema Documentation](/cloud/features/schema-docs)** - Enjoy always up-to-date automatically generated docs for your schema. Atlas Cloud manages a -single source of truth for the database schema in a format that is easy to understand and navigate. - -* **[Automatic Code Review](/cloud/setup-ci)** - If you're not careful, it's easy to accidentally deploy a schema migration - that breaks your application. Atlas Cloud automatically simulates and reviews your migrations during -the CI process to ensure that your migrations are safe to deploy. With the Cloud, you can view detailed reports breaking down -each change and its severity. - -* **[Stronger Safety Guarantees](/cloud/features/pre-migration-checks)** - Safeguard deployments with pre-migration checks and interactive migrations. - -* **Full Visibility** - Atlas Cloud provides a single pane of glass to view your database schemas, and the status of your migrations across - all of your environments. - -* **[Fast Troubleshooting](/cloud/deployment#visualizing-migration-runs)** - Quickly resolve failed migrations with detailed deployment logs. - -* **[Simplify your Pipelines](/cloud/deployment)** - Declutter deployment pipelines by pushing migrations to Atlas Cloud. Atlas Cloud seamlessly -integrates with modern deployment tools like Kubernetes and Terraform to make it easy to deploy your migrations to production -as part of your existing CD process. - -* **[Send Notifications](/cloud/directories#slack-integration)** - Notify the right people when schema changes are proposed and deployed via Slack and other integrations. - -## How do I get started? - -1. **Create an account** - in the [signup page](https://auth.atlasgo.cloud/signup) or from the CLI: - - ``` - atlas login - ``` -2. 
**Connect a migration directory** - To unlock CI, deployments and visibility features you need to connect a migration - directory to your account. [Follow this guide](/cloud/directories) to learn how. -3. **Deploy from your connected directory** - Once you've connected a migration directory, you can deploy migrations - from your Cloud account. [Follow this guide](/cloud/deployment) to learn how. - -## Can I see a demo? - -Yes, head over to our [live demo account](https://gh.atlasgo.cloud) to see Atlas Cloud in action. - -## Can I use Atlas Cloud for free? - -We provide a 30 day free trial for new users. After the trial period, a license is required to continue using -Atlas Cloud. - -To learn more see the [pricing page](https://atlasgo.io/pricing). diff --git a/doc/md/cloud/pricing.mdx b/doc/md/cloud/pricing.mdx deleted file mode 100644 index 6fde9efd584..00000000000 --- a/doc/md/cloud/pricing.mdx +++ /dev/null @@ -1,59 +0,0 @@ ---- -title: Pricing -id: pricing -slug: /cloud/pricing ---- - -Atlas Cloud offers two pricing plans: - -1. **Pro Plan** - for individuals and small teams that want to unlock the full potential of Atlas. The Pro plan - provides full access to advanced CLI capabilities, the Cloud UI, and other features. A free, 30-day trial is available - for new users. After the trial period, a license is required to continue using Atlas Pro. - - To learn more about Atlas Pro pricing, visit our [pricing page](https://atlasgo.io/pricing). - -2. **Enterprise Plan** - for larger organizations looking to solve schema management at scale. This plan includes a - dedicated support channel, solution engineering, and other features required for adoption by enterprises. - -For a full list of the features in each plan, click [here](https://atlasgo.cloud/pricing). - -## What are Seats, Projects and Target Databases? - -The three dimensions for billing on the Pro plan are seats, projects, and target databases. - -**Seats.** Every developer using Atlas Pro from the CLI or the Cloud UI counts as a seat. The Pro plan comes with 3 free -seats per company. Additional seats can be purchased as needed. For seat pricing see [here](/pricing). - -**Projects.** For teams using Atlas Cloud, resources are grouped into projects. A Project is -a set of database resources that are versioned and managed together, essentially a *migration directory*. -Each project gets its own page in the Cloud UI under which you will find CI Runs, Deployments, ERDs, documentation -and more. See our [live demo](https://gh.atlasgo.cloud/dirs/4294967383) for an example. - -The Pro plan comes with 1 free project. Additional projects can be purchased as needed. For project pricing see [here](/pricing). - -**Target Databases.** A project may be deployed to multiple target databases, for instance, a development, staging, and -production database. Atlas Cloud keeps track of each target environment separately, allowing you to deploy to each -environment independently. - -Each target database is essentially a unique URL to which a project is deployed. -The Pro plan comes with 2 free target databases. Additional target databases can be purchased as needed. For target -database pricing see [here](/pricing). - -## Upgrading your Pricing Plan - -### Adding Quota to Pro -To purchase extra seats, databases or projects, log in to your [Atlas Cloud](https://auth.atlasgo.cloud/login) account and click on the 'Settings' tab. - -![](https://atlasgo.io/uploads/cloud/images/pro-pricing.png) - -Under the 'Plan' tab, click the 'Upgrade' button. 
Note, to access the Plan tab you must be an admin of the organization. - -![](https://atlasgo.io/uploads/cloud/images/pro-quotas.png) - -Add the desired resources and proceed to checkout. - -That's it! - -### Pro to Enterprise -To upgrade your plan from Pro to Enterprise, send an email to . - diff --git a/doc/md/cloud/setup-ci.mdx b/doc/md/cloud/setup-ci.mdx deleted file mode 100644 index d04aaa9572a..00000000000 --- a/doc/md/cloud/setup-ci.mdx +++ /dev/null @@ -1,259 +0,0 @@ ---- -id: setup-ci -title: Setting up CI for your Database with Atlas Cloud ---- - -import MigratePushCommand from '../components/migrate-push-command.mdx' - -As your application evolves, so does your database schema. If you're not careful -with schema changes (migrations), you can end up introducing all sorts of issues -that are painful and expensive to fix. - -To mitigate the risk of deploying dangerous changes to database schemas, many teams -apply CI/CD practices to their database. This means that every change to the database -schema is automatically reviewed and tested before it is deployed to production. - -In this guide, we'll show you how to set up CI for your database using Atlas Cloud -and GitHub Actions. - -A similar workflow is supported on other CI platforms as well. To learn more, see -the [GitLab CI](/guides/ci-platforms/gitlab), for example. - -## Prerequisites - -1. A GitHub repository containing an Atlas migration directory. If you don't have one handy - and would like to experiment with this workflow, you can use the - [`ariga/atlas-template`](https://github.com/ariga/atlas-template) template repo - to create one for you. -2. An [Atlas Cloud](https://atlasgo.cloud) account. If you don't have one, you can sign up for free in - [auth.atlasgo.cloud](https://auth.atlasgo.cloud/signup), or by running `atlas login` in your terminal: - - ```shell - $ atlas login - //highlight-next-line-info - You are now connected to "a8m" on Atlas Cloud. - ``` - -3. Push your migration directory to Atlas Cloud. This is used by the CI process to detect - which migration files are new and if linear history is maintained: - - - - -## GitHub Action Workflow - -### Step 1: Create a Bot Token for Atlas Cloud - -In order to report the results of your CI runs to Atlas Cloud, you will need to -create a bot token for Atlas Cloud to use. Follow [these instructions](/cloud/bot.mdx) -to create a token and save it somewhere safe. - -### Step 2: Install the Atlas GitHub CLI Extension - -To streamline the process of configuring the GitHub Action, we've created a -GitHub CLI extension that will do most of the work for you: - -1. Make sure you have the GitHub CLI installed. - - ```shell - brew install gh - ``` - See [here](https://github.com/cli/cli#installation) for more installation options. - -2. Install the Atlas GitHub CLI extension: - - ```shell - gh extension install ariga/gh-atlas - ``` - -### Step 3: Configure the GitHub Action - -1. **Permissions** - Make sure you have the necessary permissions to configure your action: - - ```shell - gh auth refresh -s write:packages,workflow - ``` - -2. **Go to your repository root** - Make sure your current working directory is the root of your repository. - - ```shell - cd path/to/my/repo - ``` - -3. 
**Run the extension** - Run the `init-action` command to configure the GitHub Action:
-
-   ```shell
-   gh atlas init-action --token --dir-name="app"
-   ```
-   Atlas will scan your repository (locally) for directories containing Atlas migrations
-   and ask you which one you would like to use for CI. Select the desired directory and press "Enter":
-   ```text
-   Use the arrow keys to navigate: ↓ ↑ → ←
-   ? choose migration directory:
-   ▸ migrations
-   ```
-
-   Atlas will then ask you which database driver this directory contains migrations for. Select the
-   desired driver and press "Enter":
-
-   ```text
-   Use the arrow keys to navigate: ↓ ↑ → ←
-   ? choose driver:
-   ▸ mysql
-     postgres
-     mariadb
-     sqlite
-   ```
-
-   Next, the GitHub extension will save your bot token to a GitHub secret and create a
-   pull request with the necessary configuration for the GitHub Action.
-
- Screenshot Example - - ![](https://atlasgo.io/uploads/cloud/ci/gh-ext-pr-2.png) -
- - The PR contains a GitHub Actions workflow similar to this: - -
- - Code Example - - ```yaml - name: Atlas - on: - push: - branches: - - master - paths: - - .github/workflows/ci-atlas.yaml - - 'migrations/*' - pull_request: - paths: - - 'migrations/*' - # Permissions to write comments on the pull request. - permissions: - contents: read - pull-requests: write - jobs: - atlas: - services: - # Spin up a mysql:8 container to be used as the dev-database for analysis. - mysql: - image: mysql:8 - env: - MYSQL_DATABASE: dev - MYSQL_ROOT_PASSWORD: pass - ports: - - 3306:3306 - options: >- - --health-cmd "mysqladmin ping -ppass" - --health-interval 10s - --health-start-period 10s - --health-timeout 5s - --health-retries 10 - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - uses: ariga/setup-atlas@v0 - with: - cloud-token: ${{ secrets.ATLAS_CLOUD_TOKEN_L0QEJQ }} - - uses: ariga/atlas-action/migrate/lint@v1 - with: - dir: 'file://migrations' - dir-name: 'gh-ext-demo' - dev-url: 'mysql://root:pass@localhost:3306/dev' - env: - GITHUB_TOKEN: ${{ github.token }} - - uses: ariga/atlas-action/migrate/push@v1 - if: github.ref == 'refs/heads/master' - with: - dir: 'file://migrations' - dir-name: 'gh-ext-demo' - dev-url: 'mysql://root:pass@localhost:3306/dev' - ``` - -
- - After reviewing the changes, merge the pull request to enable the GitHub Action. - -### Step 4: Test the Action - -After merging the pull request, the GitHub Action will run `atlas migrate lint` on every pull request -and sync the migrations to Atlas Cloud on every push to `master`. - -1. To test this setup, create a new branch for making some changes to your database schema: - - ```shell - git checkout -b test-ci - ``` - -2. Create a new migration in interactive mode: - - ```shell - atlas migrate new --edit dummy - ``` - -3. Type some broken SQL into the migration file and save it: - - ```sql - CREATE users ( - col varchar(255) - ); - ``` - - (Notice that we're missing the `TABLE` keyword in the `CREATE TABLE` statement.) - -4. Commit the changes and push them to GitHub: - - ```shell - git add migrations - git commit -m "migrations: add dummy migration" - git push origin test-ci - ``` - - Our changes are pushed to GitHub: - ``` - remote: Resolving deltas: 100% (3/3), done. - remote: - remote: Create a pull request for 'test-ci' on GitHub by visiting: - remote: https://github.com/rotemtam/gh-init-demo/pull/new/test-ci - remote: - To github.com:rotemtam/gh-init-demo.git - * [new branch] test-ci -> test-ci - ``` - -5. Open a pull request for the `test-ci` branch and wait for the GitHub Action to run. -
- Screenshot Example - - ![](https://atlasgo.io/uploads/cloud/ci-guide/open-pr.png) -
- - - -6. Notice that after a few moments, a comment appears on the pull request with the results of the linting run: -
- Screenshot Example - - ![](https://atlasgo.io/uploads/cloud/ci/gh-action-comment.png) -
- - The linting run failed because of the syntax error we introduced in the migration file. - -7. Click on "Full Report" to see the full report on Atlas Cloud: -
- Screenshot Example - - ![](https://atlasgo.io/uploads/cloud/ci-guide/fixed-cloud-report-2.png) -
- -## Summary - -In this guide, we've shown how to configure Atlas Cloud to apply continuous integration -for our database schema changes. With this setup, whenever a developer proposes a change -to the database schema, Atlas Cloud will verify the safety of the change using various checks -and report back the results. diff --git a/doc/md/community-edition.mdx b/doc/md/community-edition.mdx deleted file mode 100644 index 030ea83dfa2..00000000000 --- a/doc/md/community-edition.mdx +++ /dev/null @@ -1,131 +0,0 @@ ---- -title: Community Edition -id: community-edition -slug: community-edition ---- -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -## Obtaining Community Binaries - -Users that want to use the Community Edition of Atlas which is built directly from the -[Atlas GitHub repository](https://github.com/ariga/atlas) and licensed under the Apache 2.0 license can do so by following the instructions below. - - - - -To download and install the latest release of the Atlas CLI, simply run the following in your terminal: - -```shell -curl -sSf https://atlasgo.sh | sh -s -- --community -``` - - - - -To pull the Atlas image and run it as a Docker container: - -```shell -docker pull arigaio/atlas:latest-community -docker run --rm arigaio/atlas:latest-community --help -``` - -If the container needs access to the host network or a local directory, use the `--net=host` flag and mount the desired -directory: - -```shell -docker run --rm --net=host \ --v $(pwd)/migrations:/migrations \ -arigaio/atlas:latest-community migrate apply ---url "mysql://root:pass@:3306/test" -``` - - - - -Download the [latest release](https://release.ariga.io/atlas/atlas-community-windows-amd64-latest.exe) and -move the atlas binary to a file location on your system PATH. - - - - -If you want to manually install the Atlas CLI, pick one of the below builds suitable for your system. - -* MacOS - * [amd64](https://release.ariga.io/atlas/atlas-community-darwin-amd64-latest) ([md5](https://release.ariga.io/atlas/atlas-community-darwin-amd64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-community-darwin-amd64-latest.sha256)) - * [arm64](https://release.ariga.io/atlas/atlas-community-darwin-arm64-latest) ([md5](https://release.ariga.io/atlas/atlas-community-darwin-arm64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-community-darwin-arm64-latest.sha256)) -* Windows - * [amd64](https://release.ariga.io/atlas/atlas-community-windows-amd64-latest.exe) ([md5](https://release.ariga.io/atlas/atlas-community-windows-amd64-latest.exe.md5) / [sha256](https://release.ariga.io/atlas/atlas-community-windows-amd64-latest.exe.sha256)) -* Linux - * [amd64](https://release.ariga.io/atlas/atlas-community-linux-amd64-latest) ([md5](https://release.ariga.io/atlas/atlas-community-linux-amd64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-community-linux-amd64-latest.sha256)) - * [arm64](https://release.ariga.io/atlas/atlas-community-linux-arm64-latest) ([md5](https://release.ariga.io/atlas/atlas-community-linux-arm64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-community-linux-arm64-latest.sha256)) - - - - -## Community vs Other Editions - -### License - -The Community Edition of Atlas is licensed under the Apache 2.0 license, while the Open Edition is based on both the -open-source codebase with some additional proprietary features. Usage of the Open Edition is free under the terms of the -[Atlas EULA](https://ariga.io/legal/atlas/eula). 
- -### Features Unavailable in the Community Edition - -The Community Edition does not support the following commands and integrations: - -#### Versioned Migrations - -- `migrate checkpoint` - Create a checkpoint for the current state of the database. -- `migrate down` - Roll back the last migration. -- `migrate rebase` - Reverting applied migration files from the database. -- `migrate edit` - Edit a migration file. -- `migrate rm` - Remove a migration file. -- `migrate push` - Push the migration files to [Atlas Registry](/cloud/features/registry). -- `migrate test` - Run tests using the Atlas Testing Framework. - -#### Declarative Migrations - -- `schema test` - Run tests using the Atlas Testing Framework. -- `schema plan` - [Pre-plan declarative migrations](/declarative/plan). -- `schema push` - Push a new version to [Atlas Registry](/cloud/features/registry). - -#### Integrations and Providers - -- [Atlas Kubernetes Operator](https://github.com/ariga/atlas-operator) -- [Atlas Terraform Provider](https://github.com/ariga/terraform-provider-atlas) -- [Atlas GitHub Action](https://github.com/ariga/atlas-action) -- [Atlas CircleCI Orb](https://github.com/ariga/atlas-orb) -- [Atlas JetBrains Plugin](https://plugins.jetbrains.com/plugin/22307-atlas-hcl) -- [Atlas VSCode Extension](https://marketplace.visualstudio.com/items?itemName=Ariga.atlas-hcl) -- [Atlas Go SDK](https://github.com/ariga/atlas-go-sdk) - -#### Database Features - -The following database features are not supported in the Community Edition: - -- Views -- Triggers -- Functions and Procedures -- Materialized Views -- Row-level security (RLS) -- Domain types -- Extensions -- Sequences -- Composite types - -#### Drivers - -The following drivers are not supported in the Community Edition: - -- SQL Server -- ClickHouse -- Redshift \ No newline at end of file diff --git a/doc/md/components/_installation_instructions.mdx b/doc/md/components/_installation_instructions.mdx deleted file mode 100644 index eae237a4934..00000000000 --- a/doc/md/components/_installation_instructions.mdx +++ /dev/null @@ -1,71 +0,0 @@ -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - - - - -To download and install the latest release of the Atlas CLI, simply run the following in your terminal: - -```shell -curl -sSf https://atlasgo.sh | sh -``` - - - - -Get the latest release with [Homebrew](https://brew.sh/): - -```shell -brew install ariga/tap/atlas -``` - - - - -To pull the Atlas image and run it as a Docker container: - -```shell -docker pull arigaio/atlas -docker run --rm arigaio/atlas --help -``` - -If the container needs access to the host network or a local directory, use the `--net=host` flag and mount the desired -directory: - -```shell -docker run --rm --net=host \ - -v $(pwd)/migrations:/migrations \ - arigaio/atlas migrate apply - --url "mysql://root:pass@:3306/test" -``` - - - - -Download the [latest release](https://release.ariga.io/atlas/atlas-windows-amd64-latest.exe) and -move the atlas binary to a file location on your system PATH. - - - - -If you want to manually install the Atlas CLI, pick one of the below builds suitable for your system. 
- -* MacOS - * [amd64](https://release.ariga.io/atlas/atlas-darwin-amd64-latest) ([md5](https://release.ariga.io/atlas/atlas-darwin-amd64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-darwin-amd64-latest.sha256)) - * [arm64](https://release.ariga.io/atlas/atlas-darwin-arm64-latest) ([md5](https://release.ariga.io/atlas/atlas-darwin-arm64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-darwin-arm64-latest.sha256)) -* Windows - * [amd64](https://release.ariga.io/atlas/atlas-windows-amd64-latest.exe) ([md5](https://release.ariga.io/atlas/atlas-windows-amd64-latest.exe.md5) / [sha256](https://release.ariga.io/atlas/atlas-windows-amd64-latest.exe.sha256)) -* Linux - * [amd64](https://release.ariga.io/atlas/atlas-linux-amd64-latest) ([md5](https://release.ariga.io/atlas/atlas-linux-amd64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-linux-amd64-latest.sha256)) - * [arm64](https://release.ariga.io/atlas/atlas-linux-arm64-latest) ([md5](https://release.ariga.io/atlas/atlas-linux-arm64-latest.md5) / [sha256](https://release.ariga.io/atlas/atlas-linux-arm64-latest.sha256)) - - - \ No newline at end of file diff --git a/doc/md/components/card.tsx b/doc/md/components/card.tsx deleted file mode 100644 index 813c47d0af8..00000000000 --- a/doc/md/components/card.tsx +++ /dev/null @@ -1,64 +0,0 @@ -import React from 'react'; -import clsx from 'clsx'; -import Link from '@docusaurus/Link'; -import Heading from '@theme/Heading'; -import Image from '@theme/IdealImage'; - -export enum CardImage { - Action = "action", - CI = "ci", - ClickHouse = "clickhouse", - Redshift = "redshift", - Cloud = "cloud", - Config = "config", - DataSource = "datasource", - Deployment = "deployment", - Docker = "docker", - ECS = "ecs", - GitHub = "github", - Helm = "helm", - Integration = "integration", - Kubernetes = "kubernetes", - Migrate = "migrate", - MySQL = "mysql", - Operator = "operator", - Postgres = "postgres", - Schema = "schema", - SQLite = "sqlite", - SQLServer = "sqlserver", - Terraform = "terraform", - Testing = "testing", - Tools = "tools", -} - -interface CardProps { - name: string; - image: CardImage; - url: string; - description: JSX.Element; -} - -export function Card({name, image, url, description}: CardProps) { - return ( -
-
-
- - {`${name}'s - -
-
- {name} -

{description}

-
-
-
- - Read guide - -
-
-
-
- ); -} diff --git a/doc/md/components/login-required.mdx b/doc/md/components/login-required.mdx deleted file mode 100644 index 0defa6106da..00000000000 --- a/doc/md/components/login-required.mdx +++ /dev/null @@ -1,3 +0,0 @@ - - Atlas Pro - \ No newline at end of file diff --git a/doc/md/components/migrate-push-command.mdx b/doc/md/components/migrate-push-command.mdx deleted file mode 100644 index cba52829944..00000000000 --- a/doc/md/components/migrate-push-command.mdx +++ /dev/null @@ -1,61 +0,0 @@ -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - - - - -```bash -atlas migrate push app \ - --dev-url "docker://postgres/15/dev?search_path=public" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "docker://mysql/8/dev" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "docker://mariadb/latest/dev" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "sqlite://dev?mode=memory" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```bash -atlas migrate push app \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev" -``` - - - \ No newline at end of file diff --git a/doc/md/concepts/dev.mdx b/doc/md/concepts/dev.mdx deleted file mode 100644 index 2c42f66096c..00000000000 --- a/doc/md/concepts/dev.mdx +++ /dev/null @@ -1,410 +0,0 @@ ---- -title: Dev Database -id: dev-database -slug: /concepts/dev-database ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -## Introduction - -Some commands require a URL pointing to a _"Dev Database"_, typically a temporary and locally running database, which -Atlas uses to process and validate users' schemas, migrations and more. This requirement is necessary as Atlas cannot -replicate every database type 'X' in every version 'Y'. - -To simplify the process of creating temporary databases for one-time use, Atlas can spin up an ephemeral local Docker -container using the special [docker driver](concepts/url.mdx), and clean it up at the end of the process. Here are a -few examples of how to use the docker driver: - - - - - -```shell -# When working on a single database schema. ---dev-url "docker://mysql/8/dev" - -# When working on multiple database schemas. ---dev-url "docker://mysql/8" -``` - -To work with a custom Docker image, use one of the following formats: - -```shell -# When working on a single database schema. -docker+mysql://org/image/dev -docker+mysql://user/image:tag/dev -# For local/official images, leave host empty or use "_". -docker+mysql:///local/dev -docker+mysql://_/mariadb:latest/dev - -# When working on multiple database schemas. -docker+mysql://local -docker+mysql://org/image -docker+mysql://user/image:tag -docker+mysql://_/mariadb:latest -``` - - - - -```shell -# When working on a single database schema. ---dev-url "docker://maria/latest/schema" - -# When working on multiple database schemas. ---dev-url "docker://maria/latest" -``` - -To work with a custom Docker image, use one of the following formats: - -```shell -# When working on a single database schema. -docker+maria://org/image/dev -docker+maria://user/image:tag/dev -# For local/official images, leave host empty or use "_". -docker+maria:///local/dev -docker+maria://_/mysql:latest/dev - -# When working on multiple database schemas. 
-docker+maria://local -docker+maria://org/image -docker+maria://user/image:tag -docker+maria://_/mariadb:latest -``` - - - - -```shell -# When working on a single database schema, use the auth-created -# "public" schema as the search path. ---dev-url "docker://postgres/15/dev?search_path=public" - -# When working on multiple database schemas. ---dev-url "docker://postgres/15/dev" -``` - -To work with a custom Docker image, use one of the following formats: - -```shell -# When working on a single database schema. -docker+postgres://org/image/dev?search_path=public -docker+postgres://ghcr.io/namespace/image:tag/dev?search_path=public -# For local/official images, leave host empty or use "_". -docker+postgres://_/local/dev?search_path=public -docker+postgres://_/official:latest/dev?search_path=public - -# When working on multiple database schemas. -docker+postgres://org/image/dev -# Default database is "postgres". -docker+postgres://org/image:tag -``` - - - - -```shell -# Atlas supports working with an in-memory database in SQLite. ---dev-url "sqlite://dev?mode=memory" -``` - - - - -```shell -# When working on a single database schema. ---dev-url "docker://sqlserver/2022-latest/dev?mode=schema" - -# When working on multiple database schemas. ---dev-url "docker://sqlserver/2022-latest/dev?mode=database" -``` - -To work with an Azure SQL Edge docker image or SQLServer version, use one of the following formats: - -```shell -# Run SQLServer 2017-latest in schema mode. -docker://sqlserver/2017-latest?mode=schema -docker://sqlserver/2019-latest?mode=schema -docker://sqlserver/2022-latest?mode=schema - -# Run Azure SQL Edge 1.0.7 in schema mode. -docker+sqlserver://mcr.microsoft.com/azure-sql-edge:1.0.7?mode=schema -``` - - - - -```shell -# When working on a single database schema ---dev-url "docker://clickhouse/23.11/dev" - -# When working on multiple database schemas. ---dev-url "docker://clickhouse/23.11" -``` - -To work with a custom Docker image, use one of the following formats: - -```shell -# When working on a single database schema. -docker+clickhouse://org/image/dev -docker+clickhouse://user/image:tag/dev -# For local/official images, leave host empty or use "_". -docker+clickhouse:///local/dev -docker+clickhouse://_/clickhouse:latest/dev - -# When working on multiple database schemas. -docker+clickhouse://local -docker+clickhouse://org/image -docker+clickhouse://user/image:tag -docker+clickhouse://_/clickhousedb:latest -``` - - - - -Redshift is a managed service and does not support running in a Docker container. To user dev-database with Redshift, -we recommend using a real Redshift cluster with separate database. - -```shell -# When working on a single database schema, use the auth-created -# "public" schema as the search path. ---dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public" - -# When working on multiple database schemas. ---dev-url "redshift://user:pass@redshift-cluster:5439/dev" -``` - - - - -## Baseline Schema - -:::info [Atlas Pro Feature](/features#pro) - -The `docker` block is available for [Atlas Pro users](/features#pro). To use this feature, run: - -``` -atlas login -``` -::: - -In some cases, there is a need to configure a baseline schema for the dev database so that every computation using the -dev-database starts from this baseline. For example, users' schemas or migrations rely on objects, extensions, or -other schema resources that are not managed by the project. 
- -To configure such a baseline, use the `docker` block with the relevant image and pass to it the script for creating the -base schema for the project: - -```hcl -docker "postgres" "dev" { - image = "postgres:15" - schema = "public" - // highlight-start - baseline = < d), DROP COLUMN `c1`, ADD COLUMN `c` int NOT NULL -✔ Apply -Error: modify "t" table: Error 1054: Unknown column 'd' in 'check constraint ck expression' -exit status 1 -``` - -Atlas cannot predict such errors without applying the schema file on the database, because some cases require parsing -and compiling SQL expressions, traverse their AST and validate them. This is already implemented by the database engine. - -Migration failures can leave the database in a broken state. Some databases, like MySQL, do not support transactional -migrations due to [implicit COMMIT](https://dev.mysql.com/doc/refman/8.0/en/implicit-commit.html). However, this can be -avoided using the `--dev-url` option. Passing this to `schema apply` will first create and validate the desired state -(the HCL schema file) on temporary named-databases (schemas), and only then continue to `apply` the changes if it passed -successfully. - -```shell -atlas schema apply \ - --url "mysql://root:pass@:3308/test" \ - --to "schema.hcl" \ - --dev-url "docker://mysql/8/test" -``` -```text -Error: create "t" table: Error 3820: Check constraint 'ck' refers to non-existing column 'd'. -exit status 1 -``` - -## Diffing - -Atlas adopts the declarative approach for maintaining the schemas desired state, but provides two ways to manage and -apply changes on the database: `schema apply` and `migrate diff`. In both commands, Atlas compares the "current", and the -"desired" states and suggests a migration plan to migrate the "current" state to the "desired" state. For example, the -"current" state can be an inspected database or a migration directory, and the "desired" state can be an inspected -database, or an HCL file. - -Schemas that are written in HCL files are defined in natural form by humans. However, databases store schemas in -normal form (also known as canonical form). Therefore, when Atlas compares two different forms it may suggest incorrect -or unnecessary schema changes, and using the `--dev-url` option can solve this (see the above section for more -in-depth example). - -Let's see it in action, by adding an index-expression to our schema. - -```hcl title="schema.hcl" {6-10} -table "t" { - schema = schema.test - column "c" { - type = varchar(32) - } - index "i" { - on { - expr = "upper(concat('c', c))" - } - } -} -``` - -```shell -atlas schema apply \ - --url "mysql://root:pass@:3308/test" \ - --to "schema.hcl" -``` -```text --- Planned Changes: --- Modify "t" table -ALTER TABLE `test`.`t` ADD INDEX `i` ((upper(concat('c', c)))) -✔ Apply -``` - -We added a new index-expression to our schema, but using `schema inspect` will show our index in its normal form. - -```shell -atlas schema inspect --url "mysql://root:pass@:3308/test" -``` -```hcl {7-11} -table "t" { - schema = schema.test - column "c" { - null = false - type = varchar(32) - } - index "i" { - on { - expr = "upper(concat(_utf8mb4'c',`c`))" - } - } -} -``` - -Therefore, running `schema apply` again will suggest unnecessary schema changes. 
-```shell -atlas schema apply \ - --url "mysql://root:pass@:3308/test" \ - --to "schema.hcl" -``` -```text --- Planned Changes: --- Modify "t" table -ALTER TABLE `test`.`t` DROP INDEX `i` --- Modify "t" table -ALTER TABLE `test`.`t` ADD INDEX `i` ((upper(concat('c', c)))) -✔ Abort -``` - -Similarly to the previous example, we will use the `--dev-url` option to solve this. - -```shell -atlas schema apply \ - --url "mysql://root:pass@:3308/test" \ - --to "schema.hcl" \ - --dev-url "docker://mysql/8/test" -``` -```text -Schema is synced, no changes to be made -``` - -Hooray! Our desired schema is synced and no changes have to be made. - -:::info - -Atlas cleans up after itself! You can use the same instance of a "Dev Database" for multiple environments, as long -as they are not accessed concurrently. - -::: diff --git a/doc/md/concepts/migration-directory-integrity.md b/doc/md/concepts/migration-directory-integrity.md deleted file mode 100644 index af64f6d69df..00000000000 --- a/doc/md/concepts/migration-directory-integrity.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -id: migration-directory-integrity -slug: /concepts/migration-directory-integrity -title: Migration Directory Integrity File ---- - -Applying changes to a database schema in the wrong order can be dangerous. For this -reason, Atlas is built on a workflow that enforces a _linear history_ using a -_migration directory integrity file_. - -### The problem - -Suppose you have multiple teams working on a system simultaneously, all of which need to make -changes to the database schema from time to time. Unless they somehow coordinate, they -may end up with a broken migration directory. - -Consider what would happen if Team A and B, working in parallel, both merge a migration -that creates a new table named `inventory`. This is illustrated in this diagram: - -![Teams committing migrations in parallel. Source: Ent blog](https://entgo.io/images/assets/migrate/no-conflict.svg) - -Something like this might happen: -1. Team A creates a feature branch, committing a migration creating the `inventory` table. -2. Team B creates a second feature branch, also creating a table by the same name. -3. Both branches pass code-review and continuous integration. -4. Team A's branch is merged to the mainline branch. -5. Team B's branch is merged. -6. When both changes are deployed, the first one to run will succeed and the second will fail. - This will happen in an arbitrary order (migrations are run in lexicographic order, usually set by the timestamp - on the developer's workstation when generating them). - -### The solution - -Recovering from a failed migration is quite a headache, so wouldn't it be great -to prevent this from ever happening? - -Code conflicts are usually detected by source-control systems (such as Git) when -the same line in the same file is modified by two different commits. In our case, -no such conflict happens because migrations are typically described in a separate -file for each migration. - -Atlas's engine offers a way to prevent concurrent creation of new migration -files and guards against accidental changes in the migration history we call -*Migration Directory Integrity File*. 
This file is simply another file in your
-migration directory called `atlas.sum` and looks something like:
-
-```text
-h1:KRFsSi68ZOarsQAJZ1mfSiMSkIOZlMq4RzyF//Pwf8A=
-20220318104614_team_A.sql h1:EGknG5Y6GQYrc4W8e/r3S61Aqx2p+NmQyVz/2m8ZNwA=
-```
-
-The `atlas.sum` file contains the checksum of each migration file (implemented as a
-reverse, one-branch Merkle hash tree), and a sum of all files. Adding new files
-results in a change to the sum file, which will raise merge conflicts in most
-version control systems.
-
-### How does this mechanism prevent situations like the one we described above?
-
-The migration directory integrity file is updated automatically
-whenever a new migration is created. Therefore, after Team A merged their changes to
-the mainline branch, Team B would not be able to do so without first dealing
-with the changes landed by Team A.
-
-Because of the merge conflict on the `atlas.sum` file, in order to land
-their changes, Team B would need to:
-1. Merge the latest changes from the master branch into their branch.
-2. If their migration file is not the latest one, they can use `atlas migrate rebase` to rebase their changes on top of all migrations.
-3. In case there are any conflicts on the database schema (and application) level, resolve them and run [`atlas migrate hash`](/cli-reference#atlas-migrate-hash).
-4. Update the PR, wait for Atlas CI to pass, and merge the changes.
diff --git a/doc/md/concepts/url.mdx b/doc/md/concepts/url.mdx
deleted file mode 100644
index d4a966d3384..00000000000
--- a/doc/md/concepts/url.mdx
+++ /dev/null
@@ -1,346 +0,0 @@
----
-title: URLs
-id: concepts-url
-slug: /concepts/url
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-Atlas uses a standard URL format to connect to databases and load schemas and migrations from various sources. The format
-below covers the supported parts of a URL, with subsequent sections providing more detailed examples.
- -``` -driver://[username[:password]@]address/[schema|database][?param1=value1&...¶mN=valueN] -``` - -To inspect a database using a URL, refer to one of the examples below: - - - - -Connecting to a local MySQL server (all schemas/databases): -```shell -mysql://localhost:3306/ -``` - -Connecting to a specific MySQL schema (database) with a username and password: -```shell -mysql://user:pass@localhost:3306/schema -``` - -Connecting using Unix Sockets: -``` -mysql+unix:///tmp/mysql.sock - -mysql+unix://user:pass@/tmp/mysql.sock - -mysql+unix://user@/tmp/mysql.sock?database=dbname -``` - - - - -Connecting to a local MariaDB server (all schemas/databases): -```shell -maria://localhost:3306/ -``` - -Connecting to a specific MariaDB schema (database) with a username and password: -```shell -maria://user:pass@localhost:3306/schema -``` - -Connecting using Unix Sockets: -``` -maria+unix:///tmp/mysql.sock - -maria+unix://user:pass@/tmp/mysql.sock - -maria+unix://user@/tmp/mysql.sock?database=dbname -``` - - - - -Connecting to a local PostgreSQL database named `database` (all schemas): -```shell -postgres://localhost:5432/database -``` - -Connecting to a specific PostgreSQL schema named `public`: -```shell -postgres://localhost:5432/database?search_path=public -``` - -Connecting to a local PostgreSQL with credentials and SSL disabled: -```shell -postgres://postgres:pass@0.0.0.0:5432/database?search_path=public&sslmode=disable -``` - - - - -Connecting to a default schema of current user: -```shell -sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=schema -``` - -Connecting to a local SQLServer database named `master` (all schemas). The user need to have `db_owner` role: -```shell -sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=database -``` - -Azure Active Directory (AAD) authentication: - -Use the `fedauth` parameter to specify the AAD authentication method. For more information, see [the document on the underlying driver](https://github.com/microsoft/go-mssqldb?tab=readme-ov-file#azure-active-directory-authentication). -```shell -azuresql://.database.windows.net?fedauth=ActiveDirectoryDefault&database=master -``` - -:::note -- The `mode` parameter is Atlas-specific and isn't used for opening the underlying connection. -- The default `mode` is `schema`. -- The `azuresql` schema is used for AAD authentication with Azure SQL Database and Azure SQL Managed Instance. -::: - - - -Connecting to a local SQLite database (file): -```shell -sqlite://file.db -``` - -Connecting to an in-memory SQLite database (ephemeral). 
Useful for `--dev-url`: -```shell -sqlite://file?mode=memory&_fk=1 -``` - -Atlas also supports WebSocket connections to remote `libsql` databases: - -```shell -libsql+ws://database-url # For local environments -libsql+wss://database-url -``` - - - - -Connecting to a local ClickHouse server (all schemas/databases): -```shell -clickhouse://localhost:9000 -``` - -Connecting to a specific ClickHouse schema (database) with a username and password: -```shell -clickhouse://user:pass@localhost:9000/schema -``` - -Connecting to a specific ClickHouse schema with SSL enabled: -```shell -clickhouse://user:pass@localhost:9000/schema?secure=true -``` - -To connect [ClickHouse Cloud](https://clickhouse.com/cloud), -we need to use native protocol port [`9440`](https://clickhouse.com/docs/en/guides/sre/network-ports) with SSL enabled: -```shell -clickhouse://user:pass@CLICKHOUSE-CLOUD-HOST:9440/schema?secure=true -``` - - - - -Connecting to a specific Redshift cluster with a schema named `public`: -```shell -redshift://user:pass@redshift-cluster:5439/database?search_path=public -``` - -Connecting to a specific Redshift cluster with a schema named `public` with SSL disabled: -```shell -redshift://user:pass@redshift-cluster:5439/database?search_path=public&sslmode=disable -``` - -If you want to connect Redshift though Data API you can use the following URL: -:::info -AWS credentials are required to connect to Redshift via Data API. In this protocol, -atlas doesn't support changing the schema on URL, the schema is based on default schema of the user. -If you want to bind the connection to a specific schema, you can use the following SQL command: - -``` -ALTER USER [username] SET search_path = [schema]; -``` -::: - -Connecting to Serverless via IAM Identity: -```shell -redshift+http://workgroup([workgroup-name])/[database] -``` - -Connecting to Serverless via Secret ARN: -```shell -redshift+http://[arn]@workgroup([workgroup-name])/[database] -``` - -Connecting to provisioned Redshift cluster via IAM Identity: -```shell -redshift+http://cluster([cluster-name])/[database] -``` - -Connecting to provisioned Redshift cluster with database username -```shell -redshift+http://[dbuser]@cluster([cluster-name])/[database] -``` - -Connecting to provided Redshift cluster via Secret ARN: -```shell -redshift+http://[arn]@cluster([cluster-name])/[database] -``` - -:::note -- The default `mode` is `schema`. -- To change the connection to realm mode, use `mode=database`. -- Use `timeout=5m` to set the timeout for the http client. Default is 5 minutes. -- Use `polling=50ms` to set the polling interval when fetching the query results. Default is 50ms. -::: - - - - -Atlas can spin up an ephemeral local docker container for you by specifying a special URL like below. This can be useful -if you need a [dev database](../concepts/dev.mdx) for schema validation or diffing. However, some images like `mysql` / -`mariadb` take quite some time to "boot", before they are ready to be used. For a smoother developing experience -consider spinning up a longer lived container by yourself. - -```shell -# PostgreSQL database scope (all schemas). -docker://postgres/15/test - -# PostgreSQL specific schema scope. -docker://postgres/15/test?search_path=public - -# MySQL server scope (all schemas). -docker://mysql/8 - -# MySQL specific schema scope. -docker://mysql/8/test - -# MySQL server scope (all schemas). -docker://maria/latest - -# MySQL specific schema scope. -docker://maria/latest/test -``` - - - - -:::info Schema vs. 
Database scope - -When the database URL is set to a specific schema (e.g., `mysql://:3306/dev`), the scope of the work done by Atlas -(inspection, diffing, planning, applying, etc.) is limited to one schema. As a result, DDL statements printed during -diffing or planning will be formatted without schema qualifiers and can be executed on any schema. e.g., `table` instead -of `schema.table` - -However, if the database URL does not specify a schema (e.g., `mysql://:3306/`), Atlas operates on the selected schemas -(defaulting to all), and the generated DDL statements include schema qualifiers. e.g., `schema.table` instead of `table`. -::: - -## Supported Schemes - -Besides the standard database URLs mentioned above, Atlas supports various schemes for loading schemas and migration states: - -#### `file` - -The `file://` scheme is used to load schema state from a local file or a directory. The supported extensions -are `.sql` and `.hcl`. For example: - -```shell -file://path/to/schema.hcl -file://path/to/schema.sql -file://path/to/schemadir - -file://path/to/migrations -file://path/to/migrations?version=20231201182011 -``` - -#### `atlas` - -The `atlas://` scheme is used to load the state of a remote schema or a migrations directory from the Atlas Cloud, the -schema registry, and migrations artifactory of Atlas. For example: - -```shell -atlas://dir-slug -atlas://dir-slug?version=20231201182011 -atlas://dir-slug?tag=39e7e4e35fce7409bd26d25d8140061695d4ffd5 -``` - -#### `env` - -The `env://` scheme is useful for referencing the state of a schema after it has been loaded by a data source. For example: - -```hcl title="atlas.hcl" -data "external_schema" "orm" { - program = [ - ... - ] -} - -env "dev" { - orm = data.external_schema.orm -} -``` -```shell -atlas schema inspect --env dev -u env://orm -``` - -#### `ent` - -The `ent://` scheme is used to load the state an [ent](https://entgo.io) schema. For example: - -```shell -ent://path/to/ent/schema -``` - -## SSL/TLS Mode - -The default SSL mode for Postgres is `required`. Please follow the -[Postgres documentation](https://www.postgresql.org/docs/current/libpq-ssl.html) -for configuring your SSL connection for your database, or set SSL mode to `disable` -with the search parameter `?sslmode=disable`. For local databases, -disabling SSL is appropriate when inspecting and applying schema changes. - -MySQL does not require TLS by default. However, you can require TLS -with the `?tls=true` search parameter. - -## Non-alphanumeric characters - -Database URLs often contain passwords and other information which may contain non-alphanumeric characters. -These characters must be escaped using standard URL encoding, in order to be parsed correctly. -As a convenience, users may use the `urlescape` function in an `atlas.hcl` [project file](/concepts/url) to escape -these characters automatically. - -Suppose your password is `h:e!:l:l:o` and it is stored as an environment variable named `DB_PASSWORD`, you -can read this value and escape it using the `urlescape` function: - -```hcl title="atlas.hcl" -locals { - db_pass = urlescape(getenv("DB_PASSWORD")) -} - -env "local" { - url = "postgres://user:${local.db_pass}@localhost:5432/database" -} -``` - -The `urlescape` function return the escaped value: `h%3Ae%21%3Al%3Al%3Ao`. 
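-
-A usage sketch (assuming the password is exported in the shell before invoking Atlas):
-
-```shell
-export DB_PASSWORD='h:e!:l:l:o'
-atlas schema inspect --env local
-```
-
-With the value above, the interpolated URL becomes
-`postgres://user:h%3Ae%21%3Al%3Al%3Ao@localhost:5432/database`.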
\ No newline at end of file diff --git a/doc/md/concepts/workflows.md b/doc/md/concepts/workflows.md deleted file mode 100644 index e7029b51470..00000000000 --- a/doc/md/concepts/workflows.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Declarative vs Versioned Workflows -id: workflows -slug: /concepts/declarative-vs-versioned ---- - -This section introduces two types of workflows that are supported by Atlas -to manage database schemas: _declarative_ and _versioned_ migrations. - -### Declarative Migrations - -The declarative approach has become increasingly popular with engineers nowadays because it embodies -a convenient separation of concerns between application and infrastructure engineers. -Application engineers describe _what_ (the desired state) they need to happen, and -infrastructure engineers build tools that plan and execute ways to get to that state (_how_). -This division of labor allows for great efficiencies as it abstracts away the complicated -inner workings of infrastructure behind a simple, easy to understand API for the application -developers and allows for specialization and development of expertise to pay off for the -infra people. - -With declarative migrations, the desired state of the database schema is given -as input to the migration engine, which plans and executes a set of actions to -change the database to its desired state. - -For example, suppose your application uses a small SQLite database to store its data. -In this database, you have a `users` table with this structure: -```hcl -schema "main" {} - -table "users" { - schema = schema.main - column "id" { - type = int - } - column "greeting" { - type = text - } -} -``` -Now, suppose that you want to add a default value of `"shalom"` to the `greeting` -column. Many developers are not aware that it isn't possible to modify a column's -default value in an existing table in SQLite. Instead, the common practice is to -create a new table, copy the existing rows into the new table and drop the old one -after. Using the declarative approach, developers can change the default value for -the `greeting` column: - -```hcl {10} -schema "main" {} - -table "users" { - schema = schema.main - column "id" { - type = int - } - column "greeting" { - type = text - default = "shalom" - } -} -``` -And have Atlas's engine devise a plan similar to this: -```sql --- Planned Changes: --- Create "new_users" table -CREATE TABLE `new_users` (`id` int NOT NULL, `greeting` text NOT NULL DEFAULT 'shalom') --- Copy rows from old table "users" to new temporary table "new_users" -INSERT INTO `new_users` (`id`, `greeting`) SELECT `id`, IFNULL(`greeting`, 'shalom') AS `greeting` FROM `users` --- Drop "users" table after copying rows -DROP TABLE `users` --- Rename temporary table "new_users" to "users" -ALTER TABLE `new_users` RENAME TO `users` -``` - -### Versioned Migrations - -As the database is one of the most critical components in any system, applying changes -to its schema is rightfully considered a dangerous operation. For this reason, many teams -prefer a more imperative approach where each change to the database schema is checked-in -to source control and reviewed during code-review. Each such change -is called a "migration", as it migrates the database schema from the previous version to -the next. 
To support this kind of requirement, many popular database schema management -tools such as [Flyway](https://atlasgo.io/atlas-vs-others), [Liquibase](https://atlasgo.io/atlas-vs-others) or -[golang-migrate](https://github.com/golang-migrate/migrate) support a workflow that -is commonly called "versioned migrations". - -In addition to the higher level of control which is provided by versioned migrations, -applications are often deployed to multiple remote environments at once. These environments, -are not controlled (or even accessible) by the development team. In such cases, declarative migrations, -which rely on a network connection to the target database and on human -approval of migrations plans in real-time, are not a feasible strategy. - -With versioned migrations (sometimes called "change-based migrations") instead of describing -the desired state ("what the database should look like"), developers describe the changes themselves -("how to reach the state"). Most of the time, this is done by creating a set of SQL files -containing the statements needed. Each of the files is assigned a unique version and a -description of the changes. Tools like the ones mentioned earlier are then able to -interpret the migration files and to apply (some of) them in the correct order to -transition to the desired database structure. - -The benefit of the versioned migrations approach is that it is explicit: engineers -know _exactly_ what queries are going to be run against the database when the time -comes to execute them. Because changes are planned ahead of time, migration authors -can control precisely how to reach the desired schema. If we consider a migration as -a plan to get from state A to state B, oftentimes multiple paths exist, each with a -very different impact on the database. To demonstrate, consider an initial state which -contains a table with two columns: -```sql -CREATE TABLE users ( - id int, - name varchar(255) -); -``` -Suppose our desired state is: -```sql -CREATE TABLE users ( - id int, - user_name varchar(255) -); -``` -There are at least two ways get from the initial to the desired state: -* Drop the `name` column and create a new `user_name` column. -* Alter the name of the `name` column to `user_name`. - -Depending on the context, either may be the desired outcome for the developer -planning the change. With versioned migrations, engineers have the ultimate confidence -of what change is going to happen which may not be known ahead of time in a _declarative_ -approach. - -### Migration Authoring - -The downside of the _versioned migration_ approach is, of course, that it puts the -burden of planning the migration on developers. This requires a certain level -of expertise that is not always available to every engineer, as we demonstrated -in our example of setting a default value in a SQLite database above. - -As part of the Atlas project we advocate for a third combined approach that we call -"Versioned Migration Authoring". Versioned Migration Authoring is an attempt to combine -the simplicity and expressiveness of the declarative approach with the control and -explicitness of versioned migrations. - -With versioned migration authoring, users still declare their desired state and use -the Atlas engine to plan a safe migration from the existing to the new state. 
-However, instead of coupling planning and execution, plans are instead written -into normal migration files which can be checked-in to source control, fine-tuned manually and -reviewed in regular code review processes. - diff --git a/doc/md/contributing.md b/doc/md/contributing.md deleted file mode 100644 index 759fa471685..00000000000 --- a/doc/md/contributing.md +++ /dev/null @@ -1,134 +0,0 @@ ---- -title: Contributing -id: contributing -slug: /contributing ---- - -### How to Contribute -Atlas is a community project, we welcome contributions of all kinds and sizes! - -Here are some ways in which you can help: -* File well-written and descriptive bug reports or feature requests in the [Issues page](https://github.com/ariga/atlas/issues). -* Tweet about your experience with Atlas on [Twitter](https://twitter.com), don't forget to mention - [@ariga_io](https://twitter.com/ariga_io) and link to [atlasgo.io](https://atlasgo.io) if you do. -* Write educational content on your personal blog or websites such as [dev.to](https://dev.to) or - [Medium](https://medium.com). If you do, don't hesitate to reach out to us via Discord (link below) - for help proof-reading your text and - using our social-media channels for distributing it to readers. -* Join our [Discord Server](https://discord.com/invite/QhsmBAWzrC) to answer questions of other users - and find out other ways in which you can contribute by talking to the community there! -* Contribute bug-fixes or new features to the [codebase](https://github.com/ariga/atlas). - -### Contributing code to Atlas - -As we are still starting out, we don't have an official code-style or guidelines on composing your -code. As general advice, read through the area of the code that you are modifying and try to keep your code -similar to what others have written in the same place. - -#### Code-generation - -Some of the code in the Atlas repository is generated. The CI process verifies that -all generated files are checked-in by running `go generate ./...` and then running -`git status --porcelain`. Therefore, before committing changes to Atlas, please run: -```shell -go generate ./... -``` - -#### Linting - -Your code will be linted using `golangci-lint` during CI. To install in locally, -[follow this guide](https://golangci-lint.run/usage/install/#local-installation). - -To run it locally: -```shell -golangci-lint run -``` - -#### Formatting -Format your code using the standard `fmt` command: -```shell -go fmt ./... -``` - -#### Unit-tests - -Your code should be covered in unit-tests, see the codebase for examples. To run tests: -```shell -go test ./... -``` - -#### Integration tests - -Some features, especially those that interact directly with a database must be verified -in an integration test. There is extensive infrastructure for integration tests under -`internal/integration/` that runs tests under a matrix of database dialect (Postres, MySQL, etc.) -and versions. To run the integration tests, first use the `docker-compose.yml` file to spin up -databases to test against: - -```shell -cd internal/integration -docker-compose up -d -``` - -Then run the tests, from with the `integration` directory: -```shell -go test ./... -``` - -### Contributing documentation - -The Atlas documentation website is generated from the project's main [GitHub repo](https://github.com/ariga/atlas). - -Follow this short guide to contribute documentation improvements and additions: - -#### Setting Up - -1. 
[Locally fork and clone](https://docs.github.com/en/github/getting-started-with-github/quickstart/fork-a-repo) the - [repository](https://github.com/ariga/atlas). -2. The documentation site uses [Docusaurus](https://docusaurus.io/). To run it you will need [Node.js installed](https://nodejs.org/en/). -3. Install the dependencies: - ```shell - cd doc/website && npm install - ``` -4. Run the website in development mode: - ```shell - cd doc/website && npm start - ``` -5. Open you browser at [http://localhost:3000](http://localhost:3000). - -#### General Guidelines - -* Documentation files are located in `doc/md`, they are [Markdown-formatted](https://en.wikipedia.org/wiki/Markdown) - with "front-matter" style annotations at the top. [Read more](https://docusaurus.io/docs/docs-introduction) about - Docusaurus's document format. -* Atlas uses [Golang CommitMessage](https://github.com/golang/go/wiki/CommitMessage) formats to keep the repository's - history nice and readable. As such, please use a commit message such as: -```text -doc/md: adding a guide on contribution of docs to atlas -``` - -#### Adding New Documents - -1. Add a new Markdown file in the `doc/md` directory, for example `doc/md/writing-docs.md`. - -2. The file should be formatted as such: - ```markdown - --- - id: writing-docs - title: Writing Docs - --- - ... - ``` - Where `id` should be a unique identifier for the document, and should be the same as the filename without the `.md` suffix, - and `title` is the title of the document as it will appear in the page itself and any navigation element on the site. -3. If you want the page to appear in the documentation website's sidebar, add a `doc` block to `website/sidebars.js`, for example: -```diff - { - type: 'doc', - id: 'writing-docs', - }, -+ { -+ type: 'doc', -+ id: 'contributing', -+ }, -``` \ No newline at end of file diff --git a/doc/md/declarative/apply.mdx b/doc/md/declarative/apply.mdx deleted file mode 100644 index e14bc9bd729..00000000000 --- a/doc/md/declarative/apply.mdx +++ /dev/null @@ -1,504 +0,0 @@ ---- -id: apply -slug: /declarative/apply -title: Declarative schema migrations ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -With Atlas, users do not need to plan database schema changes themselves. Instead -of figuring out the correct SQL statements to get their database to the desired state, -Atlas supports a kind of workflow that we call _declarative schema migration_. -With declarative schema migrations the user provides a URL (connection string) to the target database -and the desired schema, while Atlas does the planning. - -[Read more about declarative workflows](/concepts/declarative-vs-versioned) - -:::info Login to Atlas -By default, the `atlas schema apply` command manages only **schemas**, **tables**, and their associated **indexes** and -constraints such as **foreign keys** and **checks**. - -**Views**, **materialized views**, **functions**, **procedures**, **triggers**, **sequences**, **domains**, **extensions**, -and additional database features are available to [Atlas Pro users](/features#pro). To include these resources in schema migrations, -use the following command: - -```shell -atlas login -``` -::: - -## Flags - -The `schema apply` command auto-generates a migration plan and applies it to the database to bring it to the desired state. 
-The desired state can be defined using an [HCL](/atlas-schema/hcl) or [SQL schema](/atlas-schema/sql) definition, a database [URL](../concepts/url), or an -external schemas like [ORM](/atlas-schema/external). - -* `--url` (`-u` accepted as well) - the [URL](/concepts/url) of the database to be inspected. -* `--to` - a list of [URLs](/concepts/url) to the desired state: can be a database URL, an HCL or - SQL schema, or a migration directory. -* `--dev-url` - a [URL](/concepts/url) to the [_Dev-Database_](../concepts/dev.mdx). -* `--schema` (optional, may be supplied multiple times) - schemas to inspect within the target database. -* `--exclude` (optional, may be supplied multiple times) - filter out resources matching the given glob pattern. -* `--format` (optional) - [Go template](https://pkg.go.dev/text/template) to use to format the output. -* `--edit` (optional) - open the planned migration in the default editor, allowing the user to modify it. - -## Approval Policy - -The `schema apply` command requires user review and approval before executing the migration against the target database. -The approval process can occur during migration planning (locally or in CI), automatically in the applying stage using -the linting review policy, or be completely skipped, which is not recommended in production environments. Let's cover -all options: - -1. **Interactive Review** (default): The `atlas schema apply` command will print the SQL statements it is going to run - and prompt the user for approval. Users can review the migration plan and either approve or reject it. [Atlas Pro](/features#pro) - users can set the `--dev-url` flag to run analysis and simulation on the [_Dev-Database_](../concepts/dev.mdx) of the - proposed changes, and get a detailed linting report when reviewing the migration. - -2. **Skip review** (auto-approval): The `--auto-approve` flag can be used to skip the review process and automatically - apply the migration to the target database. Although this option is convenient for experimentation and development, - it is not recommended for production environments, as it may lead to unexpected changes, such as destructive operations. - -3. **Approve using Review Policy**: Atlas ships with an analysis engine that can detect the impact of proposed changes - to the target database. For example, Atlas can detect irreversible destructive changes that will result in data - loss or data-dependent changes that may fail due to data integrity constraints. - - Users can configure Atlas to automatically approve migrations that pass the analysis engine checks (for example, no - destructive changes were detected) and require manual review for migrations that fail the checks. Read more on how to - configure the [Review Policy](#review-policy) for your project. - -4. **Pre-plan (and Approve) Migrations**: The `atlas schema plan` command allows users to pre-plan, review, and approve - migrations before executing `atlas schema apply` on the database. This enables users to preview and modify SQL changes, - involve team members in the review process, and ensure that no human intervention is required during the `atlas schema apply` phase. - [Read more about `atlas schema plan`](/declarative/plan) and how to integrate it into your CI/CD pipeline. - -## Review Policy - -Users can define in which cases their schema changes require manual review and approval, if no migration has already been -planned and approved for the given [_Schema Transition_ (State1 -> State2)](/declarative/plan#overview). 
How does it work? - -1. During `atlas schema apply`, Atlas checks if there is an approved pre-planned migration for the given schema transition - (State1 -> State2). If there is one, Atlas will apply it without requiring manual review. -2. If no migration has been pre-planned, Atlas will run analysis and simulation on the [_Dev-Database_](../concepts/dev.mdx) - to detect the impact of the proposed changes. -3. Atlas then prints the linting report, and based on the review policy defined in the `atlas.hcl` file, it will - decide whether to auto-approve the migration or require manual review. - -The review policy can be set to one of the following values: - -1. `ERROR` - Atlas will require manual review and approval only if the linting report contains errors, i.e., one of the - analyzers is configured to return an error. See the [destructive-changes analyzer](/lint/analyzers#destructive-changes) - as an example. -2. `WARNING` - Atlas will require manual review and approval if the linting report contains warnings (diagnostics) or - errors. See the [list of checks](/lint/analyzers#checks) that can be detected by the analyzers. -3. `ALWAYS` (default) - Atlas will always require manual review and approval, regardless of the linting report. - - - - -```hcl title="atlas.hcl" {2} -lint { - review = ERROR // ERROR | ALWAYS - destructive { - error = false - } -} -``` - - - -```hcl title="atlas.hcl" {3} -env "prod" { - lint { - review = ERROR // ERROR | ALWAYS - destructive { - error = false - } - } -} -``` - - - -The typical workflow for applying schema changes without manual review is to use `atlas schema plan` to pre-plan the -migration and falling back to the "review policy" for schema transitions that were not pre-planned but are still safe -to apply. The `atlas schema plan` command can be integrated into the CI/CD pipeline to ensure that all schema changes -are reviewed and approved before being applied to the database. - -## Auto-approval - -Before executing the migration against the target database, Atlas will print the SQL -statements that it is going to run and prompt the user for approval. Users that wish -to automatically approve may run the `schema apply` command with the `--auto-approve` -flag. - -## Dry-runs - -In order to skip the execution of the SQL queries against the target database, -users may provide the `--dry-run` flag. When invoked with this flag, Atlas will -connect to the target database, inspect its current state, calculate the diff -between the provided desired schema and print out a series of SQL statements to -reconcile any gaps between the inspected and desired schemas. - -## Dev-database - -When storing schema definitions, many database engines perform some form of -normalization. That is, despite us providing a specific definition of some -aspect of the schema, the database will store it in another, equivalent form. -This means in certain situations it may appear to Atlas as if some diff exists -between the desired and inspected schemas, whereas in reality there is none. - -To overcome these situations, users may use the `--dev-url` flag to provide -Atlas with a connection string to a [_Dev-Database_](../concepts/dev.mdx). -This database is used to normalize the schema prior to planning migrations and -for simulating changes to ensure their applicability before execution. 
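-
-For example, a minimal sketch of a dry run that uses an ephemeral Docker container as the dev database
-(assuming a local MySQL target and Docker available; adjust the URLs to your setup):
-
-```shell
-atlas schema apply \
-  --url "mysql://root:pass@localhost:3306/example" \
-  --to "file://schema.hcl" \
-  --dev-url "docker://mysql/8/example" \
-  --dry-run
-```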
- -## Diff Policy - -Atlas allows configuring the schema diffing policy in [project configuration](../atlas-schema/projects.mdx) to fine-tune -or modify suggested changes before applying them to the database: - - - - -```hcl title="atlas.hcl" -variable "destructive" { - type = bool - default = false -} - -env "local" { - diff { - skip { - drop_schema = !var.destructive - drop_table = !var.destructive - } - } -} -``` - -The usage is as follows: - -```go -atlas schema apply --env "local" --var "destructive=true" -``` - - - - -```hcl title="atlas.hcl" -env "local" { - diff { - // By default, indexes are not added or dropped concurrently. - concurrent_index { - add = true - drop = true - } - } -} -``` - - - - -## Examples - -### HCL schema - -The following example demonstrates how to use [Atlas DDL](../atlas-schema/hcl) (HCL) as the desired state and -update the database schema to match it: - - - - -```shell -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.hcl" \ - --dev-url "docker://mysql/8/example" -``` - - - - -```shell -atlas schema apply \ - --url "maria://root:pass@:3306/example" \ - --to "file://schema.hcl" \ - --dev-url "docker://maria/latest/example" -``` - - - - -```shell -atlas schema apply \ - --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \ - --to "file://schema.hcl" \ - --dev-url "docker://postgres/15" -``` - - - - -```shell -atlas schema apply \ - --url "sqlite://file.db" \ - --to "file://schema.hcl" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema apply \ - --url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \ - --to "file://schema.hcl" \ - --dev-url "docker://sqlserver/2022-latest/dev" -``` - - - - -```shell -atlas schema apply \ - --url "clickhouse://localhost:9000/default" \ - --to "file://schema.hcl" \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -```shell -atlas schema apply \ - --url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \ - --to "file://schema.hcl" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -### SQL schema - -The following example demonstrates how to use an SQL schema file as the desired state and update the database schema to -match it: - - - - -```shell -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.sql" \ - --dev-url "docker://mysql/8/example" -``` - - - - -```shell -atlas schema apply \ - --url "maria://root:pass@:3306/example" \ - --to "file://schema.sql" \ - --dev-url "docker://maria/latest/example" -``` - - - - -```shell -atlas schema apply \ - --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \ - --to "file://schema.sql" \ - --dev-url "docker://postgres/15" -``` - - - - -```shell -atlas schema apply \ - --url "sqlite://file.db" \ - --to "file://schema.sql" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema apply \ - --url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \ - --to "file://schema.sql" \ - --dev-url "docker://sqlserver/2022-latest/dev" -``` - - - - -```shell -atlas schema apply \ - --url "clickhouse://localhost:9000/default" \ - --to "file://schema.sql" \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -```shell -atlas schema apply \ - --url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \ - --to "file://schema.sql" \ - 
--dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -:::info The role of the [Dev Database](/concepts/dev-database) -Atlas loads the **desired state** by executing the SQL files onto the provided [dev database](/concepts/dev-database), -compares it against the database **current state** by inspecting its schema and writes a migration plan for moving -from the current state to the desired state. -::: - -### Migration directory - -The following example demonstrates how to use the migration directory as the desired state and update the database -schema to match it. The URL for the migration directory can contain two optional query parameters: -* `format` - migration directory format: atlas (default), golang-migrate, goose, dbmate, flyway, liquibase. -* `version` - until which version of migration files to read. By default, all migrations are read. - - - - -```shell -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://migrations" \ - --dev-url "docker://mysql/8/example" -``` - - - - -```shell -atlas schema apply \ - --url "maria://root:pass@:3306/example" \ - --to "file://migrations" \ - --dev-url "docker://maria/latest/example" -``` - - - - -```shell -atlas schema apply \ - --url "postgres://postgres:pass@localhost:5432/database?search_path=public&sslmode=disable" \ - --to "file://migrations" \ - --dev-url "docker://postgres/15" -``` - - - - -```shell -atlas schema apply \ - --url "sqlite://file.db" \ - --to "file://migrations" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema apply \ - --url "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \ - --to "file://migrations" \ - --dev-url "docker://sqlserver/2022-latest/dev" -``` - - - - -```shell -atlas schema apply \ - --url "clickhouse://localhost:9000/default" \ - --to "file://migrations" \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -```shell -atlas schema apply \ - --url "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \ - --to "file://migrations" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -### Multi-Tenant environments - -The Atlas configuration language provides built-in support for executing declarative workflows in multi-tenant -environments. Using the `for_each` meta-argument, users can define a single `env` block that is expanded to N instances, -one for each tenant: - - - - -```hcl -env "prod" { - for_each = toset(var.tenants) - url = urlsetpath(var.url, each.value) - src = "schema.hcl" - format { - schema { - apply = format( - "{{ json . | json_merge %q }}", - jsonencode({ - Tenant : each.value - }) - ) - } - } - // Inject custom variables to the schema.hcl defined below. - tenant = each.value -} -``` - - - - -```hcl -variable "tenant" { - type = string - description = "The schema we operate on" -} - -schema "tenant" { - name = var.tenant -} - -table "users" { - schema = schema.tenant - // ... -} -``` - - - - -[Read more](../atlas-schema/projects.mdx#multi-environment-example) about how to define declarative workflows using -project files in multi-tenant environments. 
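-
-For completeness, a minimal sketch of the `tenants` and `url` variables referenced by the `for_each` example
-above (the names and defaults here are illustrative; supply real values via the project file or `--var`):
-
-```hcl title="atlas.hcl"
-variable "url" {
-  type        = string
-  description = "Base URL of the target database"
-}
-
-variable "tenants" {
-  type        = list(string)
-  description = "Schemas (tenants) to operate on"
-  default     = ["tenant_a", "tenant_b"]
-}
-```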
- -## Reference - -[CLI Command Reference](/cli-reference#atlas-schema-apply) diff --git a/doc/md/declarative/diff.mdx b/doc/md/declarative/diff.mdx deleted file mode 100644 index 0dafe9e6eed..00000000000 --- a/doc/md/declarative/diff.mdx +++ /dev/null @@ -1,837 +0,0 @@ ---- -id: diff -slug: /declarative/diff -title: Comparing Schemas ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -It is sometimes useful to be able to calculate the diff between two -schemas. For instance, as you are developing you may want to calculate -how to move from an existing database to some other state that you are -interested in. Alternatively, you may be diagnosing some issue and want -to verify there is no difference between a local copy of a schema and a -remote one. - -To accommodate these types of use-cases, Atlas offers the `schema diff` that accepts two schema states: `--from` and -`--to`, calculates the differences between them, and generates a plan of SQL statements that can be used -to migrate the "from" schema to the state defined by the "to" schema. A state can be specified using a database URL, -an HCL or SQL schema, or a migration directory. - -:::info Login to Atlas -By default, running `atlas schema diff` diffs only **schemas**, **tables**, and their associated **indexes** and -constraints such as **foreign keys** and **checks**. - -**Views**, **materialized views**, **functions**, **procedures**, **triggers**, **sequences**, **domains**, **extensions**, -and additional features are available to [Atlas Pro users](/features#pro). To include these resources in the schema diffing, use the -following command: - -```shell -atlas login -``` -::: - -## Flags -* `--from` - a list of [URLs](/concepts/url) to the current state: can be a database URL, an HCL or -SQL schema, or a migration directory. -* `--to` - a list of [URLs](/concepts/url) to the desired state: can be a database URL, an HCL or -SQL schema, or a migration directory. -* `--dev-url` - a [URL](/concepts/url) to the [_Dev-Database_](../concepts/dev.mdx). -* `--schema` (optional, may be supplied multiple times) - schemas to inspect within the target database. -* `--exclude` (optional, may be supplied multiple times) - filter out resources matching the given glob pattern. -* `--format` (optional) - [Go template](https://pkg.go.dev/text/template) to use to format the output. -* `--web` (`-w` accepted as well)- visualize the schema diff as an ERD on [Atlas Cloud](https://auth.atlasgo.cloud/signup). See an -example [here](https://gh.atlasgo.cloud/explore/72b578d2...00799c41). - - -## Diff Policy - -Atlas allows configuring the schema diffing policy in [project configuration](../atlas-schema/projects.mdx) to fine-tune -or modify suggested changes before they are printed: - - - - -```hcl title="atlas.hcl" -variable "destructive" { - type = bool - default = false -} - -env "local" { - diff { - skip { - drop_schema = !var.destructive - drop_table = !var.destructive - } - } -} -``` - -The usage is as follows: - -```go -atlas schema diff --env "local" --var "destructive=true" -``` - - - - -```hcl title="atlas.hcl" -env "local" { - diff { - // By default, indexes are not created or dropped concurrently. 
- concurrent_index { - create = true - drop = true - } - } -} -``` - - - - -## Examples - -### Compare databases - - - - -```shell -atlas schema diff \ - --from "mysql://user:pass@localhost:3306" \ - --to "mysql://user:pass@remote:3306" -``` - - - - -```shell -atlas schema diff \ - --from "maria://user:pass@localhost:3306" \ - --to "maria://user:pass@remote:3306" -``` - - - - -```shell -atlas schema diff \ - --from "postgres://postgres:pass@localhost:5432/database?sslmode=disable" \ - --to "postgres://postgres:pass@remote:5432/database" -``` - - - - -```shell -atlas schema diff \ - --from "sqlite://file1.db" \ - --to "sqlite://file2.db" -``` - - - - -```shell -atlas schema diff \ - --from "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=database" \ - --to "sqlserver://sa:P@ssw0rd0995@remote:1433?database=master&mode=database" -``` - - - - -```shell -atlas schema diff \ - --from "docker://clickhouse/23.11" \ - --to "clickhouse://user:pass@remote:9000" -``` - - - - -```shell -atlas schema diff \ - --from "redshift://user:pass@redshift-cluster:5439/database?search_path=public&sslmode=disable" \ - --to "redshift://user:pass@redshift-cluster-2:5439/database?search_path=public&sslmode=disable" -``` - - - - -### Compare database schemas - - - - -Compare two MySQL [schemas](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_schema)/[databases](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_database) -named `example`: - -```shell -atlas schema diff \ - --from "mysql://user:pass@localhost:3306/example" \ - --to "mysql://user:pass@remote:3306/example" -``` - - - - -Compare two MariaDB [schemas](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_schema)/[databases](https://dev.mysql.com/doc/refman/8.0/en/glossary.html#glos_database) -named `example`: - -```shell -atlas schema diff \ - --from "maria://user:pass@localhost:3306/example" \ - --to "maria://user:pass@remote:3306/example" -``` - - - - -Compare two PostgreSQL [schemas](https://www.postgresql.org/docs/current/ddl-schemas.html) named `public` under the -`example` database: - -```shell -atlas schema diff \ - --from "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" \ - --to "postgres://postgres:pass@remote:5432/example?search_path=public" -``` - - - - -Compare two SQL Server schemas: - -```shell -atlas schema diff \ - --from "sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master" \ - --to "sqlserver://sa:P@ssw0rd0995@remote:1433?database=master" -``` - - - - -Compare two ClickHouse schemas/named-databases: - -```shell -atlas schema diff \ - --from "clickhouse://user:pass@localhost:9000/example" \ - --to "clickhouse://user:pass@remote:9000/example" -``` - - - - -Compare two Redshift clusters with a schema named `public` under the -`example` database: - -```shell -atlas schema diff \ - --from "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \ - --to "redshift://user:pass@redshift-cluster-2:5439/example?search_path=public&sslmode=disable" -``` - - - - -### Compare HCL schemas - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "docker://mysql" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "docker://maria" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "docker://postgres" -``` - - - - -```shell -atlas schema diff \ - --from 
"file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.hcl" \ - --to "file://schema2.hcl" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -### Compare SQL schemas - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://mysql/8/test" -``` - -If the DDL statements only include qualified tables (e.g., `schema`.`table`), you can omit the database name from the -`--dev-url`: - -```shell {4} -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://mysql" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://maria/latest/test" -``` - -If the DDL statements only include qualified tables (e.g., `schema`.`table`), you can omit the database name from the -`--dev-url`: - -```shell {4} -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://maria" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://postgres" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema1.sql" \ - --to "file://schema2.sql" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - - -### Compare migration directories - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://mysql/8/test" -``` - -If the DDL statements only include qualified tables (e.g., `schema`.`table`), you can omit the database name from the -`--dev-url`: - -```shell {4} -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://mysql" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://maria/latest/test" -``` - -If the DDL statements only include qualified tables (e.g., `schema`.`table`), you can omit the database name from the -`--dev-url`: - -```shell {4} -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://maria" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://postgres" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - 
-```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations1" \ - --to "file://migrations2" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -### Compare SQL to HCL - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "docker://mysql/8/test" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "docker://maria/latest/test" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "docker://postgres" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```shell -atlas schema diff \ - --from "file://schema.hcl" \ - --to "file://schema.sql" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - - -### Compare a migration directory to a database - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "mysql://root:pass@:3306/example" \ - --dev-url "docker://mysql/8/example" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "maria://root:pass@:3306/example" \ - --dev-url "docker://maria/latest/example" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "postgres://postgres:pass@localhost:5435/test?sslmode=disable" \ - --dev-url "docker://postgres" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "file://schema.sql" \ - --dev-url "sqlite://file?mode=memory" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "sqlserver://sa:P@ssw0rd0995@:1433?database=master" \ - --dev-url "docker://sqlserver/2022-latest" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "clickhouse://user:pass@remote:9000/example" \ - --dev-url "docker://clickhouse/23.11" -``` - - - - -```shell -atlas schema diff \ - --from "file://migrations" \ - --to "redshift://user:pass@redshift-cluster:5439/example?search_path=public&sslmode=disable" \ - --dev-url "redshift://user:pass@redshift-cluster:5439/dev?search_path=public&sslmode=disable" -``` - - - - -### Compare external schemas - -The `schema diff` command can also be used to compare external schemas defined in data sources, such as ORM schemas, -with a database, HCL or SQL schemas, or even with other ORM schemas. - - - - -```hcl -data "external_schema" "gorm" { - program = [ - "go", "run", "-mod=mod", - "ariga.io/atlas-provider-gorm", - "load", - "--path", "./path/to/models", - "--dialect", "mysql", - ] -} - -data "external_schema" "sequelize" { - program = [ - "npx", - "@ariga/atlas-provider-sequelize", - "load", - "--path", "./path/to/models", - "--dialect", "mysql", - ] -} - -env "drift" { - dev = "docker://mysql/8/dev" - # Variables defined and available with env:// prefix. 
- gorm = data.hcl_schema.gorm.url - sequelize = data.hcl_schema.sequelize.url -} -``` - -```shell -atlas schema diff \ - --env "drift" \ - --from "env://gorm" \ - --to "env://sequelize" -``` - - - - -```hcl -data "external_schema" "gorm" { - program = [ - "go", "run", "-mod=mod", - "ariga.io/atlas-provider-gorm", - "load", - "--path", "./path/to/models", - "--dialect", "postgres", - ] -} - -data "external_schema" "sequelize" { - program = [ - "npx", - "@ariga/atlas-provider-sequelize", - "load", - "--path", "./path/to/models", - "--dialect", "postgres", - ] -} - -env "drift" { - dev = "docker://postgres/15/dev?search_path=public" - # Variables defined and available with env:// prefix. - gorm = data.hcl_schema.gorm.url - sequelize = data.hcl_schema.sequelize.url -} -``` - -```shell -atlas schema diff \ - --env "drift" \ - --from "env://gorm" \ - --to "env://sequelize" -``` - - - - -```hcl -data "external_schema" "gorm" { - program = [ - "go", "run", "-mod=mod", - "ariga.io/atlas-provider-gorm", - "load", - "--path", "./path/to/models", - "--dialect", "sqlite", - ] -} - -data "external_schema" "sequelize" { - program = [ - "npx", - "@ariga/atlas-provider-sequelize", - "load", - "--path", "./path/to/models", - "--dialect", "sqlite", - ] -} - -env "drift" { - dev = "sqlite://dev?mode=memory" - # Variables defined and available with env:// prefix. - gorm = data.hcl_schema.gorm.url - sequelize = data.hcl_schema.sequelize.url -} -``` - -```shell -atlas schema diff \ - --env "drift" \ - --from "env://gorm" \ - --to "env://sequelize" -``` - - - - -```hcl -data "external_schema" "gorm" { - program = [ - "go", "run", "-mod=mod", - "ariga.io/atlas-provider-gorm", - "load", - "--path", "./path/to/models", - "--dialect", "sqlserver", - ] -} - -data "external_schema" "sequelize" { - program = [ - "npx", - "@ariga/atlas-provider-sequelize", - "load", - "--path", "./path/to/models", - "--dialect", "mssql", - ] -} - -env "drift" { - dev = "docker://sqlserver/2022-latest" - # Variables defined and available with env:// prefix. - gorm = data.hcl_schema.gorm.url - sequelize = data.hcl_schema.sequelize.url -} -``` - -```shell -atlas schema diff \ - --env "drift" \ - --from "env://gorm" \ - --to "env://sequelize" -``` - - - - -### Indented SQL - -The `schema diff` command outputs a list of SQL statements without indentation by default. If you would like to view -the SQL statements with indentation, use the `--format` flag. For example: - -```shell {1} -# Indent SQL statements with 2 spaces. -atlas schema diff \ - --from "mysql://user:pass@localhost:3306/example" \ - --to "mysql://user:pass@remote:3306/example" \ - --format '{{ sql . " " }}' -``` - -## Reference - -[CLI Command Reference](/cli-reference#atlas-schema-diff) \ No newline at end of file diff --git a/doc/md/declarative/inspect.mdx b/doc/md/declarative/inspect.mdx deleted file mode 100644 index 410307c979c..00000000000 --- a/doc/md/declarative/inspect.mdx +++ /dev/null @@ -1,384 +0,0 @@ ---- -id: inspect -slug: /declarative/inspect -title: Inspecting existing schemas with Atlas ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -### Automatic Schema Inspection -Many projects begin with an existing database that users wish to start managing -with Atlas. In this case, instead of having developers learn the [Atlas Language](/atlas-schema/hcl) -and reverse engineer a schema definition file that precisely describes the existing database, -Atlas supports _automatic schema inspection_. 
- -With automatic schema inspection, users simply provide Atlas with a connection string -to their target database and Atlas prints out a schema definition file in the Atlas -language that they can use as the starting point for working with this database. - -:::info Login to Atlas -By default, running `atlas schema inspect` inspects only **schemas**, **tables**, and their associated **indexes** and -constraints such as **foreign keys** and **checks**. - -**Views**, **materialized views**, **functions**, **procedures**, **triggers**, **sequences**, **domains**, **extensions**, -and additional database features are available to [Atlas Pro users](/features#pro). To include these resources in the inspection, -use the following command: - -```shell -atlas login -``` -::: - -## Flags - -When using `schema inspect` to inspect an existing database, users may supply multiple -parameters: -* `--url` (`-u` accepted as well) - the [URL](/concepts/url) of database to be inspected. -* `--schema` (optional, may be supplied multiple times) - schemas to inspect within - the target database. -* `--exclude` (optional) - filter out resources matching the given glob pattern. -* `--format` (optional) - [Go template](https://pkg.go.dev/text/template) to use to format the output. -* `--web` (`-w` accepted as well) - visualize the schema as an ERD on [Atlas Cloud](https://auth.atlasgo.cloud/signup). See an - example [here](https://gh.atlasgo.cloud/explore/40d83919). - -## Examples - -### Inspect a database - -The following commands demonstrate how to inspect the entire database, including all its schemas: - - - - -```shell -atlas schema inspect -u "mysql://localhost" - -atlas schema inspect -u "mysql://user:pass@localhost:3306" -``` - - - - -```shell -atlas schema inspect -u "maria://localhost" - -atlas schema inspect -u "maria://user:pass@localhost:3306" -``` - - - - -```shell -atlas schema inspect -u "postgres://localhost:5432/database" - -atlas schema inspect -u "postgres://postgres:pass@0.0.0.0:5432/database?sslmode=disable" -``` - - - - -```shell -atlas schema inspect -u "sqlite://file.db" - -atlas schema inspect -u "sqlite://file?cache=shared&mode=memory" -``` - - - - -```shell -atlas schema inspect -u "sqlserver://user:pass@localhost:1433?database=master&mode=database" - -atlas schema inspect -u "sqlserver://user:pass@0.0.0.0:1433?database=master&mode=database" -``` - - - - -```shell -atlas schema inspect -u "clickhouse://localhost:9000" - -atlas schema inspect -u "clickhouse://user:pass@0.0.0.0:9000?secure=true" -``` - - - - -```shell -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database" - -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database?sslmode=disable" -``` - - - - -### Inspect a schema - -The following commands show how to inspect a single schema: - - - - -```shell -atlas schema inspect -u "mysql://localhost/schema" - -atlas schema inspect -u "mysql://user:pass@localhost:3306/schema" -``` - - - - -```shell -atlas schema inspect -u "maria://localhost/schema" - -atlas schema inspect -u "maria://user:pass@localhost:3306/schema" -``` - - - - -```shell -atlas schema inspect -u "postgres://localhost:5432/database?search_path=public" - -atlas schema inspect -u "postgres://postgres:pass@0.0.0.0:5432/database?search_path=public&sslmode=disable" -``` - - - - -```shell -atlas schema inspect -u "sqlserver://user:pass@localhost:1433?database=master" - -atlas schema inspect -u "sqlserver://user:pass@0.0.0.0:1433?database=master" -``` - - - - -```shell -atlas 
schema inspect -u "clickhouse://localhost:9000/database" - -atlas schema inspect -u "clickhouse://user:pass@0.0.0.0:9000/database?secure=true" -``` - - - - -```shell -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database?search_path=public" - -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database?search_path=public&sslmode=disable" -``` - - - - -### Inspect multiple schemas - -The following commands show how to inspect multiple schemas: - - - - -```shell -atlas schema inspect -u "mysql://localhost" --schema schema1 --schema schema2 - -atlas schema inspect -u "mysql://user:pass@localhost:3306" -s schema1,schema2 -``` - - - - -```shell -atlas schema inspect -u "maria://localhost" --schema schema1 --schema schema2 - -atlas schema inspect -u "maria://user:pass@localhost:3306" -s schema1,schema2 -``` - - - - -```shell -atlas schema inspect -u "postgres://localhost:5432/database" --schema schema1 --schema schema2 - -atlas schema inspect -u "postgres://postgres:pass@0.0.0.0:5432/database?sslmode=disable" -s schema1,schema2 -``` - - - - -```shell -atlas schema inspect -u "sqlserver://user:pass@localhost:1433?database=master&mode=database" --schema schema1 --schema schema2 - -atlas schema inspect -u "sqlserver://user:pass@0.0.0.0:1433?database=master&mode=database" -s schema1,schema2 -``` - - - - -```shell -atlas schema inspect -u "clickhouse://localhost:9000/database" --schema schema1 --schema schema2 - -atlas schema inspect -u "clickhouse://user:pass@0.0.0.0:9000/database?secure=true" -s schema1,schema2 -``` - - - - -```shell -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database" --schema schema1 --schema schema2 - -atlas schema inspect -u "redshift://user:pass@redshift-cluster:5439/database?search_path=public&sslmode=disable" -s schema1,schema2 -``` - - - - -### Exclude Schemas - -The following commands show how to exclude schemas that match a glob pattern from the inspection: - -```shell -atlas schema inspect -u "mysql://localhost" --exclude "internal" - -atlas schema inspect -u "mysql://localhost" --exclude "schema_*" -``` - -### Exclude Schema Resources - -The following commands show how to exclude schema resources (objects) that match a glob pattern from the inspection: - - - - -When inspecting a database (multiple schemas), the first glob pattern matches the schema name, and the second matches -the object name: - -```shell -# Exclude all policies and functions from the public schema. -atlas schema inspect -u "postgres://localhost:5432/database" --exclude 'public.*[type=policy|function]' - -# Exclude all policies and functions that match the pattern from all schemas. 
-atlas schema inspect -u "postgres://localhost:5432/database" --exclude '*.*[type=policy|function]' -``` - - - - -When inspecting a specific schema, the first glob pattern matches the object name: - -```shell -atlas schema inspect -u "postgres://localhost:5432/database" --exclude '*[type=policy|function]' - -atlas schema inspect -u "postgres://localhost:5432/database" --exclude '*[type=policy|function]' -``` - - - - -### Exclude Tables - -The following commands show how to exclude tables that match a glob pattern from the inspection: - - - - -When inspecting a database (multiple schemas), the first glob pattern matches the schema name, and the second matches -the table name: - -```shell -atlas schema inspect -u "mysql://localhost" --exclude "*.prefix_*" - -atlas schema inspect -u "mysql://localhost" --exclude "schema.table" - -atlas schema inspect -u "mysql://localhost" --exclude "schema.t*[type=table]" --exclude "schema.e*[type=enum]" -``` - - - - -When inspecting a specific schema, the first glob pattern matches the table name: - -```shell -atlas schema inspect -u "mysql://localhost" --exclude "prefix_*" - -atlas schema inspect -u "mysql://localhost" --exclude "table" - -atlas schema inspect -u "mysql://localhost" --exclude "t*[type=table]" --exclude "e*[type=enum]" -``` - - - - -### Exclude Table Resources - -The following commands show how to exclude columns, indexes or foreign-keys that match a glob pattern from the inspection: - - - - -When inspecting a database (multiple schemas), the first glob pattern matches the schema name, and the second matches -the table name: - -```shell -atlas schema inspect -u "mysql://localhost" --exclude "*.*.prefix_*" - -atlas schema inspect -u "mysql://localhost" --exclude "public.*.c1" - -atlas schema inspect -u "mysql://localhost" --exclude "public.*.c*[type=column|index]" -``` - - - - -When inspecting a specific schema, the first glob pattern matches the table name: - -```shell -atlas schema inspect -u "mysql://localhost" --exclude "*.prefix_*" - -atlas schema inspect -u "mysql://localhost" --exclude "*.c1" - -atlas schema inspect -u "mysql://localhost" --exclude "*.c*[type=column|index]" -``` - - - - -### SQL Format - -By default, the output of `schema inspect` is in the Atlas DDL. However, you can use SQL to describe the desired schema -in all commands that are supported by Atlas DDL. To output the schema in SQL format, use the `--format` flag as follows: - -```shell -atlas schema inspect -u "mysql://localhost" --format "{{ sql . }}" -``` - -### JSON Format - -Atlas can output a JSON document that represents the database schema. This representation allows users to use tools -like `jq` to analyze the schema programmatically. - -```shell -atlas schema inspect -u '' --format '{{ json . }}' -``` - -### Visualize schemas - -Atlas can generate an Entity Relationship Diagram (ERD) for the inspected schemas. The following command shows how to -generate an ERD for inspected schemas: - -```shell -atlas schema inspect -u '' -w -``` - -
-[Image: Schema ERD]
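As a quick illustration of the JSON format described above, the output can be piped into `jq` to query the schema programmatically. The connection URL below is only a placeholder, and the exact layout of the JSON document may differ between drivers and Atlas versions:

```shell
# List the names of all tables in every inspected schema.
atlas schema inspect \
  -u "mysql://root:pass@localhost:3306/example" \
  --format '{{ json . }}' | jq -r '.schemas[].tables[].name'
```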
- -## Reference - -[CLI Command Reference](/cli-reference#atlas-schema-inspect) diff --git a/doc/md/declarative/plan.mdx b/doc/md/declarative/plan.mdx deleted file mode 100644 index 7840c531acc..00000000000 --- a/doc/md/declarative/plan.mdx +++ /dev/null @@ -1,664 +0,0 @@ ---- -id: plan -slug: /declarative/plan -title: Pre-planning Schema Migrations ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -The `atlas schema plan` command allows users to pre-plan declarative schema migrations before applying them to the -database. Once a migration is planned, reviewed, and approved, it can be applied using the [`atlas schema apply`](./apply.mdx) -command to update the database to the desired state. - -Note: If you are not familiar with the `atlas schema apply` command, please refer to the [Applying Changes](./apply.mdx) -guide first. - -:::info [Atlas Pro Feature](/features#pro) -The `atlas schema plan` command is available exclusively to Pro users. To use this feature, run: -``` -atlas login -``` -::: - -### Overview - -The `atlas schema apply` command updates the database to the desired state defined by the user. These auto-planned schema -changes can be approved in one of the following ways: - -1. Reviewed and approved interactively by a human. -2. Auto-approved using the `--auto-approve` flag, though this may be risky in a production database. -3. Auto-reviewed based on the lint-review policy, which requires human review only if the linter detects issues. or errors. - -These options depend on the database state and cannot predict whether the migration will succeed, fail, or abort. -This is where `atlas schema plan` becomes useful. - -The `atlas schema plan` command allows users to pre-plan, review, and approve migrations before executing `atlas schema apply` -on the database. This enables users to preview and modify SQL changes, involve team members in the review process, and ensure -that no human intervention is required during the `atlas schema apply` phase. - -How does it work? In short (more details below), `atlas schema plan` generates a migration plan for the specified -_Schema Transition_ (State1 -> State2) and stores it in the [Atlas Registry](/cloud/features/registry). During -`atlas schema apply`, Atlas checks if there is an approved migration plan for the specific schema transition and applies -it without recalculating SQL changes at runtime or requiring user-approval. - -If users wish to modify the auto-generated migration plan, they can edit it locally and then push it to the Atlas Registry. - -### Local Example - -Let's consider a simple example. We have a table `users` with two columns `id` and `name`, and we want to add a new column -`email` to the table. - -#### Example Setup - -Before running `atlas schema plan`, let's ensure that a schema repository named `app` exists in Atlas Registry and there -is a database containing the previous schema state (before our changes): - - - - -```sql title="schema.sql" -CREATE TABLE users ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - name TEXT -); -``` - - - -```hcl title="atlas.hcl" -env "local" { - # URL to the target database. - url = "sqlite://main.db" - # URL to the dev-database. - dev = "sqlite://dev?mode=memory" - schema { - # Desired schema state. - src = "file://schema.sql" - # Atlas Registry config. 
- repo { - name = "app" - } - } -} -``` - - - -We run `atlas schema push` to create the schema in Atlas Registry: - -```planlog {3-5} -atlas schema push --env local - -Schema: app - -- Atlas URL: atlas://app - -- Cloud URL: https://a8m.atlasgo.cloud/schemas/141733920781 -``` - -Then, we run `atlas schema apply` to align the database with the schema state: - -```bash -atlas schema apply --env local --auto-approve -``` - -#### Changing the Schema - -At this stage, we want to add a non-nullable `email` column to the `users` table. Let's update the `schema.sql` file and then run -`atlas schema plan` to generate a migration plan. - -```sql title="schema.sql" -CREATE TABLE users ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - name TEXT, -//highlight-next-line-info - email TEXT NOT NULL -); -``` - -We run `atlas schema plan` to generate a migration plan for adding the `email` column to the `users` table: -```bash -atlas schema plan --env local -``` -The output looks like this: -```planlog {10-13} -Planning migration from local database to file://schema.sql (1 statement in total): - - -- add column "email" to table: "users": - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL; - -------------------------------------------- - -Analyzing planned statements (1 in total): - - -- data dependent changes detected: - -- L2: Adding a non-nullable "text" column "email" will fail in case table "users" - is not empty https://atlasgo.io/lint/analyzers#MF103 - -- ok (346.192µs) - - ------------------------- - -- 5.038728ms - -- 1 schema change - -- 1 diagnostic -? Approve or abort the plan: - ▸ Approve and push - Abort -``` - -#### Data-Dependent Changes - -Atlas detects data-dependent changes in the migration plan and provides a diagnostic message. In this case, it warns -that adding the non-nullable `email` column, will fail if the `users` table is not empty. The recommended solution is to -provide a default value for the new column. Let's fix this by adding a default value to the `email` column and re-run the -`atlas schema plan` command. - -```sql title="schema.sql" -CREATE TABLE users ( - id INTEGER PRIMARY KEY AUTOINCREMENT, - name TEXT, -//highlight-next-line-info - email TEXT NOT NULL DEFAULT 'unknown' -); -``` - -Then, we run `atlas schema plan` again to generate a new migration plan, but this time, we approve it: - -```bash -atlas schema plan --env local -``` - -```planlog -Planning migration from local database to file://schema.sql (1 statement in total): - - -- add column "email" to table: "users": - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - -------------------------------------------- - -Analyzing planned statements (1 in total): - - -- no diagnostics found - - ------------------------- - -- 6.393773ms - -- 1 schema change -? Approve or abort the plan: - ▸ Approve and push - Abort -``` - -Once approved, the migration plan will be pushed to the Atlas Registry, and can be applied using `atlas schema apply`. - -```planlog -Plan Status: APPROVED - -- Atlas URL: atlas://app/plans/20240923085308 - -- Cloud URL: https://a8m.atlasgo.cloud/schemas/141733920769/plans/210453397504 -``` - -At this stage, we can run `atlas schema apply` to apply the changes to the database, on any environment, without -re-calculating the SQL changes at runtime or requiring human intervention. 
- -```applylog -Applying approved migration using pre-planned file 20240923085308 (1 statement in total): - - -- add column "email" to table: "users" - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - -- ok (749.815µs) - - ------------------------- - -- 802.902µs - -- 1 migration - -- 1 sql statement -``` - -### Atlas Registry - -Atlas Registry allows you to store, version, and maintain a single source of truth for your database schemas and its -migration plans. It is similar to Docker Hub, but for your schemas and migrations. In addition to functioning as storage, -it is schema-aware and provides extra capabilities such as ER diagrams, SQL diffing, [schema docs](/cloud/features/schema-docs), -and more. - -
-[Image: Schema pushed with atlas schema push]
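Since the registry stores the schema state, its `atlas://` URL can also be used as a state in other commands. For example, here is a minimal sketch, assuming the `app` schema repository from the example above, that compares the latest pushed state against the local `schema.sql` file:

```bash
atlas schema diff \
  --from "atlas://app" \
  --to "file://schema.sql" \
  --dev-url "sqlite://dev?mode=memory"
```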
- -### Edit a Plan - -One of the first questions that come to mind when comparing the declarative approach to the versioned approach is: -_How can I edit a migration plan?_ There are three ways to edit a migration plan: - -1. **Edit in-place:** - - Ensure the `EDITOR` environment variable is set (e.g., `export EDITOR=vim`). - - Run `atlas schema plan --edit` to open the plan in the default editor. Upon closing, the plan will be pushed - after approval. - -2. **Save, edit, and push:** - - Run `atlas schema plan --save` to save the plan to a file and edit it manually. - - Run `atlas schema plan push --file file://` to push the edited plan to the Atlas Registry. - -3. **Pull, edit, and push:** - - Pull a remote plan by running `atlas schema plan pull --url atlas:///plans/ > name.plan.hcl`. - - Open `name.plan.hcl` in the editor, and edit the `migration` attribute. - - Push the edited plan to the Atlas Registry by running `atlas schema plan push --file file://`. - -To complete the example, let's edit the migration plan from the [example above](#local-example) by changing all `email` -columns with `'unknown'` value to a computed email value: - -1. We pull the plan first into a file named `20240923085308.plan.hcl`: - - ```bash - atlas schema plan pull --url atlas://app/plans/20240923085308 > 20240923085308.plan.hcl - ``` - - ```hcl title="20240923085308.plan.hcl" - plan "20240923085308" { - from = "vJYpErjN4kWJpw4nRaJcEX3xx/jExj4a05Ll3Y7gXr4=" - to = "B5OVckDEeHcaSdYCUMEfYe8CZN85ahLkef44hfwCe2g=" - migration = <<-SQL - -- Add column "email" to table: "users" - ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - SQL - } - ``` - - Note that the `from` and `to` are fingerprints of the schema states. They are used to identify the states in the - schema transition. We will ignore them for now (without changing them, of course) and focus on the migration attribute. - -2. We edit the `migration` attribute to change all rows with `'unknown'` emails to a computed email value: - - ```hcl title="20240923085308.plan.hcl" - plan "20240916133205" { - from = "vJYpErjN4kWJpw4nRaJcEX3xx/jExj4a05Ll3Y7gXr4=" - to = "B5OVckDEeHcaSdYCUMEfYe8CZN85ahLkef44hfwCe2g=" - migration = <<-SQL - -- Add column "email" to table: "users" - ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - //highlight-next-line-info - -- Change all unknown "email" columns with a new computed email - //highlight-next-line-info - UPDATE `users` SET `email` = PRINTF('%s+a8m@atlasgo.cloud', `name`) WHERE `email` = 'unknown'; - SQL - } - ``` - -3. Then, we push the edited plan to the Atlas Registry: - - ```bash - atlas schema plan push --file file://20240923085308.plan.hcl - ``` - - ```planlog - Planning migration statements (2 in total): - - -- add column "email" to table: "users": - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - -- change all unknown "email" columns with a new computed email - -> UPDATE `users` SET `email` = PRINTF('%s+a8m@atlasgo.cloud', `name`) WHERE `email` = 'unknown'; - - ------------------------------------------- - - Analyzing planned statements (2 in total): - - -- no diagnostics found - - ------------------------- - -- 43.566575ms - -- 2 schema changes - ? Approve or abort the plan: - ▸ Approve and push - Abort - ``` - Once approved, the migration plan will be pushed to the Atlas Registry. 
- - :::note Schema Drift - Note that if your manual changes are not in sync with the desired state (i.e., do not bring the database to the - desired state), Atlas will detect the schema drift and reject this migration plan. - ::: - -4. Then, running `atlas schema apply` will apply the changes to the database, including the new `UPDATE` statement. - - ```applylog - Applying approved migration using pre-planned file 20240923085308 (2 statements in total): - - -- add column "email" to table: "users" - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - -- ok (826.977µs) - - -- change all unknown "email" columns with a new computed email - -> UPDATE `users` SET `email` = PRINTF('%s+a8m@atlasgo.cloud', `name`) WHERE `email` = 'unknown'; - -- ok (447.152µs) - - ------------------------- - -- 1.353026ms - -- 1 migration - -- 2 sql statements - ``` - -### Push a Plan - -By default, `atlas schema plan` proposes pushing the plan file to the Atlas registry. However, you can use the `--save` flag -to dump the plan to a file, edit it, and then push it manually to the Atlas Registry using the `atlas schema plan push` -command: - -```bash -atlas schema plan push \ - --file file:// \ - --env -``` - -### Approve a Plan - -By default, `atlas schema plan` pushes plans in an **`APPROVED`** state to the registry. However, in some cases, we may prefer to create -the plan in **pending** state, and later approve it manually or automatically after it passes the team's review. - -There are two ways to create a plan in a **pending** state and approve it after review. -1. Manual workflow: - 1. Run the `atlas schema plan` command with the `--pending` flag. This creates the plan in **pending** state. - 2. Then, in order to approve the plan, either go to the Atlas Registry Web UI and approve the plan, or run the - `atlas schema plan approve` command. -2. Automated (CI) workflow: - 1. When setting the `schema/plan` GitHub Action for your repository, Atlas automatically creates a plan in a pending state. - 2. Then, after the PR is merged, Atlas auto-approves the created plan in the registry. - -:::info Protected Flow -Users can protect their registry schemas by limiting who can push changes, push approved plans, or approve existing plans. -To enable this for your schema, go to the schema repository settings in the registry and enable the `Protected Flows` option. -::: - -### Pull a Plan - -To pull a plan from the Atlas Registry, use the `atlas schema plan pull` command: - -```bash -atlas schema plan pull \ - --url atlas://app/plans/add_email > add_email.plan.hcl -``` - -### List Plans - -To list all plans in the Atlas Registry for the given schema transition, use the `atlas schema plan list` command: - -```bash -atlas schema plan list \ - --env local -``` - -```planlog -Plan Status: APPROVED - -- Atlas URL: atlas://app/plans/add_email - -- Cloud URL: https://.atlasgo.cloud/schemas//plans/ -``` - -### Lint a Plan - -To lint a plan (remote or local) before pushing it to the Atlas Registry, use the `atlas schema plan lint` command: - -```bash -atlas schema plan lint \ - --file file://add_email.plan.hcl \ - --env local -``` - -### Apply a Plan - -Running `atlas schema apply` searches for a migration plan in the Atlas Registry and applies it to the database, if exists. -However, in unusual cases, you might have multiple (approved) migration plans for the same schema transition store in the -registry (e.g., one per environment). 
In that case, running `atlas schema apply` will abort with the following error: - -```text -Error: multiple pre-planned migrations were found in the registry for this schema transition. - - Current hash: vJYpErjN4kWJpw4nRaJcEX3xx/jExj4a05Ll3Y7gXr4= - Desired hash: hna312Vk535aibL1hTRcBlxeyUvIwV6Mov7kfaZ2+3s= - -Plans found: - - atlas://app/plans/ - atlas://app/plans/ - -To resolve the issue, either delete the conflicting plans or provide the plan URL explicitly using the --plan flag. -``` - -In this case, we either delete the conflicting plans from the Atlas Registry or provide the plan URL explicitly using -the `--plan` flag: - -```applylog -Applying approved migration using pre-planned file 20240923085308 (2 statements in total): - - -- add column "email" to table: "users" - -> ALTER TABLE `users` ADD COLUMN `email` text NOT NULL DEFAULT 'unknown'; - -- ok (789.621µs) - - -- change all unknown "email" columns with a new computed email - -> UPDATE `users` SET `email` = PRINTF('%s+a8m@atlasgo.cloud', `name`) WHERE `email` = 'unknown'; - -- ok (883.177µs) - - ------------------------- - -- 1.77283ms - -- 1 migration - -- 2 sql statements -``` - -### GitHub Actions - -Atlas provides an [official GitHub Actions integration](https://github.com/ariga/atlas-action) to automatically plan, -review, and approve declarative schema migrations during PR workflows. The example below demonstrates how to set up this -workflow for your repository. - -
-[Image: Plan generated by atlas schema plan]
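The workflow below creates plans in a **pending** state and approves them automatically once the PR is merged. For reference, the manual equivalent of the first step is a sketch like the following, using the `--pending` flag described in the Approve a Plan section above:

```bash
# Create the plan in a pending state instead of pushing it as approved.
# It can later be approved in the Registry UI or with `atlas schema plan approve`.
atlas schema plan --env dev --pending
```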
- -#### Create a Schema Repository in Atlas Registry - -For the purpose of the example, let's create a schema repository named `demo` in Atlas Registry with the following -SQL schema: - - - - -```sql title="schema.sql" -CREATE TABLE users ( - id BIGSERIAL PRIMARY KEY -); -``` - - - -```hcl title="atlas.hcl" -env "dev" { - # The URL to the dev-database. - dev = "docker://postgres/15/dev?search_path=public" - schema { - # Desired schema state. - src = "file://schema.sql" - # Atlas Registry config. - repo { - name = "demo" - } - } -} -``` - - - -To create the schema repository in the Atlas Registry, run the following command: - -```bash -atlas schema push --env dev - -https://.atlasgo.cloud/schemas/141733920769 -``` - -#### Set Up the `schema/push` GitHub Action - -In order to keep our schema repository up-to-date with the latest changes, we can set up the `schema/push` GitHub Action. This -action automatically pushes the schema to the Atlas Registry whenever changes are made to the SQL schema file: - -```yaml title=".github/workflows/schema-push.yml" -name: Push Declarative Schemas -on: - push: - branches: - - master - paths: - - .github/workflows/atlas-push.yaml - - 'schema.sql' # Can be HCL, ORM, other instead. -permissions: - contents: read -jobs: - push: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v3 - with: - fetch-depth: 0 - - uses: ariga/setup-atlas@v0 - with: - cloud-token: ${{ secrets.ATLAS_TOKEN }} - - uses: ariga/atlas-action/schema/push@master - with: - env: dev # Use the "dev" environment from the atlas.hcl file. -``` - -:::info ATLAS_TOKEN -To push the schema to Atlas Registry from your GitHub Action, set up a GitHub secret named `ATLAS_TOKEN` using your Atlas -Cloud token. To create a token, follow [these instructions](/cloud/bots). -::: - -#### Set Up the `schema/plan` GitHub Action - -The last step is to set up the `schema/plan` and `schema/plan/approve` Actions: -- `schema/plan` - This action automatically plans the schema migration whenever changes are made to the SQL schema file. - After a plan is created, it is pushed to the Atlas Registry in `PENDING` state, and waiting to be approved. -- `schema/plan/approve` - This action automatically approves the pending plan in the Atlas Registry after the PR is - merged to the master branch. - -```yaml title=".github/workflows/schema-plan.yml" -name: Plan Declarative Migrations -on: - workflow_dispatch: - push: - branches: - - master - paths: - - .github/workflows/atlas-plan.yaml - - 'schema.sql' - pull_request: - branches: - - master - paths: - - .github/workflows/atlas-plan.yaml - - 'schema.sql' -permissions: - contents: read - pull-requests: write -jobs: - plan: - name: plan - if: ${{ github.event_name == 'pull_request' }} - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - name: Setup Atlas - uses: ariga/setup-atlas@master - with: - cloud-token: ${{ secrets.ATLAS_TOKEN }} - - name: Run schema plan - uses: ariga/atlas-action/schema/plan@master - env: - GITHUB_TOKEN: ${{ github.token }} - with: - env: dev # Use the "dev" environment from the atlas.hcl file. 
- approve: - name: approve - if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }} - runs-on: ubuntu-latest - env: - GITHUB_TOKEN: ${{ github.token }} - steps: - - uses: actions/checkout@v4 - - name: Setup Atlas - uses: ariga/setup-atlas@v0 - with: - cloud-token: ${{ secrets.ATLAS_TOKEN }} - - name: Approve the plan - id: plan-approve - uses: ariga/atlas-action/schema/plan/approve@master - with: - env: dev # Use the "dev" environment from the atlas.hcl file. -``` - -:::info Plan Inputs -- `from` - Defines the current state of the schema to calculate the migration from. If not provided, Atlas will use the - `url` in the `atlas.hcl` file (same as `atlas schema apply`). If the `url` attribute is not set, Atlas will use the - last known state from the Atlas Registry. -- `to` - Defines the desired state of the schema to calculate the migration to. If not provided, Atlas will use the - `schema.src` attribute in the `atlas.hcl` file. -::: - -To avoid a race condition between the `push` and `plan` workflows, we can merge them into a single workflow. - -```yaml title=".github/workflows/atlas-schema.yml" -name: Plan Declarative Migrations -on: - workflow_dispatch: - push: - branches: - - master - paths: - - .github/workflows/atlas-schema.yaml - - 'schema.sql' - pull_request: - branches: - - master - paths: - - .github/workflows/atlas-schema.yaml - - 'schema.sql' -permissions: - contents: read - pull-requests: write -jobs: - plan: - name: plan - if: ${{ github.event_name == 'pull_request' }} - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - name: Setup Atlas - uses: ariga/setup-atlas@master - with: - cloud-token: ${{ secrets.ATLAS_TOKEN }} - - name: Run schema plan - uses: ariga/atlas-action/schema/plan@master - env: - GITHUB_TOKEN: ${{ github.token }} - with: - env: dev - approve-push: - name: approve-push - if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }} - runs-on: ubuntu-latest - env: - GITHUB_TOKEN: ${{ github.token }} - steps: - - uses: actions/checkout@v4 - - name: Setup Atlas - uses: ariga/setup-atlas@v0 - with: - cloud-token: ${{ secrets.ATLAS_TOKEN }} - # Plan against the latest schema state (one before the PR). - - name: Approve the plan - id: plan-approve - uses: ariga/atlas-action/schema/plan/approve@master - with: - env: dev - # Push the schema after the plan is approved. - - name: Push the schema - id: schema-push - uses: ariga/atlas-action/schema/push@master - with: - env: dev -``` - diff --git a/doc/md/features.mdx b/doc/md/features.mdx deleted file mode 100644 index e418f586129..00000000000 --- a/doc/md/features.mdx +++ /dev/null @@ -1,107 +0,0 @@ ---- -title: Feature Compatability -id: features -slug: features ---- - -## Atlas is Open-Core - -Atlas is an open-core project. The core engine is open-source and available on [GitHub](https://github.com/ariga/atlas) -under the Apache 2.0 license. Open-source features cover the core functionality of Atlas, including database inspection, -diffing, migration planning and execution, and more. For the basic and common use cases, the open-source version of -Atlas is more than enough. - -In addition, Atlas also offers a number of advanced features that are only available in proprietary versions. The -vast majority of these features are available in the Pro Plan of Atlas, which you can use for free by creating -an Atlas account. - -### Pro Plan {#pro} - -Users can upgrade to the Pro Plan for **$9/seat per month**. We provide a free, 30-day trial for new users. 
After the -trial period, a license is required to continue using Atlas Pro. - -To learn more about our plans and pricing, visit the [Atlas Pricing Page](/pricing). - -Create your Atlas Pro account by running the command below and following the instructions on the screen: - -```bash -atlas login -``` - -## Feature Availability - -This document compares the CLI features available to users who are logged in to Atlas Cloud and those who are not. - -### CLI Features - -| Feature | Availability | -|------------------------|----------------| -| Database Inspection | Open | -| Schema Diffing | Open | -| Versioned Migrations | Open | -| Declarative Migrations | Open | -| Migration Linting | Open | -| Pre-migration checks | Pro | -| Migration Checkpoints | Pro | -| Schema Visualization | Pro | -| Interactive Migrations | Pro | -| Testing Framework | Pro | - -Schema Visualization as well as Interactive Migrations are features that require connectivity to the cloud and hence -are not available in the open-source version of Atlas. - -### Database Support - -All common open-source RDBMS are supported in all versions of Atlas. In addition, Atlas also supports SQL Server, ClickHouse -and Redshift in the Pro Plan. - -| Driver | Availability | -|------------|-----------------------| -| MySQL | Open | -| PostgreSQL | Open | -| MariaDB | Open | -| SQLite | Open | -| LibSQL | Open | -| SQL Server | Pro | -| ClickHouse | Pro | -| Redshift | Pro | - -### Database Features - -Most common database features are supported in all versions of Atlas. More advanced features are available -to users of the Pro Plan. - -| Feature | Availability | -|-------------------------------|----------------| -| Tables and Columns | Open | -| Indexes | Open | -| Foreign Keys | Open | -| Constraints | Open | -| Partitions | Open | -| Views | Pro | -| Materialized Views | Pro | -| Stored Procedures | Pro | -| Functions | Pro | -| Triggers | Pro | -| Sequences (Postgres) | Pro | -| Composite Types (Postgres) | Pro | -| Domain Types (Postgres) | Pro | -| Extensions (Postgres) | Pro | -| Row-level Security (Postgres) | Pro | - -### Data Sources - -[Data Sources](/atlas-schema/projects#data-sources) are a powerful feature of Atlas that allows you to load data from external sources into your Atlas Project. - -| Data Source | Availability | -|--------------------|----------------| -| SQL | Open | -| External | Open | -| Runtimevar | Open | -| HCL Schema | Open | -| External Schema | Open | -| Remote Dir | Open | -| Template Dir | Open | -| AWS RDS Token | Open | -| GCP CloudSQL Token | Open | -| Composite Schema | Pro | \ No newline at end of file diff --git a/doc/md/getting-started/declarative/mariadb-declarative-hcl.mdx b/doc/md/getting-started/declarative/mariadb-declarative-hcl.mdx deleted file mode 100644 index e36d1c077d1..00000000000 --- a/doc/md/getting-started/declarative/mariadb-declarative-hcl.mdx +++ /dev/null @@ -1,255 +0,0 @@ ---- -id: mariadb-declarative-hcl -title: Getting started with HCL declarative migrations for a MariaDB schema -slug: /getting-started/mariadb-declarative-hcl ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. 
- -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an `example` database and create a table in it. - -Use the following command to run a MariaDB Docker container: - -```console -docker run --name atlas-demo -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=example -p 3306:3306 -d mariadb:latest -``` - -Once the container is running, you can connect to the MariaDB database using the command below: - -```console -docker exec -it atlas-demo mariadb -ppass example -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY AUTO_INCREMENT, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates an output in [HCL](/guides/ddl#hcl) format that represents your database schema in a simple, human-readable language. This can be used as a starting point for making changes. - - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.hcl` - -```console -atlas schema inspect --url "mysql://root:pass@localhost:3306/example" > schema.hcl -``` - -```hcl title="schema.hcl" -schema "example" {} - -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the [HCL](/guides/ddl#hcl) file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on HCL-based schema inspection with Atlas. For those opting to use SQL schema, refer to the documentation [here](/getting-started/mariadb-declarative-sql). 
-::: - -### Example - -Let’s add the following table definition to the HCL schema file that we previously created with the `atlas schema inspect` command: - -```hcl title="schema.hcl" {23-45} -schema "example" {} - -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "customer_id" { - null = false - type = int - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "customer_id_fk" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. -This command will compare the current state of your database to the desired state defined in your HCL file and make the -necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.hcl" -``` - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `example`.`orders` (`id` int NOT NULL AUTO_INCREMENT, `customer_id` int NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `example`.`customers` (`id`)); -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the -`atlas schema inspect` command: - -```console -atlas schema inspect --url "mysql://root:pass@localhost:3306/example" -``` -```hcl -schema "example" {} - -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "customer_id" { - null = false - type = int - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "customer_id_fk" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state in a simple, human-readable language, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. 
- -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/declarative/mariadb-declarative-sql.mdx b/doc/md/getting-started/declarative/mariadb-declarative-sql.mdx deleted file mode 100644 index bf8703fd50b..00000000000 --- a/doc/md/getting-started/declarative/mariadb-declarative-sql.mdx +++ /dev/null @@ -1,189 +0,0 @@ ---- -id: mariadb-declarative-sql -title: Getting started with SQL declarative migrations for a MariaDB schema -slug: /getting-started/mariadb-declarative-sql ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an example database and create a table in it. - -Use the following command to run a MariaDB Docker container: - -```console -docker run --name atlas-demo -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=example -p 3306:3306 -d mariadb:latest -``` - -Once the container is running, you can connect to the MariaDB database using the command below: - -```console -docker exec -it atlas-demo mariadb -ppass example -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY AUTO_INCREMENT, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates a file that represents your database schema, which can be used as a starting point for making changes. - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.sql` - -```console -atlas schema inspect \ - --url "mysql://root:pass@localhost:3306/example" \ - --format "{{ sql . 
}}" > schema.sql -``` - -```sql title="schema.sql" --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -); -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the SQL file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on SQL-based schema inspection with Atlas. For those opting to use an HCL schema, read the documentation [here](/getting-started/mariadb-declarative-hcl). -::: - -### Example - -Let’s add the following table definition to the SQL schema file that we previously created with the `atlas schema inspect` command: - -```sql title="schema.sql" {9-15} --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -); - -CREATE TABLE orders ( - id int NOT NULL AUTO_INCREMENT, - customer_id int NOT NULL, - order_date date NOT NULL, - PRIMARY KEY (id), - CONSTRAINT customer_id_fk FOREIGN KEY (customer_id) REFERENCES example.customers (id) -); -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your SQL file and make the necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.sql" \ - --dev-url "docker://maria/latest/example" -``` - -:::info The role of the [Dev Database](/concepts/dev-database) -Atlas loads the **desired state** by executing the SQL files onto the provided [dev database](/concepts/dev-database), -compares it against the database **current state** by inspecting its schema and writes a migration plan for moving -from the current state to the desired state. -::: - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `orders` (`id` int NOT NULL AUTO_INCREMENT, `customer_id` int NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), INDEX `customer_id_fk` (`customer_id`), CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE RESTRICT ON DELETE RESTRICT) CHARSET utf8mb4 COLLATE utf8mb4_general_ci; -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect \ - --url "mysql://root:pass@localhost:3306/example" \ - --format "{{ sql . 
}}" -``` -```sql --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -) CHARSET utf8mb4 COLLATE utf8mb4_general_ci; - --- create "orders" table -CREATE TABLE `orders` ( - `id` int NOT NULL AUTO_INCREMENT, - `customer_id` int NOT NULL, - `order_date` date NOT NULL, - PRIMARY KEY (`id`), - INDEX `customer_id_fk` (`customer_id`), - CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE RESTRICT ON DELETE RESTRICT -) CHARSET utf8mb4 COLLATE utf8mb4_general_ci; -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/declarative/mysql-declarative-hcl.mdx b/doc/md/getting-started/declarative/mysql-declarative-hcl.mdx deleted file mode 100644 index 031cc55a154..00000000000 --- a/doc/md/getting-started/declarative/mysql-declarative-hcl.mdx +++ /dev/null @@ -1,255 +0,0 @@ ---- -id: mysql-declarative-hcl -title: Getting started with HCL declarative migrations for a MySQL schema -slug: /getting-started/mysql-declarative-hcl ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an `example` database and create a table in it. 
- -Use the following command to run a MySQL Docker container: - -```console -docker run --name atlas-demo -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=example -p 3306:3306 -d mysql:latest -``` - -Once the container is running, you can connect to the MySQL server using the command below: - -```console -docker exec -it atlas-demo mysql -ppass --database=example -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY AUTO_INCREMENT, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates a file that represents your database schema, which can be used as a starting point for making changes. - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.hcl` - -```console -atlas schema inspect --url "mysql://root:pass@localhost:3306/example" > schema.hcl -``` - -```hcl title="schema.hcl" -schema "example" {} - -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} -``` -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the HCL file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on HCL-based schema inspection with Atlas. For those opting to use SQL schema, refer to the documentation [here](/getting-started/mysql-declarative-sql). -::: - -### Example - -Let’s add the following table definition to the HCL schema file that we previously created with the `atlas schema inspect` command: - -```hcl title="schema.hcl" {23-45} -schema "example" {} - -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "customer_id" { - null = false - type = int - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "customer_id_fk" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your HCL file and make the necessary changes. 
- -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.hcl" -``` - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `example`.`orders` (`id` int NOT NULL AUTO_INCREMENT, `customer_id` int NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), INDEX `customer_id_fk` (`customer_id`), CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `example`.`customers` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci; -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect -u "mysql://root:pass@localhost:3306/example" -``` -```hcl -table "customers" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "first_name" { - null = false - type = varchar(255) - } - column "last_name" { - null = false - type = varchar(255) - } - primary_key { - columns = [column.id] - } -} -table "orders" { - schema = schema.example - column "id" { - null = false - type = int - auto_increment = true - } - column "customer_id" { - null = false - type = int - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "orders_ibfk_1" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - on_update = NO_ACTION - on_delete = NO_ACTION - } - index "orders_ibfk_1" { - columns = [column.customer_id] - } -} -schema "example" { - charset = "utf8mb4" - collate = "utf8mb4_0900_ai_ci" -} -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state in a simple, human-readable language, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). 
diff --git a/doc/md/getting-started/declarative/mysql-declarative-sql.mdx b/doc/md/getting-started/declarative/mysql-declarative-sql.mdx deleted file mode 100644 index 2b7a8b3a1e1..00000000000 --- a/doc/md/getting-started/declarative/mysql-declarative-sql.mdx +++ /dev/null @@ -1,187 +0,0 @@ ---- -id: mysql-declarative-sql -title: Getting started with SQL declarative migrations for a MySQL schema -slug: /getting-started/mysql-declarative-sql ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an example database and create a table in it. - -Use the following command to run a MySQL Docker container: - -```console -docker run --name atlas-demo -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=example -p 3306:3306 -d mysql:latest -``` - -Once the container is running, you can connect to the MySQL server using the command below: - -```console -docker exec -it atlas-demo mysql -ppass --database=example -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY AUTO_INCREMENT, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates a file that represents your database schema, which can be used as a starting point for making changes. - -### Usage - -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.sql` - -```console -atlas schema inspect \ - --url "mysql://root:pass@localhost:3306/example" \ - --format "{{ sql . }}" > schema.sql -``` - -```sql title="schema.sql" --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -); -``` -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the SQL file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on SQL-based schema inspection with Atlas. For those opting to use HCL schema, please refer to the documentation [here](/getting-started/mysql-declarative-hcl). 
-::: - -### Example - -Let’s add the following table definition to the SQL schema file that we previously created with the `atlas schema inspect` command: - -```sql title="schema.sql" {9-16} --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -); - -CREATE TABLE orders ( - id int NOT NULL AUTO_INCREMENT, - customer_id int NOT NULL, - order_date date NOT NULL, - PRIMARY KEY (id), - CONSTRAINT customer_id_fk FOREIGN KEY (customer_id) REFERENCES example.customers (id) -); -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your SQL file and make the necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "mysql://root:pass@localhost:3306/example" \ - --to "file://schema.sql" \ - --dev-url "docker://mysql/8/example" -``` - -:::info The role of the [Dev Database](/concepts/dev-database) -Atlas loads the **desired state** by executing the SQL files onto the provided [dev database](/concepts/dev-database), -compares it against the database **current state** by inspecting its schema and writes a migration plan for moving -from the current state to the desired state. -::: - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `orders` (`id` int NOT NULL AUTO_INCREMENT, `customer_id` int NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), INDEX `customer_id_fk` (`customer_id`), CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci; -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect -u "mysql://root:pass@localhost:3306/example" --format "{{ sql . }}" -``` - -```sql --- create "customers" table -CREATE TABLE `customers` ( - `id` int NOT NULL AUTO_INCREMENT, - `first_name` varchar(255) NOT NULL, - `last_name` varchar(255) NOT NULL, - PRIMARY KEY (`id`) -) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci; - --- create "orders" table -CREATE TABLE `orders` ( - `id` int NOT NULL AUTO_INCREMENT, - `customer_id` int NOT NULL, - `order_date` date NOT NULL, PRIMARY KEY (`id`), - INDEX `customer_id_fk` (`customer_id`), - CONSTRAINT `customer_id_fk` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION -) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci; -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state, and then automatically apply those changes to your database. 
This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/declarative/postgresql-declarative-hcl.mdx b/doc/md/getting-started/declarative/postgresql-declarative-hcl.mdx deleted file mode 100644 index a024bd0a32a..00000000000 --- a/doc/md/getting-started/declarative/postgresql-declarative-hcl.mdx +++ /dev/null @@ -1,250 +0,0 @@ ---- -id: postgresql-declarative-hcl -title: Getting started with HCL declarative migrations for a PostgreSQL schema -slug: /getting-started/postgresql-declarative-hcl ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an `example` database and create a table in it. - -Use the following command to run a PostgreSQL Docker container: - -```console -docker run --name atlas-demo -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=example -p 5432:5432 -d postgres:latest -``` - -Once the container is running, you can connect to the PostgreSQL database using the command below: - -```console -docker exec -it atlas-demo psql "postgres://postgres:pass@localhost:5432/example" -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates an output in [HCL](/guides/ddl#hcl) format that represents your database schema in a simple, human-readable language. This can be used as a starting point for making changes. 
- - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.hcl` - -```console -atlas schema inspect \ - --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" > schema.hcl -``` - -```hcl title="schema.hcl" -schema "public" {} - -table "customers" { - schema = schema.public - column "id" { - null = false - type = integer - } - column "first_name" { - null = false - type = character_varying(255) - } - column "last_name" { - null = false - type = character_varying(255) - } - primary_key { - columns = [column.id] - } -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the [HCL](/guides/ddl#hcl) file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on HCL-based schema inspection with Atlas. For those opting to use SQL schema, refer to the documentation [here](/getting-started/postgresql-declarative-sql). -::: - -### Example - -Let’s add the following table definition to the HCL schema file that we previously created with the `atlas schema inspect` command: - -```hcl title="schema.hcl" {22-44} -schema "public" {} - -table "customers" { - schema = schema.public - column "id" { - null = false - type = integer - } - column "first_name" { - null = false - type = character_varying(255) - } - column "last_name" { - null = false - type = character_varying(255) - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.public - column "id" { - null = false - type = int - auto_increment = true - } - column "customer_id" { - null = false - type = int - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "customer_id_fk" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your HCL file and make the necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" \ - --to "file://schema.hcl" -``` - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE "public"."orders" ("id" integer NOT NULL, "customer_id" integer NOT NULL, "order_date" date NOT NULL, PRIMARY KEY ("id"), CONSTRAINT "customer_id_fk" FOREIGN KEY ("customer_id") REFERENCES "public"."customers" ("id")); -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - > Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. 
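When running this workflow from a script or CI pipeline where the interactive prompt is not available, the approval step can usually be skipped. A minimal sketch, assuming the `--auto-approve` flag supported by your version of Atlas:

```console
atlas schema apply \
  --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" \
  --to "file://schema.hcl" \
  --auto-approve
```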
- -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect \ - --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" -``` -```hcl -table "customers" { - schema = schema.public - column "id" { - null = false - type = integer - } - column "first_name" { - null = false - type = character_varying(255) - } - column "last_name" { - null = false - type = character_varying(255) - } - primary_key { - columns = [column.id] - } -} -table "orders" { - schema = schema.public - column "id" { - null = false - type = integer - } - column "customer_id" { - null = false - type = integer - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "customer_id_fk" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - on_update = NO_ACTION - on_delete = NO_ACTION - } -} -schema "public" { -} -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state in a simple, human-readable language, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/declarative/postgresql-declarative-sql.mdx b/doc/md/getting-started/declarative/postgresql-declarative-sql.mdx deleted file mode 100644 index c851ca8bdfc..00000000000 --- a/doc/md/getting-started/declarative/postgresql-declarative-sql.mdx +++ /dev/null @@ -1,189 +0,0 @@ ---- -id: postgresql-declarative-sql -title: Getting started with SQL declarative migrations for a PostgreSQL schema -slug: /getting-started/postgresql-declarative-sql ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. 
- -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, let's start a docker container with an `example` database and create a table in it. - -Use the following command to run a PostgreSQL Docker container: - -```console -docker run --name atlas-demo -e POSTGRES_PASSWORD=pass -e POSTGRES_DB=example -p 5432:5432 -d postgres:latest -``` - -Once the container is running, you can connect to the PostgreSQL database using the command below: - -```console -docker exec -it atlas-demo psql "postgres://postgres:pass@localhost:5432/example" -``` - -Once you're connected, you can create a table in it by running the following command: - -```sql -CREATE TABLE customers ( - id INT PRIMARY KEY, - first_name VARCHAR(255) NOT NULL, - last_name VARCHAR(255) NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates a file that represents your database schema, which can be used as a starting point for making changes. - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.sql` - -```console -atlas schema inspect \ - --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" \ - --format "{{ sql . }}" > schema.sql -``` - -```sql title="schema.sql" --- create "customers" table -CREATE TABLE "customers" ( - "id" integer NOT NULL, - "first_name" character varying(255) NOT NULL, - "last_name" character varying(255) NOT NULL, - PRIMARY KEY ("id") -); -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the SQL file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on SQL-based schema inspection with Atlas. For those opting to use an HCL schema, read the documentation [here](/getting-started/postgresql-declarative-hcl). -::: - -### Example - -Let’s add the following table definition to the SQL schema file that we previously created with the `atlas schema inspect` command: - -```sql title="schema.sql" {9-15} --- create "customers" table -CREATE TABLE "customers" ( - "id" integer NOT NULL, - "first_name" character varying(255) NOT NULL, - "last_name" character varying(255) NOT NULL, - PRIMARY KEY ("id") -); - -CREATE TABLE orders ( - id integer NOT NULL, - customer_id integer NOT NULL, - order_date date NOT NULL, - PRIMARY KEY (id), - CONSTRAINT customer_id_fk FOREIGN KEY (customer_id) REFERENCES customers (id) -); -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your SQL file and make the necessary changes. 
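If you would rather review the planned statements without changing the database, Atlas can print the plan and exit. This is a minimal sketch, assuming the `--dry-run` flag supported by your version of Atlas; the full apply flow is shown in the example below:

```console
atlas schema apply \
  --url "postgres://postgres:pass@localhost:5432/example?sslmode=disable" \
  --to "file://schema.sql" \
  --dev-url "docker://postgres/15" \
  --dry-run
```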
- -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "postgres://postgres:pass@localhost:5432/example?&sslmode=disable" \ - --to "file://schema.sql" \ - --dev-url "docker://postgres/15" -``` - -:::info The role of the [Dev Database](/concepts/dev-database) -Atlas loads the **desired state** by executing the SQL files onto the provided [dev database](/concepts/dev-database), -compares it against the database **current state** by inspecting its schema and writes a migration plan for moving -from the current state to the desired state. -::: - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE "public"."orders" ("id" integer NOT NULL, "customer_id" integer NOT NULL, "order_date" date NOT NULL, PRIMARY KEY ("id"), CONSTRAINT "customer_id_fk" FOREIGN KEY ("customer_id") REFERENCES "public"."customers" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION); -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect \ - --url "postgres://postgres:pass@localhost:5432/example?search_path=public&sslmode=disable" \ - --format "{{ sql . }}" -``` -```sql --- create "customers" table -CREATE TABLE "customers" ( - "id" integer NOT NULL, - "first_name" character varying(255) NOT NULL, - "last_name" character varying(255) NOT NULL, - PRIMARY KEY ("id") -); - --- create "orders" table -CREATE TABLE "orders" ( - "id" integer NOT NULL, - "customer_id" integer NOT NULL, - "order_date" date NOT NULL, - PRIMARY KEY ("id"), - CONSTRAINT "customer_id_fk" - FOREIGN KEY ("customer_id") REFERENCES "customers" ("id") ON UPDATE NO ACTION ON DELETE NO ACTION -); -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). 
diff --git a/doc/md/getting-started/declarative/sqlite-declarative-hcl.mdx b/doc/md/getting-started/declarative/sqlite-declarative-hcl.mdx deleted file mode 100644 index 778af8fa8c3..00000000000 --- a/doc/md/getting-started/declarative/sqlite-declarative-hcl.mdx +++ /dev/null @@ -1,245 +0,0 @@ ---- -id: sqlite-declarative-hcl -title: Getting started with HCL declarative migrations for an SQLite schema -slug: /getting-started/sqlite-declarative-hcl ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, install the SQLite command-line tool on your host machine if it's not -already installed. You can download it from the SQLite website at https://www.sqlite.org/download.html - -Once SQLite is installed, you can create a new database file by running the following command: - -```console -sqlite3 example.db -``` - -Then, create an example `customers` table in it by executing the following statement: - -```sql -CREATE TABLE customers ( - id INTEGER PRIMARY KEY, - first_name TEXT NOT NULL, - last_name TEXT NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. -To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates an output -in [HCL](/guides/ddl#hcl) format that represents your database schema in a simple, human-readable language. -This can be used as a starting point for making changes. - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.hcl` - -```console -atlas schema inspect --url "sqlite://example.db" > schema.hcl -``` - -```hcl title="schema.hcl" -schema "main" {} - -table "customers" { - schema = schema.main - column "id" { - null = true - type = integer - } - column "first_name" { - null = false - type = text - } - column "last_name" { - null = false - type = text - } - primary_key { - columns = [column.id] - } -} -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the [HCL](/guides/ddl#hcl) file in a way that it defines the desired state of your schema. The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on HCL-based schema inspection with Atlas. For those opting to use SQL schema, refer to the documentation [here](/getting-started/sqlite-declarative-sql). 
-::: - -### Example - -Let’s add the following table definition to the HCL schema file that we previously created with the `atlas schema inspect` command: - -```hcl title="schema.hcl" {22-44} -schema "main" {} - -table "customers" { - schema = schema.main - column "id" { - null = true - type = integer - } - column "first_name" { - null = false - type = text - } - column "last_name" { - null = false - type = text - } - primary_key { - columns = [column.id] - } -} - -table "orders" { - schema = schema.main - column "id" { - null = true - type = integer - } - column "customer_id" { - null = false - type = integer - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "fk1" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. -This command will compare the current state of your database to the desired state defined in your HCL file and make -the necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "sqlite://example.db" \ - --to "file://schema.hcl" -``` - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `orders` (`id` integer NULL, `customer_id` integer NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk1` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`)); -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect --url "sqlite://example.db" -``` -```hcl -table "customers" { - schema = schema.main - column "id" { - null = true - type = integer - } - column "first_name" { - null = false - type = text - } - column "last_name" { - null = false - type = text - } - primary_key { - columns = [column.id] - } -} -table "orders" { - schema = schema.main - column "id" { - null = true - type = integer - } - column "customer_id" { - null = false - type = integer - } - column "order_date" { - null = false - type = date - } - primary_key { - columns = [column.id] - } - foreign_key "fk1" { - columns = [column.customer_id] - ref_columns = [table.customers.column.id] - } -} -schema "main" { -} -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state in a simple, human-readable language, and then automatically apply those changes to your database. This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. 
To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/declarative/sqlite-declarative-sql.mdx b/doc/md/getting-started/declarative/sqlite-declarative-sql.mdx deleted file mode 100644 index 486d50d05fc..00000000000 --- a/doc/md/getting-started/declarative/sqlite-declarative-sql.mdx +++ /dev/null @@ -1,183 +0,0 @@ ---- -id: sqlite-declarative-sql -title: Getting started with SQL declarative migrations for an SQLite schema -slug: /getting-started/sqlite-declarative-sql ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' - -### Introduction - -Atlas makes it easy to manage your database using the declarative migrations workflow. Declarative migrations focus on describing the desired state of the database and allow the migration engine to plan and execute the necessary actions to reach that state. - -In this quickstart guide, we will go through the three essential steps for using Atlas to manage your database schema: inspecting your database, authoring the desired changes, and applying those changes. - -### Installing Atlas - - - -### Warming up -To demonstrate the examples in the article, install the SQLite command-line tool on your host machine if it's not already installed. You can download it from the SQLite website at https://www.sqlite.org/download.html - -Once SQLite is installed, you can create a new database file by running the following command: - -```console -sqlite3 example.db -``` - -Then, create an example `customers` table in it by executing the following statement: - -```sql -CREATE TABLE customers ( - id INTEGER PRIMARY KEY, - first_name TEXT NOT NULL, - last_name TEXT NOT NULL -); -``` - -## Step 1: Inspecting the database - -Before planning any changes to your database, it can be useful to know what your database schema looks like. To do this, you can use the `atlas schema inspect` command to inspect your database. This command generates a file that represents your database schema, which can be used as a starting point for making changes. - -### Usage -``` -atlas schema inspect [flags] -``` - -### Example - -The following command will inspect the current state of the schema `example` and store it in a file named `schema.sql` - -```console -atlas schema inspect \ - --url "sqlite://example.db" \ - --format "{{ sql . }}" > schema.sql -``` - -```sql title="schema.sql" --- create "customers" table --- create "customers" table -CREATE TABLE `customers` ( - `id` integer NULL, - `first_name` text NOT NULL, - `last_name` text NOT NULL, - PRIMARY KEY (`id`) -); -``` - -:::info -To learn more about inspecting schema with Atlas in variety of use cases, visit the documentation [here](/declarative/inspect) -::: - -## Step 2: Planning a change - -Once you have inspected your database, the next step is to plan a change. This is done by modifying the SQL file in a way that it defines the desired state of your schema. 
The file should include the changes you want to make to your database, such as creating tables, columns, or indexes. - -:::info -In this article, we will be focusing on SQL-based schema inspection with Atlas. For those opting to use an HCL schema, read the documentation [here](/getting-started/sqlite-declarative-hcl). -::: - -### Example - -Let’s add the following table definition to the SQL schema file that we previously created with the `atlas schema inspect` command: - -```sql title="schema.sql" {9-15} --- create "customers" table -CREATE TABLE `customers` ( - `id` integer NULL, - `first_name` text NOT NULL, - `last_name` text NOT NULL, - PRIMARY KEY (`id`) -); - -CREATE TABLE orders ( - id integer NULL, - customer_id integer NOT NULL, - order_date date NOT NULL, - PRIMARY KEY (id), - CONSTRAINT fk1 FOREIGN KEY (customer_id) REFERENCES customers (id) ON UPDATE NO ACTION ON DELETE NO ACTION -); -``` - -## Step 3: Applying the change - -The final step is to apply the change to your database. To do this, simply run the `atlas schema apply` command. This command will compare the current state of your database to the desired state defined in your SQL file and make the necessary changes. - -### Usage - -```console -atlas schema apply [flags] -``` - -### Example - -Let’s apply the changes we created in Step 2 using the following command: - -```console -atlas schema apply \ - --url "sqlite://example.db" \ - --to "file://schema.sql" \ - --dev-url "sqlite://file?mode=memory" -``` - -:::info The role of the [Dev Database](/concepts/dev-database) -Atlas loads the **desired state** by executing the SQL files onto the provided [dev database](/concepts/dev-database), -compares it against the database **current state** by inspecting its schema and writes a migration plan for moving -from the current state to the desired state. -::: - -```console --- Planned Changes: --- Create "orders" table -// highlight-next-line-info -CREATE TABLE `orders` (`id` integer NULL, `customer_id` integer NOT NULL, `order_date` date NOT NULL, PRIMARY KEY (`id`), CONSTRAINT `fk1` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION); -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` - -Selecting `Apply` will apply the changes to the database. - -You can inspect the latest state of the database again and verify that the changes have been applied by re-running the `atlas schema inspect` command: - -```console -atlas schema inspect \ - --url "sqlite://example.db" \ - --format "{{ sql . }}" -``` -```sql --- create "customers" table -CREATE TABLE `customers` ( - `id` integer NULL, - `first_name` text NOT NULL, - `last_name` text NOT NULL, - PRIMARY KEY (`id`) -); - --- create "orders" table -CREATE TABLE `orders` ( - `id` integer NULL, - `customer_id` integer NOT NULL, - `order_date` date NOT NULL, - PRIMARY KEY (`id`), - CONSTRAINT `fk1` FOREIGN KEY (`customer_id`) REFERENCES `customers` (`id`) ON UPDATE NO ACTION ON DELETE NO ACTION -); -``` -Great! Our changes have been successfully applied to the `example` schema. - -## Conclusion - -By following the steps above, you can inspect the current state of the database, make changes to the schema definition file and apply those changes to the database with confidence. - -The declarative migration is a flexible way to manage your database schema. It allows you to define your desired state, and then automatically apply those changes to your database. 
This can save you time and effort, and also reduce the risk of errors or inconsistencies in your database schema. - -:::info -Atlas supports both Declarative and Versioned workflows for managing database schemas. The choice between the two depends on your requirements. To understand the difference and choose the best option for you, visit the documentation on Declarative vs Versioned workflow [here](/concepts/declarative-vs-versioned). -::: - -## Need More Help? - -[Join the Ariga Discord Server](https://discord.com/invite/zZ6sWVg6NT) for early access to features and the ability to provide exclusive feedback that improves your Database Management Tooling. - -[Sign up](https://atlasnewsletter.substack.com/) to our newsletter to stay up to date about Atlas, and the cloud platform [Atlas Cloud](https://atlasgo.cloud). diff --git a/doc/md/getting-started/getting-started.mdx b/doc/md/getting-started/getting-started.mdx deleted file mode 100644 index 2ecd597ec52..00000000000 --- a/doc/md/getting-started/getting-started.mdx +++ /dev/null @@ -1,329 +0,0 @@ ---- -id: getting-started -title: Quick Introduction -sidebar_label: Quick Introduction ---- -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; -import InstallationInstructions from '../components/_installation_instructions.mdx' -import Discord from '../../website/src/assets/icons/discord-white.svg' - -import { AtlasGoNewsletterDocs } from "@ariga/atlas-website"; - -Atlas is a language-independent tool for managing and migrating database schemas using modern DevOps principles. -It offers two workflows: - -- **Declarative**: Similar to Terraform, Atlas compares the current state of the database to the desired state, as -defined in an [HCL](../atlas-schema/hcl), [SQL](../atlas-schema/sql), or [ORM](../atlas-schema/external) schema. -Based on this comparison, it generates and executes a migration plan to transition the database to its desired state. - -- **Versioned**: Unlike other tools, Atlas automatically plans schema migrations for you. Users can describe their desired -database schema in [HCL](../atlas-schema/hcl), [SQL](../atlas-schema/sql), or their chosen [ORM](../atlas-schema/external), -and by utilizing Atlas, they can plan, lint, and apply the necessary migrations to the database. - -### Installation - - - -The default binaries distributed in official releases are released under the -[Atlas EULA](https://ariga.io/legal/atlas/eula). If you would like obtain a copy of Atlas -Community Edition (under an Apache 2 license) follow the instructions [here](/community-edition). - -### Start a local database container - -For the purpose of this guide, we will start a local Docker container running MySQL. -```shell -docker run --rm -d --name atlas-demo -p 3306:3306 -e MYSQL_ROOT_PASSWORD=pass -e MYSQL_DATABASE=example mysql -``` - -For this example, we will start with a schema that represents a `users` table, in which each user has an ID and a name: -```sql -CREATE table users ( - id int PRIMARY KEY, - name varchar(100) -); -``` - -To create the table above on our local database, we can run the following command: -```shell -docker exec atlas-demo mysql -ppass -e 'CREATE table example.users(id int PRIMARY KEY, name varchar(100))' -``` - -### Inspecting our database - -The `atlas schema inspect` command supports reading the database description provided by a URL and outputting it in -three different formats: [Atlas DDL](../atlas-schema/hcl.mdx) (default), SQL, and JSON. 
In this guide, we will -demonstrate the flow using both the Atlas DDL and SQL formats, as the JSON format is often used for processing the -output using `jq`. - - - - -To inspect our locally-running MySQL instance, use the `-u` flag and write the output to a file named `schema.hcl`: - -```shell -atlas schema inspect -u "mysql://root:pass@localhost:3306/example" > schema.hcl -``` - -Open the `schema.hcl` file to view the Atlas schema that describes our database. - -```hcl title="schema.hcl" -table "users" { - schema = schema.example - column "id" { - null = false - type = int - } - column "name" { - null = true - type = varchar(100) - } - primary_key { - columns = [column.id] - } -} -``` -This block represents a [table](../atlas-schema/hcl.mdx#table) resource with `id`, and `name` -columns. The `schema` field references the `example` schema that is defined elsewhere -in this document. In addition, the `primary_key` sub-block defines the `id` column as -the primary key for the table. Atlas strives to mimic the syntax of the database that the user is working against. In this case, the -type for the `id` column is `int`, and `varchar(100)` for the `name` column. - - - - -To inspect our locally-running MySQL instance, use the `-u` flag and write the output to a file named `schema.sql`: - -```shell -atlas schema inspect -u "mysql://root:pass@localhost:3306/example" --format '{{ sql . }}' > schema.sql -``` - -Open the `schema.sql` file to view the inspected SQL schema that describes our database. - -```sql title="schema.sql" --- create "users" table -CREATE TABLE `users` ( - `id` int NOT NULL, - `name` varchar(100) NULL, - PRIMARY KEY (`id`) -) CHARSET utf8mb4 COLLATE utf8mb4_0900_ai_ci; -``` - - - - -Now, consider we want to add a `blog_posts` table and have our schema represent a simplified -blogging system. - - -

- *[figure: Blog ERD]*
- -Let's add the following to our inspected schema, and use Atlas to plan and apply the changes to our database. - - - - -Edit the `schema.hcl` file and add the following `table` block: - -```hcl title="schema.hcl" {22-25} -table "blog_posts" { - schema = schema.example - column "id" { - null = false - type = int - } - column "title" { - null = true - type = varchar(100) - } - column "body" { - null = true - type = text - } - column "author_id" { - null = true - type = int - } - primary_key { - columns = [column.id] - } - foreign_key "author_fk" { - columns = [column.author_id] - ref_columns = [table.users.column.id] - } -} -``` -In addition to the elements we saw in the `users` table, here we can find a [foreign key](../atlas-schema/hcl.mdx#foreign-key) -block, declaring that the `author_id` column references the `id` column on the -`users` table. - - - -Edit the `schema.sql` file and add the following `CREATE TABLE` statement: - -```sql title="schema.sql" --- create "blog_posts" table -CREATE TABLE `blog_posts` ( - `id` int NOT NULL, - `title` varchar(100) NULL, - `body` text NULL, - `author_id` int NULL, - PRIMARY KEY (`id`), - CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`) -); -``` - - - -Now, let's apply these changes by running a migration. -In Atlas, migrations can be applied in two types of workflows: _declarative_ and _versioned_. - -### Declarative Migrations -The declarative approach requires the user to define the _desired_ end schema, and Atlas provides -a safe way to alter the database to get there. Let's see this in action. - -Continuing the example, in order to apply the changes to our database we will run the `apply` command: - - - - -```shell -atlas schema apply \ - -u "mysql://root:pass@localhost:3306/example" \ - --to file://schema.hcl -``` - - - - -```shell -atlas schema apply \ - -u "mysql://root:pass@localhost:3306/example" \ - --to file://schema.sql \ - --dev-url "docker://mysql/8/example" -``` - - - - -Atlas presents the plan it created by displaying the SQL statements. For example, for a MySQL database we will see -the following: - -```console --- Planned Changes: --- Create "blog_posts" table -// highlight-next-line-info -CREATE TABLE `example`.`blog_posts` (`id` int NOT NULL, `title` varchar(100) NULL, `body` text NULL, `author_id` int NULL, PRIMARY KEY (`id`), INDEX `author_id` (`author_id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`)) -Use the arrow keys to navigate: ↓ ↑ → ← -? Are you sure?: - ▸ Apply - Abort -``` -Apply the changes, and that's it! You have successfully run a declarative migration. - -To ensure that the changes have been made to the schema, you can run the `inspect` command again. This time, we use -the `--web`/`-w` flag to open the Atlas Web UI and view the schema. - -```shell {3} -atlas schema inspect \ - -u "mysql://root:pass@localhost:3306/example" \ - --web -``` - -:::note -If you are using an old version of Atlas, you may need to replace the `--web` flag with `--visualize`. -::: - -### Versioned Migrations -Alternatively, the versioned migration workflow, sometimes called "change-based migrations", allows each change to the -database schema to be checked-in to source control and reviewed during code-review. Users can still benefit from Atlas -intelligently planning migrations for them, however they are not automatically applied. 
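Because migration files are committed and reviewed like any other code, Atlas can also analyze them for risky changes before they are merged. The following is a minimal sketch of linting the directory created in the next step; the `--latest` flag, assumed to be available in your version of Atlas, limits analysis to the most recent file:

```console
atlas migrate lint \
  --dir "file://migrations" \
  --dev-url "docker://mysql/8/example" \
  --latest 1
```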
- -To start, we will calculate the difference between the _desired_ and _current_ state of the database by running the `atlas -migrate diff` command. - -To run this command, we need to provide the necessary parameters: - -* `--dir` the URL to the migration directory, by default it is `file://migrations`. -* `--to` the URL of the desired state. A state can be specified using a database URL, HCL or SQL schema, or another migration directory. -* `--dev-url` a URL to a [Dev Database](/concepts/dev-database) that will be used to compute the diff. - - - - -```shell -atlas migrate diff create_blog_posts \ - --dir "file://migrations" \ - --to "file://schema.hcl" \ - --dev-url "docker://mysql/8/example" -``` - - - -```shell -atlas migrate diff create_blog_posts \ - --dir "file://migrations" \ - --to "file://schema.sql" \ - --dev-url "docker://mysql/8/example" -``` - - - -Run `ls migrations`, and you will notice that Atlas has created two files: - - - - -```sql --- create "blog_posts" table -CREATE TABLE `example`.`blog_posts` (`id` int NOT NULL, `title` varchar(100) NULL, `body` text NULL, `author_id` int NULL, PRIMARY KEY (`id`), INDEX `author_id` (`author_id`), CONSTRAINT `author_fk` FOREIGN KEY (`author_id`) REFERENCES `example`.`users` (`id`)) -``` - - - - -In addition to the migration directory, Atlas maintains a file name `atlas.sum` which is used -to ensure the integrity of the migration directory and force developers to deal with situations -where migration order or contents was modified after the fact. - -```text -h1:t1fEP1rSsGf1gYrYCjsGyEyuM0cnhATlq93B7h8uXxY= -20220811074144_create_blog_posts.sql h1:liZcCBbAn/HyBTqBAEVar9fJNKPTb2Eq+rEKZeCFC9M= -``` - - - - -Now that we have our migration files ready, you can use the `migrate apply` command to apply the changes to the database. -To learn more about this process, check out the [Versioned Migrations Quickstart Guide](/versioned/intro) - -### Next Steps -In this short tutorial we learned how to use Atlas to inspect databases, as well as use declarative and versioned -migrations. Read more about the use-cases for the two approaches [here](/concepts/declarative-vs-versioned) to help you -decide which workflow works best for you. - -:::info Need help getting started? - -We have a super friendly [#getting-started](https://discord.gg/8mvDUG22) channel on our community -chat on Discord. - - -For web-based, free, and fun (GIFs included) support: - - - - - Join our Discord server - - - -::: - - diff --git a/doc/md/guides/ci-platforms/gitlab.mdx b/doc/md/guides/ci-platforms/gitlab.mdx deleted file mode 100644 index 1946f916c14..00000000000 --- a/doc/md/guides/ci-platforms/gitlab.mdx +++ /dev/null @@ -1,334 +0,0 @@ ---- -id: gitlab -title: CI for Databases on GitLab -slug: /guides/ci-platforms/gitlab ---- - -import InstallationInstructions from '../../components/_installation_instructions.mdx' -import MigratePushCommand from '../../components/migrate-push-command.mdx' -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -![](https://atlasgo.io/uploads/gitlab/gitlab-ci-edited.png) - -GitLab is a popular, open-source alternative to GitHub. In addition to a self-hosted version, -GitLab also offers a hosted version at [gitlab.com](https://gitlab.com). Similar to GitHub, GitLab offers -users storage for Git repositories, issue tracking, and CI/CD pipelines. - -In this guide we will demonstrate how to use [GitLab CI](https://docs.gitlab.com/ee/ci/) and Atlas to setup CI -pipelines for your database schema changes. 
- -## Prerequisites - -- A GitLab project with an [Atlas migration directory](/versioned/intro). - -## Setup - -### Install Atlas - - - -### Login to Atlas - -After installing Atlas locally, you will need to login to your Atlas account. If you don't have an account yet, -you will be prompted to create one for free during the login process. - -You can do this by running the following command: - -```bash -atlas login -``` - -### Push your migrations to Atlas - -Once you have logged in, you can push your migrations to Atlas by running the following command: - - - -Atlas will print a URL leading to your migrations on Atlas Cloud. You can visit this URL to view your migrations. - -### Create a Bot Token - -To connect GitLab CI to Atlas, you will need to create a bot token. To learn how to create a bot token, please -visit the [Bot Tokens](/cloud/bots) guide. - -## Setup GitLab CI - -With your migrations pushed to Atlas, you can now setup GitLab CI to run your migrations on every commit. - -### Create a variable for your bot token - -In order to connect GitLab CI to Atlas, you will need to [create a variable](https://docs.gitlab.com/ee/ci/variables/#for-a-project) -for your bot token. To do this: -1. Go to your projects *Settings* page. -2. Click on the *CI/CD* tab. -3. Expand the *Variables* tab. -4. Click "Add Variable". -5. Use `ATLAS_CLOUD_TOKEN` as the variable key and paste the bot token you created earlier as the variable value. -6. Set the "Mask Variable" checkbox to true to prevent the token from being printed in the CI logs. -7. Unset the "Protect Variable" checkbox so that this job can be run on all branches (necessary for running on merge requests). - -### Create a `.gitlab-ci.yml` file - -Next, you will need to create a `.gitlab-ci.yml` file in the root of your repository. This file will tell GitLab CI -how to run your Atlas CI pipeline. 
- - - - -```yaml -image: ubuntu:latest - -stages: - - lint - - push - -before_script: - - apt-get update -q && apt-get install -y curl - - 'which atlas || (curl -sSf https://atlasgo.sh | sh)' - - atlas login --token $ATLAS_CLOUD_TOKEN - -lint migrations: - stage: lint - rules: - - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH' - when: always - - if: '$CI_MERGE_REQUEST_ID' - changes: - - migrations/* - when: always - allow_failure: false - script: - - | - if [ -n "$CI_MERGE_REQUEST_IID" ]; then - URL="${CI_PROJECT_URL}/-/merge_requests/${CI_MERGE_REQUEST_IID}" - else - URL="${CI_PROJECT_URL}/-/commit/${CI_COMMIT_SHA}" - fi - - ATLAS_CONTEXT=$(cat < - - -```yaml -image: ubuntu:latest - -services: - - postgres - -variables: - POSTGRES_DB: db - POSTGRES_USER: user - POSTGRES_PASSWORD: pass - -stages: - - lint - - push - -before_script: - - apt-get update -q && apt-get install -y curl - - 'which atlas || (curl -sSf https://atlasgo.sh | sh)' - - atlas login --token $ATLAS_CLOUD_TOKEN - -lint migrations: - stage: lint - rules: - - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH' - when: always - - if: '$CI_MERGE_REQUEST_ID' - changes: - - migrations/* - when: always - allow_failure: false - script: - - | - if [ -n "$CI_MERGE_REQUEST_IID" ]; then - URL="${CI_PROJECT_URL}/-/merge_requests/${CI_MERGE_REQUEST_IID}" - else - URL="${CI_PROJECT_URL}/-/commit/${CI_COMMIT_SHA}" - fi - - ATLAS_CONTEXT=$(cat < - - -```yaml -image: ubuntu:latest - -services: - - mysql - -variables: - MYSQL_DATABASE: db - MYSQL_ROOT_PASSWORD: pass - -stages: - - lint - - push - -before_script: - - apt-get update -q && apt-get install -y curl - - 'which atlas || (curl -sSf https://atlasgo.sh | sh)' - - atlas login --token $ATLAS_CLOUD_TOKEN - -lint migrations: - stage: lint - rules: - - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH' - when: always - - if: '$CI_MERGE_REQUEST_ID' - changes: - - migrations/* - when: always - allow_failure: false - script: - - | - if [ -n "$CI_MERGE_REQUEST_IID" ]; then - URL="${CI_PROJECT_URL}/-/merge_requests/${CI_MERGE_REQUEST_IID}" - else - URL="${CI_PROJECT_URL}/-/commit/${CI_COMMIT_SHA}" - fi - - ATLAS_CONTEXT=$(cat < - - -Let's break down what this file is doing: -1. The `image` directive tells GitLab CI to use the `ubuntu:latest` Docker image to run your CI jobs. -2. The `stages` directive defines the stages that your CI pipeline will run. In this case, we have two stages: - `lint` and `push`. In the `lint` stage, we will run the `atlas migrate lint` command to verify that our migrations - are valid and safe. In the `push` stage, we will run the `atlas migrate push` command to push our migrations to Atlas. -3. The `before_script` directive defines a set of commands that will be run before each job. In this case, we are - installing the `atlas` CLI and logging into Atlas using the bot token we created earlier. -4. The `lint migrations` job will run the `atlas migrate lint` command. This job will run on all commits to your default - branch and on all merge requests. This job will fail if the migrations are invalid. -5. The `push migrations` job will run the `atlas migrate push` command. This job will only run on commits to your default - branch. - -### Commit and push your changes - -Now that you have created your `.gitlab-ci.yml` file, you can commit and push your changes to GitLab. - -From this point on, whenever you make a change to your migrations and push them to GitLab, GitLab CI will run your -Atlas CI pipeline to verify their safety. 
To view a detailed analysis of your schema changes, look at the logs for -a direct link to your Atlas Cloud account: -```text -$ which atlas || (curl -sSf https://atlasgo.sh | sh) -Downloading https://release.ariga.io/atlas/atlas-linux-amd64-latest -Installation successful! - -$ atlas login --token $ATLAS_CLOUD_TOKEN -You are now connected to "rotemtam85" on Atlas Cloud. - -$ atlas migrate lint --base atlas://gitlab -w --dev-url "sqlite://demo?mode=memory" --context "$ATLAS_CONTEXT" -// highlight-next-line -https://rotemtam85.atlasgo.cloud/ci-runs/8589934616 -Cleaning up project directory and file based variables -``` - -Once you merge your changes to your default branch, the pipeline will -push your latest changes to Atlas, so they can be deployed. Similarly, to get a visual -representation of your database schema, follow the link from the logs to Atlas Cloud: - -```text -$ which atlas || (curl -sSf https://atlasgo.sh | sh) -Downloading https://release.ariga.io/atlas/atlas-linux-amd64-latest -Installation successful! - -$ atlas login --token $ATLAS_CLOUD_TOKEN -You are now connected to "rotemtam85" on Atlas Cloud. - -$ atlas migrate push gitlab --dev-url "sqlite://demo?mode=memory" --context "$ATLAS_CONTEXT" -// highlight-next-line -https://rotemtam85.atlasgo.cloud/dirs/4294967302 -``` - -## Next Steps: Deployments - -Congratulations! You have successfully setup GitLab CI to lint and push your migration -directory to Atlas Cloud. The next step is to deploy your migrations to your production database -directly from Atlas Cloud using the [CLI](/guides/deploying/from-cloud). diff --git a/doc/md/guides/clickhouse.mdx b/doc/md/guides/clickhouse.mdx deleted file mode 100644 index 2a7b8d1a88a..00000000000 --- a/doc/md/guides/clickhouse.mdx +++ /dev/null @@ -1,608 +0,0 @@ ---- -id: getting-started-clickhouse -title: Automatic Migrations for ClickHouse with Atlas -slug: /guides/clickhouse -tags: [clickhouse] ---- - -import InstallationInstructions from '../components/_installation_instructions.mdx'; -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -[ClickHouse](https://clickhouse.com/), one of the prominent columnar databases, is designed for real-time analytics, -providing exceptional speed and efficiency in handling large datasets. - -Many teams adopt ClickHouse for its speed and cost-efficiency only to discover that the day 2 operations of managing its -schema can be a puzzle. Sure, if your schema contains a handful of tables that rarely change, -you’re probably not going to feel much of this pain. But for mission-critical applications, managing complex and interconnected -schemas while moving fast without breaking things becomes difficult. - -#### Enter: Atlas - -Atlas helps developers manage their database schema as code - abstracting away the intricacies of database schema -management. With Atlas, users provide the desired state of the database schema and Atlas automatically plans the -required migrations. - -In this guide, we will dive into setting up Atlas for ClickHouse, and introduce the different workflows available. - -## Prerequisites - -1. Docker -2. Atlas installed on your machine: - - -## Logging in to Atlas - -To use ClickHouse with Atlas, you'll need to [log in to Atlas](https://auth.atlasgo.cloud/signup). 
If it's your first time, -you will be prompted to create both an account and a workspace (organization): - - - - -```shell -atlas login -``` - - - - -```shell -atlas login --token "ATLAS_TOKEN" -``` - - - - -```shell -ATLAS_TOKEN="ATLAS_TOKEN" atlas login -``` - - - - -## Inspecting our Database - -Let's start off by spinning up a database using Docker: - -```shell -docker run --rm -d --name atlas-demo -e CLICKHOUSE_DB=demo -p 9000:9000 clickhouse/clickhouse-server:latest -``` - -If you are already using ClickHouse Cloud and want to inspect it, -you can work with Atlas through [Url](../concepts/url) and utilize it throughout the guide. -``` -clickhouse://user:pass@CLICKHOUSE-CLOUD-HOST:9440/schema?secure=true -``` - -For this example we will begin with a minimal database with a `users` table and an `id` as the primary key. - -```sql -CREATE TABLE `users` ( - `id` UInt64, - `name` String NOT NULL, - PRIMARY KEY (`id`) -) ENGINE = MergeTree() ORDER BY id; -``` - -To create the table above on our local database, we can run the following command: - -```sql - docker exec -it atlas-demo clickhouse-client --query 'CREATE TABLE demo.users (id UInt64, name String NOT NULL, PRIMARY KEY (id)) ENGINE = MergeTree() ORDER BY id' -``` - -The `atlas schema inspect` command supports reading the database description provided by a [URL](/concepts/url) and outputting it in -different formats, including [Atlas DDL](/atlas-schema/hcl.mdx) (default), SQL, and JSON. In this guide, we will -demonstrate the flow using both the Atlas DDL and SQL formats, as the JSON format is often used for processing the -output using `jq`. - - - - -To inspect our locally-running ClickHouse instance, use the `-u` flag and write the output to a file named `schema.hcl`: - -```shell -atlas schema inspect -u "clickhouse://localhost:9000/demo" > schema.hcl -``` - -Open the `schema.hcl` file to view the Atlas schema that describes our database. - -```hcl title="schema.hcl" -table "users" { - schema = schema.demo - engine = MergeTree - column "id" { - null = false - type = UInt64 - } - column "name" { - null = false - type = String - } - primary_key { - columns = [column.id] - } -} -schema "demo" { - engine = sql("Memory") -} -``` - -This first block represents a [table](/atlas-schema/hcl.mdx#table) resource with `id`, and `name` -columns. The `schema` field references the `demo` schema that is defined in the block below. In addition, the `primary_key` -sub-block defines the `id` column as the primary key for the table. Atlas strives to mimic the syntax of the database -that the user is working against. In this case, the type for the `id` column is `UInt64`, and `String` for the `name` column. - - - - -To inspect our locally-running ClickHouse instance, use the `-u` flag and write the output to a file named `schema.sql`: - -```shell -atlas schema inspect -u "clickhouse://localhost:9000/demo" --format '{{ sql . }}' > schema.sql -``` - -Open the `schema.sql` file to view the inspected SQL schema that describes our database. - -```sql title="schema.sql" -CREATE TABLE `users` ( - `id` UInt64, - `name` String) - ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; -``` - - - - -:::info -For in-depth details on the `atlas schema inspect` command, covering aspects like inspecting specific schemas, -handling multiple schemas concurrently, excluding tables, and more, refer to our documentation -[here](/declarative/inspect). 
-::: - -To generate an Entity Relationship Diagram (ERD), or a visual representation of our schema, we can add the `-w` flag -to the inspect command: - -```shell -atlas schema inspect -u "clickhouse://localhost:9000/demo" -w -``` - -![clickhouse-inspect](https://atlasgo.io/uploads/clickhouse/images/clickhouse-inspect.png) - -## Declarative Migrations - -The declarative approach lets users manage schemas by defining the desired state of the database as code. -Atlas then inspects the target database and calculates an execution plan to reconcile the difference between the desired and actual states. -Let's see this in action. - -We will start off by making a change to our schema file, such as adding a `repos` table: - - - - -```hcl title=schema.hcl -table "users" { - schema = schema.demo - engine = MergeTree - column "id" { - null = false - type = UInt64 - } - column "name" { - null = false - type = String - } - primary_key { - columns = [column.id] - } -} - -// highlight-start -table "repos" { - schema = schema.demo - engine = MergeTree - column "id" { - type = UInt64 - null = false - } - column "name" { - type = String - null = false - } - column "owner_id" { - type = Bigint - null = false - } - primary_key { - columns = [column.id] - } -} -schema "demo" { - engine = sql("Memory") -} -// highlight-end -``` - - - - -```sql title="schema.sql" --- Create "users" table -CREATE TABLE `users` ( - `id` UInt64, - `name` String) - ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; - -// highlight-start --- Create "repos" table -CREATE TABLE `repos` ( - `id` UInt64, - `name` String, - `owner_id` Int64) - ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; - //highlight-end -``` - - - - -Now that our _desired state_ has changed, to apply these changes to our database, Atlas will plan a migration for us -by running the `atlas schema apply` command: - - - - -```shell -atlas schema apply \ --u "clickhouse://localhost:9000/demo" \ ---to file://schema.hcl \ ---dev-url "docker://clickhouse/23.11/demo" -``` - - - - -```shell -atlas schema apply \ --u "clickhouse://localhost:9000/demo" \ ---to file://schema.sql \ ---dev-url "docker://clickhouse/23.11/demo" -``` - - - - -Apply the changes, and that's it! You have successfully run a declarative migration. - -:::info -For a more detailed description of the `atlas schema apply` command refer to our documentation -[here](/declarative/apply). -::: - -To ensure that the changes have been made to the schema, let's run the `inspect` command with the `-w` flag once more -and view the ERD: - -![atlas-schema](https://atlasgo.io/uploads/clickhouse/images/clickhouse-two-tables.png) - -## Versioned Migrations - -Alternatively, the versioned migration workflow, sometimes called "change-based migrations", allows each change to the -database schema to be checked-in to source control and reviewed during code-review. Users can still benefit from Atlas -intelligently planning migrations for them, however they are not automatically applied. - -### Creating the first migration - -In the versioned migration workflow, our database state is managed by a _migration directory_. The migration directory -holds all of the migration files created by Atlas, and the sum of all files in lexicographical order represents the current -state of the database. 
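For illustration, after a couple of changes the directory might look roughly like the sketch below, where the timestamped file names are examples and `atlas.sum` is the integrity file described later in this section:

```console
$ ls migrations/
20240130122951.sql
20240130141055.sql
atlas.sum
```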
- -To create our first migration file, we will run the `atlas migrate diff` command, and we will provide the necessary parameters: - -* `--dir` the URL to the migration directory, by default it is file://migrations. -* `--to` the URL of the desired state. A state can be specified using a database URL, HCL or SQL schema, or another migration directory. -* `--dev-url` a URL to a [Dev Database](/concepts/dev-database) that will be used to compute the diff. - - - - -```shell -atlas migrate diff initial \ - --to file://schema.hcl \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -```shell -atlas migrate diff initial \ - --to file://schema.sql \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -Run `ls migrations`, and you'll notice that Atlas has automatically created a migration directory for us, as well as -two files: - - - - -```sql --- Create "users" table -CREATE TABLE `users` ( - `id` UInt64, - `name` String) - ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; --- Create "repos" table -CREATE TABLE `repos` ( - `id` UInt64, - `name` String, - `owner_id` Int64) - ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; -``` - - - - -```shell -h1:NrYfakYn4W3xiS9VqcP98sqvgLUPn9pJpxFndh1GWsQ= -20240130122951.sql h1:R+eBw2nOk2+VLBvJ/dn49OphxzfxoWEBUxAy4Zp3VAE= -``` - - - -The migration file represents the current state of our database, and the sum file is used by Atlas to maintain the integrity -of the migration directory. To learn more about the sum file, read the [documentation](/concepts/migration-directory-integrity). - -### Pushing migration directories to Atlas - -Now that we have our first migration, we can apply it to a database. There are multiple ways to accomplish this, with -most methods covered in the [guides](/guides) section. In this example, we'll demonstrate how to push migrations to -[Atlas Cloud](https://atlasgo.cloud), much like how Docker images are pushed to Docker Hub. - -
*Migration Directory created with atlas migrate push*
-

Let's name our new migration project `app` and run `atlas migrate push`:

```shell
atlas migrate push app \
    --dev-url "docker://clickhouse/23.11"
```

Once the migration directory is pushed, Atlas prints a URL to the created directory, similar to the one shown in the
image above.

### Applying migrations

Once our `app` migration directory has been pushed, we can apply it to a database from any CD platform without
necessarily having our directory there.

We'll create a simple Atlas configuration file (`atlas.hcl`) to store the settings for our local environment:

```hcl title="atlas.hcl" {1}
# The "local" environment represents our local testing setup.
env "local" {
  url = "clickhouse://localhost:9000/example"
  migration {
    dir = "atlas://app"
  }
}
```

The final step is to apply the migrations to the database. Let's run `atlas migrate apply` with the `--env` flag
to instruct Atlas to select the environment configuration from the `atlas.hcl` file:

```shell
atlas migrate apply --env local
```

Boom! After applying the migration, you should receive a link to the deployment and the database where the migration
was applied. Here's an example of what it should look like:
*Migration deployment report created with atlas migrate apply*
- -### Generating another migration - -After applying the first migration, it's time to update our schema defined in the schema file and tell Atlas to generate -another migration. This will bring the migration directory (and the database) in line with the new state defined by the -desired schema (schema file). - -Let's make two changes to our schema: - -* Add a new `description` column to our repos table -* Add a new `commits` table - - - - -```hcl title="schema.hcl" -table "users" { - schema = schema.demo - engine = MergeTree - column "id" { - null = false - type = UInt64 - } - column "name" { - null = false - type = String - } - primary_key { - columns = [column.id] - } -} -table "repos" { - schema = schema.demo - engine = MergeTree - column "id" { - type = UInt64 - null = false - } - column "name" { - type = String - null = false - } -// highlight-start - column "description" { - type = sql("Nullable(String)") - null = true - } -// highlight-end - column "owner_id" { - type = Bigint - null = false - } - primary_key { - columns = [column.id] - } -} -// highlight-start -table "commits" { - schema = schema.demo - engine = MergeTree - column "id" { - type = UInt64 - null = false - } - column "message" { - type = String - null = false - } - column "repo_id" { - type = Bigint - null = false - } - column "author_id" { - type = Bigint - null = false - } - primary_key { - columns = [column.id] - } -} -// highlight-end -schema "demo" { - engine = sql("Memory") -} - -``` - - - - -```sql title = "schema.sql" -CREATE TABLE `users` ( - `id` UInt64, - `name` String NOT NULL -) ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; - -CREATE TABLE `repos` ( - `id` UInt64, - `name` String NOT NULL, -// highlight-next-line - `description` String NULL, - `owner_id` Bigint NOT NULL -) ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192;; - -// highlight-start -CREATE TABLE `commits` ( - `id` UInt64, - `message` String NOT NULL, - `repo_id` Bigint NOT NULL, - `author_id` Bigint NOT NULL, -) ENGINE = MergeTree - PRIMARY KEY (`id`) - SETTINGS index_granularity = 8192; -// highlight-end -``` - - - -Next, let's run the `atlas migrate diff` command once more: - - - - - -```shell -atlas migrate diff add_commits \ - --to file://schema.hcl \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -```shell -atlas migrate diff add_commits \ - --to file://schema.sql \ - --dev-url "docker://clickhouse/23.11/dev" -``` - - - - -Run `ls migrations`, and you'll notice that a new migration file has been generated. - -```sql title="20240130141055.sql" -ALTER TABLE `repos` ADD COLUMN `description` Nullable(String); --- Create "commits" table -CREATE TABLE `commits` ( - `id` UInt64, - `message` String, - `repo_id` Int64, - `author_id` Int64 -) ENGINE = MergeTree -PRIMARY KEY (`id`) -SETTINGS index_granularity = 8192; -``` - -Let's run `atlas migrate push` again and -observe the new file on the migration directory page. - -```shell -atlas migrate push app \ - --dev-url "docker://clickhouse/23.11/dev" -``` - -
*Migration Directory created with atlas migrate push*
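With the new version pushed to the registry, it can be rolled out to the database exactly as before, using the
`local` environment defined in our `atlas.hcl`:

```shell
# Deploy the newly pushed migration from the registry (dir = "atlas://app").
atlas migrate apply --env local
```

As before, Atlas prints a link to the new deployment in Atlas Cloud.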
- -## Next Steps - -In this guide we learned about the declarative and versioned workflows, and how to use Atlas to generate migrations, -push them to an Atlas workspace and apply them to databases. - -Next steps: -* Read the [full docs](/atlas-schema/hcl) to learn HCL schema syntax or about specific ClickHouse [column types](/atlas-schema/hcl-types#clickhouse) -* Learn how to [set up CI](/cloud/setup-ci) for your migration directory -* Deploy schema changes with [Terraform](/integrations/terraform-provider) or [Kubernetes](/integrations/kubernetes/operator) -* Learn about [modern CI/CD principles](/guides/modern-database-ci-cd) for databases - -For more in-depth guides, check out the other pages in [this section](/guides) or visit our [Docs](/getting-started) section. - -Have questions? Feedback? Find our team on our [Discord server](https://discord.com/invite/zZ6sWVg6NT). - diff --git a/doc/md/guides/database-per-tenant/01-intro.mdx b/doc/md/guides/database-per-tenant/01-intro.mdx deleted file mode 100644 index 4588b0b28ee..00000000000 --- a/doc/md/guides/database-per-tenant/01-intro.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -title: Database-per-Tenant Architectures with Atlas -id: intro -slug: intro ---- - -## Intro - -This guide describes how to utilize Atlas to manage database schemas in "Database per Tenant" architectures, -a common pattern for deploying multi-tenant applications. - -### What is a Database per Tenant Architecture? - -In a "Database per Tenant" architecture, each tenant has its own dedicated database (or a schema). Database-per-tenant architectures are commonly -used in situations where there are considerations around data isolation, security, and scalability. It is worth noting -it contrary to creating a fully isolated deployment for each tenant, in this architecture compute and other resources -are shared across tenants. - -### Alternative Multi-Tenant Architectures - -In addition to a Database per Tenant architecture, there are other common multi-tenant architectures: - -1. **Shared Database, Shared Schema**: In this architecture, all tenants share the same database and schema and are -differentiated by a tenant identifier column in each table. -2. **Full Isolation**: In this architecture, each tenant has its own isolated deployment, including its own database, -schema, and compute resources. - -### Advantages - -Database per Tenant architectures offer several advantages over other multi-tenant architectures: - -1. **Data Isolation**: Each tenant has its own dedicated database, providing strong data isolation between tenants. -By isolating data on the database level, you can ensure that data from one tenant is not accessible to another tenant, -providing better security and privacy guarantees. -2. **Data Compliance**: By isolating tenants on the database level, you can ensure that each tenant's data is stored in -compliance with data regulations and privacy laws relevant to that tenant. For example, you can host the specific -database in a region that complies with the tenant's data residency requirements or even in your customer's own -infrastructure. -3. **Performance**: By isolating tenants on the database level, you can ensure that the performance of one tenant's -database operations does not impact the performance of other tenants - eliminating the "noisy neighbor" problem. -Additionally, you can utilize different instance types or configurations for each tenant's database to meet their -specific performance requirements. -4. 
**Simplified Development** (compared to shared database architectures): Codebases for multi-tenant applications are -notoriously complex. Every action against the database must be made "tenant-aware" to ensure that data is correctly -isolated. By isolating tenants on the database level, applications become effectively single-tenant, simplifying -development, reducing the mental overhead of handling multi-tenancy, and reducing the risk of data leakage between tenants. -5. **Simplified Deployment** (compared to full isolation): In a full isolation architecture, each tenant has its own -deployment, requiring separate infrastructure, monitoring, and maintenance. By isolating tenants on the database level, -you can share infrastructure, monitoring, and maintenance across tenants, reducing operational overhead. -6. **Simplified Maintenance**: Running tenants on different databases simplifies maintenance tasks: Deleting, exporting, and loading tenant data do not require special toolings as most databases offer these functionalities. -### Challenges - -> This architecture decisions is one of my biggest regrets, and we are currently in the process of rebuilding into a single database model. -> -> *[HN Thread on Database per Tenant Architecture](https://news.ycombinator.com/item?id=23305111)* - -Despite their numerous advantages, Database per Tenant architectures present unique challenges, mostly around managing -database schema migrations: - -* **Migration Duration** - As the number of tenants grows, deployment time increases linearly with the number of tenants. -This is especially problematic when deploying schema migrations that take time, such as data migrations or index rebuilds. -* **Schema Consistency** - Ensuring that all tenant databases have consistent schema versions can be complex. Any schema -change needs to be applied across all tenant databases, which can be time-consuming and error-prone. -* **Detecting Drift** - Migrations can fail due to various reasons, such as network issues, database downtime, -or data dependent issues (such as constraint violations). Such inconsistencies can cause application downtime for -users, as new application versions are rolled out and expect database changes to already have been applied. -Detecting these inconsistencies (schema-drift) across a large fleet of databases can be challenging. -* **Rollbacks** - Rolling back schema changes across multiple databases can be difficult. In a shared database -architecture, rolling back a schema change is a single operation. In a database per tenant architecture, rolling back -a schema change requires rolling back the change on each tenant's database, which needs to be orchestrated somehow. - -## Database per Tenant Architectures with Atlas - -Atlas was built from the ground up to handle database-per-tenant architectures. In fact, [Atlas Cloud](https://atlasgo.cloud) -itself is a multi-tenant application that uses a database-per-tenant architecture to manage its own schema. - -Atlas supports database-per-tenant architectures both on the CLI and in the Cloud control plane. - -In the following sections, we'll describe how to use Atlas to manage database schemas in a database-per-tenant architecture. 
diff --git a/doc/md/guides/database-per-tenant/02-db-groups.mdx b/doc/md/guides/database-per-tenant/02-db-groups.mdx deleted file mode 100644 index 31872f6fd2b..00000000000 --- a/doc/md/guides/database-per-tenant/02-db-groups.mdx +++ /dev/null @@ -1,323 +0,0 @@ ---- -title: Defining Target Groups -id: target-groups -slug: target-groups ---- -In Atlas, a **target group** is a collection of target databases whose schema is managed together. In a database-per-tenant -architecture, each tenant's database is a target database, and all tenant databases are grouped into a target group. -However, you can also group databases by other criteria, such as environment (dev, staging, prod), region, or any other -criteria that makes sense for your application. - -For example, you might group all databases in the same region into a target group to ensure that schema changes are -applied consistently across all databases in that region, or to group free-tier databases separately from -paid-tier databases. - -Target groups can be defined statically or dynamically loaded from an API endpoint or a database query. - -Target groups are defined in the project's [`atlas.hcl`](/atlas-schema/projects) file and are later used by the Atlas CLI during the deployment -process to determine which databases to deploy to. - -Let's review some examples of how to define target groups in Atlas. - -## `env` blocks and `for_each` meta-arguments - -Before we jump into various techniques to define target groups, let's first understand the `for_each` meta-argument for -environment blocks in Atlas. - -Environment blocks (`env` blocks) are used in Atlas project files (`atlas.hcl`) to group configuration settings for -a specific environment. Normally, an `env` block is used to define the URL of a single target database, like so: - -```hcl -env "dev" { - url = "postgres://root:pass@localhost:5432/dev" -} -``` - -However, using the `for_each` meta-argument, it is possible to define multiple instances of a specific environment block -by iterating over a list of values. For example: - -```hcl -locals { - target_db_urls = [ - "postgres://root:pass@host-1:5432", - "postgres://root:pass@host-2:5432", - ] -} - -env "targets" { - for_each = toset(local.target_db_urls) - url = each.value -} -``` - -When the `for_each` meta-argument is used, the `env` block is instantiated for each value in the list, and the `each` -object is used to access the current value. In our case, we will get two instances of the `target` block, one for each -URL in the `target_db_urls` list. - -## Dynamically Computing URLs - -A technique commonly used in `atlas.hcl` files is to dynamically compile URLs by combining values from various sources. -For instance, the database instance URL might be provided as an input variable, with the database name added to it -dynamically. Here's an example: - -```hcl -variable "db_instance_url" { - type = string -} - -locals { - tenants = ["acme_corp", "widget_inc", "wayne_enterprises", "stark_industries"] -} - -env "tenants" { - for_each = toset(local.tenants) - url = urlsetpath(var.db_instance_url, each.value) -} -``` - -Let's review the code snippet above: -* We define a variable `db_instance_url` that will be used as the base URL for the database instances. - This variable is provided by the user when running the Atlas CLI by providing the `--var` flag. -* We define a local variable `tenants` that contains a list of tenant names. -* We define an `env` block named `tenants` that iterates over the `tenants` list. 
For each tenant, we set the `url` - attribute to the result of the `urlsetpath` function, which combines the `db_instance_url` with the tenant name. - -:::info The `urlsetpath` function - -The `urlsetpath` function is a helper function provided by Atlas that allows you to set the "path" part -of a URL. For example: - -```hcl -urlsetpath("postgres://root:pass@localhost:5432", "mydb") -# ↳ Evaluates to "postgres://root:pass@localhost:5432/mydb" - -urlsetpath("mysql://localhost:3306", "mydb") -# ↳ Evaluates to "postgres://root:pass@localhost:5432/mydb" -``` - -::: - -## Loading data from local JSON files - -Suppose our list of tenants is stored in a local file named `tenants.json`: - -```json title="tenants.json" -{ - "tenants": [ - "acme_corp", - "widget_inc", - "wayne_enterprises", - "stark_industries" - ] -} -``` - -We can load this data into our `atlas.hcl` file using the `file` and `jsondecode` functions: - -```hcl title="atlas.hcl" -locals { - f = file("tenants.json") - decoded = jsondecode(local.f) - tenants = local.decoded.tenants -} - -env "tenants" { - for_each = toset(local.tenants) - url = urlsetpath("postgres://root:pass@localhost:5432", each.value) -} -``` - -Next, we define an environment block for this target group that consumes the `target_tenants` local variable -into the `for_each` argument: - -```hcl title="atlas.hcl" -env "tenants" { - for_each = toset(local.target_tenants) - url = urlsetpath("postgres://root:pass@localhost:5432", each.value) -} -``` - -Let's review the code snippet above: -* We define a local variable `f` that reads the contents of the `tenants.json` file. -* Next, we use the `jsondecode` function to parse the JSON content into a structured object. -* We extract the `tenants` array from the decoded JSON object and store it in the `tenants` local variable. -* Finally, we define an `env` block named `tenants` that iterates over the `tenants` list. For each tenant, we set the `url` - attribute to the result of the `urlsetpath` function, which combines the base URL with the tenant name. - -## Loading Data from an API Endpoint - -In some cases, you may want to load target groups dynamically from an API endpoint. For example, you might have a -service `tenant-svc` that provides a list of tenant databases based on some criteria. Let's suppose this service's -endpoints recieve the target group ID in the path, such as `https://tenant-svc/api/target-group/{id}` and return a -simple JSON payload: - -```json -{ - "databases": [ - "acme_corp", - "widget_inc", - "wayne_enterprises", - "stark_industries" - ] -} -``` - -You can use the `runtimevar` data source with the `http` scheme to fetch this data and use it to define -target groups. - -Here's an example of how you might load tenant databases from an API endpoint: - -```hcl -var "group_id" { - type = string -} - -data "runtimevar" "tenants" { - url = "http://tenant-svc/api/target-group/${var.group_id}" -} - -locals { - decoded = jsondecode(data.runtimevar.tenants) - tenants = local.decoded.databases -} - -env "tenants" { - for_each = toset(local.tenants) - url = urlsetpath("postgres://root:pass@localhost:5432", each.value) -} -``` - -Let's unpack this example: -* We define a variable `group_id` that will be used to fetch the tenant databases from the API endpoint. -* We use the `runtimevar` data source with the `http` scheme to fetch the tenant databases from the API endpoint. -* We parse the JSON response using the `jsondecode` function and extract the `databases` array. 
-* We define an `env` block named `tenants` that iterates over the `tenants` list. For each tenant, we set the `url` - attribute to the result of the `urlsetpath` function, which combines the base URL with the tenant name. - -By using the `runtimevar` data source with the `http` scheme, you can dynamically load target groups from an API -endpoint and use them to define target groups in your Atlas project. - -## Loading data from a Database Query - -In some cases, you may want to load target groups dynamically from a database query. For example, you might have a -database schema for each tenant in some instance, and would like to retrieve the list from the database's native -`information_schema` tables. - -You can utilize the `sql` data source to fetch this data and use it to define target groups. - -```hcl -var "url" { - type = string -} - -locals { - pattern = "tenant_%" -} - -data "sql" "tenants" { - url = var.url - query = < CREATE TABLE users ( - id INTEGER PRIMARY KEY, - name TEXT NOT NULL - ); - -- ok (345.458µs) - - ------------------------- - -- 3.400333ms - -- 1 migration - -- 1 sql statement -Migrating to version 20240721101205 (1 migrations in total): - - -- migrating version 20240721101205 - -> CREATE TABLE users ( - id INTEGER PRIMARY KEY, - name TEXT NOT NULL - ); - -- ok (266.375µs) - - ------------------------- - -- 905.875µs - -- 1 migration - -- 1 sql statement -``` - -As you can see from the output, the migration was applied to both databases. Observe that two new files were created -in our project directory: `tenant_1.db` and `tenant_2.db`. - -### Verifying our migrations were applied - -We can check the current schema of our local SQLite databases using the `migrate status` command. Run: - -``` -atlas migrate status --url sqlite://tenant_1.db -``` - -Atlas prints: - -```bash -Migration Status: OK - -- Current Version: 20240721101205 - -- Next Version: Already at latest version - -- Executed Files: 1 - -- Pending Files: 0 -``` - -As expected, the `tenant_1` database is up-to-date with the latest migration. - -### Checking for Drift - -Additionally, we may want to verify that the schema of the tenant database is in sync with the latest migration. We can -utilize the `schema diff` command to compare the current schema with the latest migration: - -```bash -atlas schema diff \ - --dev-url 'sqlite://?mode=memory' \ - --from file://migrations \ - --to sqlite://tenant_1.db \ - --exclude 'atlas_schema_revisions' -``` - -Atlas will output: - -```bash -Schemas are synced, no changes to be made. -``` - -## Next steps - -As you can see, deploying migrations to target groups is straightforward using the Atlas CLI, but getting visibility -into the status of each tenant, is done individually. To bridge this gap, we will show how to use the Atlas Cloud -control plane to gain visibility into the status of our system in the next section. diff --git a/doc/md/guides/database-per-tenant/04-cloud.mdx b/doc/md/guides/database-per-tenant/04-cloud.mdx deleted file mode 100644 index ca864d70203..00000000000 --- a/doc/md/guides/database-per-tenant/04-cloud.mdx +++ /dev/null @@ -1,200 +0,0 @@ ---- -title: Cloud Control Plane -id: control-plane -slug: control-plane ---- -In the previous section, we demonstrated how to use the Atlas CLI to manage migrations for a database-per-tenant -architecture. Next, we will see how to use the Atlas Cloud Control Plane to manage migrations across -multiple databases. 
- -## Setting up - -In this section, we will be continuing our minimal example from before, so if you are just joining us, please -follow the steps in the previous section to set up your project. - -Additionally, you will need an Atlas Cloud account. If you don't have one, you can sign up for free by running -the following command and following the instructions on the screen: - -``` -atlas login -``` - -### Pushing our project to Atlas Cloud - -In order to manage our migrations across multiple databases, we need push our project to the Atlas Cloud -Schema Registry. But first, let's set up a local `env` block in our `atlas.hcl` file. Append the following -to the file: - -```hcl -env "local" { - dev = "sqlite://?mode=memory" - migration { - dir = "file://migrations" - } -} -``` - -Next, push the project to the Atlas Cloud Schema Registry by running the following command: - -```shell -atlas migrate push --env prod db-per-tenant -``` - -Atlas will push our migration directory to the Schema Registry and print the URL of the project, for example: - -``` -https://rotemtam85.atlasgo.cloud/dirs/4294967396 -``` - -## Working with Atlas Cloud - -### Deploying from the Registry - -Once we have successfully pushed our project to the Schema Registry, we can deploy from it to our target -databases. To do this, let's make a small change to our `prod` env in `atlas.hcl`: - -```hcl -env "prod" { - for_each = toset(local.tenant) - url = "sqlite://${each.value}.db" - migration { -//highlight-next-line - dir = "atlas://db-per-tenant" - } -} -``` - -Now, we can deploy the migrations to our target databases by running: - -```shell -atlas migrate apply --env prod -``` - -Atlas will read the most recent version of our migration directory from the schema registry, apply the migrations -to each target database, report the results to Atlas Cloud, and print the results: - -``` -No migration files to execute -No migration files to execute -https://rotemtam85.atlasgo.cloud/deployments/sets/94489280593 -``` - -In this case, we see that there were no new migrations to apply to the target databases. Let's show how -this flow works when there is work to be done in the next section. - -### Another migration - -Let's plan another migration to our project. Create a new migration file by running: - -```shell -atlas migrate new --edit seed_users -``` - -In the editor, add the following SQL statements: - -```sql -INSERT INTO users (id, name) VALUES (1, "a8m"); -INSERT INTO users (id, name) VALUES (2, "rotemtam"); -``` - -Save the file and exit the editor. 
Let's push the new migration to the Schema Registry: - -```shell -atlas migrate push --env prod db-per-tenant -``` - -### Deploying the new migration - -After successfully pushing the new migration, we can deploy it to our target databases by running: - -```shell -atlas migrate apply --env prod -``` - -Atlas will apply the new migration to each target database and print the results: - -``` -Migrating to version 20240721111345 from 20240721101205 (1 migrations in total): - - -- migrating version 20240721111345 - -> INSERT INTO users (id, name) VALUES (1, "a8m"); - -> INSERT INTO users (id, name) VALUES (2, "rotemtam"); - -- ok (1.106417ms) - - ------------------------- - -- 7.441584ms - -- 1 migration - -- 2 sql statements -Migrating to version 20240721111345 from 20240721101205 (1 migrations in total): - - -- migrating version 20240721111345 - -> INSERT INTO users (id, name) VALUES (1, "a8m"); - -> INSERT INTO users (id, name) VALUES (2, "rotemtam"); - -- ok (1.061709ms) - - ------------------------- - -- 3.272584ms - -- 1 migration - -- 2 sql statements -https://rotemtam85.atlasgo.cloud/deployments/sets/94489280594 -``` - -Following the link will take you to the Atlas Cloud UI, where you can see the details of the deployment: - -![](https://atlasgo.io/uploads/db-per-tenant/deployment-set.png) - -## Gaining Visibility - -The Atlas Cloud Control Plane provides a centralized view of all your deployments across multiple databases. -You can see the status of each deployment, the target databases, and the results of each migration. - -### Database Status - -![](https://atlasgo.io/uploads/db-per-tenant/databases-screen.png) - -To view the status of the different databases in your project, navigate to the "Databases" tab in the Atlas Cloud -UI. Here, you can see the status of each database, the most recent migration applied, and the results of the -migration. - -Databases can be in one of three states: -* Synced - The database is up-to-date with the most recent migration. -* Pending - The database is waiting for a new migration to be applied. -* Error - An error occurred while applying the migration. - -### Troubleshooting - -If an error occurs during a migration, having a centralized view of all your deployments can help you quickly -identify the issue and take corrective action. You can view the error message, the target database, and the -migration that caused the error. - -Suppose we run a deployment that fails during the schema migration phase, we can easily locate the error in the -Atlas Cloud UI by navigating to the "Migrations" tab: - -![](https://atlasgo.io/uploads/db-per-tenant/migrations-screen.png) - -We quickly find the failed deployment and drill down to diagnose the issue: - -![](https://atlasgo.io/uploads/db-per-tenant/deployment-set-error.png) - -From the logs, we see that 3 out of 4 migrations passed without action, but the last one failed. We see that it failed -on `tenant_4.db` with the error message: - -``` -Error: sql/migrate: executing statement "INSERT INTO users (id, name) VALUES (1, \"a8m\");" from version "20240721111345": UNIQUE constraint failed: users.id -``` -We can further drill down into the specific database target migration: - -![](https://atlasgo.io/uploads/db-per-tenant/deployment-error.png) - -We now clearly see the issue, our data migration failed due to a unique constraint violation. Now, we can take corrective -action to fix the issue and reapply the migration - usually by fixing the problematic data in our target database. 
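For example, the failing statement above inserts a user with an `id` that already exists in `tenant_4.db`. A minimal
sketch of such a fix, assuming the SQLite setup from this guide and that the conflicting row is safe to remove, might
look like this:

```shell
# Remove the offending row from the failed tenant (illustrative only),
# then re-run the deployment so Atlas can retry the failed migration version.
sqlite3 tenant_4.db 'DELETE FROM users WHERE id = 1;'
atlas migrate apply --env prod
```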
- -## Conclusion - -In this section, we demonstrated how to use the Atlas Cloud Control Plane to manage migrations across multiple -target databases. We showed how to push our project to the Atlas Cloud Schema Registry, deploy migrations to -target databases, and gain visibility into the status of our deployments. - -While it is possible to manage migrations using the Atlas CLI, the Atlas Cloud Control Plane provides a centralized -view of all your deployments, making it easier to manage and troubleshoot issues across multiple databases. - diff --git a/doc/md/guides/deploying/cloud-dir.mdx b/doc/md/guides/deploying/cloud-dir.mdx deleted file mode 100644 index 46d2053fcb5..00000000000 --- a/doc/md/guides/deploying/cloud-dir.mdx +++ /dev/null @@ -1,117 +0,0 @@ ---- -id: cloud-dir -title: Working with Atlas Registry -slug: /guides/deploying/remote-directories ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -In the past, we have recommended users to [build a migrations Docker image](image.md) as part of their -CI pipeline and then use that image during their deployment process. This is still a valid approach, -as it bundles together the Atlas binary needed to run migrations with the migrations themselves. -However, over the last years we have received feedback from many users that this approach is cumbersome -and requires a lot of boilerplate code to be written. - -To address this, we have introduced the [Atlas Schema Registry](/cloud/features/registry), which allows you to store -schemas and migrations in the cloud and make them available later to your deployment pipelines by their tags. - -On a high-level this approach works as follows: - -1. Users sync their migration directory to the Atlas Registry whenever a new migration is merged to the main branch. - Learn more about it in the [Syncing Migration Directories](/cloud/directories) doc. -2. During deployment, Atlas fetches the migration directory from the Atlas Registry by its tag (defaults to `latest`) and - applies the migrations to the database. - -This guide shows you how to set up this approach for your project. - -### Prerequisites - -1. An Atlas Cloud account with administrator access. If you don't have an account, you can [sign up for free](https://atlasgo.cloud/). -2. Sync your migration directory from GitHub to your Atlas Cloud account. See [Syncing Migration Directories](/cloud/directories) for more information. -3. A token for an Atlas Cloud Bot user with permissions report CI/CD runs and read the migration directory. See - [Creating a Bot User](/cloud/bots) for more information. - -### Deploying migrations using Atlas Registry - -Once your migration directory is pushed to the Registry, you can use the `atlas` CLI to fetches the migration directory -from the Atlas Registry and apply the migrations to the database. - -To get started, create a project configuration file named `atlas.hcl`: - -```hcl -env { - name = atlas.env - url = getenv("DATABASE_URL") - migration { - dir = "atlas://" - } -} -``` - -Let's review what this configuration file does: -1. We define an environment using the `env` block. - To avoid setting database credentials in the configuration file, we use the `DATABASE_URL` environment variable. -2. To fetch the migration directory from the Atlas Registry we use the `atlas://` URL in the `migration.dir` attribute. - The name is the same as the name you used when you [synced your migration directory](/cloud/directories). 
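As noted above, Atlas fetches the directory by tag, defaulting to `latest`. To pin a deployment to a specific version,
you can add the tag to the registry URL. The sketch below is illustrative only: the directory name `myapp` and the tag
are placeholders for your own values:

```shell
# Apply straight from the registry, pinned to a specific tag (placeholder values).
atlas migrate apply \
  --url "$DATABASE_URL" \
  --dir "atlas://myapp?tag=v1.2.3"
```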
- -### Read Migrations from Atlas Registry - -Once you have created your configuration file, you can read the available migrations from the Atlas Registry and apply -them to the database using the `atlas` CLI using the following commands: - - - - -```bash -# Login first to Atlas Cloud. -atlas login -# Run migrations. Give an environment name, such as --env local. -atlas migrate apply --env local -``` - -Let's review what these commands do: -1. We run the `atlas login` command to authenticate with Atlas Cloud. -2. We run the `atlas migrate apply` command to apply migrations to the database. The `--env` flag is used to specify - the name of the environment we defined in the configuration file. - - - - -```bash -ATLAS_TOKEN="{{ YOUR_ATLAS_TOKEN }}" atlas migrate apply --env production -``` - -Let's review what these commands do: -1. We set the `ATLAS_TOKEN` environment variable to the token we created earlier in the CI/CD pipeline. -2. We run the `atlas migrate apply` command to apply migrations to the database. The `--env` flag is used to specify - the name of the environment we defined in the configuration file. - - - - -The `atlas migrate apply` command will run all migrations that have not been applied to the database yet: - -```applylog -Migrating to version 20230306221009 (1 migrations in total): - - -- migrating version 20230306221009 - -> create table users ( - id int primary key - ); - -- ok (8.60933ms) - - ------------------------- - -- 68.037117ms - -- 1 migrations - -- 1 sql statements -``` - -### Viewing migration logs in Atlas Cloud - -After the migrations have been applied, you can view them in Atlas Cloud by heading to the `/deployments` page -in your Atlas Cloud account. You should see a new migration log with the name of the environment you specified in -the configuration file. Clicking on the migration-log will show you the details of the migration, including the -statements and checks that were applied: - -![](https://atlasgo.io/uploads/cloud/deploy/check-passed-v1.png) diff --git a/doc/md/guides/deploying/cloud-sql-via-github-actions.md b/doc/md/guides/deploying/cloud-sql-via-github-actions.md deleted file mode 100644 index d963bdb6176..00000000000 --- a/doc/md/guides/deploying/cloud-sql-via-github-actions.md +++ /dev/null @@ -1,289 +0,0 @@ ---- -id: cloud-sql-via-github-actions -slug: /guides/deploying/cloud-sql-via-github-actions -title: Deploying schema migrations to Google CloudSQL using Atlas ---- - -## In this article -* [Overview](#overview) -* [What is Cloud SQL?](#what-is-cloud-sql) -* [What is Cloud SQL Auth Proxy?](#what-is-cloud-sql-auth-proxy) -* [What is GitHub Actions?](#what-is-github-actions]) -* [Deploying Schema Migrations to Cloud SQL](#deploying-schema-migrations-to-cloud-sql) -* [Prerequisites](#prerequisites) -* [Step-by-Step](#step-by-step) - 1. [Authenticate to Google Cloud](#step-by-step) - 2. [Retrieve your instance connection name](#retrieve-your-instance-connection-name) - 3. [Store your password in GitHub Secrets](#store-your-password-in-github-secrets) - 4. [Setup GitHub Actions](#setup-github-actions) - 5. [Execute your GitHub Actions Workflow](#execute-your-github-actions-workflow) -* [Wrapping Up](#wrapping-up) - -## Overview - -In this guide, we demonstrate how to handle database schema changes when working with Cloud SQL. Within the framework of this topic, we are going to introduce how to set up a GitHub Actions workflow to automatically deploy database schema changes to a Cloud SQL instance. 
This approach is meant to enhance automation, version control, CI/CD, DevOps practices, and scalability, contributing to more efficient and reliable database management. - -Before diving into the practical implementation, let's first look at some of the underlying technologies that we will be working with. - -## What is Cloud SQL? -Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational databases in the cloud. With Cloud SQL, you can deploy your databases in a highly available and scalable manner, with automatic failover and load balancing, so that your applications can handle a large number of concurrent requests and traffic spikes. You can also choose from different machine types and storage sizes to meet your specific performance and storage requirements. - -## What is Cloud SQL Auth Proxy? -The Cloud SQL Auth Proxy is a utility for ensuring simple, secure connections to your Cloud SQL instances. It provides a convenient way to control access to your database using Identity and Access Management (IAM) permissions while ensuring a secure connection to your Cloud SQL instance. Like most proxy tools, it serves as the intermediary authority on connection authorizations. Using the Cloud SQL Auth proxy is the recommended method for connecting to a Cloud SQL instance. - -## What is GitHub Actions? -GitHub Actions is a continuous integration and continuous delivery (CI/CD) platform that allows you to automate your build, test, and deployment pipeline. You can create workflows that build and test every pull request to your repository, or deploy merged pull requests to production. GitHub Actions goes beyond just DevOps and lets you run workflows when other events happen in your repository. For example, in this guide, you will run a workflow to automatically deploy migrations to a Cloud SQL database whenever someone pushes changes to the main branch in your repository. - -## Deploying Schema Migrations to Cloud SQL - -### Prerequisites - -Prerequisites to the guide: - -1. You will need to have the GCP **Project Editor** role. This role grants you full read and write access to resources within your project. -2. Google Cloud SDK installed on your workstation. If you have not installed the SDK, you can find [instructions for installing the SDK from the official documentation](https://cloud.google.com/sdk/docs/install/). -3. A running Cloud SQL instance to work against. If you have not created the instance yet, see [Creating instances at cloud.google.com](https://cloud.google.com/sql/docs/postgres/create-instance). -4. A GitHub repository to create and run a GitHub Actions workflow. - -### Step-by-Step -#### 1—Authenticate to Google Cloud -There are two approaches to authenticating with Google Cloud: Authentication via a Google Cloud Service Account Key JSON or authentication via [Workload Identity Federation](https://cloud.google.com/iam/docs/workload-identity-federation). - -**Setup Workload Identity Federation** -Identity federation allows you to grant applications running outside Google Cloud access to Google Cloud resources, without using Service Account Keys. It is recommended over Service Account Keys as it eliminates the maintenance and security burden associated with service account keys and also establishes a trust delegation relationship between a particular GitHub Actions workflow invocation and permissions on Google Cloud. 
- -For authenticating via Workload Identity Federation, you must create and configure a Google Cloud Workload Identity Provider. A Workload Identity Provider is an entity that describes a relationship between Google Cloud and an external identity provider, such as GitHub, AWS, Azure Active Directory, etc. - -To create and configure a Workload Identity Provider: - -1. Save your project ID as an environment variable. The rest of these steps assume this environment variable is set: - -```bash -$ export PROJECT_ID="my-project" # update with your value -``` -2. Create a Google Cloud Service Account. If you already have a Service Account, take note of the email address and skip this step. - -```bash -$ gcloud iam service-accounts create "my-service-account" \ - --project "${PROJECT_ID}" -``` - -3. Enable the IAM Credentials API: - -```bash -$ gcloud services enable iamcredentials.googleapis.com -``` - -4. Grant the Google Cloud Service Account permissions to edit Cloud SQL resources. - -```bash -$ gcloud projects add-iam-policy-binding [PROJECT_NAME] \ ---member serviceAccount:[SERVICE_ACCOUNT_EMAIL] \ ---role roles/editor -``` - -Replace **[PROJECT_NAME]** with the name of your project, and **[SERVICE_ACCOUNT_EMAIL]** with the email address of the service account you want to grant access to. - -5. Create a new workload identity pool: - -```bash -$ gcloud iam workload-identity-pools create "my-pool" \ - --project="${PROJECT_ID}" \ - --location="global" \ - --display-name="My pool" -``` - -6. Get the full ID of the Workload Identity Pool: - -```bash -$ gcloud iam workload-identity-pools describe "my-pool" \ - --project="${PROJECT_ID}" \ - --location="global" \ - --format="value(name)" -``` - -Save this value as an environment variable: - -```bash -$ export WORKLOAD_IDENTITY_POOL_ID="..." # value from above - -# This should look like: -# -# projects/123456789/locations/global/workloadIdentityPools/my-pool -# -``` - -7. Create a Workload Identity Provider in that pool: - -```bash -$ gcloud iam workload-identity-pools providers create-oidc "my-provider" \ - --project="${PROJECT_ID}" \ - --location="global" \ - --workload-identity-pool="my-pool" \ - --display-name="GitHub provider" \ - --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository" \ - --issuer-uri="https://token.actions.githubusercontent.com" -``` - -8. Allow authentications from the Workload Identity Provider originating from your repository to impersonate the Service Account created above: - -```bash -# Update this value to your GitHub repository. - -$ export REPO="username/repo_name" # e.g. "ariga/atlas" - -$ gcloud iam service-accounts add-iam-policy-binding "my-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \ - --project="${PROJECT_ID}" \ - --role="roles/iam.workloadIdentityUser" \ - --member="principalSet://iam.googleapis.com/${WORKLOAD_IDENTITY_POOL_ID}/attribute.repository/${REPO}" -``` - -Note that **$WORKLOAD_IDENTITY_POOL_ID** should be the full Workload Identity Pool resource ID, like: - -**projects/123456789/locations/global/workloadIdentityPools/my-pool** - -9. Extract the Workload Identity Provider resource name: - -```bash -$ gcloud iam workload-identity-pools providers describe "my-provider" \ - --project="${PROJECT_ID}" \ - --location="global" \ - --workload-identity-pool="my-pool" \ - --format="value(name)" -``` - -Use this value as the **workload_identity_provider** value in your GitHub Actions YAML. 
- -Using the Workload Identity Provider ID and Service Account email, the GitHub Action will mint a GitHub OIDC token and exchange the GitHub token for a Google Cloud access token. - -**Note:** It can take up to **5 minutes** from when you configure the Workload Identity Pool mapping until the permissions are available. - -#### 2—Retrieve your Instance Connection Name -The instance connection name is a connection string that identifies a Cloud SQL instance, and you need this string to establish a connection to your database. The format of the connection name is **projectID:region:instanceID**. - -To retrieve the Cloud SQL instance connection name, run the following command: - -```bash -$ gcloud sql instances describe --format='value(connectionName)' -``` - -For example, if your instance name is **"my-instance"**, you can retrieve its connection name using the following command: - -```bash -$ gcloud sql instances describe my-instance --format='value(connectionName)' -``` - -#### 3—Store your Password in GitHub Secrets -Secrets are a way to store sensitive information securely in a repository, such as passwords, API keys, and access tokens. To use secrets in your workflow, you must first create the secret in your repository's settings by following these steps: - -1. Navigate to your repository on GitHub. -2. Click on the **"Settings"** tab. -3. Click on **"Secrets"** in the left sidebar. -4. Click on **"New repository secret"**. -5. Enter **"DB_PASSWORD"** in the **"Name"** field. -6. Enter the actual password in the **"Value"** field. -7. Click on **"Add secret"**. - -Once you have added the secret, you can reference it in your workflow using **`${{ secrets.DB_PASSWORD }}`**. The action will retrieve the actual password value from the secret and use it in the **`DB_PASSWORD`** environment variable during the workflow run. 
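If you prefer the command line over the web UI, the same secret can be created with the GitHub CLI — a sketch,
assuming `gh` is installed and authenticated, with a placeholder repository name:

```shell
# Create (or update) the DB_PASSWORD secret; gh prompts for the value interactively.
gh secret set DB_PASSWORD --repo username/repo_name
```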
- -#### 4—Setup GitHub Actions -Here is an example GitHub Actions workflow for authenticating to GCP with workload identity federation and deploying migrations to a Cloud SQL MySQL database using Cloud SQL Proxy: - -```yaml -name: Deploy Migrations - -on: - push: - branches: - - main - -env: - PROJECT_ID: my-project-id - INSTANCE_CONNECTION_NAME: my-instance-connection-name - DB_HOST: 127.0.0.1 - DB_PORT: 3306 - DB_NAME: my-db-name - DB_USER: my-db-user - DB_PASSWORD: ${{ secrets.DB_PASSWORD }} - -jobs: - deploy-migrations: - runs-on: ubuntu-latest - permissions: - contents: 'read' - id-token: 'write' - steps: - - name: Checkout Repository - uses: actions/checkout@v3 - - - name: Download and install Atlas CLI - run: | - curl -sSf https://atlasgo.sh | sh -s -- -y - - - name: Download wait-for-it.sh - run: | - wget https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh - chmod +x wait-for-it.sh - - - id: 'auth' - uses: 'google-github-actions/auth@v1' - with: - workload_identity_provider: 'projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider' - service_account: 'my-service-account@my-project.iam.gserviceaccount.com' - - - name: 'Set up Cloud SDK' - uses: 'google-github-actions/setup-gcloud@v1' - with: - version: '>= 416.0.0' - - - name: Download Cloud SQL Proxy - run: | - wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy - chmod +x cloud_sql_proxy - - - name: Start Cloud SQL Proxy - run: ./cloud_sql_proxy -instances=$INSTANCE_CONNECTION_NAME=tcp:3306 & - - - name: Wait for Cloud SQL Proxy to Start - run: | - ./wait-for-it.sh $DB_HOST:$DB_PORT -s -t 10 -- echo "Cloud SQL Proxy is running" - - - name: Deploy Migrations - run: | - echo -ne '\n' | atlas migrate apply --url "mysql://$DB_USER:$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_NAME" --dir file://migrations - - - name: Stop Cloud SQL Proxy - run: kill $(ps aux | grep cloud_sql_proxy | grep -v grep | awk '{print $2}') -``` - -Note that for this workflow to work, you will need to replace the placeholders in the environment variables with your own values. Your migrations directory should be stored in your repository's root directory. - -Here's what this workflow does: - -1. Sets the name of the workflow to **"Deploy Migrations"**. -2. Triggers on a push to the **main** branch. -3. Sets the environment variables required for the Cloud SQL instance and the database we want to deploy migrations to. -4. Defines a job named **"deploy-migrations"** that runs on the latest version of Ubuntu. -5. Checkout the code. -6. Downloads and installs the Atlas CLI. -7. Uses the Google Cloud Workload Identity Federation to authenticate with Google Cloud. -8. Configures the [Google Cloud SDK](https://cloud.google.com/sdk/) in the GitHub Actions environment. -9. Downloads the Cloud SQL Proxy and makes it executable. -10. Starts the Cloud SQL Proxy, to create a secure tunnel between your GitHub Actions runner and your Cloud SQL instance. -11. Wait for Cloud SQL Proxy to start up before proceeding with the subsequent steps. -12. Deploys all pending migration files in the migration directory on a Cloud SQL database. -13. Stops the Cloud SQL Proxy - -#### 5—Execute your GitHub Actions Workflow -To execute this workflow once you commit to the main branch, follow these steps: - -1. Create a new file named **atlas_migrate_db.yml** in the **.github/workflows/** directory of your repository. -2. Add the code block we've just discussed to the **atlas_migrate_db.yml** file. -3. 
Commit the **atlas_migrate_db.yml** file to your repository's **main** branch. - -Now, whenever you push changes to the **main** branch, all pending migrations will be executed. You can monitor the progress of the GitHub Action in the "Actions" tab of your repository. - -## Wrapping Up -In this guide, you learned how to deploy schema migrations to Cloud SQL using Atlas, while ensuring secure connections via a Cloud SQL Proxy. With this knowledge, you can leverage the power of Atlas and Cloud SQL to manage your database schema changes with ease and confidence. - -In addition to the specific steps outlined in this guide, you also gained valuable experience with various concepts and tools that are widely used in database management, such as GitHub Actions, Cloud SQL, Cloud SQL Proxy, and the Google Cloud SDK. We hope that this guide has been helpful in expanding your knowledge and skills. \ No newline at end of file diff --git a/doc/md/guides/deploying/ecs-fargate.md b/doc/md/guides/deploying/ecs-fargate.md deleted file mode 100644 index 28670708570..00000000000 --- a/doc/md/guides/deploying/ecs-fargate.md +++ /dev/null @@ -1,174 +0,0 @@ ---- -id: aws-ecs-fargate -slug: /guides/deploying/aws-ecs-fargate -title: Deploying to ECS/Fargate ---- -[AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) is a popular way to deploy containerized applications -to AWS. ECS is a managed service that allows you to run containers on a cluster of EC2 -instances, or on AWS Fargate, a serverless compute engine for containers. - -In this guide, we will demonstrate how to deploy schema migrations to ECS/Fargate using -Atlas. As deploying to ECS/Fargate is a vast topic that is beyond the scope of this -guide, we will focus on the migration part only. - -Because of its operational simplicity, we will discuss deployment to ECS where tasks -are run on Fargate, but the techniques discussed here are relevant to any ECS deployment. - -## Prerequisites - -Prerequisites to the guide: - - -1. [A service running on ECS/Fargate](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html) - defining your application. -2. A connection string to a database that is accessible from the ECS task. (e.g. An RDS running in the same VPC as the - ECS cluster with a security group allowing access from the ECS task) -3. Atlas and AWS CLIs installed and configured on your machine. - -## Storing database credentials in Secrets Manager - -In order to run migrations, Atlas needs a connection string to the database. In order to avoid storing the database -credentials in plain text in the ECS task definition, we will use AWS Secrets Manager -to store the database credentials and pass them to the migration container as environment -variables. - -Let's start by creating a secret in AWS Secrets Manager that contains the database credentials: - -```bash -aws secretsmanager create-secret --name mydb --secret-string 'postgres://user:password@host:port/dbname' -``` - -The CLI responds with the details about the created secret, which we will use later: - -```json -{ - "ARN": "arn:aws:secretsmanager:us-east-1::secret:mydb-gxZ0Qe", - "Name": "mydb", - "VersionId": "ab6d1fc0-d1a0-49c8-9bfb-5fd9922ffc37" -} -``` - -To make sure that the ECS task has access to the secrets, we will need to add to the -task's IAM role a policy that allows it to access the secrets. 
This will look something -similar to: - -```json -{ - "Statement": [ - { - "Action": [ - "secretsmanager:GetSecretValue", - "secretsmanager:DescribeSecret" - ], - "Effect": "Allow", - "Resource": "arn:aws:secretsmanager:us-east-2::secret:mydb-", - "Sid": "" - } - ], - "Version": "2012-10-17" -} -``` - -## Reading secrets during deployment - -To read our secret value during deployment we can use the [`runtimevar`](https://atlasgo.io/atlas-schema/projects#data-source-runtimevar) -data source. To use this, create a project file named `atlas.hcl`: - -```hcl -data "runtimevar" "url" { - url = "awssecretsmanager://mydb?region=us-east-2" -} -env "deploy" { - url = "${data.runtimevar.url}" -} -``` - -Be sure to replace `mydb` with the name of your secret and to set the correct region in the query parameter. - -Next, create a Dockerfile that will include your migration directory and project file. This is a variation -of the baseline example we introduced in the ["Creating container images for migrations"](image.md) guide: - -```dockerfile -FROM arigaio/atlas:latest - -COPY migrations /migrations - -COPY atlas.hcl . -``` - -This image should be built and pushed to ECR (or another container registry) as part of your CI -process. - -### Running migrations before the application starts - -In order to make sure that migrations run successfully before the application starts, we will need to update the -ECS task definition to make the main application container depend on the migration container running to completion. -This way, when you deploy a new version of the application, ECS will first run the migration container and only -start the application container once the migration container exits successfully. - -Notice that when running migrations for a distributed application, you will need to make sure that only one -actor in our system tries to run the migrations at any given time to avoid race conditions with unknown -outcomes. Luckily, Atlas supports this behavior out of the box. When running migrations, Atlas will -first acquire a lock in the database (using advisory locking, in databases that support it) and then begin execution. - -To achieve this, your task definition should look something similar to: - -```js -{ - "family":"fargate-demo-task-dev", - "taskRoleArn":"arn:aws:iam:::role/fargate-demo-ecsTaskRole", - "executionRoleArn":"arn:aws:iam:::role/fargate-demo-ecsTaskExecutionRole", - "networkMode":"awsvpc", - "requiresCompatibilities":[ - "FARGATE" - ], - "cpu":"256", - "memory":"512", - "containerDefinitions":[ - { - "name":"atlas", - "image":".dkr.ecr.us-east-2.amazonaws.com/fargate-demo:v5", - // highlight-start - "essential":false, - "command":[ - "migrate", - "apply", - "--env", - "deploy" - ] - // highlight-end - }, - { - "name":"fargate-demo-container-dev", - "image":"nginx:latest", - "portMappings":[ - { - "containerPort":80, - "hostPort":80, - "protocol":"tcp" - } - ], - "essential":true, - // highlight-start - "dependsOn":[ - { - "containerName":"atlas", - "condition":"SUCCESS" - } - ], - // highlight-end - } - ] -} -``` -Notice a few points of interest in the above task definition: -1. We define two containers: one for running Atlas migrations, named "atlas" and one for running the application, "app". -For the sake of the example, our application container is only running the latest version of `nginx`, but in a realistic - scenario it will contain your application code. -2. The `app` container has a `dependsOn` clause that makes it depend on the `atlas` container. 
This means that ECS will - only start the `app` container once the `atlas` container exits successfully. -3. The `atlas` container is not marked as `essential`. This is required for containers that aren't expected to keep - running through the task's lifecycle, ideal for use cases like running a setup script before the application starts. -4. The `atlas` container is configured to run the `migrate apply` command. This will run all pending migrations and then exit. - We provide this command with the `--env deploy` flag to make sure that it uses the `deploy` environment defined - in our project file. \ No newline at end of file diff --git a/doc/md/guides/deploying/fly-io.mdx b/doc/md/guides/deploying/fly-io.mdx deleted file mode 100644 index 0da4aa0317f..00000000000 --- a/doc/md/guides/deploying/fly-io.mdx +++ /dev/null @@ -1,167 +0,0 @@ ---- -id: fly-io -slug: /guides/deploying/fly-io -title: Using Atlas with Fly.io ---- - -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; - -[Fly.io](https://fly.io/) is a platform for running full stack apps and databases close to the users. Under the hood, Fly -converts Docker images (or other OCI-compliant image formats) to [Firecracker microVMs](https://firecracker-microvm.github.io/). Fly allows -the deployment of the app in different regions and route the requests based on the [app load and user closeness](https://fly.io/docs/reference/load-balancing/). - -Apps on Fly can be [deployed in one of three ways](https://fly.io/docs/reference/builders/): using a Dockerfile, a -Docker image or a [buildpack](https://buildpacks.io/). - -In this guide, we will demonstrate how Atlas can be used to perform database schema migrations for Fly.io deployments process using a Dockerfile. -We will assume that you have already have a Fly project and are able to deploy it, if you are new to Fly, [check the getting started guide](https://fly.io/docs/speedrun/). - -## Release command - -When you configure your Fly project, you can define a _release command_. This command is executed during the release phase before the new version of your application is deployed. If it fails, i.e exits with a status code other than zero, Fly marks the deployment as failed as well. The release command is the -[recommended way](https://fly.io/docs/reference/configuration/#the-deploy-section) to run database migrations on Fly. - -:::note -The release command only allows executing commands that are present in the application image, meaning that we -have to embed the Atlas binary with our application. [We usually suggest a separate step for handling the -migration](https://atlasgo.io/guides/deploying/intro#running-migrations-as-part-of-deployment-pipelines), -but since Fly currently does not support providing a separate image for the release phase, we recommend this solution. -::: - -## Defining the Dockerfile - -Using Docker multi-stage builds, we can compose lightweight images from multiple steps that use heavier base images. In this guide, we will use a Go app as an example. Because Go is a compiled language, so we can use a separate step for building the target binary and another for producing the runtime container. This way the runtime environment can be smaller, omitting the build environment. - -Suppose our project structure is similar to the one below: - -```text -. 
-├── fly.toml
-├── go.mod
-├── go.sum
-├── main.go
-└── migrations
-    ├── 20221220000101_create_users.sql
-    └── atlas.sum
-```
-
-Our objective is to build an image that contains the Atlas binary, the database migrations and our application code. For
-our Go app, the Dockerfile can be defined as:
-
-```dockerfile title="Dockerfile"
-FROM arigaio/atlas:latest-alpine as atlas
-
-# build stage
-FROM golang:1.19.2-bullseye as build
-WORKDIR /build
-ADD go.mod /build/go.mod
-ADD go.sum /build/go.sum
-ADD main.go /build/main.go
-RUN CGO_ENABLED=0 go build -o app main.go
-
-# runtime stage
-FROM alpine
-COPY --from=atlas /atlas /atlas
-COPY migrations /migrations
-
-COPY --from=build /build/app /app
-CMD ["/app"]
-```
-
-If you are using another compiled programming language, most of the time you will only have to change the build stage.
-If your application requires a runtime, you may have to change the final stage as well.
-
-:::info
-It's important for the final image to have a shell that is capable of environment variable interpolation, like `sh` or `bash`. This is mostly
-due to [behavior on Fly's side](https://community.fly.io/t/using-env-variables-in-release-command/7186),
-where environment variable expansion doesn't currently work correctly in the release command.
-:::
-
-## Setting the database URL secret
-
-While running the migration, Atlas needs to know the URL for the database. Fly has support for
-defining [secrets](https://fly.io/docs/reference/secrets/), sensitive values that are available during runtime as
-environment variables. We can define the database URL using the command below:
-
-```
-flyctl secrets set DATABASE_URL="postgres://postgres:pass@0.0.0.0:5432/database?sslmode=disable"
-```
-
-If you use the **flyctl** command `postgres attach`, the
-secret [will be created automatically for you](https://fly.io/docs/postgres/managing/attach-detach/#attach-a-fly-app).
-
-## Configuring `fly.toml`
-
-To tell Fly to execute the release command during a deployment, we need to add a `deploy` block with the release
-command provided:
-
-```toml title="fly.toml"
-[deploy]
-release_command = "sh -c '/atlas migrate apply --url $DATABASE_URL'"
-```
-
-With the release command defined, a new temporary VM will be created during each deployment to execute it.
-If the command succeeds, the deployment continues; if it fails, the deployment is aborted.
-
-## Deploying the app
-
-We can deploy the app with the command `flyctl deploy`. Fly will use a Docker installation (or a remote builder) to
-build the Docker image and push it to the Fly registry.
-
-The output of the release command will be presented to you on your terminal, but if you missed it, you can use the
-Monitoring page of your app or the `fly logs` command to see the previous log entries.
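-
-For example, assuming your Fly app is named `my-app` (a placeholder; replace it with your own app name), you can
-tail its logs, including the output of the release command VM, straight from your terminal:
-
-```bash
-# Tail the logs of the Fly app; the release command output shows up here as well.
-fly logs -a my-app
-```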
-
-Atlas will provide helpful information during the execution. Here are a few examples of log output:
-
-```text
-Preparing to run: `/atlas migrate apply --env prod` as root
-Migrating to version 20221220000101 (1 migrations in total):
-migrating version 20221220000101
--> CREATE TABLE "public"."users" ("id" integer NOT NULL, "name" character varying(100) NULL, PRIMARY KEY ("id"));
--- ok (6.204945ms)
-
--------------------------
--- 12.69747ms
--- 1 migrations
--- 1 sql statements
-```
-
-```text
-Preparing to run: `/atlas migrate apply --env prod` as root
-No migration files to execute
-```
-
-```text
-Preparing to run: `/atlas migrate apply --env prod` as root
-You have a checksum error in your migration directory.
-This happens if you manually create or edit a migration file.
-Please check your migration files and run
-'atlas migrate hash'
-to re-hash the contents and resolve the error
-Error: checksum mismatch
-Error release command failed, deployment aborted
-```
-
-## Improving the deployment pipeline
-
-You can always improve the deployment pipeline by leveraging [Atlas](https://atlasgo.io/integrations/github-actions)
-and [Fly](https://fly.io/docs/app-guides/continuous-deployment-with-github-actions/) GitHub Actions. For additional
-insights on the database schema and migrations, we recommend giving [Atlas Cloud](https://atlasgo.io/cloud/getting-started) a try.
diff --git a/doc/md/guides/deploying/helm.md b/doc/md/guides/deploying/helm.md
deleted file mode 100644
index 517f5dd9d21..00000000000
--- a/doc/md/guides/deploying/helm.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-id: helm
-slug: /guides/deploying/helm
-title: Deploying schema migrations to Kubernetes with Helm
----
-
-:::note Deprecation Notice
-
-This method of running schema migrations is deprecated and no longer recommended.
-
-Please use the [Kubernetes Operator](k8s-argo.md) to manage schema migrations in Kubernetes.
-
-:::
-
-[Helm](https://helm.sh) is a popular package manager for Kubernetes that allows
-developers to package applications into distributable modules called
-[Charts](https://helm.sh/docs/intro/using_helm/#three-big-concepts) that can be
-installed, upgraded, uninstalled, and more against a Kubernetes cluster.
-
-Helm is commonly used by software projects as a means for distributing software
-in a way that will be simple for developers to manage on their clusters. For example,
-[Bitnami](https://bitnami.com/) maintains [hundreds of charts](https://bitnami.com/stacks/helm)
-for easily installing many popular applications, such as [MySQL](https://bitnami.com/stack/mysql/helm),
-[Apache Kafka](https://bitnami.com/stack/kafka/helm) and others on Kubernetes.
-
-In addition, many teams ([Ariga](https://github.com/ariga) among them) use Helm
-as a way to package internal applications for deployment on Kubernetes.
-
-In this guide, we demonstrate how schema migrations can be integrated into
-Helm charts in a way that satisfies the principles for deploying
-schema migrations which we described in the [introduction](/guides/deploying/intro).
-
-Prerequisites to the guide:
-1. [A migrations docker image](/guides/deploying/image)
-2. [A Helm chart](https://helm.sh/docs/chart_template_guide/getting_started/) defining
-   your application.
- -## Using Helm lifecycle hooks - -To satisfy the principle of having migrations run _before_ the new application -version starts, as well as ensure that only one migration job runs concurrently, -we use Helm's [pre-upgrade hooks](https://helm.sh/docs/topics/charts_hooks/) feature. - -Helm pre-upgrade hooks are chart hooks that: -> Executes on an upgrade request after templates are rendered, but before any resources are updated - -To use a pre-upgrade hook to run migrations with Atlas as part of our chart definition, -we create a template for a [Kubernetes Job](https://kubernetes.io/docs/concepts/workloads/controllers/job/) -and annotate it with the relevant [Helm hook annotations](https://helm.sh/docs/topics/charts_hooks/#the-available-hooks). - -```helm -apiVersion: batch/v1 -kind: Job -metadata: - # job name should include a unix timestamp to make sure it's unique - name: "{{ .Release.Name }}-migrate-{{ now | unixEpoch }}" - labels: - helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" - // highlight-start - annotations: - "helm.sh/hook": pre-install,pre-upgrade - "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded - // highlight-end -spec: - template: - metadata: - name: "{{ .Release.Name }}-create-tables" - labels: - app.kubernetes.io/managed-by: {{ .Release.Service | quote }} - app.kubernetes.io/instance: {{ .Release.Name | quote }} - helm.sh/chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" - spec: - restartPolicy: Never - imagePullSecrets: - - name: {{ .Values.imagePullSecret }} - containers: - - name: atlas-migrate - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" - args: - - migrate - - apply - - -u - - {{ .Values.dburl }} - - --dir - - file:///src/ -``` - -Be sure to pass the following [values](https://helm.sh/docs/chart_template_guide/values_files/): - -* `imagePullSecret` - secret containing credentials to a - [private repository](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). - If you are hosting on ghcr.io, see [this guide](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/). -* `image.repository`: the container repository where you pushed your migration image to. -* `image.tag`: the tag of the latest migration image. -* `dburl`: the [URL](/concepts/url) of the database which you want to apply migrations to. - -Notice the `annotations` block at the top of the file. This block contains two important -attributes: -1. `"helm.sh/hook": pre-install,pre-upgrade`: configures this job to run as a pre-install - hook and as a pre-upgrade hook. -2. `"helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded`: sets the following - deletion behavior for the jobs created by the hook: - * `before-hook-creation`: Delete the previous resource before a new hook is launched (default) - * `hook-succeeded`: Delete the resource after the hook is successfully executed. - This combination ensures that on the happy path jobs are cleaned after finishing and that - in case a job fails, it remains on the cluster for its operators to debug. In addition, it - ensures that when you retry a job, its past invocations are also cleaned up. 
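-
-As a rough sketch, assuming your chart lives in `./mychart` and that the release name, image coordinates, and
-database URL below are placeholders you replace with your own, an install or upgrade that supplies these values
-could look like this:
-
-```bash
-# Install or upgrade the release, passing the values consumed by the migration hook above.
-helm upgrade --install myapp ./mychart \
-  --set imagePullSecret=regcred \
-  --set image.repository=ghcr.io/your-org/your-migrations \
-  --set image.tag=v1.0.0 \
-  --set dburl="mysql://user:pass@dbhost:3306/appdb"
-```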
\ No newline at end of file diff --git a/doc/md/guides/deploying/image.md b/doc/md/guides/deploying/image.md deleted file mode 100644 index 97e044bd043..00000000000 --- a/doc/md/guides/deploying/image.md +++ /dev/null @@ -1,121 +0,0 @@ ---- -id: image -slug: /guides/deploying/image -title: Creating container images for migrations ---- - -To integrate schema migrations into pipelines that deploy to container management -systems (such as Kubernetes, AWS ECS, Google Cloud Run, etc.) it is recommended -to create a dedicated container image per version that contains the -migration tool (such as Atlas) and the relevant migration files. - -In this guide we will demonstrate how to build a dedicated Docker image that -includes Atlas and the relevant migrations files. We will demonstrate -how to build this image as a [GitHub Actions Workflow](https://docs.github.com/en/actions/using-workflows), -but the same result can be achieved in any CI system. - -## Defining the Dockerfile - -Suppose our project structure looks something like: - -```text -. -├── main.go -└── migrations - ├── 20221031125934_init.sql - ├── 20221031125940_add_users_table.sql - ├── 20221031125948_add_products_table.sql - └── atlas.sum -``` - -Our goal is to build an image that contains: -1. The `migrations` directory -2. The Atlas binary - -To do this we can build our container image with the official [Atlas Docker image](https://hub.docker.com/r/arigaio/atlas) -as [the base layer](https://docs.docker.com/engine/reference/builder/#from). - -To do this, our Dockerfile should be placed in the directory _containing_ the `migrations` -directory and will look something like this: - -```dockerfile title=Dockerfile -FROM arigaio/atlas:latest - -COPY migrations /migrations -``` - -## Verify our image - -To test our new Dockerfile run: - -```text -docker build -t my-image . -``` - -Docker will build our image: - -```text - => [internal] load build definition from Dockerfile 0.0s - => => transferring dockerfile: 36B 0.0s - => [internal] load .dockerignore 0.0s - => => transferring context: 2B 0.0s - => [internal] load metadata for docker.io/arigaio/atlas:latest 0.0s - => [internal] load build context 0.0s - => => transferring context: 252B 0.0s - => [1/2] FROM docker.io/arigaio/atlas:latest 0.0s - => CACHED [2/2] COPY migrations /migrations 0.0s - => exporting to image 0.0s - => => exporting layers 0.0s - => => writing image sha256:c928104de31fc4c99d114d40ea849ade917beae3df7ffe9326113b289939878e 0.0s - => => naming to docker.io/library/my-image 0.0s -``` - -To verify Atlas can find your migrations directory and that its [integrity](/concepts/migration-directory-integrity) -is intact run: - -```text -docker run --rm my-image migrate validate -``` - -If no issues are found, no errors will be printed out. 
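-
-As an additional, optional check, you can run the image against a test database to preview what it would apply.
-This sketch assumes the database URL below (a placeholder) is reachable from the container and that the image
-resolves the migration directory the same way the `migrate validate` example above does:
-
-```bash
-# Preview pending migrations from the image without applying them.
-docker run --rm my-image migrate apply \
-  --url "mysql://root:pass@host.docker.internal:3306/example" \
-  --dry-run
-```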
-
-## Defining the GitHub Actions Workflow
-
-Next, we define a GitHub Actions workflow that will build our container
-image and push it to the GitHub container repo (ghcr.io) on every push
-to our mainline branch:
-
-```yaml title=.github/workflows/push-docker.yaml
-name: Push Docker
-on:
-  push:
-    branches:
-      - master
-jobs:
-  docker-push:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Docker Buildx
-        uses: docker/setup-buildx-action@v2
-      - name: Login to GitHub Container Registry
-        uses: docker/login-action@v2
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build and push
-        uses: docker/build-push-action@v3
-        with:
-          push: true
-          tags: ghcr.io/ariga/:${{ github.sha }}
-          cache-from: type=gha
-          cache-to: type=gha,mode=max
-```
-
-Save this file in your GitHub repository under the `.github/workflows` directory.
-After you push it to your mainline branch, you will see a run of the new
-workflow in the [Actions](https://docs.github.com/en/actions/monitoring-and-troubleshooting-workflows/viewing-workflow-run-history)
-tab of the repository.
-
diff --git a/doc/md/guides/deploying/init.md b/doc/md/guides/deploying/init.md
deleted file mode 100644
index 313086ed4f7..00000000000
--- a/doc/md/guides/deploying/init.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-id: k8s-init-containers
-slug: /guides/deploying/k8s-init-container
-title: Deploying schema migrations to Kubernetes with Init Containers
----
-
-:::note Deprecation Notice
-
-This method of running schema migrations is deprecated and no longer recommended.
-
-Please use the [Kubernetes Operator](k8s-argo.md) to manage schema migrations in Kubernetes.
-
-:::
-
-In [Kubernetes](https://kubernetes.io), [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
-are specialized containers that run before app containers in a Pod. Init containers
-can contain utilities or setup scripts not present in an app image.
-
-Init containers can be utilized to run schema migrations with Atlas before the
-application loads. Because init containers can use a container image different
-from the application, developers can use a [purpose-built image](image.md) that
-only contains Atlas and the migration scripts to run them. This way, less
-can be included in the application runtime environment, which reduces
-the attack surface from a security perspective.
-
-Depending on an application's [deployment strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy),
-multiple replicas of an init container may run concurrently. In the case of
-schema migrations, this can cause a dangerous race condition with unknown outcomes.
-To prevent this, in databases that support advisory locking, Atlas will acquire
-a lock on the migration operation before running migrations, making the
-operation mutually exclusive.
-
-In this guide, we demonstrate how schema migrations can be integrated into
-a Kubernetes deployment using an init container.
-
-Prerequisites to the guide:
-1. [A migrations docker image](/guides/deploying/image)
-2. [A Kubernetes Deployment manifest](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) defining
-   your application.
-3. A running Kubernetes cluster to work against.
-
-## Adding an init container
-
-Suppose our deployment manifest looks similar to this:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx-deployment
-  labels:
-    app: nginx
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-      - name: nginx
-        image: nginx:1.14.2
-        ports:
-        - containerPort: 80
-```
-
-Now, let's say our [migration container image](image.md), which contains the Atlas binary
-and our migration scripts, is available at `ghcr.io/repo/migrations:v0.1.2`. We would like
-to run `migrate apply` against our target database residing at `mysql://root:s3cr37p455@dbhostname.io:3306/db`.
-
-We will use a Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) to
-store a [config file](/atlas-schema/projects) containing the credentials to our database. Create the file:
-
-```hcl title=atlas.hcl
-env "k8s" {
-  url = "mysql://root:s3cr37p455@dbhostname.io:3306/db"
-}
-```
-
-Kubernetes accepts secrets encoded as base64 strings. Let's calculate the
-base64 string representing our project file:
-
-```text
-cat atlas.hcl | base64
-```
-Copy the result:
-```text
-ZW52ICJrOHMiIHsKICB1cmwgPSAibXlzcWw6Ly9yb290OnMzY3IzN3A0NTVAZGJob3N0bmFtZS5pbzozMzA2L2RiIgp9Cg==
-```
-
-Create the secret manifest:
-
-```yaml
-apiVersion: v1
-kind: Secret
-metadata:
-  name: atlas-project
-type: Opaque
-data:
-  atlas.hcl: ZW52ICJrOHMiIHsKICB1cmwgPSAibXlzcWw6Ly9yb290OnMzY3IzN3A0NTVAZGJob3N0bmFtZS5pbzozMzA2L2RiIgp9Cg==
-```
-
-Apply the secret on the cluster:
-
-```text
-kubectl apply -f secret.yaml
-```
-
-The secret is created:
-```text
-secret/atlas-project created
-```
-
-Next, add a volume to mount the config file and an init container using it to
-the deployment manifest:
-
-```yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: nginx-deployment
-  labels:
-    app: nginx
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: nginx
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      // highlight-start
-      volumes:
-      - name: atlas-project
-        secret:
-          secretName: atlas-project
-      initContainers:
-      - name: migrate
-        image: ghcr.io/repo/migrations:v0.1.2
-        imagePullPolicy: Always
-        args: ["migrate", "apply", "-c", "file:///etc/atlas/atlas.hcl", "--env", "k8s"]
-        volumeMounts:
-        - name: atlas-project
-          mountPath: "/etc/atlas"
-      // highlight-end
-      containers:
-      - name: nginx
-        image: nginx:1.14.2
-        ports:
-        - containerPort: 80
-```
-
-Notice the new configuration blocks we added to our deployment manifest:
-* We added our secret `atlas-project` [as a volume](https://kubernetes.io/docs/tasks/configure-pod-container/configure-volume-storage/#configure-a-volume-for-a-pod) to the
-  deployment's PodSpec.
-* We added an `initContainer` named `migrate` that runs the `ghcr.io/repo/migrations:v0.1.2` image.
-* We mounted the `atlas-project` volume at `/etc/atlas` in our init container.
-* We configured our init container to run with these flags: `["migrate", "apply", "-c", "file:///etc/atlas/atlas.hcl", "--env", "k8s"]`
-
-## Wrapping up
-
-That's it! After we apply our new deployment manifest, Kubernetes will first run
-the init container and only then run the application containers.
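-
-To confirm that the migrations ran, you can inspect the output of the init container once the new pods are
-scheduled (the deployment and container names below match the manifest above):
-
-```bash
-# Print the logs of the "migrate" init container from one of the deployment's pods.
-kubectl logs deployment/nginx-deployment -c migrate
-```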
\ No newline at end of file diff --git a/doc/md/guides/deploying/intro.md b/doc/md/guides/deploying/intro.md deleted file mode 100644 index ba05a25e551..00000000000 --- a/doc/md/guides/deploying/intro.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -id: intro -title: Deploying schema migrations -slug: /guides/deploying/intro ---- - -## Schema changes as Deployments - -Changes to database schemas rarely happen in isolation. Most commonly, changes to the database -schema are related to some change in the application code. Because incompatibility between -the database schema and the application can cause serious problems, it is advised to give careful thought -to how these changes are rolled out. - -Based on our experience, we have come to the conclusion that changes -to the database schema should be thought of as part of the _deployment_ sequence, alongside -changing the application version, provisioning infrastructure or applying -configuration changes. - -This guide describes some strategies teams can employ to incorporate schema -changes into their deployment sequence. - -### Running migrations on server initialization - -In many cases, we have seen teams that run schema migration logic as part -of the application code: when servers start, before listening for traffic, -code that ensures that the database schema is up-to-date is invoked. -This is especially common for teams using ORMs that support an "auto-migration" -flow. - -In our experience, this strategy may work for simple use-cases, but may -cause issues in larger, more established projects. Some downsides of running -migrations on boot are: -* If multiple replicas of the server code are deployed concurrently - to avoid dangerous race conditions, some form of synchronization must be - employed to make sure only one instance tries to run the migration. -* If migrations fail, the server crashes, often entering a crash-loop, - which may reduce the overall capacity of the system to handle traffic. -* If migrations are driven by a dedicated tool (such as Atlas, Liquibase, Flyway, etc.) - the tool needs to be packaged into the same deployment artifact. This is both - cumbersome to invoke and goes against security best practices to reduce attack surface - by including only the bare minimum into runtime containers. - -### Running migrations as part of deployment pipelines - -Instead of running migrations on server init, we suggest using a deployment -strategy that follows these principles: - -1. Schema migrations are deployed as a discrete step in the deployment pipeline, - preceding application version changes. -2. If a migration fails, the whole deployment pipeline should halt. -3. Measures should be taken to ensure that only one instance of the migration - script runs concurrently. \ No newline at end of file diff --git a/doc/md/guides/deploying/k8s-argo.md b/doc/md/guides/deploying/k8s-argo.md deleted file mode 100644 index 1e43f7020af..00000000000 --- a/doc/md/guides/deploying/k8s-argo.md +++ /dev/null @@ -1,420 +0,0 @@ ---- -id: k8s-argo -title: Deploying to Kubernetes with the Atlas Operator and Argo CD -slug: /guides/deploying/k8s-argo ---- - -[GitOps](https://opengitops.dev/) is a software development and deployment methodology that uses Git as the central repository -for both code and infrastructure configurations, enabling automated and auditable deployments. - -[ArgoCD](https://argoproj.github.io/cd/) is a Kubernetes-native continuous delivery tool that implements GitOps principles. 
-It uses a declarative approach to deploy applications to Kubernetes, ensuring that the desired state of the -application is always maintained. - -[Kubernetes Operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) are software extensions to -Kubernetes that enable the automation and management of complex, application-specific operational tasks and -domain-specific knowledge within a Kubernetes cluster. - -In this guide, we will demonstrate how to use the [Atlas Kubernetes Operator](/integrations/kubernetes/operator) and -ArgoCD to achieve a GitOps-based deployment workflow for your database schema. - -## Pre-requisites - -* A running Kubernetes cluster - For learning purposes, you can use - [Minikube](https://minikube.sigs.k8s.io/docs/start/), which is a tool that runs a single-node - Kubernetes cluster inside a VM on your laptop. -* [kubectl](https://kubernetes.io/docs/tasks/tools/) - a command-line tool for interacting with Kubernetes clusters. -* [Helm](https://helm.sh/docs/intro/install/) - a package manager for Kubernetes. - -## High-level architecture - -Before we dive into the details of the deployment flow, let’s take a look at the high-level architecture of our application. - -![Application Architecture](https://atlasgo.io/uploads/k8s/argocd/app-diagram.png) - -On a high level, our application consists of the following components: - -1. A backend application - in our example we will use a plain NGINX server - as a placeholder for a real backend application. -2. A database - in our example we will use a MySQL pod for the database. - In a more realistic scenario, you might want to use a managed database service like AWS RDS or GCP Cloud SQL. -3. An `AtlasSchema` Custom Resource that defines the database schema and is managed by the Atlas Operator. - -In our application architecture, we have a database that is connected to our application and managed using -Atlas CR (Custom Resource). The database plays a crucial role in storing and retrieving data for the application, -while the Atlas CR provides seamless integration and management of the database schema within our Kubernetes environment. - -## How should you run schema changes in an Argo CD deployment? - -Integrating GitOps practices with a database in our application stack poses a unique challenge. - -Argo CD provides a declarative approach to GitOps, allowing us to define an Argo CD application -and seamlessly handle the synchronization process. By pushing changes to the database schema or -application code to the Git repository, Argo CD automatically syncs those changes to the Kubernetes cluster. - -However, as we discussed in the [introduction](/guides/deploying/intro#running-migrations-as-part-of-deployment-pipelines), -ensuring the proper order of deployments is critical. In our scenario, the database deployment -must succeed before rolling out the application to ensure its functionality. If the database deployment -encounters an issue, it is essential to address it before proceeding with the application deployment. - -Argo CD provides [Sync Waves](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/) -and Sync Hooks as features that help to control the order in which -manifests are applied within an application. Users may add an annotation to each resource -to specify in which "wave" it should be applied. Argo CD will then apply the resources in the order of the waves. - -By using annotations with specific order numbers, you can determine the sequence of manifest applications. 
-Lower numbers indicate the earlier application and negative numbers are also allowed. - -To ensure that database resources are created and applied before our application, -we will utilize Argo CD Sync Waves. The diagram shows our application deployment order: - -![Application Architecture](https://atlasgo.io/uploads/k8s/argocd/deployment-flow.png) - -To achieve the above order we'll annotate each resource with a sync wave annotation order number: - -```yaml -metadata: - annotations: - argocd.argoproj.io/sync-wave: "" -``` - -For more information refer to the [official documentation](https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/). - -With the theoretical background out of the way, let’s take a look at a practical example of how to deploy an application -with Argo CD and the Atlas Operator. - -## Installation - -### 1. Install ArgoCD - -To install ArgoCD run the following commands: - -```bash -kubectl create namespace argocd -kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -``` - -Wait until all the pods in the `argocd` namespace are running: - -```bash -kubectl wait --for=condition=ready pod --all -n argocd -``` - -`kubectl` will print something like this: - -```bash -pod/argocd-application-controller-0 condition met -pod/argocd-applicationset-controller-69dbc8585c-6qbwr condition met -pod/argocd-dex-server-59f89468dc-xl7rg condition met -pod/argocd-notifications-controller-55565589db-gnjdh condition met -pod/argocd-redis-74cb89f466-gzk4f condition met -pod/argocd-repo-server-68444f6479-mn5gl condition met -pod/argocd-server-579f659dd5-5djb5 condition met -``` - -For more information or if you run into some error refer to the -[Argo CD Documentation](https://argo-cd.readthedocs.io/en/stable/getting_started/). - -### 2. Install the Atlas Operator - -```bash -helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator -``` - -Helm will print something like this: - -```bash -Pulled: ghcr.io/ariga/charts/atlas-operator:0.1.9 -Digest: sha256:4dfed310f0197827b330d2961794e7fc221aa1da1d1b95736dde65c090e6c714 -NAME: atlas-operator -LAST DEPLOYED: Tue Jun 27 16:58:30 2023 -NAMESPACE: default -STATUS: deployed -REVISION: 1 -TEST SUITE: None -``` - -Wait until the atlas-operator pod is running: - -```bash -kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=atlas-operator -n default -``` - -`kubectl` will print something like this: - -```bash -pod/atlas-operator-866dfbc56d-qkkkn condition met -``` - -For more information on the installation process refer to the [Atlas Operator Documentation](/integrations/kubernetes/operator#getting-started) - -## Define the application manifests - -### 1. Set up a Git repo - -Argo CD works by tracking changes to a Git repository and applying them to the cluster, so let's -set up a Git repository to serve as the central storage for all your application configuration. - -In this example, we’re using the [rotemtam/atlas-argocd-demo](https://github.com/rotemtam/atlas-argocd-demo) -repository, which contains all of the Kubernetes manifests necessary to deploy our application. - -### 2. Define the database resources - -Recall that in our first sync-wave, we want to deploy the database resources to our cluster. 
For the
-purposes of this example we're deploying a simple MySQL pod to our cluster, but in a realistic scenario
-you will probably want to use a managed database service such as AWS RDS, GCP Cloud SQL or one of the available
-database operators for Kubernetes.
-
-In your repository, create a new directory named `manifests` and under it create a new file named `db.yaml`:
-
-```yaml title="manifests/db.yaml"
-apiVersion: v1
-kind: Service
-metadata:
-  annotations:
-    argocd.argoproj.io/sync-wave: "0"
-  name: mysql
-spec:
-  ports:
-    - port: 3306
-  selector:
-    app: mysql
-  clusterIP: None
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  annotations:
-    argocd.argoproj.io/sync-wave: "0"
-  name: mysql
-spec:
-  selector:
-    matchLabels:
-      app: mysql
-  template:
-    metadata:
-      labels:
-        app: mysql
-    spec:
-      containers:
-        - image: mysql:8
-          name: mysql
-          env:
-            - name: MYSQL_ROOT_PASSWORD
-              value: pass
-            - name: MYSQL_DATABASE
-              value: example
-          readinessProbe:
-            tcpSocket:
-              port: 3306
-            initialDelaySeconds: 10
-            periodSeconds: 10
-          livenessProbe:
-            tcpSocket:
-              port: 3306
-            initialDelaySeconds: 15
-            periodSeconds: 15
-          ports:
-            - containerPort: 3306
-              name: mysql
-```
-
-### 3. Create the AtlasSchema Custom Resource
-
-Create the AtlasSchema custom resource to define the desired schema for your database. Refer to the
-[Atlas Operator documentation](/integrations/kubernetes/operator#configuration-for-the-atlasschema-resource)
-to determine the specifications, such as the desired database schema, configuration options, and additional parameters.
-
-Here we’re creating a ***users*** table in an ***example*** database and annotating it with a sync wave order number of 1.
-This annotation informs Argo CD to deploy it after the database has been successfully deployed.
-
-```yaml title="schema.yaml"
-apiVersion: db.atlasgo.io/v1alpha1
-kind: AtlasSchema
-metadata:
-  annotations:
-    argocd.argoproj.io/sync-wave: "1"
-  name: myapp
-spec:
-  url: mysql://root:pass@mysql:3306/example
-  schema:
-    sql: |
-      create table users (
-        id int not null auto_increment,
-        name varchar(255) not null,
-        email varchar(255) unique not null,
-        short_bio varchar(255) not null,
-        primary key (id)
-      );
-```
-
-### 4. Create your backend application deployment
-
-For the purpose of this guide, we will deploy a simple NGINX server to act as a placeholder for a real
-backend server. Notice that we annotate the backend deployment with a sync wave order number of 2.
-This informs Argo CD to deploy the backend application after the Atlas CR is deployed and confirmed to be healthy.
-
-```yaml title="app.yaml"
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  annotations:
-    argocd.argoproj.io/sync-wave: "2"
-  name: nginx
-spec:
-  selector:
-    matchLabels:
-      app: nginx
-  replicas: 2
-  template:
-    metadata:
-      labels:
-        app: nginx
-    spec:
-      containers:
-        - name: nginx
-          image: nginx
-          ports:
-            - containerPort: 80
-```
-
-### 5. Create a custom health check for Atlas objects
-
-To decide whether a SyncWave is complete and the next SyncWave can be started, Argo CD performs
-a health check on the resources in the current SyncWave. If the health check fails, Argo CD will
-not proceed with the next SyncWave.
-
-Argo CD has built-in health assessment for standard Kubernetes types, such as `Deployment` and `ReplicaSet`,
-but it does not have a built-in health check for custom resources such as `AtlasSchema`.
- -To bridge this gap, Argo CD supports custom health checks written in [Lua](https://lua.org), -allowing us to define our custom health assessment logic for the Atlas custom resource. - -To define the custom logic for the Atlas object in Argo CD, we can add -the custom health check configuration to the ***argocd-cm*** ConfigMap. -This ConfigMap is a global configuration for Argo CD that should be placed -in the same namespace as the rest of the Argo CD resources. Below is a custom -health check for the Atlas object: - -```yaml title="argocd-cm.yaml" -apiVersion: v1 -kind: ConfigMap -metadata: - name: argocd-cm - namespace: argocd - labels: - app.kubernetes.io/name: argocd-cm - app.kubernetes.io/part-of: argocd -data: - resource.customizations: | - db.atlasgo.io/AtlasSchema: - health.lua: | - hs = {} - if obj.status ~= nil then - if obj.status.conditions ~= nil then - for i, condition in ipairs(obj.status.conditions) do - if condition.type == "Ready" and condition.status == "False" then - hs.status = "Degraded" - hs.message = condition.message - return hs - end - if condition.type == "Ready" and condition.status == "True" then - hs.status = "Healthy" - hs.message = condition.message - return hs - end - end - end - end - - hs.status = "Progressing" - hs.message = "Waiting for reconciliation" - return hs -``` - -### 6. Create the Argo CD Application manifest - -Finally, create an Argo CD ***Application.yaml*** file which defines our Argo application: - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: - name: atlas-argocd-demo - namespace: argocd -spec: - source: - path: manifests - repoURL: 'https://github.com/rotemtam/atlas-argocd-demo' # <-- replace with your repo URL - targetRevision: master # <-- replace with your mainline - destination: - namespace: default - server: 'https://kubernetes.default.svc' - project: default - syncPolicy: - automated: - prune: true - selfHeal: true - retry: - limit: 5 - backoff: - duration: 5s - maxDuration: 3m0s - factor: 2 - syncOptions: - - CreateNamespace=true -``` - -## Deploying - -Make sure all of these files are pushed to your Git repository. If you want to follow along -you can use the [example repository](https://github.com/rotemtam/atlas-argocd-demo) for this guide. - -### 1. Apply the custom health check - -Before deploying our application, we need to apply the custom health check configuration to the Argo CD ConfigMap. - -```bash -kubectl apply -f https://raw.githubusercontent.com/rotemtam/atlas-argocd-demo/master/argo-cm.yaml -n argocd -``` - -### 2. Deploy our application - -With the custom health check in place, we can now deploy our application. - -```bash -kubectl apply -f https://raw.githubusercontent.com/rotemtam/atlas-argocd-demo/master/Application.yaml -``` - -Once you create an Argo CD application, Argo automatically pulls the configuration files from your Git -repository and applies them to your Kubernetes cluster. As a result, the corresponding resources are -created based on the manifests. This streamlined process ensures that the desired state of your -application is synchronized with the actual state in the cluster. 
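-
-If you have the Argo CD CLI installed and are logged in to your Argo CD server, you can also follow the
-rollout from the terminal (the application name below matches the `Application` manifest above):
-
-```bash
-# Show the sync and health status of the application and its resources.
-argocd app get atlas-argocd-demo
-```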
-
-To verify the application is successfully deployed and the resources are healthy:
-
-```bash
-kubectl get -n argocd applications.argoproj.io atlas-argocd-demo -o=jsonpath='{range .status.resources[*]}{"\n"}{.kind}: {"\t"} {.name} {"\t"} ({.status}) {"\t"} ({.health}){end}'
-```
-`kubectl` will print something like this:
-
-```
-Service: mysql 	 (Synced) 	 ({"status":"Healthy"})
-Deployment: mysql 	 (Synced) 	 ({"status":"Healthy"})
-Deployment: nginx 	 (Synced) 	 ({"status":"Healthy"})
-AtlasSchema: myapp 	 (Synced) 	 ({"message":"The schema has been applied successfully. Apply response: {\"Changes\":{}}","status":"Healthy"})
-```
-
-Finally, on the ArgoCD UI we can see the health, dependencies, and status of all the resources:
-
-![ArgoCD UI](https://atlasgo.io/uploads/k8s/argocd/argo-flow.png)
-
-## Conclusion
-
-In this guide, we demonstrated how to use Argo CD to deploy an application that uses the Atlas Operator
-to manage the lifecycle of the database schema. We also showed how to use Argo CD's custom health check
-to ensure that the schema changes were successfully applied before deploying the backend application.
-
-Using the techniques described in this guide, you can now integrate schema management into your CI/CD
-pipeline and ensure that your database schema is always in sync with your application code.
diff --git a/doc/md/guides/deploying/k8s-cloud-versioned.md b/doc/md/guides/deploying/k8s-cloud-versioned.md
deleted file mode 100644
index 580e4225f82..00000000000
--- a/doc/md/guides/deploying/k8s-cloud-versioned.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: Deploying Versioned Migrations to Kubernetes from Atlas Schema Registry
-slug: /guides/deploying/k8s-cloud-versioned
----
-This guide will walk you through deploying versioned migrations to Kubernetes from Atlas Schema Registry.
-
-Use this setup if:
-
-* You are using the Atlas Kubernetes Operator with the versioned migrations flow (e.g., using `AtlasMigration` CRDs).
-* You have a CI/CD pipeline pushing your migration directory to the Atlas Schema Registry.
-
-## Prerequisites
-
-* An Atlas Cloud account with a project on the Atlas Schema Registry
-* An Atlas Cloud Bot Token (see [Creating a Bot Token](/cloud/bots#creating))
-* A Kubernetes cluster
-* Helm and Kubectl installed
-
-## Steps
-
-1. Create a Kubernetes Secret with your Atlas Cloud Bot Token
-
-    ```shell
-    kubectl create secret generic atlas-registry-secret --from-literal=token=
-    ```
-2. Create a Kubernetes Secret with your database credentials.
-
-    ```shell
-    kubectl create secret generic db-credentials --from-literal=url="mysql://root:pass@localhost:3306/myapp"
-    ```
-    Replace the `url` value with your database credentials.
-
-3. Install the Atlas Operator
-
-    ```shell
-    helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator
-    ```
-
-4. Locate your Cloud project name in the Atlas Schema Registry
-
-    ![Atlas Schema Registry](https://atlasgo.io/uploads/k8sver/cloud-project-name.png)
-
-    Open the Project Information pane on the right and locate the project slug (e.g., `project-name`)
-    in the URL.
-
-5. 
Create a file named `migration.yaml` with the following content:
-
-    ```yaml title="migration.yaml"
-    apiVersion: db.atlasgo.io/v1alpha1
-    kind: AtlasMigration
-    metadata:
-      name: atlasmigration
-    spec:
-      urlFrom:
-        secretKeyRef:
-          key: url
-          name: db-credentials
-      cloud:
-        tokenFrom:
-          secretKeyRef:
-            key: token
-            name: atlas-registry-secret
-      dir:
-        remote:
-          name: "project-name" # Migration directory name in your atlas cloud project
-          tag: "latest"
-    ```
-    Replace `project-name` with the name of your migration directory in the Atlas Schema Registry.
-
-    If you would like to deploy a specific version of the migrations, replace `latest` with the version tag.
-
-6. Apply the AtlasMigration CRD manifest
-
-    ```shell
-    kubectl apply -f migration.yaml
-    ```
-
-7. Check the status of the AtlasMigration CRD:
-
-    ```shell
-    kubectl get atlasmigration
-    ```
-
-    `kubectl` will output the status of the migration:
-
-    ```
-    NAME             READY   REASON
-    atlasmigration   True    Applied
-    ```
-
-8. Observe the reported migration logs on your Cloud project in the Atlas Schema Registry:
-
-    ![Atlas Schema Registry](https://atlasgo.io/uploads/k8sver/k8s-cloud-logs.png)
\ No newline at end of file
diff --git a/doc/md/guides/deploying/k8s-flux.md b/doc/md/guides/deploying/k8s-flux.md
deleted file mode 100644
index 01240221937..00000000000
--- a/doc/md/guides/deploying/k8s-flux.md
+++ /dev/null
@@ -1,573 +0,0 @@
----
-id: k8s-flux
-title: Deploying to Kubernetes with the Atlas Operator and Flux CD
-slug: /guides/deploying/k8s-flux
----
-
-[GitOps](https://opengitops.dev/) is a software development and deployment methodology that uses Git as the central repository
-for both code and infrastructure configurations, enabling automated and auditable deployments.
-
-[FluxCD](https://fluxcd.io/) is a Continuous Delivery tool that implements GitOps principles. It uses a declarative approach
-to keep Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates
-configuration updates when there is new code to deploy.
-
-[Kubernetes Operators](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) are software extensions to Kubernetes that enable the automation and management
-of complex, application-specific operational tasks and domain-specific knowledge within a Kubernetes cluster.
-
-In this guide, we will demonstrate how to use the [Atlas Kubernetes Operator](/integrations/kubernetes/operator) and Flux CD to achieve a
-GitOps-based deployment workflow for your database schema.
-
-## Prerequisites
-
-* A running Kubernetes cluster - For learning purposes, you can use
-  [Minikube](https://minikube.sigs.k8s.io/docs/start/), which is a tool that runs a single-node
-  Kubernetes cluster inside a VM on your laptop.
-* [kubectl](https://kubernetes.io/docs/tasks/tools/) - a command-line tool for interacting with Kubernetes clusters.
-* [Helm](https://helm.sh/docs/intro/install/) - a package manager for Kubernetes.
-
-## High-level architecture
-
-Before we dive into the details of the deployment flow, let’s take a look at the high-level
-architecture of our application.
-
-![Application Architecture](https://atlasgo.io/uploads/k8s/argocd/app-diagram.png)
-
-On a high level, our application consists of the following components:
-
-1. A backend application - in our example we will use a plain NGINX server
-   as a placeholder for a real backend application.
-2. A database - in our example we will use a MySQL pod for the database.
In a more realistic scenario,
-   you might want to use a managed database service like AWS RDS or GCP Cloud SQL.
-3. An `AtlasSchema` Custom Resource that defines the database schema and is managed by the Atlas Operator.
-
-In our application architecture, we have a database that is connected to our application and managed
-using Atlas CR (Custom Resource). The database plays a crucial role in storing and retrieving
-data for the application, while the Atlas CR provides seamless integration and management of the
-database schema within our Kubernetes environment.
-
-## Incorporating schema changes into a GitOps flow
-
-### Databases should be migrated before the application is deployed
-
-Integrating GitOps practices with a database in our application stack poses a unique challenge.
-
-Flux CD provides a declarative approach to GitOps, allowing us to define a Flux CD application and
-effortlessly handle the synchronization process. When pushing changes to the database schema or application
-code to the Git repository, Flux CD automatically syncs those changes to the Kubernetes cluster.
-
-However, as we discussed in the introduction, ensuring the proper order of deployments is critical.
-In our scenario, the database deployment must succeed before rolling out the application to guarantee its
-functionality. If the database deployment encounters an issue, it is essential to address it before
-proceeding with the application deployment.
-
-### Flux CD Dependencies 101
-
-Flux CD supports [Dependencies](https://fluxcd.io/flux/components/kustomize/kustomizations/#dependencies)
-via `.spec.dependsOn`, a mechanism used to orchestrate multiple deployments in a specific order to ensure certain resources
-are healthy before subsequent resources are synced/reconciled.
-
-By using `.spec.dependsOn`, you can define the apply order and thus determine the sequence of manifest
-applications. `.spec.dependsOn` is used to refer to other Kustomization objects that the Kustomization
-depends on. If specified, then the Kustomization is only applied after the referred Kustomizations
-are ready, i.e. have the `Ready` condition marked as `True`. The readiness state of a Kustomization
-is determined by its last applied status condition.
-
-For example, let's assume we have a scenario where our application is composed of two services, a `backend`
-service and a `database` service. The `backend` service depends on the `database` service, and we want
-to ensure that the `database` service is ready before the `backend` service is applied. We can codify this
-dependency in the following way:
-
-```yaml
----
-apiVersion: kustomize.toolkit.fluxcd.io/v1
-kind: Kustomization
-metadata:
-  name: database
-  namespace: flux-system
-spec:
-  interval: 5m
-  path: "./kustomize"
-  prune: true
-  sourceRef:
-    kind: GitRepository
-    name: flux-system
-  // highlight-start
-  healthChecks:
-    - apiVersion: apps/v1
-      kind: Deployment
-      name: mysql
-      namespace: default
-  // highlight-end
----
-apiVersion: kustomize.toolkit.fluxcd.io/v1
-kind: Kustomization
-metadata:
-  name: backend
-  namespace: flux-system
-spec:
-  // highlight-start
-  dependsOn:
-    - name: database
-  // highlight-end
-  interval: 5m
-  path: "./kustomize"
-  prune: true
-  sourceRef:
-    kind: GitRepository
-    name: flux-system
-```
-
-In this manifest, `.spec.healthChecks` is used to refer to resources for which the Flux
-controller will perform health checks. This is used to determine the rollout status of
-deployed workloads and the `Ready` status of custom resources.
- -This is helpful when there is a need to make sure other resources exist before the workloads -defined in a Kustomization are deployed. To ensure that database resources are created and applied -before our application, we will utilize Flux CD `dependsOn` and `health checks` feature. - -With the theoretical background out of the way, let’s take a look at a practical example of how -to deploy an application with Flux CD and the Atlas Operator. - -## Installation - -### 1. Install the Atlas Operator - -To install the `Atlas Operator` run the following command: - -```bash -helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator -``` - -`Helm` will print something like this: - -```bash -Pulled: ghcr.io/ariga/charts/atlas-operator:0.3.0 - Digest: sha256:4dfed310f0197827b330d2961794e7fc221aa1da1d1b95736dde65c090e6c714 - NAME: atlas-operator - LAST DEPLOYED: Tue Jun 27 16:58:30 2023 - NAMESPACE: default - STATUS: deployed - REVISION: 1 - TEST SUITE: None -``` - -Wait until the `atlas-operator` pod is running: - -```bash -kubectl wait --for=condition=ready pod -l app.kubernetes.io/name=atlas-operator -n default -``` - -`kubectl` will print something like this: - -```bash -pod/atlas-operator-866dfbc56d-qkkkn condition met -``` - -### 2. Install the Flux CLI - -The flux command-line interface (CLI) is used to bootstrap and interact with Flux. - -To install it on macOS or Linux, run: - -```bash -curl -s https://fluxcd.io/install.sh | sh -``` - -To install the CLI with [Chocolatey](https://chocolatey.org/) for Windows, run: - -```bash -choco install flux -``` - -Check you have everything needed to run Flux by running the following command: - -```bash -flux check --pre -``` - -The output is similar to: - -```bash -► checking prerequisites -✔ kubernetes 1.26.3 >=1.23.0 -✔ prerequisites checks passed -``` - -### 3. Create a GitHub Personal Access Token - -The GitHub personal access token will be used in place of a password when authenticating to GitHub in the command line or with the API. - -1. In the upper-right corner of any page, click your profile photo, then click `Settings`. -2. In the left sidebar, click `< > Developer settings`. -3. In the left sidebar, under `Personal access tokens`, click `Tokens (classic)``. -4. Select `Generate new token`, then click `Generate new token (classic)``. -5. In the `"Note"`` field, give your token a descriptive name. -6. To give your token an expiration, select `Expiration`, then choose a default option or click `Custom` to enter a date. -7. Select the scopes you'd like to grant this token. To use your token to access repositories from the command line, select `repo`. A token with no assigned scopes can only access public information. -8. Click `Generate token`. -9. Copy the new token to your clipboard. - -### 4. Export your GitHub Credentials - -Export your GitHub personal access token and username: - -```bash -export GITHUB_TOKEN= -export GITHUB_USER= -``` - -### 5. Install Flux onto your Kubernetes Cluster - -Run the bootstrap command: - -```bash -flux bootstrap github \ - --owner=$GITHUB_USER \ - --repository=flux-infrastructure \ - --branch=main \ - --path=./clusters/dev \ - --personal -``` - -You will be prompted to enter your GitHub personal access token. 
The output is similar to: - -```bash -► connecting to github.com -✔ repository created -✔ repository cloned -✚ generating manifests -✔ components manifests pushed -► installing components in flux-system namespace -deployment "source-controller" successfully rolled out -deployment "kustomize-controller" successfully rolled out -deployment "helm-controller" successfully rolled out -deployment "notification-controller" successfully rolled out -✔ install completed -► configuring deploy key -✔ deploy key configured -► generating sync manifests -✔ sync manifests pushed -► applying sync manifests -◎ waiting for cluster sync -✔ bootstrap finished -``` - -Using the flux bootstrap command, you can install Flux on a Kubernetes cluster and configure it to manage itself -from a Git repository. The bootstrap command above does the following: - -- Creates a git repository flux-infrastructure on your GitHub account. -- Adds Flux component manifests to the repository. -- Deploys Flux Components to your Kubernetes Cluster. -- Configures Flux components to track the path /clusters/dev/ in the repository. - -## Deploy the Sample Application on your Cluster - -In this example, we're using the `jmushiri/atlas-flux-demo` repository, which contains all of the -Kubernetes manifests necessary to deploy our application. - -### 1. Fork the Sample Application - -To get started, you need to fork and then clone the sample application repository to your local machine. -1. Open your web browser and go to the [rotemtam/atlas-operator-flux-demo](https://github.com/jmushiri/atlas-flux-demo) - repo on GitHub. -2. Click on the `"Fork"` button in the top-right corner of the GitHub page. This will create a copy - of the repository under your GitHub account. - -### 2. Clone the Sample Application - -Once the forking process is complete, you will be redirected to your own forked repository. Open your terminal -or command prompt and run the following command to clone the forked repository to your local machine: - -```shell -git clone https://github.com/$GITHUB_USER/atlas-operator-flux-demo.git -``` - -Observe this repository's structure: - -```text -. -├── LICENSE -├── README.md -└── kustomize - ├── app.yaml - ├── db.yaml - ├── kustomization.yaml - └── schema.yaml -``` - -* `kustomize/app.yaml` - contains the `Kustomization` manifest for the application, in our case, a dummy - nginx server. -* `kustomize/db.yaml` - contains the `Kustomization` manifest for the database, in our case, a MySQL pod, a service - that exposes it, and a secret that contains the database credentials. -* `kustomize/schema.yaml` - contains the `AtlasSchema` manifest that defines the database schema. -* `kustomize/kustomization.yaml` - contains the `Kustomization` manifest that defines the different resources - that are managed by this `Kustomization`. - -### 3. Clone the Infra Repository - -Clone the `flux-infrastructure` repository to your local machine: - -```bash -git clone https://github.com/$GITHUB_USER/flux-infrastructure - -cd flux-infrastructure -``` - -### 4. 
Add the Repository (atlas-flux-demo repository) to Flux - -Create a GitRepository manifest pointing to `atlas-flux-demo` repository’s main branch: - -```bash -flux create source git atlas-operator-flux-demo --url=https://github.com/$GITHUB_USER/atlas-operator-flux-demo --branch=main --interval=30s --export > ./clusters/dev/atlas-operator-flux-demo-source.yaml -``` - -The output is similar to: - -```yaml -apiVersion: source.toolkit.fluxcd.io/v1 -kind: GitRepository -metadata: - name: atlas-operator-flux-demo - namespace: flux-system -spec: - interval: 30s - ref: - branch: main - url: https://github.com/$GITHUB_USER/atlas-operator-flux-demo -``` - -Commit and push the `atlas-flux-demo-source.yaml` file to the `flux-infrastructure` repository: - -```bash -git add -A && git commit -m "Add atlas-flux-demo GitRepository" - -git push -``` - -## Implement a Continuous Deployment (CD) Flow - -It's time to configure Flux to build and apply the kustomize directory located in the `atlas-operator-flux-demo` repository. - -Use the `flux create` command to create a `Kustomization` that applies the `atlas-operator-flux-demo` deployment. - -```bash -flux create kustomization atlas-flux-demo \ - --target-namespace=default \ - --source=atlas-operator-flux-demo \ - --path="./kustomize" \ - --prune=true \ - --interval=5m \ - --export > ./clusters/dev/atlas-operator-flux-demo-kustomization.yaml -``` - -The output is similar to: - -```yaml -apiVersion: kustomize.toolkit.fluxcd.io/v1 -kind: Kustomization -metadata: - name: atlas-flux-demo - namespace: flux-system -spec: - interval: 5m0s - path: ./kustomize - prune: true - sourceRef: - kind: GitRepository - name: atlas-operator-flux-demo - targetNamespace: default -``` - -The structure of the `flux-infrastructure` repo should be similar to: - -```bash -flux-infrastructure -└── clusters/ - └── dev/ - ├── flux-system/ - │ ├── gotk-components.yaml - │ ├── gotk-sync.yaml - │ └── kustomization.yaml - ├── atlas-flux-demo-kustomization.yaml - └── atlas-flux-demo-source.yaml -``` - -## Implement the Deployment Flow - -To implement the deployment flow in a specific ordered sequence, we will use Flux CD’s `.spec.dependsOn` -and `.spec.healthChecks` features. 
- -Edit the `atlas-operator-flux-demo-kustomization.yaml` file as follows: - -```yaml ---- -apiVersion: kustomize.toolkit.fluxcd.io/v1 -kind: Kustomization -metadata: - name: mysql - namespace: flux-system -spec: - interval: 30s - path: ./kustomize - prune: true - sourceRef: - kind: GitRepository - name: atlas-operator-flux-demo - targetNamespace: default - // highlight-start - healthChecks: - - apiVersion: apps/v1 - kind: Deployment - name: mysql - namespace: default - // highlight-end ---- -apiVersion: kustomize.toolkit.fluxcd.io/v1 -kind: Kustomization -metadata: - name: app-schema - namespace: flux-system -spec: - // highlight-start - dependsOn: - - name: mysql - // highlight-end - interval: 30s - path: ./kustomize - prune: true - sourceRef: - kind: GitRepository - name: atlas-operator-flux-demo - targetNamespace: default - // highlight-start - healthChecks: - - apiVersion: db.atlasgo.io/v1alpha1 - kind: AtlasSchema - name: app-schema - namespace: default - // highlight-end ---- -apiVersion: kustomize.toolkit.fluxcd.io/v1 -kind: Kustomization -metadata: - name: nginx - namespace: flux-system -spec: - // highlight-start - dependsOn: - - name: app-schema - // highlight-end - interval: 30s - path: ./kustomize - prune: true - sourceRef: - kind: GitRepository - name: atlas-operator-flux-demo - targetNamespace: default -``` - -Notice the highlighted sections in the manifest above: -* We define health checks for the `mysql` deployment and the `app-schema` Atlas CR. This instructs Flux CD - explicitly when to consider each of these resources as ready. -* We declare the dependency of the `app-schema` Atlas CR on the `mysql` deployment. This ensures that the - `mysql` deployment is ready before the `app-schema` Atlas CR is applied. In addition, we declare the dependency - of the `nginx` deployment on the `app-schema` Atlas CR. This ensures that the `app-schema` Atlas CR is ready - before the `nginx` deployment is applied. - -Together, these declarations achieve our requirement of ensuring that the database schema is applied before -the application is deployed. - -Commit and push the `Kustomization` manifest to the repository: - -```bash -git add -A && git commit -m "Add atlas-flux-demo Kustomization" - -git push -``` - -## Watch Flux sync the Application - -Use the `flux get` command to watch the deployment flow. - -```bash -flux get kustomizations --watch -``` - -This command allows you to fetch and observe the status of `Kustomize` resources managed by Flux in your -Kubernetes cluster, with real-time updates as changes are made. - -To check whether the schema migrations have been successfully applied, run: - -```bash -kubectl exec -it $(kubectl get pods -l app=mysql -o jsonpath='{.items[0].metadata.name}') -- mysql -uroot -ppass -e "describe example.users" -``` - -Our schema migrations have been successfully applied: - -```text -+-----------+--------------+------+-----+---------+----------------+ -| Field | Type | Null | Key | Default | Extra | -+-----------+--------------+------+-----+---------+----------------+ -| id | int | NO | PRI | NULL | auto_increment | -| name | varchar(255) | NO | | NULL | | -| email | varchar(255) | NO | UNI | NULL | | -| short_bio | varchar(255) | NO | | NULL | | -+-----------+--------------+------+-----+---------+----------------+ -``` - -### Make a change to the Application Database Schema - -To show how the continuous deployment flow works, let's make a change to the database schema. 
- -Open the `kustomize/schema.yaml` file and add a column to the `users` table in the `AtlasSchema` manifest: - -```yaml -apiVersion: db.atlasgo.io/v1alpha1 -kind: AtlasSchema -metadata: - name: app-schema -spec: - url: mysql://root:pass@mysql:3306/example - schema: - sql: | - create table users ( - id int not null auto_increment, - name varchar(255) not null, - email varchar(255) unique not null, - short_bio varchar(255) not null, - // highlight-next-line - new_col varchar(255) not null, - primary key (id) - ); -``` - -Commit and push the change to the repository: - -```bash -git add kustomize/schema.yaml -git commit -m "Add new column to users table" -git push -``` - -Next, let's wait for Flux to sync the changes, and check that our schema migrations have been successfully applied: - -```text -+-----------+--------------+------+-----+---------+----------------+ -| Field | Type | Null | Key | Default | Extra | -+-----------+--------------+------+-----+---------+----------------+ -| id | int | NO | PRI | NULL | auto_increment | -| name | varchar(255) | NO | | NULL | | -| email | varchar(255) | NO | UNI | NULL | | -// highlight-next-line -| short_bio | varchar(255) | NO | | NULL | | -| new_col | varchar(255) | NO | | NULL | | -+-----------+--------------+------+-----+---------+----------------+ -``` -Amazing, our schema migrations have been successfully applied! - -## Conclusion - -In this guide, we demonstrated how to use Flux CD to deploy an application that uses the -Atlas Operator to manage the lifecycle of a database schema. We also showed how to use Flux -dependency management to ensure that the schema changes were successfully applied before -deploying the application. diff --git a/doc/md/guides/deploying/k8s-operator-certs.mdx b/doc/md/guides/deploying/k8s-operator-certs.mdx deleted file mode 100644 index 59fe593e404..00000000000 --- a/doc/md/guides/deploying/k8s-operator-certs.mdx +++ /dev/null @@ -1,63 +0,0 @@ ---- -id: k8s-operator-certs -title: Using SSL Certs with the Atlas Operator -slug: /guides/deploying/k8s-operator-certs ---- -Many modern databases support SSL/TLS encryption for secure communication between clients and the database. -In this document we provide some basic guidance on how to use SSL/TLS certificates with the -[Atlas Operator](/integrations/kubernetes/operator) on Kubernetes. - -## Step 1: Create a Secret for the SSL/TLS Certificates - -The first step is to create a Kubernetes Secret that contains the SSL/TLS certificates. If you are using -a Kubernetes Operator that supports automatically creating certificates such as the -[CockroachDB Operator](https://github.com/cockroachdb/cockroach-operator), you can use the certificates -created by that Operator. - -Here is an example of how to create a Secret with SSL/TLS certificates: - -```shell -kubectl create secret generic my-secret \ - --from-file=ca.crt=./path/to/ca.crt \ - --from-file=tls.key=./path/to/tls.key \ - --from-file=tls.crt=./path/to/tls.crt -``` - -This will create a Secret named `my-secret` with the SSL/TLS certificates. - -## Step 2: Mount the Certificates into the Atlas Operator - -The next step is to mount the SSL/TLS certificates into the Atlas Operator. 
To do this, create
-a file named `values.yaml` with the following content:
-
-```yaml
-extraVolumes:
-  - name: certs
-    secret:
-      secretName: my-secret
-      defaultMode: 0640
-extraVolumeMounts:
-  - name: certs
-    mountPath: /certs
-    readOnly: true
-```
-
-Now, install the operator using this `values.yaml` file:
-
-```shell
-helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator -f values.yaml
-```
-
-This will install the Atlas Operator, overriding the `extraVolumes` and `extraVolumeMounts` values to mount the
-SSL/TLS certificates into the Operator.
-
-## Step 3: Use the Certificates in the Database URL
-
-The final step is to use the SSL/TLS certificates in the database [URL](/concepts/url). For example, if you are using the
-PostgreSQL or CockroachDB databases, you can use the following database URL:
-
-```shell
-postgresql://username@hostname:port/database?sslmode=verify-full&sslcert=/certs/tls.crt&sslkey=/certs/tls.key&sslrootcert=/certs/ca.crt
-```
-
-To learn more about how to securely provide the database URL to the operator, see the [docs](/integrations/kubernetes/operator#providing-credentials).
\ No newline at end of file
diff --git a/doc/md/guides/deploying/secrets.mdx b/doc/md/guides/deploying/secrets.mdx
deleted file mode 100644
index 558dbec4fa6..00000000000
--- a/doc/md/guides/deploying/secrets.mdx
+++ /dev/null
@@ -1,318 +0,0 @@
----
-id: secrets
-title: Working with secrets
-slug: /guides/deploying/secrets
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-
-Database credentials are considered sensitive information and should be treated as such. In this guide,
-we will show how to use Atlas to handle database credentials in a secure manner. We will present two strategies
-for handling database credentials, and show how to use Atlas to implement them: IAM Authentication and Secret Stores.
-
-## Secret Stores
-
-Secret stores are systems or services that allow users to store and retrieve sensitive information. The main features
-of secret stores are encryption, access control, and auditing. All cloud providers offer some form of secret store
-service, and there are also many open-source alternatives.
-
-When working with secret stores, Atlas assumes that the secret store is already provisioned and configured. Atlas
-supports the following secret stores:
-
-* [AWS Secrets Manager](https://aws.amazon.com/secrets-manager)
-* [AWS Systems Manager Parameter Store](https://aws.amazon.com/systems-manager/features/#Parameter_Store)
-* [GCP Secret Manager](https://cloud.google.com/secret-manager)
-
-Support for other secret stores is planned; if you have a specific request, please [open an issue](https://github.com/ariga/atlas/issues/new).
-
-## IAM Authentication
-
-IAM authentication is a mechanism that allows users to authenticate to a database using their cloud provider credentials.
-Its main advantage is that it allows users to avoid storing database credentials altogether. Although setting this up may
-be more cumbersome, it is considered a best practice by many cloud providers and is more secure than using passwords:
-even strong passwords stored in encrypted form can be leaked and used by attackers.
-
-IAM authentication is currently supported on GCP and AWS.
Support for other cloud providers is planned as well, -if you have a specific request, please [open an issue](https://github.com/ariga/atlas/issues/new). - -## Retrieving Credentials from a Secret Store - -Atlas can retrieve information from a secret store at runtime using the `runtimevar` data source. The `runtimevar` -data source uses the [`runtimevar` package](https://gocloud.dev/howto/runtimevar/) from the Go [CDK](https://gocloud.dev/). -To read more about using `runtimevar` with Atlas, view the [data source documentation](atlas-schema/projects#data-source-runtimevar). - - - - -1. Create a secret a secret to store the database password using the AWS CLI: - - ```bash - aws secretsmanager create-secret \ - --name db-pass-demo \ - --secret-string "p455w0rd" - ``` - - The CLI prints out: - - ```text - { - "ARN": "arn:aws:secretsmanager:us-east-1:1111111111:secret:db-pass-demo-aBiM5k", - "Name": "db-pass-demo", - "VersionId": "b702431d-174f-4a8f-aee5-b80e687b8bf1" - } - ``` - - Note the database secret name and the region (`us-east-1`), we will use them in the next part. - -2. Create a new file named `atlas.hcl` with the following contents: - - ```hcl - data "runtimevar" "pass" { - url = "awssecretsmanager://db-pass-demo?region=us-east-1" - } - - env "dev" { - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" - } - ``` - - Let's breakdown the configuration: - * The `runtimevar` data source is used to retrieve the database password from AWS Secrets Manager. - * We define an `env` named `dev`. The value retrieved by the `runtimevar` data source - is interpolated into the `url` attribute using the `${data.runtimevar.pass}` expression. - -3. Run `atlas schema inspect --env dev` to verify that Atlas is able to connect to the database. - -:::note - -If you using [RDS Password Management](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html#:~:text=RDS%20automatically%20generates%20database%20credentials,access%20and%20plain%20text%20view.), -RDS will maintain a secret in JSON format similar to: -```json -{ - "username": "admin", - "password": "p455w0rd" -} -``` - -To decode the json payload and retrieve the password from it, use the `jsondecode` standard lib func. -also notice this password may contain special characters and therefore must be escaped using the `urlescape` func. - -Use the next `atlas.hcl` file as an example: - -```hcl -data "runtimevar" "pass" { - url = "awssecretsmanager://?region=us-east-1" -} -locals { - pass = jsondecode(data.runtimevar.pass).password -} -env "dev" { - url = "mysql://root:${urlescape(local.pass)}@host:3306/database" -} -``` - -::: - - - - -1. Create a encrypted parameter to store the database password using the AWS CLI: - - ```bash - aws ssm put-parameter \ - --name "db-pass-demo" \ - --value "p455w0rd" \ - --region "us-east-1" \ - --type "SecureString" \ - --tags "Key=Env,Value=AtlasDemo" - ``` - - The CLI prints out: - - ```text - { - "Version": 1, - "Tier": "Standard" - } - ``` - - Note the database parameter name and the region (`us-east-1`), we will use them in the next part. - -2. Create a new file named `atlas.hcl` with the following contents: - - ```hcl - data "runtimevar" "pass" { - url = "awsparamstore://db-pass-demo?region=us-east-1&decoder=string" - } - - env "dev" { - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" - } - ``` - - Let's breakdown the configuration: - * The `runtimevar` data source is used to retrieve the database password from AWS Parameter Store. 
- * We define an `env` named `dev`. The value retrieved by the `runtimevar` data source - is interpolated into the `url` attribute using the `${data.runtimevar.pass}` expression. - -3. Run `atlas schema inspect --env dev` to verify that Atlas is able to connect to the database. - - - - -1. Create a secret a secret to store the database password using the GCP CLI: - - ```bash - printf "p455w0rd" | gcloud secrets create db-pass-demo --data-file=- - ``` - - The CLI prints out: - - ```text - Created version [1] of the secret [db-pass-demo]. - ``` - -2. Create a new file named `atlas.hcl` with the following contents: - - ```hcl - data "runtimevar" "pass" { - url = "gcpsecretmanager://projects/my-project/secrets/db-pass-demo" - } - - env "dev" { - url = "mysql://root:${data.runtimevar.pass}@host:3306/database" - } - ``` - - Let's breakdown the configuration: - * The `runtimevar` data source is used to retrieve the database password from GCP Secret Manager. The URL - is composed of the project and secret name. If you are working locally in a multi-project environment, you - can find out the name of the active project by running `gcloud config get-value project`. - * We define an `env` named `dev`. The value retrieved by the `runtimevar` data source - is interpolated into the `url` attribute using the `${data.runtimevar.pass}` expression. - -3. Run `atlas schema inspect --env dev` to verify that Atlas is able to connect to the database. - - - - -## Using IAM Authentication - -Atlas can retrieve short-lived credentials from the cloud provider and use them to connect to the database. The -passwords are retrieved using special data sources that are specific to each cloud provider. - - - - -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the AWS documentation](https://aws.github.io/aws-sdk-go-v2/docs/sdk-utilities/rds/#iam-authentication). - -2. Create a database user and grant it permission to authenticate using IAM, see - [the AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html) - for instructions. - -3. Create an IAM role with the "rds-db:connect" permission for the specific database and user. For instructions on how to do this, - [see the AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html). - -4. Create a new file named `atlas.hcl` with the following contents: - - - - - ```hcl - locals { - user = "iamuser" - endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:5432" - } - - data "aws_rds_token" "db" { - region = "us-east-1" - endpoint = local.endpoint - username = local.user - } - - env "rds" { - url = "postgres://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}/postgres" - } - ``` - - - - - ```hcl - locals { - user = "iamuser" - endpoint = "hostname-of-db.example9y7k.us-east-1.rds.amazonaws.com:3306" - } - - data "aws_rds_token" "db" { - region = "us-east-1" - endpoint = local.endpoint - username = local.user - } - - env "rds" { - url = "mysql://${local.user}:${urlescape(data.aws_rds_token.db)}@${local.endpoint}?tls=preferred&allowCleartextPasswords=true" - } - ``` - - - - - Let's breakdown the configuration: - * The `aws_rds_token` data source is used to retrieve the database password from AWS Secrets Manager. - * We define an `env` named `rds`. The value retrieved by the `aws_rds_token` data source - is interpolated into the `url` attribute using the `${data.aws_rds_token.db}` expression. 
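-
-As with the secret store examples above, you can verify that the token is issued and that Atlas can reach the database by
-running an inspection against this environment. A minimal check, assuming the `rds` env defined above:
-
-```bash
-atlas schema inspect --env rds
-```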
- - - - -The `gcp_cloudsql_token` data source generates a short-lived token for an [GCP CloudSQL](https://cloud.google.com/sql) database -using [IAM Authentication](https://cloud.google.com/sql/docs/mysql/authentication#manual). - -To use this data source: - -1. Enable IAM Authentication for your database. For instructions on how to do this, - [see the GCP documentation](https://cloud.google.com/sql/docs/mysql/create-edit-iam-instances). -2. Create a database user and grant it permission to authenticate using IAM, see - [the GCP documentation](https://cloud.google.com/sql/docs/mysql/add-manage-iam-users) - for instructions. -3. Create a file named `atlas.hcl` with the following contents: - - ```hcl title="atlas.hcl" - locals { - user = "iamuser" - endpoint = "34.143.100.1:3306" - } - - data "gcp_cloudsql_token" "db" {} - - env "cloudsql" { - url = "mysql://${local.user}:${urlescape(data.gcp_cloudsql_token.db)}@${local.endpoint}/?allowCleartextPasswords=1&tls=skip-verify&parseTime=true" - } - ``` - - :::note - The `allowCleartextPasswords` and `tls` parameters are required for the MySQL driver to connect to CloudSQL. For PostgreSQL, use `sslmode=require` to connect to the database. - ::: - - Let's breakdown the configuration: - * The `gcp_cloudsql_token` data source is used to retrieve the database password from GCP CloudSQL. - * We define an `env` named `cloudsql`. The value retrieved by the `gcp_cloudsql_token` data source - is interpolated into the `url` attribute using the `${data.gcp_cloudsql_token.db}` expression. - - - diff --git a/doc/md/guides/docker.mdx b/doc/md/guides/docker.mdx deleted file mode 100644 index 50e05af7747..00000000000 --- a/doc/md/guides/docker.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -title: Running Atlas in Docker -id: atlas-in-docker -slug: atlas-in-docker ---- -Atlas ships as a set of official [Docker Images](https://hub.docker.com/r/arigaio/atlas) for you to use. - -To run Atlas in Docker, execute: - -```shell -docker run --rm -it arigaio/atlas:latest-alpine -``` - -Depending on your use case, you may want to use a different image type: - -| Base Image | Image Tags | Purpose | -|------------|-------------------------------|------------------------------------------------| -| Distroless | `latest`, `latest-distroless` | Bare bone image containing only Atlas | -| Alpine | `latest-alpine` | Alpine based image, with basic shell (/bin/sh) | - -## Common Issues - -### `Use 'atlas login' to access this feature` - -Atlas is an open-core project, with some features available only to signed-in users. To use these features, you -must sign in to Atlas. To sign in: - -1. Run: - - ```shell - docker run --rm -it \ - //highlight-next-line - -v ~/.atlas:/root/.atlas \ - arigaio/atlas:latest login - ``` - -2. Atlas will provide you with a URL to visit in your browser: - - ``` - Please visit: - - https://auth.atlasgo.cloud/login?cli=ade66529-e6c0-4c56-8311-e23d0efe9ee9&port=33281 - - Follow the instructions on screen. (Hit to manually provide the code.) - ``` - -3. Visit the URL in your browser and follow the on-screen instructions. - -4. Copy the code provided by Atlas Cloud: - ![](https://atlasgo.io/uploads/docker-guide/copy-this-code.png) - -5. Paste the code back into the terminal where you ran `atlas login` and hit ``: - ``` - Please enter the auth code: - ``` - -6. Atlas will verify your code and provide you with a success message: - ``` - You are now connected to acme-corp-1337-ltd on Atlas Cloud. - ``` - -7. You can now use Atlas features that require authentication. 
Use the `-v ~/.atlas:/root/.atlas` flag to persist your - login credentials across Docker runs. For example: - - ```shell - docker run --rm -it \ - //highlight-next-line - -v ~/.atlas:/root/.atlas \ - arigaio/atlas:latest-alpine schema inspect --url "" - ``` - -### `"docker": executable file not found in $PATH` - -Atlas heavily relies on the presence of a [Dev Database](/concepts/dev-database) for various calculations -and schema normalization. To use a Dev Database, users provide Atlas with the URL to connect to an empty -database of the type they wish to operate on. - -To streamline work with Dev Databases, Atlas provides a convenience driver named `docker://`, in which Atlas -depends on the Docker CLI command `docker` to be present in the runtime environment. Running Docker-in-Docker -is a notoriously nuanced topic and so we do not ship `docker` in the distributed Atlas images. - -For this reason, Atlas users who wish to run Atlas in Docker, cannot, by default use the `docker://` driver. - -#### Workaround: Spin up a local database container and use it - -A common workaround is to spin up a local, empty database container and connect to it. - -1. Create a Docker Network to establish network connectivity between your local DB and Atlas: - ``` - docker network create db-network - ``` -2. Run the database: - ``` - docker run --name pg-dev-db --network db-network -e POSTGRES_PASSWORD=mysecretpassword -d postgres:16 - ``` -3. Use the new dev db: - ``` - docker run --rm --network db-network \ - -v $PWD:/data \ - arigaio/atlas migrate diff \ - --to file:///data/schema.hcl \ - --dir file:///data/migrations \ - --dev-url "postgres://postgres:mysecretpassword@pg-dev:5432/postgres?sslmode=disable" - ``` - Note a few things about this command: - * We use the `--network` flag to use the network we created for our dev database on step 1. - * We mount our local dir as `/data` - * We use the URL for our dev database as the `--dev-url` flag, note that the hostname `pg-dev` was specified in step - 2 as the container name. diff --git a/doc/md/guides/evaluation/01-intro.md b/doc/md/guides/evaluation/01-intro.md deleted file mode 100644 index ef508dfab1d..00000000000 --- a/doc/md/guides/evaluation/01-intro.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -id: intro -title: Evaluating Atlas for your Use Case -slug: /guides/evaluation/intro -sidebar_label: Introduction ---- -### Who is this guide for? - -This guide is for developers, architects, and DevOps engineers who are evaluating Atlas for their use case. -It will help you understand the core concepts of Atlas and how it can be used to manage your database schema. - -### What is Atlas? - -Atlas is a language-agnostic tool for managing and migrating database schemas using modern DevOps principles. - -### What's in this guide? - -This guide is divided into the following sections: - -1. **Installation** - How to install Atlas and verify that it is able to understand your database. -2. **Setting Up** - Setting up your project's structure for local work. -3. **Developer Workflow** - Review working with Atlas from the end-user perspective. -4. **Setting Up CI Pipelines** - How to integrate Atlas into your CI/CD pipelines. -5. **Security Checklist** - Things you should consider for when designing deployment pipelines. -6. **Migrating Existing Projects** - How to migrate existing projects to Atlas. - -### Need help? 
- -If you are evaluating Atlas for commercial use, you might be eligible for free PoC support -from the team developing Atlas including a dedicated, private Slack Connect channel -to unblock you in your research - -To apply for free PoC support [Click Here](#) \ No newline at end of file diff --git a/doc/md/guides/evaluation/02-installation.mdx b/doc/md/guides/evaluation/02-installation.mdx deleted file mode 100644 index 3f20f08d7c8..00000000000 --- a/doc/md/guides/evaluation/02-installation.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -id: install -title: Installing Atlas -slug: /guides/evaluation/install -sidebar_label: Installing Atlas ---- - -import Installation from '../../components/_installation_instructions.mdx'; - -### Install Atlas Locally - -To get started with your Atlas evaluation, install the Atlas CLI on your local machine. - - - -### Create an Atlas Account - -To get the most of your Atlas evaluation, create an Atlas account and use the CLI to log in. - -It is possible to use Atlas without logging in, but you will only be able to use a subset of its full -feature set. For a more detailed comparison of the features available to logged-in and anonymous users, -see the [Atlas Feature Comparison](/features) guide. - -:::info Data Privacy - -We do not collect your schemas or any data from your database unless you explicitly decide send it to us. -To learn more about how the CLI handles your data head over to the [CLI Data Privacy](/cli/data-privacy) -section. - -::: - -To log in to your account or create a new one, run the following command and follow the instructions: - -```bash -atlas login -``` \ No newline at end of file diff --git a/doc/md/guides/evaluation/03-connect.mdx b/doc/md/guides/evaluation/03-connect.mdx deleted file mode 100644 index 07935e3a474..00000000000 --- a/doc/md/guides/evaluation/03-connect.mdx +++ /dev/null @@ -1,173 +0,0 @@ ---- -id: connect -title: Connect to your database ---- - -:::info - -In this section we perform read-only operations and make no changes to your database. - -::: - -With the Atlas CLI installed and logged-in to your account, let's now make sure we can connect to your database -with the CLI. - -## Step 1: Determine the database's URL - -Atlas uses a standard URL format to connect to databases and load schemas and migrations from various sources. The format -below covers the supported parts of a URL, with subsequent sections providing more detailed examples. - -``` -driver://[username[:password]@]address/[schema|database][?param1=value1&...¶mN=valueN] -``` - -Here are some popular examples you can use: - -| Engine | | -|---------------|--------------------------------------------------------------------------| -| PostgreSQL | `postgres://root:pass@localhost:5432/database?search_path=public` | -| MySQL | `mysql://root:pass@localhost:3306/app_schema` | -| SQL Server | `sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=schema` | -| ClickHouse | `clickhouse://user:pass@localhost:9000/schema?secure=true` | -| More examples | In the [URL Docs](/concepts/url) | - -## Step 2: Choose connection scope - -### Schema vs Database-scope - -Most databases provide a way for users to namespace resources. This feature is commonly named `schemas`. -For instance, PostgreSQL and MySQL users may create additional schemas in a target database using a command such as: - -```sql -CREATE SCHEMA auth; -``` - -Going forward, resources may be created under the `auth` schema, in a few ways: - -1. 
By qualifying the resource name with the schema name, so the database knows which specific - resource we are talking about, for example: - ```sql - create table `auth`.`session_log` ( - -- ... - ) - ``` -2. By using the `USE ` statement before executing the relevant statement. For example: - ``` - USE `auth`; - create table `session_log` ( - -- ... - ) - ``` - -### Which kind of scope should you use? - -Depending on your use-case, you may require a schema or database scope connection. The following table summarizes -some of these cases. - -| Use-case | Scope to use | -|----------------------------------------------------------|----------------| -| All of my tables are in one schema | Schema-scope | -| I want to use qualified identifiers (e.g `auth`.`users`) | Database-scope | -| I want to use database `EXTENSIONS` (Postgres) [Why?](/atlas-schema/hcl#extension) | Database-scope | -| I want to see the entire database | Database-scope | - -### Scoped connections - -By default, Atlas will create a database-scoped connection. However, it is possible to limit the scope of -the connection by providing it within the connection URL. For example: - - | Engine | Schema-scope | Database-scope | - |---------------|-------------------------------------------------------|-----------------------------| - | PostgreSQL | `postgres://localhost:5432/database?search_path=public` | `postgres://:5432/database` | - | MySQL | `mysql://localhost:3306/app_schema` | `mysql://localhost:3306/` | - | ClickHouse | `clickhouse://localhost:9000/schema` | `clickhouse://localhost:9000` | - | SQL Server | `sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=schema` | `sqlserver://sa:P@ssw0rd0995@localhost:1433?database=master&mode=database` | - | More examples | In the [URL Docs](/concepts/url) | | - -## Test your connection - -Once you have determined your full connection URL including its scope, you can test that Atlas is able to -connect to the database using the following command: - -```bash -atlas schema inspect --url "" --format "OK" -``` - -Be sure to replace `` with your database's URL. - -If Atlas is able to connect to the given URL you should see the following output: - -``` -OK -``` - -### Troubleshooting - -#### My password contains non-alphanumeric characters - -Atlas uses standard [RFC 3986](https://datatracker.ietf.org/doc/html/rfc3986) URLs, which means -that "userinfo" section (which contains user and password) needs to be "pct-encoded" - commonly referred to -as [URI encoding](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/encodeURIComponent). - -For example, `Pa$$w@rd` becomes `Pa%24%24w%40rd` and `a%b%c` becomes `a%25b%25c`. - -In case your password contains non-alphanumeric characters (as is often required by system administrators who want to -increase minimal password complexity), you must encode your password before using it in your Atlas URL. - -Suppose your database password is `BnB+PjA`, attempting to use this password without encoding it first will result in -an error message similar to: - -``` -Error: mysql: query system variables: dial tcp: lookup root:BnB+PjA:3309: no such host -``` - -To solve this, first encode your password using URL encoding. 
For example: - -```python -import urllib.parse -pw = "BnB+PjA" -print(urllib.parse.quote(pw)) -``` -This program prints: -``` -BnB%2BPjA -``` - -#### My connection times out - -When you try to connect to your database using Atlas, you might encounter an error message similar to: - -``` -Error: postgres: scanning system variables: dial tcp 10.0.5.243:5432: connect: operation timed out -``` - -Or - -``` -Error: mysql: query system variables: dial tcp 10.0.5.243:5432: connect: operation timed out -``` - -These errors suggest that Atlas is attempting to connect to the database but is unable to establish -a network connection to the remote database at all. - -To resolve this error, ensure that you can establish a network connection to the database from your -workstation. This might involve: - -* Connecting to your company's VPN which has access to the database -* Connecting through a bastion server -* Changing firewall rules for the database to allow a connection from your machine. - -:::info Other Issues - -If you encounter any issues during this step, don't hesitate to reach out to us. - -#### Commercial PoC - -Please reach out to us via our shared Slack Connect channel. - -#### Independent PoC - -Please use our [Community Support](/support) channels to contact our team. If possible, -please supply us with a minimal example schema to reproduce the issue. - -::: \ No newline at end of file diff --git a/doc/md/guides/evaluation/04-verify-atlas.mdx b/doc/md/guides/evaluation/04-verify-atlas.mdx deleted file mode 100644 index c2fd61a4e1b..00000000000 --- a/doc/md/guides/evaluation/04-verify-atlas.mdx +++ /dev/null @@ -1,107 +0,0 @@ ---- -id: verify-atlas -title: Verify Atlas understands your Schema ---- - -After successfully connecting to your database using Atlas, our next step will be to verify that Atlas -is able to inspect your database schema and that it correctly normalizes it such that if re-applied to the -database no diff will be detected. - -### Step 1: Export your database URL as an env var - -To make the examples in this section brief, start by setting a local environment variable -containing the URL you determined for your database: - -``` -export DB_URL="" -``` - -### Step 2: Store the schema in a local file - -Use the Atlas `schema inspect` command to connect to your database, calculate its schema graph -and store it in a local file: - -``` -atlas schema inspect --url "$DB_URL" > schema.hcl -``` - -If everything worked correctly, you should find a file named `schema.hcl` in your current working -directory which contains the [Atlas DDL](/atlas-schema/hcl) representation of your schema. - -### Step 3: Determine your Dev Database URL - -:::info - -Notice that when we discuss a **Dev Database** in the Atlas documentation we **DO NOT** refer to the -local database you use for development, but to a different concept explained below. - -::: - -To operate correctly, Atlas utilizes a [Dev Database](/concepts/dev-database) to normalize and verify -schemas. Essentially, a dev-database is an *empty* database of the same type and version that you use -in production. When Atlas runs, it may run some operations against this database and is responsible -for cleaning up after and leaving the database in an empty state. - -When working with Atlas, you can bring your own dev database, but most Atlas users prefer to use -Atlas's built-in `docker://` driver which will spin up a local, ephemeral Docker container and dispose -of it after for you. 
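-
-If you want a quick sanity check that Docker is available and that Atlas can spin up such an ephemeral container, you can reuse
-the `--format "OK"` trick from the previous section against a `docker://` URL. A sketch, assuming a PostgreSQL dev database
-(adjust the URL to your engine and version):
-
-```shell
-atlas schema inspect --url "docker://postgres/16/dev" --format "OK"
-```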
- -As we mentioned in the [previous section](03-connect.mdx#schema-vs-database-scope), Atlas operates -differently if configured to use a database-scope vs a schema-scope. Be sure to use the same kind of -scope for your dev-database to avoid miscalculations and other trouble. The following table summarizes -some commonly used URLs. - -| Engine | Scope | URL Example | -|------------|----------------|---------------------------------------------| -| MySQL | Schema-scope | `docker://mysql/8/dev` | -| | Database-scope | `docker://mysql/8` | -| PostgreSQL | Schema-scope | `docker://postgres/16/dev?search_path=public` | -| | Database-scope | `docker://postgres/16/dev` | -| ClickHouse | Schema-scope | `docker://clickhouse/23.11/dev` | -| | Database-scope | `docker://clickhouse/23.11` | -| SQL Server | Schema-scope | `docker://sqlserver/2022-latest/dev?mode=schema` | -| | Database-scope | `docker://sqlserver/2022-latest/dev?mode=database` | -| Others | | Find more examples in the [Dev Database](/concepts/dev-database) guide. | - -### Step 4: Verify Zero Diff - -Next, use Atlas's `schema diff` command to check that Atlas sees no difference between your inspected schema -as it is represented in the `schema.hcl` file and the actual database schema: - -```shell -atlas schema diff --dev-url --from file://schema.hcl --to "$DB_URL" -``` - -Be sure to replace `` with the Dev Database URL you determined in step 3. - -If everything works correctly, Atlas should print out a message similar to: - -``` -Schemas are synced, no changes to be made. -``` - -### Step 5 (Optional): Manual Verification - -If your database schema contains resources with esoteric or uncommon database features, you may want to manually review -the inspected schema to make sure that it is inspected correctly. - -If you prefer to review this in plain SQL DDL commands instead of the Atlas's HCL syntax, run the following command: - -```shell -atlas schema inspect --url "$DB_URL" --format "{{ sql . }}" -``` - -:::info Other Issues - -If you encounter any issues during this step, don't hesitate to reach out to us. - -#### Commercial PoC - -Please reach out to us via our shared Slack Connect channel. - -#### Independent PoC - -Please use our [Community Support](/support) channels to contact our team. If possible, -please supply us with a minimal example schema to reproduce the issue. - -::: minimal example schema to reproduce the issue. \ No newline at end of file diff --git a/doc/md/guides/evaluation/05-project-structure.mdx b/doc/md/guides/evaluation/05-project-structure.mdx deleted file mode 100644 index 69c39e3f9fa..00000000000 --- a/doc/md/guides/evaluation/05-project-structure.mdx +++ /dev/null @@ -1,69 +0,0 @@ ---- -id: project-structure -title: Project Structure ---- -After successfully verifying Atlas can connect to your database and understand its schema, let's proceed -to setting up your project. - - -A typical project layout may look something similar to this: - -```text -. -├── atlas.hcl -├── migrations -│ ├── 20240520182315.sql -│ ├── 20240520182336.sql -│ └── atlas.sum -└── schema.hcl -``` - -A typical Atlas project comprises 3 important parts: - -* Schema-as-Code - The desired state of your database should be present "as code" in the source control repository of - your project. -* Project Configuration File - this file is typically named `atlas.hcl` and is located in the root directory - of your project. If you are using a "Monorepo" architecture, you will have one of these files per Atlas project. 
-* Migrations - While Atlas supports both purely [Versioned and Declarative](/concepts/declarative-vs-versioned) workflows, currently most teams - that use Atlas in production use the mixed approach where the source of truth for the desired state of the schema - is stored "as code", but migrations are still explicitly managed in a dedicated directory, typically named `migrations/`. - -### Step 1: Create your project config file - -Before moving on to building CI and deployment pipelines, we will focus on building a setup for local development. - -Let's start this by creating our initial project configuration file. - -Create a file named `atlas.hcl` and place it at your project root. - -In this file define an `env` block for local development: - -```hcl -env "local" { - dev = "docker://postgres/16/dev" // <-- Replace your with your dev database URL. -} -``` - -Be sure to replace the `dev` URL with the URL you determined in [Step 3](verify-atlas#step-3-determine-your-dev-database-url) -of the previous section. - -### Step 2: Verify your configuration works - -Let's repeat our verification from the previous section, this time using the `--env` flag instead of spelling out -our Dev Database URL explicitly: - -```shell -atlas schema diff --env local --from file://schema.hcl --to "$DB_URL" -``` - -This command will calculate the diff between the schema which we saved from your live database to `schema.hcl` in -a previous step and the current schema of the database in `$DB_URL`. - -Unless your database schema changed in the interim, you should see the same output you got in the verification step -of the previous section: - -``` -Schemas are synced, no changes to be made. -``` - -In the next section, we will explore how to define the desired schema of your database, as code. \ No newline at end of file diff --git a/doc/md/guides/evaluation/06-schema-as-code.mdx b/doc/md/guides/evaluation/06-schema-as-code.mdx deleted file mode 100644 index 956faa56028..00000000000 --- a/doc/md/guides/evaluation/06-schema-as-code.mdx +++ /dev/null @@ -1,227 +0,0 @@ ---- -id: schema-as-code -title: Your Schema-as-Code ---- -Atlas is built on the concept of "Database Schema-as-Code", where teams define the desired state of their database -as part of their code, and use an automated tool (such as Atlas) to plan, verify and apply schema changes (migrations) -automatically. - -Naturally, the first part of setting up your project is to determine how your "schema as code" project is going to be -structured. Atlas provides a lot of flexibility around how to define this state as we will show below. - -## Step 1: Choose your language - -Atlas offers three main ways to define the desired state of your database schema: - -1. [Atlas DDL](/atlas-schema/hcl) - is an HCL-based syntax (similar to Terraform) for defining database schemas. -For example: -```hcl -schema "default" { -} - -table "users" { - schema = schema.default - column "name" { - type = varchar(100) - } - // ... more -} - ``` -2. [Plain SQL](/atlas-schema/sql) - alternatively, you can use plain SQL DDL commands to define your database -schema. For example: -```sql -CREATE TABLE `users` ( - name varchar(100) -); - ``` -3. [ORMs](/guides#orms) - Atlas can extract the desired database schema from ORM data models, for example: - ```python - from django.db import models - - class User(models.Model): - name = models.CharField(max_length=100) - ``` - -### What should you choose? 
- -As with any important engineering decision, there are pros and cons to every alternative, which you should weigh against -your use case. Let's try to enumerate some benefits and potential issues with each alternative: - -| | SQL | Atlas DDL | ORM | -|----------------|----------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------| -| Familiarity | Teams likely already know how to define SQL tables. | New syntax specific to Atlas, familiar to teams using Terraform. | Developers are used to defining data models in their favorite framework's API. | -| Order | Schema order matters; dependencies between resources require careful attention. | Resources can be defined in any order, providing flexibility. Dependencies resolved by Atlas. | Models can be defined in different files; dependencies are resolved by the ORM engine. | -| Editor Support | Imperative syntax limits IDE features like "Jump to Definition" and code completion. | VSCode and JetBrains support with features like syntax highlighting and auto-completion. | Editor support varies; some frameworks offer type-safe APIs while others use string annotations.| - -#### Familiarity -- **SQL:** Your team probably already knows how to define SQL tables, making SQL a familiar choice for many teams, - leveraging existing knowledge and skills. -- **Atlas DDL:** The syntax is new and specific to Atlas but will feel familiar to teams used to working with Terraform, - balancing the introduction of new concepts with leveraging Terraform's HCL familiarity. -- **ORM:** Your developers are already used to defining data models in their favorite framework's API, which can speed - up development and reduce the learning curve for new team members. - -#### Order - -- **SQL:** Schemas must be replayable on a target database, meaning the order in which statements are written matters. - For example, you cannot define a table that references another table via a foreign key before the dependent table is defined. This requires either disabling foreign key checks or splitting the table definition into a `CREATE` statement followed by an `ALTER` statement, complicating schema management and requiring careful planning. -- **Atlas DDL:** Atlas DDL is based on HCL, the same configuration language used by Terraform. Resources can be defined - in any order, helping with logically splitting resources into different domains and offering more flexibility. - This reduces mental overhead and simplifies operations for end developers. -- **ORM:** This depends on the ORM specifics, but in general, ORM Models can be defined in different files where their - dependencies are resolved by the ORM engine. This modularity simplifies development and maintenance of data models. - -#### Editor Support -- **SQL:** Because SQL DDL is an imperative syntax, important IDE features (such as "Jump to Definition" and - code completion) do not work great. This can hinder development efficiency and increase the potential for errors. -- **Atlas DDL:** Editor support for both VSCode and JetBrains includes syntax highlighting, auto-completion, syntax - checking, jump to definition, multi-file support, and more. This comprehensive support enhances developer productivity and reduces the likelihood of errors. 
-- **ORM:** Editor support greatly varies. Some frameworks provide a 100% type-safe API which the editor understands, - while others heavily rely on string-based annotations. Depending on the framework, developer efficiency and code - robustness may vary. - - -### Must you choose only one? - -In some cases, no alternative checks all the boxes and you need to find a way to build schemas that are -built from multiple building blocks. For example, you may want to use your favorite ORM to describe your schema -only to discover that it lacks support for some needed database functionality (e.g `VIEW` or `TRIGGER` resources). - -In cases like these, we advise using the `composite_schema` data source which can be used to layer multiple schema -definitions one on top of the other. This way you can have your base models defined as _Django Models_, but additional -views and triggers layered on top in Atlas DDL. - -Here's a basic example of how a `composite_schema` might look like: - -``` -data "composite_schema" "project" { - schema "inventory" { - url = data.external_schema.django.url - } - schema "triggers" { - url = "file://path/to/triggers.hcl" - } -} -``` - -To learn more about `composite_schema`, head over to the [documentation](/atlas-schema/projects#data-source-composite_schema) - -## Step 2: Add your schema files - -After you have determined what you would like to use as the desired state for your Database Schema-as-Code -setup with Atlas, let's add the relevant files to our project. - -### Option A: Atlas DDL - -Use the following command to inspect your target database schema to create an initial baseline for your project -with Atlas DDL: - -```shell -atlas schema inspect --url "$DB_URL" --env local > schema.hcl -``` - -This command tells Atlas to connect to the database at `$DB_URL`, inspect it, use the -Dev Database defined in the `local` env. By default, Atlas DDL is used as the output format. -We then redirect the output from Atlas into a new file, `schema.hcl` which we will use the the -source schema for our project. - -Next, update your project configuration file, `atlas.hcl`, to add the `src` attribute which tells -Atlas it can find the desired state of the database in the given path: - -```hcl -env "local" { - //highlight-next-line - src = "file://schema.hcl" - dev = "docker://postgres/16/dev" // <-- Replace your with your dev database URL. -} -``` - -### Option B: Plain SQL - -Use the following command to inspect your target database schema to create an initial baseline for your project -with plain SQL: - -```shell -atlas schema inspect --url "$DB_URL" --env local --format "{{ sql . \" \"}}" > schema.sql -``` - -This command tells Atlas to connect to the database at `$DB_URL`, inspect it, use the -Dev Database defined in the `local` env and format it as plain SQL. We then redirect the output -from Atlas into a new file, `schema.sql` which we will use the the source schema for our project. - -Next, update your project configuration file, `atlas.hcl`, to add the `src` attribute which tells -Atlas it can find the desired state of the database in the given path: - -```hcl -env "local" { - //highlight-next-line - src = "file://schema.sql" - dev = "docker://postgres/16/dev" // <-- Replace your with your dev database URL. 
-} -``` - -### Option C: ORMs - -ORMs require a bit more involvement in setting up our project, so refer to the relevant ORM guide: - -#### Go - -* [GORM](/guides/orms/gorm) -* [Ent](#) -* [Beego](/guides/orms/beego) - -#### Python - -* [SQLAlchemy](/guides/orms/sqlalchemy) -* [Django](/guides/orms/django) - -#### Java - -* [Hibernate](/guides/orms/hibernate) - -#### Node.js - -* [TypeORM](/guides/orms/typeorm) -* [Sequelize](/guides/orms/sequelize) - -All of the examples above are a result of our team's work with our users and customers. -If you are missing something on this list, please don't hesitate to reach out to someone -from our team! - -## Step 3: Verify our setup - -Next, let's verify Atlas is able to read our desired schema, by introducing a new flag for -the `schema inspect` command, `--web`. After running a normal inspection of our database schema, -when provided with this flag, Atlas will offer to visualize our database schema in an online -ER Diagram on Atlas Cloud: - -```shell -atlas schema inspect --env local --url "env://src" --web -``` - -Notice that this command uses `env://src` as the target URL for inspection, meaning "the schema represented by the -`src` attribute of the `local` environment block. - -When you use the `--web` flag you can choose if you want to upload it a public link or privately -to your Atlas Cloud account: - -``` -? Where would you like to share your schema visualization?: - ▸ Publicly (gh.atlasgo.cloud) - Privately (your-account.atlasgo.cloud) -``` - -Public links can be shared with your colleagues without requiring them to have an Atlas Cloud account -while private links can only be accessed from within your account. - -After choosing the option you prefer, if everything worked correctly you should see a diagram similar to -this: - -![](https://atlasgo.io/uploads/erd-example.png) - -:::info Alternative: Text-based Inspection - -If you do not wish to visualize your schema on Atlas Cloud at this point, you can omit the `--web` option -to get a textual representation of your schema. - -::: \ No newline at end of file diff --git a/doc/md/guides/evaluation/07-setup-migrations.mdx b/doc/md/guides/evaluation/07-setup-migrations.mdx deleted file mode 100644 index 9b8ddf57e3f..00000000000 --- a/doc/md/guides/evaluation/07-setup-migrations.mdx +++ /dev/null @@ -1,96 +0,0 @@ ---- -id: setup-migrations -title: Setup Versioned Migrations ---- - -## Intro - -Similarly to other migration tools, in its _Versioned Migrations flow_ Atlas keeps track of schema changes in a -_Migration Directory_. This is a directory containing SQL migration scripts, which are named following this convention - -``` -_