[DOCFIX] Update bin/alluxio usage format #18128

Merged 1 commit on Sep 11, 2023
4 changes: 3 additions & 1 deletion cli/src/alluxio.org/cli/launch/launch.go
@@ -39,7 +39,9 @@ func Run(jarEnvVars map[bool]map[string]string, appendClasspathJars map[string]f
var flagDebugLog bool
rootCmd.PersistentFlags().BoolVar(&flagDebugLog, "debug-log", false, "True to enable debug logging")
var flagIsDeployed bool
rootCmd.PersistentFlags().BoolVar(&flagIsDeployed, "deployed-env", false, "True to set paths to be compatible with a deployed environment")
const deployedEnv = "deployed-env"
rootCmd.PersistentFlags().BoolVar(&flagIsDeployed, deployedEnv, false, "True to set paths to be compatible with a deployed environment")
rootCmd.PersistentFlags().MarkHidden(deployedEnv)

rootCmd.PersistentPreRunE = func(cmd *cobra.Command, args []string) error {
if flagDebugLog {
4 changes: 2 additions & 2 deletions docs/en/api/REST-API.md
@@ -21,8 +21,8 @@ can use in-memory streams, the REST API decouples the stream creation and access
`create` and `open` REST API methods and the `streams` resource endpoints for details).

The HTTP proxy is a standalone server that can be started using
`${ALLUXIO_HOME}/bin/alluxio-start.sh proxy` and stopped using `${ALLUXIO_HOME}/bin/alluxio-stop.sh
proxy`. By default, the REST API is available on port 39999.
`${ALLUXIO_HOME}/bin/alluxio process start proxy` and stopped using `${ALLUXIO_HOME}/bin/alluxio process stop proxy`.
Contributor comment: talked to @JiamingMai, and I think the proxy is no longer supported in 3.0.x.

Contributor (author) reply: we still have the code for the process; is there a plan to remove it?
Should this doc be completely removed or replaced? @LuQQiu, I remember you mentioned the REST API was being replaced; any suggestions on how to handle the doc page?

By default, the REST API is available on port 39999.

There are performance implications of using the HTTP proxy. In particular, using the proxy requires
an extra network hop to perform filesystem operations. For optimal performance, it is recommended to
83 changes: 19 additions & 64 deletions docs/en/contributor/Contributor-Tools.md
@@ -31,11 +31,10 @@ list.

Some source files in Alluxio are generated from templates or compiled from other languages.

1. gRPC and ProtoBuf definitions are compiled into Java source files. Alluxio 2.2 moved generated
gRPC proto source files into `core/transport/target/generated-sources/protobuf/`.
2. Compile time project constants are defined in
`core/common/src/main/java-templates/` and compiled to
`core/common/target/generated-sources/java-templates/`.
1. gRPC and ProtoBuf definitions are compiled into Java source files and generated files are located in `common/transport/target/generated-sources/protobuf/`.
1. Compile time project constants are defined in
`dora/core/common/src/main/java-templates/` and compiled to
`dora/core/common/target/generated-sources/java-templates/`.

You will need to mark these directories as "Generated Sources Root" for IntelliJ to resolve the
source files. Alternatively, you can let IntelliJ generate them and mark the directories
@@ -50,13 +49,12 @@ action from the `Navigate > Search Everywhere` dialog.

##### Start a single master Alluxio cluster
1. Run `dev/intellij/install-runconfig.sh`
2. Restart IntelliJ IDEA
3. Edit `conf/alluxio-site.properties` to contain these configurations
1. Restart IntelliJ IDEA
1. Edit `conf/alluxio-site.properties` to contain these configurations
```properties
alluxio.master.hostname=localhost
alluxio.job.master.hostname=localhost
```
4. Edit `conf/log4j.properties` to print log in console
1. Edit `conf/log4j.properties` to print log in console
Replace the `log4j.rootLogger` configuration with
```properties
log4j.rootLogger=INFO, ${alluxio.logger.type}, ${alluxio.remote.logger.type}, stdout
@@ -68,13 +66,10 @@ action from the `Navigate > Search Everywhere` dialog.
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
```
5. Format the Alluxio master by running `bin/alluxio formatMasters`
6. In Intellij, start Alluxio master process by selecting `Run > Run > AlluxioMaster`
7. In Intellij, start Alluxio job master process by selecting `Run > Run > AlluxioJobMaster`
8. Prepare the RamFS and format the Alluxio Worker with `bin/alluxio-mount.sh SudoMount && bin/alluxio formatWorker`
9. In Intellij, start Alluxio worker process by selecting `Run > Run > AlluxioWorker`
10. In Intellij, start Alluxio job worker process by selecting `Run > Run > AlluxioJobWorker`
11. [Verify the Alluxio cluster is up]({{ '/en/deploy/Get-Started.html#starting-alluxio' | relativize_url }}).
1. Format the Alluxio master by running `bin/alluxio journal format`
1. In Intellij, start Alluxio master process by selecting `Run > Run > AlluxioMaster`
1. In Intellij, start Alluxio worker process by selecting `Run > Run > AlluxioWorker`
1. [Verify the Alluxio cluster is up]({{ '/en/deploy/Get-Started.html#starting-alluxio' | relativize_url }}).
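A quick way to confirm the `conf/alluxio-site.properties` edit from step 3 took effect is to grep the file for the required keys. The sketch below is illustrative only — it writes a throwaway copy of the two properties to a temp file rather than touching a real `conf/alluxio-site.properties`:

```shell
#!/usr/bin/env sh
# Sketch: sanity-check that an alluxio-site.properties file defines the two
# keys required by the steps above. A temp file stands in for the real
# conf/alluxio-site.properties so this is safe to run anywhere.
props=$(mktemp)
cat > "$props" <<'EOF'
alluxio.master.hostname=localhost
alluxio.job.master.hostname=localhost
EOF

result=""
for key in alluxio.master.hostname alluxio.job.master.hostname; do
  if grep -q "^${key}=" "$props"; then
    line="${key}: ok"
  else
    line="${key}: MISSING"
  fi
  echo "$line"
  result="$result$line
"
done
rm -f "$props"
```

The same loop works against the real file by replacing the temp-file setup with `props=conf/alluxio-site.properties`.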

##### Start a High Availability (HA) Alluxio cluster
1. Create journal directories for the masters
@@ -87,67 +82,27 @@ action from the `Navigate > Search Everywhere` dialog.
`alluxio/dev/intellij/runConfigurations/AlluxioMaster_0.xml`.
> Note: If the journal folders already exist and you want to set up a new HA cluster, clear the
> files in the journal folders first.
2. Run `dev/intellij/install-runconfig.sh`
3. Restart IntelliJ IDEA
4. Edit `conf/alluxio-site.properties` to contain these configurations
1. Run `dev/intellij/install-runconfig.sh`
1. Restart IntelliJ IDEA
1. Edit `conf/alluxio-site.properties` to contain these configurations
```properties
alluxio.master.hostname=localhost
alluxio.job.master.hostname=localhost
alluxio.master.embedded.journal.addresses=localhost:19200,localhost:19201,localhost:19202
alluxio.master.rpc.addresses=localhost:19998,localhost:19988,localhost:19978
```
The ports are defined in the run configurations.
5. In Intellij, start the Alluxio master processes by selecting `Run > Run >
1. In Intellij, start the Alluxio master processes by selecting `Run > Run >
AlluxioMaster-0`, `Run > Run > AlluxioMaster-1`, and `Run > Run > AlluxioMaster-2`
6. Prepare the RamFS and format the Alluxio Worker with `bin/alluxio-mount.sh SudoMount && bin/alluxio formatWorker`
7. In Intellij, start the Alluxio worker process by selecting `Run > Run > AlluxioWorker`
8. In Intellij, start the Alluxio job master process by selecting `Run > Run > AlluxioJobMaster`
9. In Intellij, start the Alluxio job worker process by selecting `Run > Run > AlluxioJobWorker`
10. Verify the HA Alluxio cluster is up, by running
`bin/alluxio fsadmin journal quorum info -domain MASTER`, and you will see output like this:
```shell
Journal domain : MASTER
Quorum size : 3
Quorum leader : localhost:19201

STATE | PRIORITY | SERVER ADDRESS
AVAILABLE | 0 | localhost:19200
AVAILABLE | 0 | localhost:19201
AVAILABLE | 0 | localhost:19202
```
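The comma-separated lists in `alluxio.master.embedded.journal.addresses` and `alluxio.master.rpc.addresses` pair up positionally: the i-th master uses the i-th journal port and the i-th RPC port. The sketch below is purely illustrative (ports copied from the properties example above; the `AlluxioMaster-N` names mirror the run configurations) and makes the pairing explicit:

```shell
#!/usr/bin/env sh
# Sketch: show how the comma-separated HA address lists pair up per master.
# Ports are copied from the alluxio-site.properties example above.
journal_addrs="localhost:19200,localhost:19201,localhost:19202"
rpc_addrs="localhost:19998,localhost:19988,localhost:19978"

pairs=""
i=1
while [ "$i" -le 3 ]; do
  j=$(echo "$journal_addrs" | cut -d, -f"$i")   # i-th embedded journal address
  r=$(echo "$rpc_addrs" | cut -d, -f"$i")       # i-th RPC address
  line="AlluxioMaster-$((i - 1)): journal=$j rpc=$r"
  echo "$line"
  pairs="$pairs$line
"
  i=$((i + 1))
done
```

Keeping the two lists the same length and in the same order is what ties each run configuration to its ports.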

**You can also start a High Availability (HA) Job Master process on this basis.**

1. Stop the Alluxio job master and job worker processes from steps 8 and 9 if they are running.
2. Edit `conf/alluxio-site.properties` and add these configurations
```properties
alluxio.job.master.rpc.addresses=localhost:20001,localhost:20011,localhost:20021
alluxio.job.master.embedded.journal.addresses=localhost:20003,localhost:20013,localhost:20023
```
3. In Intellij, start the Alluxio job master processes by selecting `Run > Run >
AlluxioJobMaster-0`, `Run > Run > AlluxioJobMaster-1`, and `Run > Run > AlluxioJobMaster-2`
4. In Intellij, start the Alluxio job worker process by selecting `Run > Run > AlluxioJobWorker`
5. Verify the HA JobMaster cluster is up, by running
`bin/alluxio fsadmin journal quorum info -domain JOB_MASTER`, and you will
see output like this:
```shell
Journal domain : JOB_MASTER
Quorum size : 3
Quorum leader : localhost:20013

STATE | PRIORITY | SERVER ADDRESS
AVAILABLE | 0 | localhost:20003
AVAILABLE | 0 | localhost:20013
AVAILABLE | 0 | localhost:20023
```
1. In Intellij, start the Alluxio worker process by selecting `Run > Run > AlluxioWorker`

##### Start an AlluxioFuse process

1. Start a [single master Alluxio cluster](#start-a-single-master-alluxio-cluster)
or a [High Availability cluster](#start-a-high-availability-ha-alluxio-cluster) in Intellij.
2. In Intellij, start AlluxioFuse process by selecting `Run > Run > AlluxioFuse`.
1. In Intellij, start AlluxioFuse process by selecting `Run > Run > AlluxioFuse`.
This creates a FUSE mount point at `/tmp/alluxio-fuse`.
3. Verify the FUSE filesystem is working by running these commands:
1. Verify the FUSE filesystem is working by running these commands:
```shell
$ touch /tmp/alluxio-fuse/tmp1
$ ls /tmp/alluxio-fuse
@@ -169,7 +124,7 @@ You may also have to add the classpath variable `M2_REPO` by running:
$ mvn -Declipse.workspace="your Eclipse Workspace" eclipse:configure-workspace
```

> Note: Alluxio 2.2 moved generated gRPC proto source files into `alluxio/core/transport/target/generated-sources/protobuf/`.
> Note: Generated gRPC proto source files are located in `alluxio/core/transport/target/generated-sources/protobuf/`.
You will need to mark the directory as a source folder for Eclipse to resolve the source files.

## Maven Targets and Plugins
Expand Down
10 changes: 4 additions & 6 deletions docs/en/deploy/Get-Started.md
@@ -131,14 +131,13 @@ Alluxio needs to be formatted before starting the process. The following command
the Alluxio journal and worker storage directories.

```shell
$ ./bin/alluxio format
$ ./bin/alluxio init format
```

Start the Alluxio services

```shell
$ ./bin/alluxio-start.sh master
$ ./bin/alluxio-start.sh worker SudoMount
$ ./bin/alluxio process start local
```

Congratulations! Alluxio is now up and running!
@@ -201,8 +200,7 @@ When the file is read, it will also be cached by Alluxio to speed up future data
Stop Alluxio with the following command:

```shell
$ ./bin/alluxio-stop.sh master
$ ./bin/alluxio-stop.sh worker
$ ./bin/alluxio process stop local
```

## Next Steps
@@ -226,7 +224,7 @@ our documentation, such as [Data Caching]({{ '/en/core-services/Data-Caching.htm

For users running macOS 11 (Big Sur) or later, when running the command
```shell
$ ./bin/alluxio format
$ ./bin/alluxio init format
```
you might get the error message:
```
37 changes: 17 additions & 20 deletions docs/en/deploy/Install-Alluxio-Cluster-with-HA.md
@@ -89,33 +89,30 @@ Before Alluxio can be started for the first time, the Alluxio master journal and

On all the Alluxio master nodes, list all the worker hostnames in the `conf/workers` file, and list all the masters in the `conf/masters` file.
This will allow alluxio scripts to run operations on the cluster nodes.
`format` the Alluxio cluster with the following command on one of the master nodes:
`init format` the Alluxio cluster with the following command on one of the master nodes:

```shell
$ ./bin/alluxio format
$ ./bin/alluxio init format
```

### Launch Alluxio

On one of the master nodes, start the Alluxio cluster with the following command:

```shell
$ ./bin/alluxio-start.sh all SudoMount
$ ./bin/alluxio process start all
```

This will start Alluxio masters on all the nodes specified in `conf/masters`, and start the workers
on all the nodes specified in `conf/workers`.
Argument `SudoMount` indicates to mount the RamFS on each worker using `sudo` privilege, if it is
not already mounted.
On macOS, make sure your terminal has full disk access (tutorial [here](https://osxdaily.com/2018/10/09/fix-operation-not-permitted-terminal-error-macos/)).
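Conceptually, `process start all` fans out over the host lists: one master process per entry in `conf/masters` and one worker process per entry in `conf/workers`. The following dry-run sketch illustrates that fan-out — it is not Alluxio's actual start script; it only echoes the ssh commands it would run, against sample host files in a temp directory:

```shell
#!/usr/bin/env sh
# Illustrative dry run (not Alluxio's actual implementation) of how
# `alluxio process start all` conceptually fans out over the cluster:
# one `process start master` per host in conf/masters, and one
# `process start worker` per host in conf/workers.
tmp=$(mktemp -d)
printf 'master1\nmaster2\nmaster3\n' > "$tmp/masters"
printf 'worker1\nworker2\n' > "$tmp/workers"

cmds=""
emit() {  # $1 = host, $2 = process type; echo the command instead of running it
  line="ssh $1 ./bin/alluxio process start $2"
  echo "$line"
  cmds="$cmds$line
"
}

while read -r host; do emit "$host" master; done < "$tmp/masters"
while read -r host; do emit "$host" worker; done < "$tmp/workers"
rm -rf "$tmp"
```

This is why `conf/masters` and `conf/workers` must be populated on the node you launch from, and why passwordless `ssh` to every listed host is a prerequisite.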

### Verify Alluxio Cluster

To verify that Alluxio is running, you can visit the web UI of the leading master. To determine the
leading master, run:

```shell
$ ./bin/alluxio fs masterInfo
$ ./bin/alluxio info report
```

Then, visit `http://<LEADER_HOSTNAME>:19999` to see the status page of the Alluxio leading master.
@@ -124,7 +121,7 @@ Alluxio comes with a simple program that writes and reads sample files in Alluxi
program with:

```shell
$ ./bin/alluxio runTests
$ ./bin/alluxio exec basicIOTest
```

## Access an Alluxio Cluster with HA
@@ -217,25 +214,25 @@ Below are common operations to perform on an Alluxio cluster.
To stop an Alluxio service, run:

```shell
$ ./bin/alluxio-stop.sh all
$ ./bin/alluxio process stop all
```

This will stop all the processes on all nodes listed in `conf/workers` and `conf/masters`.

You can stop just the masters and just the workers with the following commands:

```shell
$ ./bin/alluxio-stop.sh masters # stops all masters in conf/masters
$ ./bin/alluxio-stop.sh workers # stops all workers in conf/workers
$ ./bin/alluxio process stop masters # stops all masters in conf/masters
$ ./bin/alluxio process stop workers # stops all workers in conf/workers
```

If you do not want to use `ssh` to log in to all the nodes and stop all the processes, you can run
commands on each node individually to stop each component.
For any node, you can stop a master or worker with:

```shell
$ ./bin/alluxio-stop.sh master # stops the local master
$ ./bin/alluxio-stop.sh worker # stops the local worker
$ ./bin/alluxio process stop master # stops the local master
$ ./bin/alluxio process stop worker # stops the local worker
```

### Restart Alluxio
@@ -244,23 +241,23 @@ Starting Alluxio is similar. If `conf/workers` and `conf/masters` are both popul
the cluster with:

```shell
$ ./bin/alluxio-start.sh all
$ ./bin/alluxio process start all
```

You can start just the masters and just the workers with the following commands:

```shell
$ ./bin/alluxio-start.sh masters # starts all masters in conf/masters
$ ./bin/alluxio-start.sh workers # starts all workers in conf/workers
$ ./bin/alluxio process start masters # starts all masters in conf/masters
$ ./bin/alluxio process start workers # starts all workers in conf/workers
```

If you do not want to use `ssh` to log in to all the nodes and start all the processes, you can run
commands on each node individually to start each component. For any node, you can start a master or
worker with:

```shell
$ ./bin/alluxio-start.sh master # starts the local master
$ ./bin/alluxio-start.sh worker # starts the local worker
$ ./bin/alluxio process start master # starts the local master
$ ./bin/alluxio process start worker # starts the local worker
```

### Add/Remove Workers Dynamically
@@ -271,15 +268,15 @@ In most cases, the new worker's configuration should be the same as all the othe
Run the following command on the new worker to add it to the cluster:

```shell
$ ./bin/alluxio-start.sh worker SudoMount # starts the local worker
$ ./bin/alluxio process start worker # starts the local worker
```

Once the worker is started, it will register itself with the Alluxio leading master and become part of the Alluxio cluster.

Removing a worker is as simple as stopping the worker process.

```shell
$ ./bin/alluxio-stop.sh worker # stops the local worker
$ ./bin/alluxio process stop worker # stops the local worker
```

Once the worker is stopped, and after