Commit
Use consistent style for anchors across docs
- specifically replacing underscore with dash usage
michaeleby1 authored May 11, 2023
1 parent f32875c commit 5353db5
Showing 95 changed files with 294 additions and 294 deletions.
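Every change in this commit follows the same pattern: Sphinx anchor labels, and the :ref: targets that point to them, switch from underscores to dashes as word separators. For example, from docs/src/main/sphinx/admin/event-listeners-http.rst below:

    -.. _http_event_listener_configuration:
    +.. _http-event-listener-configuration:

    -* Detail the events to send in the :ref:`http_event_listener_configuration` section.
    +* Detail the events to send in the :ref:`http-event-listener-configuration` section.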
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/dynamic-filtering.rst
@@ -52,8 +52,8 @@ Dynamic filtering is enabled by default. It can be disabled by setting either th
Support for push down of dynamic filters is specific to each connector,
and the relevant underlying database or storage system. The documentation for
specific connectors with support for dynamic filtering includes further details,
-for example the :ref:`Hive connector <hive_dynamic_filtering>`
-or the :ref:`Memory connector <memory_dynamic_filtering>`.
+for example the :ref:`Hive connector <hive-dynamic-filtering>`
+or the :ref:`Memory connector <memory-dynamic-filtering>`.

Analysis and confirmation
-------------------------
10 changes: 5 additions & 5 deletions docs/src/main/sphinx/admin/event-listeners-http.rst
@@ -25,9 +25,9 @@ You need to perform the following steps:
* Provide an HTTP/S service that accepts POST events with a JSON body.
* Configure ``http-event-listener.connect-ingest-uri`` in the event listener properties file
with the URI of the service.
-* Detail the events to send in the :ref:`http_event_listener_configuration` section.
+* Detail the events to send in the :ref:`http-event-listener-configuration` section.

-.. _http_event_listener_configuration:
+.. _http-event-listener-configuration:

Configuration
-------------
@@ -43,7 +43,7 @@ as an example:
http-event-listener.connect-ingest-uri=<your ingest URI>
And set add ``etc/http-event-listener.properties`` to ``event-listener.config-files``
-in :ref:`config_properties`:
+in :ref:`config-properties`:

.. code-block:: properties
@@ -78,7 +78,7 @@ Configuration properties

* - http-event-listener.connect-http-headers
- List of custom HTTP headers to be sent along with the events. See
-:ref:`http_event_listener_custom_headers` for more details
+:ref:`http-event-listener-custom-headers` for more details
- Empty

* - http-event-listener.connect-retry-count
@@ -107,7 +107,7 @@ Configuration properties
- Pass configuration onto the HTTP client
-

-.. _http_event_listener_custom_headers:
+.. _http-event-listener-custom-headers:

Custom HTTP headers
^^^^^^^^^^^^^^^^^^^
2 changes: 1 addition & 1 deletion docs/src/main/sphinx/admin/graceful-shutdown.rst
@@ -26,7 +26,7 @@ Keep the following aspects in mind:
* The ``default`` :doc:`/security/built-in-system-access-control` does not allow
graceful shutdowns. You can use the ``allow-all`` system access control, or
configure :ref:`system information rules
-<system-file-auth-system_information>` with the ``file`` system access
+<system-file-auth-system-information>` with the ``file`` system access
control. These configuration must be present on all workers.


2 changes: 1 addition & 1 deletion docs/src/main/sphinx/admin/jmx.rst
@@ -5,7 +5,7 @@ Monitoring with JMX
Trino exposes a large number of different metrics via the Java Management Extensions (JMX).

You have to enable JMX by setting the ports used by the RMI registry and server
-in the :ref:`config.properties file <config_properties>`:
+in the :ref:`config.properties file <config-properties>`:

.. code-block:: text
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/properties-logging.rst
@@ -36,7 +36,7 @@ exceptions as singular fields in a logging search system.

The path to the log file used by Trino. The path is relative to the data
directory, configured to ``var/log/server.log`` by the launcher script as
-detailed in :ref:`running_trino`. Alternatively, you can write logs to separate
+detailed in :ref:`running-trino`. Alternatively, you can write logs to separate
the process (typically running next to Trino as a sidecar process) via the TCP
protocol by using a log path of the format ``tcp://host:port``.

@@ -89,7 +89,7 @@ Flag to enable or disable compression of the log files of the HTTP server.

The path to the log file used by the HTTP server. The path is relative to
the data directory, configured by the launcher script as detailed in
-:ref:`running_trino`.
+:ref:`running-trino`.

``http-server.log.max-history``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/properties-resource-management.rst
@@ -32,7 +32,7 @@ it is killed.

The sum of :ref:`prop-resource-query-max-memory-per-node` and
:ref:`prop-resource-memory-heap-headroom-per-node` must be less than the
-maximum heap size in the JVM on the node. See :ref:`jvm_config`.
+maximum heap size in the JVM on the node. See :ref:`jvm-config`.

.. note::

@@ -99,7 +99,7 @@ for allocations that are not tracked by Trino.

The sum of :ref:`prop-resource-query-max-memory-per-node` and
:ref:`prop-resource-memory-heap-headroom-per-node` must be less than the
-maximum heap size in the JVM on the node. See :ref:`jvm_config`.
+maximum heap size in the JVM on the node. See :ref:`jvm-config`.

.. _prop-resource-exchange-deduplication-buffer-size:

2 changes: 1 addition & 1 deletion docs/src/main/sphinx/admin/resource-groups.rst
@@ -54,7 +54,7 @@ values in the ``priority`` field.

The ``resource_groups`` table also contains an ``environment`` field which is
matched with the value contained in the ``node.environment`` property in
-:ref:`node_properties`. This allows the resource group configuration for different
+:ref:`node-properties`. This allows the resource group configuration for different
Trino clusters to be stored in the same database if required.

The configuration is reloaded from the database every second, and the changes
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/admin/web-interface.rst
@@ -5,7 +5,7 @@ Web UI
Trino provides a web-based user interface (UI) for monitoring a Trino cluster
and managing queries. The Web UI is accessible on the coordinator via
HTTP or HTTPS, using the corresponding port number specified in the coordinator
-:ref:`config_properties`. It can be configured with :doc:`/admin/properties-web-interface`.
+:ref:`config-properties`. It can be configured with :doc:`/admin/properties-web-interface`.

The Web UI can be disabled entirely with the ``web-ui.enabled`` property.

@@ -20,7 +20,7 @@ allowed. Typically, users login with the same username that they use for
running queries.

If no system access control is installed, then all users are able to view and kill
-any query. This can be restricted by using :ref:`query rules <query_rules>` with the
+any query. This can be restricted by using :ref:`query rules <query-rules>` with the
:doc:`/security/built-in-system-access-control`. Users always have permission to view
or kill their own queries.

2 changes: 1 addition & 1 deletion docs/src/main/sphinx/client/cli.rst
@@ -369,7 +369,7 @@ enabled.

Invoking the CLI with Kerberos support enabled requires a number of additional
command line options. You also need the :ref:`Kerberos configuration files
-<server_kerberos_principals>` for your user on the machine running the CLI. The
+<server-kerberos-principals>` for your user on the machine running the CLI. The
simplest way to invoke the CLI is with a wrapper script:

.. code-block:: text
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/bigquery.rst
@@ -105,7 +105,7 @@ a few caveats:
it, set the ``bigquery.experimental.arrow-serialization.enabled``
configuration property to ``true`` and add
``--add-opens=java.base/java.nio=ALL-UNNAMED`` to the Trino
-:ref:`jvm_config`.
+:ref:`jvm-config`.

Reading from views
^^^^^^^^^^^^^^^^^^
@@ -279,7 +279,7 @@ which exposes BigQuery view definition. Given a BigQuery view ``example_view``
you can send query ``SELECT * example_view$view_definition`` to see the SQL
which defines view in BigQuery.

-.. _bigquery_special_columns:
+.. _bigquery-special-columns:

Special columns
---------------
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/hive-alluxio.rst
@@ -19,7 +19,7 @@ Alluxio client-side configuration
To configure Alluxio client-side properties on Trino, append the Alluxio
configuration directory (``${ALLUXIO_HOME}/conf``) to the Trino JVM classpath,
so that the Alluxio properties file ``alluxio-site.properties`` can be loaded as
-a resource. Update the Trino :ref:`jvm_config` file ``etc/jvm.config``
+a resource. Update the Trino :ref:`jvm-config` file ``etc/jvm.config``
to include the following:

.. code-block:: text
@@ -44,7 +44,7 @@ to bypass the network (*short-circuit*). See `Performance Tuning Tips for Presto
<https://www.alluxio.io/blog/top-5-performance-tuning-tips-for-running-presto-on-alluxio-1/?utm_source=trino&utm_medium=trinodocs>`_
for more details.

-.. _alluxio_catalog_service:
+.. _alluxio-catalog-service:

Alluxio catalog service
-----------------------
36 changes: 18 additions & 18 deletions docs/src/main/sphinx/connector/hive.rst
@@ -111,7 +111,7 @@ When not using Kerberos with HDFS, Trino accesses HDFS using the
OS user of the Trino process. For example, if Trino is running as
``nobody``, it accesses HDFS as ``nobody``. You can override this
username by setting the ``HADOOP_USER_NAME`` system property in the
-Trino :ref:`jvm_config`, replacing ``hdfs_user`` with the
+Trino :ref:`jvm-config`, replacing ``hdfs_user`` with the
appropriate username:

.. code-block:: text
@@ -125,7 +125,7 @@ Whenever you change the user Trino is using to access HDFS, remove
``/tmp/presto-*`` on HDFS, as the new user may not have access to
the existing temporary directories.

-.. _hive_configuration_properties:
+.. _hive-configuration-properties:

Hive general configuration properties
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -316,7 +316,7 @@ Hive connector documentation.
- ``true``
* - ``hive.auto-purge``
- Set the default value for the auto_purge table property for managed
-tables. See the :ref:`hive_table_properties` for more information on
+tables. See the :ref:`hive-table-properties` for more information on
auto_purge.
- ``false``
* - ``hive.partition-projection-enabled``
@@ -581,7 +581,7 @@ properties:
- Number of threads for parallel statistic writes to Glue.
- ``5``

-.. _partition_projection:
+.. _partition-projection:

Accessing tables with Athena partition projection metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -600,7 +600,7 @@ you have partition projection enabled, you can set the
``partition_projection_ignore`` table property to ``true`` for a table to bypass
any errors.

-Refer to :ref:`hive_table_properties` and :ref:`hive_column_properties` for
+Refer to :ref:`hive-table-properties` and :ref:`hive-column-properties` for
configuration of partition projection.

Metastore configuration for Avro
@@ -665,7 +665,7 @@ on migrating from Hive to Trino.

The following sections provide Hive-specific information regarding SQL support.

-.. _hive_examples:
+.. _hive-examples:

Basic usage examples
^^^^^^^^^^^^^^^^^^^^
@@ -801,7 +801,7 @@ The following procedures are available:
``create_empty_partition``). If ``partition_values`` argument is omitted, stats are dropped for the
entire table.

-.. _register_partition:
+.. _register-partition:

* ``system.register_partition(schema_name, table_name, partition_columns, partition_values, location)``

@@ -813,14 +813,14 @@ The following procedures are available:
Due to security reasons, the procedure is enabled only when ``hive.allow-register-partition-procedure``
is set to ``true``.

-.. _unregister_partition:
+.. _unregister-partition:

* ``system.unregister_partition(schema_name, table_name, partition_columns, partition_values)``

Unregisters given, existing partition in the metastore for the specified table.
The partition data is not deleted.

-.. _hive_flush_metadata_cache:
+.. _hive-flush-metadata-cache:

* ``system.flush_metadata_cache()``

@@ -893,7 +893,7 @@ as Hive. For example, converting the string ``'foo'`` to a number,
or converting the string ``'1234'`` to a ``tinyint`` (which has a
maximum value of ``127``).

-.. _hive_avro_schema:
+.. _hive-avro-schema:

Avro schema evolution
"""""""""""""""""""""
@@ -1023,7 +1023,7 @@ session property:
to the Trino logs and query failure messages to see which files must be
deleted.

-.. _hive_table_properties:
+.. _hive-table-properties:

Table properties
""""""""""""""""
@@ -1048,7 +1048,7 @@ to the connector using a :doc:`WITH </sql/create-table-as>` clause::
partition is deleted instead of a soft deletion using the trash.
-
* - ``avro_schema_url``
-- The URI pointing to :ref:`hive_avro_schema` for the table.
+- The URI pointing to :ref:`hive-avro-schema` for the table.
-
* - ``bucket_count``
- The number of buckets to group data into. Only valid if used with
@@ -1075,7 +1075,7 @@ to the connector using a :doc:`WITH </sql/create-table-as>` clause::
- ``,``
* - ``external_location``
- The URI for an external Hive table on S3, Azure Blob Storage, etc. See the
-:ref:`hive_examples` for more information.
+:ref:`hive-examples` for more information.
-
* - ``format``
- The table file format. Valid values include ``ORC``, ``PARQUET``,
@@ -1147,7 +1147,7 @@ to the connector using a :doc:`WITH </sql/create-table-as>` clause::
The properties are not included in the output of ``SHOW CREATE TABLE`` statements.
-
-.. _hive_special_tables:
+.. _hive-special-tables:

Metadata tables
"""""""""""""""
@@ -1161,7 +1161,7 @@ You can inspect the property names and values with a simple query::

SELECT * FROM example.web."page_views$properties";

-.. _hive_column_properties:
+.. _hive-column-properties:

Column properties
"""""""""""""""""
@@ -1227,7 +1227,7 @@ Column properties
`projection.${columnName}.interval.unit <https://docs.aws.amazon.com/athena/latest/ug/partition-projection-supported-types.html>`_.
-

-.. _hive_special_columns:
+.. _hive-special-columns:

Metadata columns
""""""""""""""""
@@ -1413,7 +1413,7 @@ and by default will also collect column level statistics:
* - ``BOOLEAN``
- Number of nulls, number of true/false values

-.. _hive_analyze:
+.. _hive-analyze:

Updating table and partition statistics
"""""""""""""""""""""""""""""""""""""""
@@ -1458,7 +1458,7 @@ You can also drop statistics for selected partitions only::
table_name => 'table',
partition_values => ARRAY[ARRAY['p2_value1', 'p2_value2']])

-.. _hive_dynamic_filtering:
+.. _hive-dynamic-filtering:

Dynamic filtering
^^^^^^^^^^^^^^^^^
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/iceberg.rst
@@ -1142,7 +1142,7 @@ The output of the query has the following columns:
- ``bigint``
- For branch only, the max snapshot age allowed in a branch. Older snapshots in the branch will be expired.

-.. _iceberg_metadata_columns:
+.. _iceberg-metadata-columns:

Metadata columns
""""""""""""""""
@@ -1430,7 +1430,7 @@ statement. This can be disabled using ``iceberg.extended-statistics.enabled``
catalog configuration property, or the corresponding
``extended_statistics_enabled`` session property.

-.. _iceberg_analyze:
+.. _iceberg-analyze:

Updating table statistics
"""""""""""""""""""""""""
2 changes: 1 addition & 1 deletion docs/src/main/sphinx/connector/memory.rst
@@ -73,7 +73,7 @@ Upon execution of a ``DROP TABLE`` operation, memory is not released
immediately. It is instead released after the next write operation to the
catalog.

-.. _memory_dynamic_filtering:
+.. _memory-dynamic-filtering:

Dynamic filtering
-----------------
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/system.rst
@@ -50,7 +50,7 @@ that can be set when creating a new schema.
The table properties table contains the list of available properties
that can be set when creating a new table.

-.. _system_metadata_materialized_views:
+.. _system-metadata-materialized-views:

``metadata.materialized_views``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -107,7 +107,7 @@ The table comments table contains the list of table comment.
The nodes table contains the list of visible nodes in the Trino
cluster along with their status.

-.. _optimizer_rule_stats:
+.. _optimizer-rule-stats:

``runtime.optimizer_rule_stats``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/develop/event-listener.rst
@@ -49,14 +49,14 @@ Example configuration file:
custom-property1=custom-value1
custom-property2=custom-value2
-.. _multiple_listeners:
+.. _multiple-listeners:

Multiple event listeners
------------------------

Trino supports multiple instances of the same or different event listeners.
Install and configure multiple instances by setting
-``event-listener.config-files`` in :ref:`config_properties` to a comma-separated
+``event-listener.config-files`` in :ref:`config-properties` to a comma-separated
list of the event listener configuration files:

.. code-block:: text