Release notes for 3.2.8 #227

Merged · 3 commits · Jan 17, 2024
2 changes: 1 addition & 1 deletion docs/low-code-spark/fabrics/create-a-fabric.md
@@ -61,7 +61,7 @@ Please refer below video for step-by-step example, or learn how to configure an
- **Spark Connection** - Here you will provide the Livy URL, authentication, Spark version, and Scala version. The Spark and Scala versions are used when a user tries to attach a cluster using this Fabric.

:::note
-The Spark and Scala versions are now mandatory with recent [Release](/release_notes/Feb_2023#spark-and-scala-versions-are-now-required-in-livy-Fabrics).
+The Spark and Scala versions are now mandatory as of a recent [Release](/docs/release_notes/2023/feb2023.md#spark-and-scala-versions-are-now-required-in-livy-Fabrics).
If a user has an old Fabric that doesn't have the Spark and Scala versions present, an error (seen below) will appear when trying to attach to a cluster. The user would need to update the Fabric from the metadata page or by clicking the `Update Fabric` button (seen below).
![Fabric_misconfigured](img/fabric_misconfigured.png)
:::
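
To sanity-check the Spark Connection before saving the Fabric, here is a minimal sketch that calls Livy's standard `GET /sessions` endpoint; the host and port are hypothetical — substitute the Livy URL configured in your Fabric:

```python
import requests

# Hypothetical Livy endpoint -- substitute the Livy URL configured in the Fabric.
LIVY_URL = "http://livy-host:8998"

# GET /sessions is part of Livy's standard REST API; a 200 response confirms
# the URL points at a reachable Livy server before you attach a cluster.
resp = requests.get(f"{LIVY_URL}/sessions", timeout=10)
resp.raise_for_status()
print(f"Livy reachable; {resp.json()['total']} active session(s)")
```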
6 changes: 6 additions & 0 deletions docs/release_notes/2023/_category_.json
@@ -0,0 +1,6 @@
+{
+  "label": "2023",
+  "position": 2,
+  "collapsible": true,
+  "collapsed": true
+}
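
(For context: `_category_.json` is the standard Docusaurus sidebar configuration file — this new entry labels the folder's section "2023", places it second in the sidebar, and renders it as a collapsible group that starts collapsed.)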
@@ -26,15 +26,15 @@ At any step of the process, data users can interactively run their models to mak
Projects built through Prophecy are stored in the dbt Core format as repositories on Git, which allows data teams to follow the best software engineering practices for CI/CD.
Data users can import an existing dbt Core project or start from scratch. They can publish those projects to other teams or subscribe to already existing ones.

-Please refer [here](./../low-code-sql) for detailed documentation.
+Please refer [here](../../low-code-sql) for detailed documentation.

#### Expression Builder

Spark provides a wide range of functions for data analysis and manipulation, enabling users to apply all sorts of transformations to their data.
Complex data analysis tasks, however, require constructing expressions that combine these functions in various ways.
-Prophecy has now made this easy with support of **[Expression Builder](./../low-code-spark/expression-builder)**.
+Prophecy has now made this easy with support for the **[Expression Builder](/docs/low-code-spark/expression-builder.md)**.
Using the Spark Expression Builder can save you time and effort when constructing complex expressions, and can help you to better understand the relationships between different functions and their arguments.
-Please refer [here](./../low-code-spark/expression-builder) for Detailed documentation on this feature.
+Please refer [here](/docs/low-code-spark/expression-builder.md) for detailed documentation on this feature.
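
For a sense of the expressions involved, here is a hedged PySpark sketch (the column names `amount` and `category` are illustrative) of the kind of nested, multi-function expression the builder helps assemble:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("expression-demo").getOrCreate()

df = spark.createDataFrame(
    [(120.0, "books"), (35.5, "games"), (990.0, "books")],
    ["amount", "category"],
)

# A nested expression combining several Spark functions -- the kind of
# composition the Expression Builder helps assemble visually.
df = df.withColumn(
    "amount_band",
    F.when(F.col("amount") >= 500, F.lit("high"))
     .when(F.col("amount") >= 100, F.lit("medium"))
     .otherwise(F.lit("low")),
).withColumn("category_upper", F.upper(F.trim(F.col("category"))))

df.show()
```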

#### Data Quality (_Beta_)

@@ -43,7 +43,7 @@ Data quality refers to the accuracy, relevance, completeness, and consistency of
There are several factors that affect data quality. These include data entry errors, data duplication, missing data, inconsistent data, and outdated data. These issues can arise from human error, system errors, or technical issues such as data integration problems.

In Prophecy, you, as a user, can now easily set **Expectations** on your data such as uniqueness, range, and more. You can then schedule these **Expectations** to run at fixed **Schedules**, and receive alerts whenever any mismatch occurs, empowering you to maintain the quality of your data.
-Please refer [here](./../low-code-spark/data-quality) for detailed documentation on this feature.
+Please refer [here](/docs/low-code-spark/data-quality.md) for detailed documentation on this feature.
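
For intuition only, here is a PySpark sketch of the checks an Expectation might encode (uniqueness and a value range); the dataset and column names are illustrative, and this is not Prophecy's Expectation API:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, 20.0), (2, 150.0), (2, -5.0)],
    ["order_id", "amount"],
)

# Uniqueness expectation: order_id should never repeat.
duplicates = orders.groupBy("order_id").count().filter(F.col("count") > 1)

# Range expectation: amount should be non-negative.
out_of_range = orders.filter(F.col("amount") < 0)

print(f"duplicate order_ids: {duplicates.count()}")   # 1 -> expectation fails
print(f"negative amounts:   {out_of_range.count()}")  # 1 -> expectation fails
```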

#### Airflow (_Beta_)

@@ -53,7 +53,7 @@ Airflow allows users to define workflows as DAGs (Directed Acyclic Graphs), wher
Tasks can be Data Pipelines in Spark/Python or SQL or scripts. Dependencies between tasks are defined using operators, which are essentially plugins that define how tasks interact with each other.

With Prophecy, you can create and manage Airflow Jobs using a visual drag-and-drop interface. This allows you to easily design and schedule complex workflows without having to write any code.
-Please refer [here](./../low-code-jobs/airflow) for detailed documentation on this feature.
+Please refer [here](/docs/low-code-jobs/airflow.md) for detailed documentation on this feature.
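
As a reference point, here is a minimal hand-written Airflow DAG of the kind such a visual Job corresponds to, assuming a stock Airflow 2.x installation; the DAG and task names are illustrative:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A two-task DAG: dependencies are declared with the >> operator.
with DAG(
    dag_id="daily_sales_pipeline",   # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extracting")
    load = BashOperator(task_id="load", bash_command="echo loading")

    extract >> load  # load runs only after extract succeeds
```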

### Improvements {#UXImprovements300}

@@ -74,7 +74,7 @@ Please see below screenshot for example
#### New Data Observability Page

Prophecy has updated the UI for the Data Observability page (previously known as Monitoring). You can now monitor Data Quality runs as well as Jobs from this page.
-Please refer [here](./../low-code-spark/data-quality#data-observability-view) for more details on this.
+Please refer [here](/docs/low-code-spark/data-quality.md#data-observability-view) for more details on this.

#### Option to Attach a cluster and Run from the Gem Screens

File renamed without changes.
File renamed without changes.
@@ -85,7 +85,7 @@ Improved the cluster restart behaviour when adding a new Pipeline dependency. Wh

#### Spark and Scala Versions are now Required in Livy Fabrics

-For creating a Livy Fabric, previously the Spark and Scala versions were optional and had default values 2.3.0 and 2.11. With this release these fields are made mandatory, and users will need to update their Fabric configurations with respective Spark and Scala version values. Please refer [here](../low-code-spark/fabrics/create-a-fabric.md#livy) for more details.
+Previously, when creating a Livy Fabric, the Spark and Scala versions were optional, with default values of 2.3.0 and 2.11. With this release these fields are mandatory, and users will need to update their Fabric configurations with the respective Spark and Scala version values. Please refer [here](/docs/low-code-spark/fabrics/create-a-fabric.md#livy) for more details.

### New Features {#NewFeaturesRelease27}

@@ -169,19 +169,19 @@ Prophecy now has support for integrating through Databricks Partner Connect on G

#### Interims for Unity Catalog Workspaces

-Pipelines running on **Unity Catalog cluster** now have the ability to show [Vanilla interims](../low-code-spark/execution/executions_on_databricks_clusters#vanilla-interims) on all Actions/Tail nodes of the Pipeline.
+Pipelines running on a **Unity Catalog cluster** can now show [Vanilla interims](/docs/low-code-spark/execution/databricks-clusters-behaviours.md#vanilla-interims) on all Actions/Tail nodes of the Pipeline.

#### Team-level execution metrics

-An administrator will have the option to create the following tables at the time of team creation which are used for storing [Execution Metrics](docs/low-code-spark/execution/execution-metrics.md). The administrator will grant access to the tables accordingly to their team members.
+An administrator will have the option to create the following tables at the time of team creation, which are used for storing [Execution Metrics](/docs/low-code-spark/execution/execution-metrics.md). The administrator can then grant their team members access to the tables as appropriate.

- Pipeline Metrics Table
- Component (Dataset) Metrics Table
- Interim Table

#### Seamless Git Integration with stored credentials

-Added the ability to store [Git](docs/metadata/git.md) credentials for a user across projects. Now the user can re-use Git credentials without re-authenticating.
+Added the ability to store [Git](/docs/metadata/git.md) credentials for a user across projects. Now the user can re-use Git credentials without re-authenticating.

#### Streaming Pipeline Support (_Beta_)

@@ -191,7 +191,7 @@ With this release, Prophecy now supports Streaming Pipelines as a beta feature.

#### Union By Name

-Added Union by name in [Set Operations](docs/low-code-spark/gems/transform/set-operation.md) Gem. This can be used now to get union by column names when positions are different in two datasets.
+Added Union by name in the [Set Operations](/docs/low-code-spark/gems/transform/set-operation.md) Gem. This can now be used to union by column name when the column positions differ between two datasets.
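
For reference, this corresponds to Spark's `unionByName`, which aligns columns by name rather than position; a small PySpark sketch:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("union-demo").getOrCreate()

df1 = spark.createDataFrame([(1, "a")], ["id", "name"])
df2 = spark.createDataFrame([("b", 2)], ["name", "id"])  # columns in a different order

# A positional union would mis-align the columns; unionByName matches them by name.
df1.unionByName(df2).show()
# +---+----+
# | id|name|
# +---+----+
# |  1|   a|
# |  2|   b|
# +---+----+
```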

### Improvements {#ImprovementsRelease26}
