Feature/show all metrics and table cells #798

Merged (10 commits) on Mar 30, 2022
42 changes: 20 additions & 22 deletions README.md
@@ -25,7 +25,7 @@ Live Demo: <a href="https://demo.kedro.org/" target="_blank">https://demo.kedro.

## Introduction

Kedro-Viz is an interactive development tool for building data science pipelines with [Kedro](https://github.com/kedro-org/kedro). Kedro-Viz also allows users to view and compare different runs in the Kedro project.

## Features

@@ -39,26 +39,26 @@ Kedro-Viz is an interactive development tool for building data science pipelines
- 🧪 Supports tracking and comparing runs in a Kedro project
- 🎩 Many more to come


## Installation

There are two ways you can use Kedro-Viz:

- As a [Kedro plugin](https://kedro.readthedocs.io/en/stable/07_extend_kedro/04_plugins.html) (the most common way).

  To install Kedro-Viz as a Kedro plugin:

  ```bash
  pip install kedro-viz
  ```

- As a standalone React component (for embedding Kedro-Viz in your web application).
Contributor: I think I've seen the same change suggested in Rashida's PR. I think it's worth keeping the wording "(for embedding Kedro-Viz in your web application)" to give context for the usage of the Kedro-Viz React component.

Member (author): Agreed. Somehow this line was moved to the line above, so it's still in there and hopefully gives the context for the users. The lines around the one you've commented on look like this:

    - As a standalone React component (for embedding Kedro-Viz in your web application).

      To install the standalone React component:

      npm install @quantumblack/kedro-viz

To install the standalone React component:

```bash
npm install @quantumblack/kedro-viz
```
## Usage

### CLI Usage
@@ -107,7 +107,7 @@ Options:

To enable [experiment tracking](https://kedro.readthedocs.io/en/stable/08_logging/02_experiment_tracking.html) in Kedro-Viz, you need to add the Kedro-Viz `SQLiteStore` to your Kedro project.

This can be done by adding the below code to `settings.py` in the `src` folder of your Kedro project.

```python
from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore
```
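The diff view collapses the rest of this snippet after the import. For context, the complete `settings.py` set-up of this era typically looks like the sketch below; the store path is an assumption and should be adapted to your project layout:

```python
from pathlib import Path

from kedro_viz.integrations.kedro.sqlite_store import SQLiteStore

# Register the SQLiteStore as the session store so Kedro-Viz can record runs.
SESSION_STORE_CLASS = SQLiteStore

# Where the session store database is created; the `data` folder here is an
# assumed location, not mandated by Kedro-Viz.
SESSION_STORE_ARGS = {"path": str(Path(__file__).parents[2] / "data")}
```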
@@ -120,8 +120,8 @@ Once the above set-up is complete, tracking datasets can be used to track relevant…

**Notes:**

- Experiment Tracking is only available for Kedro-Viz >= 4.0.2 and Kedro >= 0.17.5.
- Prior to Kedro 0.17.6, when using tracking datasets, you have to explicitly mark the datasets as `versioned` for them to show up properly in the Kedro-Viz experiment tracking tab. From Kedro >= 0.17.6, this is done automatically:

```yaml
train_evaluation.r2_score_linear_regression:
```
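The remaining lines of this entry are collapsed in the diff. A sketch of the full catalog entry, assuming the `tracking.MetricsDataSet` type from Kedro of this era and an illustrative filepath:

```yaml
train_evaluation.r2_score_linear_regression:
  type: tracking.MetricsDataSet
  filepath: data/09_tracking/r2_score_linear_regression.json
  # Needed explicitly before Kedro 0.17.6; set automatically afterwards.
  versioned: true
```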
@@ -138,12 +138,10 @@ To use Kedro-Viz as a standalone React component, import the component and supply…
```js
import KedroViz from '@quantumblack/kedro-viz';

const MyApp = () => (
  <div style={{ height: '100vh' }}>
    <KedroViz data={json} />
  </div>
);
```

The JSON can be obtained by running:
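(The command itself is collapsed in the diff view. Presumably it is Kedro-Viz's standard export flag, shown here as an assumption rather than a quote from the README:)

```bash
# Assumed invocation: writes the pipeline JSON that the React component consumes.
kedro viz --save-file=filename.json
```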
@@ -158,8 +156,8 @@ We also recommend wrapping the `Kedro-Viz` component with a parent HTML/JSX element…

Kedro-Viz uses feature flags to roll out some experimental features. The following flags are currently in use:

| Flag        | Description                                                                               |
| ----------- | ----------------------------------------------------------------------------------------- |
| sizewarning | From release v3.9.1. Show a warning before rendering very large graphs (default `true`) |

To enable or disable a flag, click on the settings icon in the toolbar and toggle the flag on/off.
3 changes: 3 additions & 0 deletions RELEASE.md
@@ -9,6 +9,7 @@ Please follow the established format:
-->

## Major features and improvements

- Set up of a pop-up reminder to nudge users to upgrade Kedro-Viz when a new version is released. (#746)
- Set up the 'export run' button to allow exporting of selected run data into a csv file for download. (#757)

@@ -20,6 +21,8 @@ Please follow the established format:
- Create a `version` GraphQL query to get versions of Kedro-Viz. (#727)
- Fix Kedro-Viz to work with projects that have no `__default__` registered pipeline. This also fixes the `--pipeline` CLI option. (#729)
- Fix lazy pipelines loading causes `get_current_session` to throw an error. (#726, #727)
- Fix experiment tracking not showing all metrics. (#788)
- Fix experiment tracking not displaying the correct empty table cells. (#788)

# Release 4.3.1

@@ -11,5 +11,6 @@
"oob_score": false,
"verbose": 0,
"warm_start": false,
"ccp_alpha": 0
}
"ccp_alpha": 0,
"model_author": "richard feynman"
}
9 changes: 3 additions & 6 deletions package-lock.json


2 changes: 1 addition & 1 deletion package.json
@@ -39,6 +39,7 @@
},
"dependencies": {
"@apollo/client": "^3.5.6",
"@faker-js/faker": "^6.0.0-beta.0",
"@graphql-tools/schema": "7.1.5",
"@material-ui/core": "^4.11.4",
"@material-ui/icons": "^4.11.2",
@@ -55,7 +56,6 @@
"d3-zoom": "^2.0.0",
"dayjs": "^1.10.7",
"deepmerge": "^4.2.2",
"@faker-js/faker": "^6.0.0-beta.0",
"fetch-mock": "^9.11.0",
"fishery": "^1.4.0",
"graphql": "^15.8.0",
5 changes: 3 additions & 2 deletions src/components/experiment-tracking/details/details.js
@@ -12,15 +12,15 @@ const Details = ({
metadataError,
onRunSelection,
pinnedRun,
+  runMetadata,
+  runTrackingData,
selectedRunIds,
setPinnedRun,
setShowRunDetailsModal,
showRunDetailsModal,
sidebarVisible,
theme,
trackingDataError,
-  runMetadata,
-  runTrackingData,
}) => {
const [runMetadataToEdit, setRunMetadataToEdit] = useState(null);

@@ -66,6 +66,7 @@
enableShowChanges={enableShowChanges}
isSingleRun={isSingleRun}
pinnedRun={pinnedRun}
+          selectedRunIds={selectedRunIds}
trackingData={runTrackingData}
/>
</div>
57 changes: 49 additions & 8 deletions src/components/experiment-tracking/run-dataset/run-dataset.js
@@ -33,14 +33,18 @@ const resolveRunDataWithPin = (runData, pinnedRun) => {

/**
* Display the dataset of the experiment tracking run.
- * @param {array} props.isSingleRun Whether or not this is a single run.
  * @param {boolean} props.enableShowChanges Are changes enabled or not.

Contributor, in a suggested change on the line above:

- * @param {boolean} props.enableShowChanges Are changes enabled or not.
+ * @param {boolean} props.enableShowChanges Indicator for enabling 'show changes' feature.

(tynandebold marked this conversation as resolved.)

+ * @param {boolean} props.isSingleRun Whether or not this is a single run.
  * @param {string} props.pinnedRun ID of the pinned run.
+ * @param {array} props.selectedRunIds Array of strings of runIds.
  * @param {array} props.trackingData The experiment tracking run data.
 */
const RunDataset = ({
+  enableShowChanges,
   isSingleRun,
-  trackingData = [],
   pinnedRun,
-  enableShowChanges,
+  selectedRunIds,
+  trackingData = [],
}) => {
return (
<div
@@ -61,15 +65,18 @@
size="large"
>
{Object.keys(data)
-          .sort()
+          .sort((a, b) => {
+            return a.localeCompare(b);
+          })
Comment on lines +68 to +70:

@studioswong (Contributor, Mar 30, 2022): Out of curiosity, I wonder if there are any specific reasons why you prefer to use localeCompare over the normal sort method?

Member (author): Yeah, great question. I noticed that when I added a new metric with a capital letter (e.g. NEW_METRIC) it wasn't sorted correctly, and that's because using sort() by default, without a compare function, is case-sensitive. Using localeCompare fixes that :)
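A quick sketch of the behaviour being discussed (illustrative, not part of the PR):

```js
// Default sort compares UTF-16 code units, so capitalised keys jump ahead
// of lower-case ones; localeCompare gives a human-friendly alphabetical order.
const keys = ['recall', 'NEW_METRIC', 'accuracy'];

console.log([...keys].sort());
// => ['NEW_METRIC', 'accuracy', 'recall']

console.log([...keys].sort((a, b) => a.localeCompare(b)));
// => ['accuracy', 'NEW_METRIC', 'recall']
```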
.map((key, rowIndex) => {
return buildDatasetDataMarkup(
key,
dataset.data[key],
rowIndex,
isSingleRun,
pinnedRun,
-                    enableShowChanges
+                    enableShowChanges,
+                    selectedRunIds
);
})}
</Accordion>
@@ -85,17 +92,21 @@
* @param {array} datasetValues A single dataset array from a run.
* @param {number} rowIndex The array index of the dataset data.
* @param {boolean} isSingleRun Whether or not this is a single run.
* @param {string} pinnedRun ID of the pinned run.
* @param {boolean} enableShowChanges Are changes enabled or not.
+ * @param {array} selectedRunIds Array of strings of runIds.
*/
function buildDatasetDataMarkup(
datasetKey,
datasetValues,
rowIndex,
isSingleRun,
pinnedRun,
-  enableShowChanges
+  enableShowChanges,
+  selectedRunIds
) {
-  // function to return new set of runData with appropriate pin from datasetValues and pinnedRun
-  const runDataWithPin = resolveRunDataWithPin(datasetValues, pinnedRun);
+  const updatedDatasetValues = fillEmptyMetrics(datasetValues, selectedRunIds);
+  const runDataWithPin = resolveRunDataWithPin(updatedDatasetValues, pinnedRun);

return (
<React.Fragment key={datasetKey + rowIndex}>
@@ -144,4 +155,34 @@
);
}

/**
* Fill in missing run metrics if they don't match the number of runIds.
* @param {array} datasetValues Array of objects for a metric, e.g. r2_score.
* @param {array} selectedRunIds Array of strings of runIds.
* @returns Array of objects, the length of which matches the length
* of the selectedRunIds.
*/
function fillEmptyMetrics(datasetValues, selectedRunIds) {
if (datasetValues.length === selectedRunIds.length) {
return datasetValues;
}

const metrics = [];

selectedRunIds.forEach((id) => {
const foundIdIndex = datasetValues.findIndex((item) => {
return item.runId === id;
});

// We didn't find a metric with this runId, so add a placeholder.
if (foundIdIndex === -1) {
metrics.push({ runId: id, value: null });
} else {
metrics.push(datasetValues[foundIdIndex]);
}
});

return metrics;
}
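A sketch of what this helper produces, with illustrative values (not taken from the PR):

```js
// Two runs are selected but only one has a value for this metric,
// so a null placeholder is inserted for the missing run.
const datasetValues = [{ runId: 'abc', value: 0.92 }];
const selectedRunIds = ['abc', 'def'];

fillEmptyMetrics(datasetValues, selectedRunIds);
// => [{ runId: 'abc', value: 0.92 }, { runId: 'def', value: null }]
```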

export default RunDataset;
@@ -38,6 +38,7 @@ describe('RunDataset', () => {
const wrapper = shallow(
<RunDataset
isSingleRun={runs.length === 1 ? true : false}
+        selectedRunIds={['abc']}
trackingData={trackingData}
/>
);
@@ -48,15 +49,23 @@

it('renders a boolean value as a string', () => {
const wrapper = mount(
-      <RunDataset isSingleRun={true} trackingData={booleanTrackingData} />
+      <RunDataset
+        isSingleRun={true}
+        selectedRunIds={['abc']}
+        trackingData={booleanTrackingData}
+      />
);

expect(wrapper.find('.details-dataset__value').text()).toBe('false');
});

it('renders a boolean value as a string', () => {
const wrapper = mount(
-      <RunDataset isSingleRun={true} trackingData={ObjectTrackingData} />
+      <RunDataset
+        isSingleRun={true}
+        selectedRunIds={['abc']}
+        trackingData={ObjectTrackingData}
+      />
);

const datasetValue = wrapper.find('.details-dataset__value').text();
@@ -67,10 +76,11 @@
it('renders the comparison arrow when showChanges is on', () => {
const wrapper = mount(
<RunDataset
-        isSingleRun={false}
-        trackingData={ComparisonTrackingData}
         enableShowChanges={true}
+        isSingleRun={false}
         pinnedRun={'My Favorite Sprint'}
+        selectedRunIds={['abc', 'def']}
+        trackingData={ComparisonTrackingData}
/>
);

16 changes: 9 additions & 7 deletions src/components/experiment-wrapper/experiment-wrapper.js
@@ -14,7 +14,7 @@ import Sidebar from '../sidebar';

import './experiment-wrapper.css';

const MAX_NUMBER_COMPARISONS = 2; // 0-based, so three.

const ExperimentWrapper = ({ theme }) => {
const [disableRunSelection, setDisableRunSelection] = useState(false);
@@ -26,9 +26,10 @@
const [selectedRunData, setSelectedRunData] = useState(null);
const [showRunDetailsModal, setShowRunDetailsModal] = useState(false);

+  // Fetch all runs.
const { subscribeToMore, data, loading } = useApolloQuery(GET_RUNS);

-  // Fetch all metadata and tracking data from selected runs
+  // Fetch all metadata for selected runs.
const { data: { runMetadata } = [], metadataError } = useApolloQuery(
GET_RUN_METADATA,
{
@@ -37,10 +38,11 @@
}
);

+  // Fetch all tracking data for selected runs.
const { data: { runTrackingData } = [], error: trackingDataError } =
useApolloQuery(GET_RUN_TRACKING_DATA, {
skip: selectedRunIds.length === 0,
-      variables: { runIds: selectedRunIds, showDiff: false },
+      variables: { runIds: selectedRunIds, showDiff: true },
});

const onRunSelection = (id) => {
@@ -163,8 +165,8 @@
isExperimentView
onRunSelection={onRunSelection}
onToggleComparisonView={onToggleComparisonView}
-          runsListData={data.runsList}
           runMetadata={runMetadata}
+          runsListData={data.runsList}
runTrackingData={runTrackingData}
selectedRunData={selectedRunData}
selectedRunIds={selectedRunIds}
@@ -177,18 +179,18 @@
<Details
enableComparisonView={enableComparisonView}
enableShowChanges={enableShowChanges && selectedRunIds.length > 1}
-          onRunSelection={onRunSelection}
           metadataError={metadataError}
+          onRunSelection={onRunSelection}
pinnedRun={pinnedRun}
+          runMetadata={runMetadata}
+          runTrackingData={runTrackingData}
selectedRunIds={selectedRunIds}
setPinnedRun={setPinnedRun}
setShowRunDetailsModal={setShowRunDetailsModal}
showRunDetailsModal={showRunDetailsModal}
sidebarVisible={isSidebarVisible}
theme={theme}
trackingDataError={trackingDataError}
-          runMetadata={runMetadata}
-          runTrackingData={runTrackingData}
/>
) : null}
</>