Add patterns #331

Merged 3 commits on Dec 20, 2016
2 changes: 2 additions & 0 deletions README.md

For more info about the project as a whole, please visit
[frictionlessdata.io](http://frictionlessdata.io).

Frictionless Data is an [Open Knowledge International](https://okfn.org/) project.
6 changes: 6 additions & 0 deletions index.md
- [Tabular Data Package](/tabular-data-package/)
- [Fiscal Data Package](/fiscal-data-package/)

## Patterns

Usage of the Frictionless Data specifications has led to the emergence of certain patterns that facilitate common data handling use cases. The patterns we identify here are those that are either implemented in the core Frictionless Data libraries for Python and JavaScript, or are commonly used and are candidates for inclusion in the specifications at a later stage.

- [Patterns](/patterns/)

## DataProtocols

The Frictionless Data specifications grew out of **DataProtocols**, a
255 changes: 255 additions & 0 deletions patterns/index.md

---
layout: spec
title: Frictionless Data Patterns
listed: true
updated: 11 December 2017
created: 11 December 2017
ietf-keywords: true
author:
- Paul Walsh (Open Knowledge International)
summary: A collection of patterns for frictionless handling of data.
---

[issues]: https://github.com/frictionlessdata/specs/issues
[repo]: https://github.com/frictionlessdata/specs

### Table of Contents
{:.no_toc}

* Will be replaced with the ToC, excluding the "Contents" header
{:toc}

# Private properties

## Overview

Some software that implements the Frictionless Data specifications may need to store additional information on the various Frictionless Data descriptors.

For example, a data registry that provides metadata via `datapackage.json` may wish to set an internal version or identifier that is system-specific, and that should not be considered part of the user-generated metadata.

Properties to store such information should be considered "private", and by convention, the names should be prefixed by an underscore `_`.

## Implementations

There are no known implementations at present.

## Specification

On any Frictionless Data descriptor, properties that are not generated by the author/contributors, but by software or a system handling the data, `SHOULD` be considered "private", and be prefixed with an underscore `_`.
> **Review comment:** "data" here is a bit confusing. I initially thought it talks about the csv or other data placed in resources.

To demonstrate, let's take the example of a data registry that implements `datapackage.json` for storing dataset metadata.

A user might upload a `datapackage.json` as follows:

```json
{
"name": "my-package",
"resources": [
{
"name": "my-resource",
"path": "my-resource.csv"
}
]
}
```

The registry itself may have a platform-specific version system, and increment versions on each update of the data. To store this information on the datapackage itself, the platform could save this information in a "private" `_platformVersion` property as follows:

```json
{
"name": "my-package",
"_platformVersion": 7,
"resources": [
{
"name": "my-resource",
"path": "my-resource.csv"
}
]
}
```

Usage of "private" properties ensures a clear distinction between data stored on the descriptor that is user (author/contributor) defined, and any additional data that may be stored by a 3rd party.
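As an illustration of the convention, a small helper (hypothetical, not part of any specification or library) can separate user-defined metadata from "private", underscore-prefixed properties:

```python
# Sketch: split a descriptor into user-defined metadata and "private",
# underscore-prefixed properties. The helper name is illustrative.

def split_private_properties(descriptor):
    """Return (user_metadata, private_metadata) dictionaries."""
    user = {k: v for k, v in descriptor.items() if not k.startswith("_")}
    private = {k: v for k, v in descriptor.items() if k.startswith("_")}
    return user, private

descriptor = {
    "name": "my-package",
    "_platformVersion": 7,
    "resources": [{"name": "my-resource", "path": "my-resource.csv"}],
}

user, private = split_private_properties(descriptor)
# user    -> {"name": ..., "resources": [...]}
# private -> {"_platformVersion": 7}
```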

# Caching of resources

## Overview

All Frictionless Data specifications allow resources to be referenced over HTTP or on a local filesystem.

In the case of remote resources over HTTP, there is always the possibility that the remote server will be unavailable, or that the resource itself will be temporarily or permanently removed.

Applications that are concerned with the persistent storage of data described by the Frictionless Data specifications can use a `_cache` property that mirrors the functionality and usage of the `path` property, and refers to a storage location for the data that the application can fall back to if the canonical resource is unavailable.

## Implementations

There are no known implementations of this pattern at present.

## Specification

Implementations `MAY` handle a `_cache` property on any descriptor that supports a `path` property. In the case that the data referenced in `path` is unavailable, `_cache` should be used as a fallback to access the data. The handling of the data stored at `_cache` is beyond the scope of the specification. Implementations might store a copy of the resources in `path` at ingestion time, update at regular intervals, or any other method to keep an up-to-date, persistent copy.

Some examples of the `_cache` property:

```json
{
"name": "my-package",
"resources": [
{
"name": "my-resource",
"path": "http://example.com/data/csv/my-resource.csv",
"_cache": "my-resource.csv"
},
{
"name": "my-resource",
"path": "http://example.com/data/csv/my-resource.csv",
"_cache": "http://data.registry.com/user/files/my-resource.csv"
},
{
"name": "my-resource",
"path": [
"http://example.com/data/csv/my-resource.csv",
"http://somewhere-else.com/data/csv/resource2.csv"
],
"_cache": [
"my-resource.csv",
"resource2.csv"
]
}
]
}
```
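The fallback behaviour described above could be sketched as follows. `open_source` is a hypothetical loader supplied by the application; the sketch only shows the order of attempts:

```python
# Sketch of the `_cache` fallback: try each canonical path, falling
# back to the corresponding cache location on failure. `open_source`
# is a hypothetical loader supplied by the application.

def as_list(value):
    """`path` and `_cache` may each be a string or a list of strings."""
    return value if isinstance(value, list) else [value]

def read_resource(resource, open_source):
    paths = as_list(resource["path"])
    caches = as_list(resource.get("_cache", [])) or [None] * len(paths)
    contents = []
    for path, cache in zip(paths, caches):
        try:
            contents.append(open_source(path))
        except IOError:
            if cache is None:
                raise  # no cache to fall back to
            contents.append(open_source(cache))
    return contents
```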

# Language support for descriptors and data

## Overview

Language support is a distinct concern from translation support. Language support deals with declaring the default language of a descriptor and of the data it contains in the resources array; it makes no claim about the presence of translations when one or more languages are supported in a descriptor or in data. By introducing a `languages` array to any descriptor, we can declare the default language, and any other languages that `SHOULD` be found in the descriptor and the data.

## Implementations

There are no known implementations of this pattern at present.

## Specification

Any Frictionless Data descriptor can declare the language configuration of its metadata and data with the `languages` array.

`languages` `MUST` be an array, and the first item in the array is the default (non-translated) language.

If no `languages` array is present, the default language is English (`en`); a descriptor with no `languages` array is therefore equivalent to:

```json
{
"name": "my-package",
"languages": ["en"]
}
```

The presence of a `languages` array does not ensure that the metadata or the data has translations for all supported languages.

The descriptor and data sources `MUST` be in the default language. The descriptor and data sources `MAY` have translations for the other languages in the array, using the same language codes. If a translation is not present, implementing code `MUST` fall back to the default language string.

Example usage of `languages`, implemented in the metadata of a descriptor:

```
{
"name": "sun-package",
"languages": ["es", "en"],
"title": "Sol"
}

# which is equivalent to
{
"name": "sun-package",
"languages": ["es", "en"],
"title": {
"": "Sol",
"en": "Sun"
}
}
```
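The fallback rule can be sketched with a hypothetical `get_translated` helper; following the example above, the `""` key holds the default-language string in the translated form:

```python
# Sketch of the default-language fallback for translated descriptor
# properties. `get_translated` is an illustrative helper, not part of
# any library.

def get_translated(descriptor, prop, lang):
    value = descriptor.get(prop)
    if not isinstance(value, dict):
        return value  # untranslated value: always the default language
    default_lang = descriptor.get("languages", ["en"])[0]
    default = value.get("", value.get(default_lang))
    if lang == default_lang:
        return default
    return value.get(lang, default)  # fall back when no translation

pkg = {
    "name": "sun-package",
    "languages": ["es", "en"],
    "title": {"": "Sol", "en": "Sun"},
}
# get_translated(pkg, "title", "en") -> "Sun"
# get_translated(pkg, "title", "fr") -> "Sol" (falls back to default)
```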

Example usage of `languages` implemented in the data described by a resource:

```
# resource descriptor
{
"name": "solar-system",
"path": "solar-system.csv",
"fields": [
...
],
"languages": ["es", "en", "he", "fr", "ar"]
}

# data source
# some languages have translations, some do not
# assumes a certain translation pattern, see the related section
id,name,name@fr,name@he,name@en
1,Sol,Soleil,שמש,Sun
2,Luna,Lune,ירח,Moon
```

# Translation support for data

## Overview

Following on from a general pattern for language support, and the explicit support of metadata translations in Frictionless Data descriptors, it would be desirable to support translations in source data.

We currently have two patterns for this in discussion. Both patterns arise from real-world implementations that are not specifically tied to Frictionless Data.

One pattern suggests storing translations inline with the source data, reserving the `@` symbol in field names to denote translations.

The other describes a pattern for storing additional translation sources, co-located with the "source" file described in a descriptor `path`.

## Implementations

There are no known implementations of this pattern in the Frictionless Data core libraries at present.

## Specification

### Inline

**Uses a column naming convention for accessing translations**.

Tabular resource descriptors support translations using `{field_name}@{lang_code}` syntax for translated field names. `lang_code` `MUST` be present in the `languages` array that applies to the resource.

Any field whose name contains the `@` symbol `MUST` be a translation field for another field of data, and its name `MUST` be parsable according to the `{field_name}@{lang_code}` pattern.

If a translation field is found in the data that does not have a corresponding `field` (e.g.: `title@es` but no `title`), then the translation field `SHOULD` be ignored.

If a translation field is found in the data that uses a `lang_code` *not* declared in the applied `languages` array, then the translation field `SHOULD` be ignored.

Translation fields `MUST NOT` be described in a schema `fields` array.

Translation fields `MUST` match the `type`, `format` and `constraints` of the field they translate, with a single exception: Translation fields are never required, and therefore `constraints.required` is always `false` for a translation field.
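The rules above can be sketched as a small header-parsing helper; the function name and inputs are illustrative, not part of any library:

```python
# Sketch of the inline-translation rules: find translation columns in a
# header using the {field_name}@{lang_code} convention, ignoring columns
# whose base field or language code is unknown.

def translation_fields(header, schema_fields, languages):
    """Map each base field to its {lang_code: column_name} translations."""
    result = {}
    for column in header:
        if "@" not in column:
            continue  # ordinary field, not a translation
        field_name, _, lang_code = column.rpartition("@")
        if field_name not in schema_fields or lang_code not in languages:
            continue  # SHOULD be ignored per the rules above
        result.setdefault(field_name, {})[lang_code] = column
    return result

header = ["id", "name", "name@fr", "name@he", "name@en", "name@xx"]
fields = translation_fields(header, ["id", "name"], ["es", "en", "he", "fr"])
# -> {"name": {"fr": "name@fr", "he": "name@he", "en": "name@en"}}
```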

### Co-located translation sources

**Uses a file storage convention for accessing translations**.

To be contributed by @jheeffer

- Has to handle local and remote resources
- Has to be explicit about the translation key/value pattern in the translation files

```
# local
data/file1.csv
data/lang/file1-en.csv
data/lang/file1-es.csv

# remote
http://example.com/data/file2.csv
http://example.com/data/lang/file2-en.csv
http://example.com/data/lang/file2-es.csv
```
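Assuming the naming convention in the example layout above (a `lang/` directory holding `{name}-{lang}.csv` files), deriving translation-file locations from a source path might look like this; the exact convention is still to be specified:

```python
# Sketch: derive co-located translation-file paths for a source path,
# assuming the lang/ directory and {name}-{lang}.csv convention from
# the example layout. Works for local and remote (URL) paths alike.
import posixpath

def translation_paths(path, languages):
    directory, filename = posixpath.split(path)
    stem, ext = posixpath.splitext(filename)
    return {
        lang: posixpath.join(directory, "lang", "{0}-{1}{2}".format(stem, lang, ext))
        for lang in languages
    }

# translation_paths("data/file1.csv", ["en", "es"])
# -> {"en": "data/lang/file1-en.csv", "es": "data/lang/file1-es.csv"}
```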
