Change most ednotes to issue markers, and remove some closed issues. #115

Merged
merged 3 commits on Jun 9, 2023
46 changes: 6 additions & 40 deletions spec/index.html
@@ -260,19 +260,6 @@ <h2>Uses of Dataset Canonicalization</h2>
used within that N-Quads document with those issued in the
<a>normalized dataset</a>.</p>

<div class="ednote">
<p>Add descriptions for relevant historical discussions and prior art:</p>
<dl>
<dt>[[DesignIssues-Diff]]</dt>
<dd>TimBL's design note on problems with Diff.</dd>
<dt>[[eswc2014Kasten]]</dt>
<dd>A Framework for Iterative Signing of Graph Data on the Web.</dd>
<dt>[[Hogan-Canonical-RDF]]</dt>
<dd>Aidan Hogan's paper on canonicalizing RDF.</dd>
<dt>[[HPL-2003-142]]</dt>
<dd>Jeremy J. Carroll's paper on signing RDF graphs.</dd>
</dl>
</div>
<div class="issue" data-number="19"></div>
</section>

@@ -465,9 +452,7 @@ <h2>Canonicalization</h2>
"Universal RDF Dataset Canonicalization Algorithm 2015"
(<abbr title="Universal RDF Dataset Canonicalization Algorithm 2015"><dfn class="export">URDNA2015</dfn></abbr>).</p>
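For orientation, a hedged example of invoking URDNA2015 from Python through the pyld library (the options dict follows pyld's documented `normalize` API; assumes `pip install pyld`):

```python
# Illustrative only: canonicalize a JSON-LD document with URDNA2015
# using pyld; the result is canonical N-Quads with c14n-prefixed labels.
from pyld import jsonld

doc = {
    "@context": {"ex": "http://example.org/"},
    "ex:p": {"ex:q": "value"}  # nested node becomes a blank node
}
canonical_nquads = jsonld.normalize(
    doc, {'algorithm': 'URDNA2015', 'format': 'application/n-quads'})
print(canonical_nquads)
```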

<p class="ednote">This statement is overly prescriptive and does not include normative language.
This spec should describe the theoretical basis for graph canonicalization and describe
behavior using normative statements. The explicit algorithms should follow as an informative appendix.</p>
<p class="issue" data-number="112"></p>

<section id="canon-overview" class="informative">
<h3>Overview</h3>
@@ -511,7 +496,7 @@ <h2>Canonicalization State</h2>
<dd>An <a>identifier issuer</a>, initialized with the
prefix <code>c14n</code>, for issuing canonical
<a>blank node identifiers</a>.
<div class="ednote">
<div class="note">
Mapping all <a>blank nodes</a> to use this
identifier scheme means that an <a>RDF dataset</a> composed of two
different <a>RDF graphs</a> will issue different
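For illustration, a minimal sketch of the prefix-based issuer described above (an assumption-level sketch; the spec's issuer state additionally records the order in which identifiers are issued):

```python
# Minimal sketch of a prefix-based identifier issuer (illustrative only).
class IdentifierIssuer:
    def __init__(self, prefix='c14n'):
        self.prefix = prefix
        self.counter = 0
        self.issued = {}  # existing identifier -> issued identifier

    def issue(self, existing):
        # Reuse the mapping if this blank node was already seen;
        # otherwise mint the next identifier in sequence.
        if existing not in self.issued:
            self.issued[existing] = f'{self.prefix}{self.counter}'
            self.counter += 1
        return self.issued[existing]
```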
@@ -562,15 +547,8 @@ <h2>Blank Node Identifier Issuer State</h2>
<h2>Canonicalization Algorithm</h2>

<p class="ednote">At the time of writing, there are several open issues that will determine important details of the canonicalization algorithm.</p>
<div class="issue" data-number="7"></div>
<div class="issue" data-number="8"></div>
<div class="issue" data-number="10"></div>
<div class="issue" data-number="11"></div>
<div class="issue" data-number="16"></div>
<div class="issue" data-number="84"></div>
<div class="issue" data-number="87"></div>
<div class="issue" data-number="88"></div>
<div class="issue" data-number="89"></div>
<div class="issue" data-number="98"></div>

<p>The canonicalization algorithm converts an <a>input dataset</a>
into a <a>normalized dataset</a>. This algorithm will assign
@@ -1806,9 +1784,7 @@ <h2>Hash N-Degree Quads</h2>
This process proceeds in ever greater degrees of indirection until
a unique hash is obtained.</p>
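As a sketch of the first-degree step this escalation starts from (simplified relative to the spec; term serialization is abbreviated and quads are assumed pre-serialized except for blank nodes):

```python
# Illustrative first-degree hash: quads mentioning the reference blank
# node are serialized with abstracted labels, sorted, and hashed.
import hashlib

def hash_first_degree(reference, quads):
    """`quads` is a list of term tuples; blank nodes start with '_:'."""
    lines = []
    for quad in quads:
        if reference not in quad:
            continue
        parts = []
        for term in quad:
            if term == reference:
                parts.append('_:a')   # the blank node being hashed
            elif term.startswith('_:'):
                parts.append('_:z')   # every other blank node
            else:
                parts.append(term)    # IRI or literal, already serialized
        lines.append(' '.join(parts) + ' .\n')
    return hashlib.sha256(''.join(sorted(lines)).encode()).hexdigest()
```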

<p class="ednote">The 'path' terminology could also be changed to better
indicate what a path is (a particular deterministic serialization for
a subgraph/subdataset of nodes without globally-unique identifiers).</p>
<p class="issue" data-number="113"></p>

<section id="hash-nd-quads-overview" class="informative">
<h3>Overview</h3>
@@ -2243,16 +2219,6 @@ <h3>Examples</h3>
<section id="hash-nd-quads-algorithm">
<h3>Algorithm</h3>

<div class="issue" data-number="16">
An additional input to this algorithm should be added that
allows it to be optionally skipped, throwing an error if any
equivalent related hashes were produced that must be permuted
during step 5.4.4. In practical use, this step should never be
encountered, so as a security measure it could be turned off,
refusing to canonicalize datasets that would require it.
</div>
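The removed note proposes an opt-out; a hypothetical sketch of such a guard (all names are invented for illustration, not from the spec):

```python
# Hypothetical guard for the permutation step (step 5.4.4): refuse to
# canonicalize when multiple blank nodes share an identical related hash.
class PermutationRequiredError(Exception):
    pass

def guard_permutation(related_groups, allow_permutations=False):
    """`related_groups` maps a related hash to the blank nodes sharing it."""
    for hash_value, group in related_groups.items():
        if len(group) > 1 and not allow_permutations:
            raise PermutationRequiredError(
                f'{len(group)} blank nodes share related hash {hash_value}')
```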

<p>The inputs to this algorithm are the <a>canonicalization state</a>,
the <var>identifier</var> for the <a>blank node</a> to
recursively hash quads for, and path identifier <var>issuer</var> which is
@@ -2817,7 +2783,7 @@ <h3>Dataset Poisoning</h3>

<section id="use-cases" class="informative">
<h2>Use Cases</h2>
<p class="ednote">TBD</p>
<p class="issue" data-number="110"></p>
</section>

<section id="examples" class="informative">
@@ -3417,7 +3383,7 @@ <h2>Acknowledgements</h2>

<p data-include="common/participants.html"></p>

<p class="ednote">Acknowledge CCG members.</p>
<p class="issue" data-number="114"></p>
</section>

</body>