Use Documenter #170
Conversation
Codecov Report

```
@@           Coverage Diff           @@
##           master     #170   +/-   ##
=======================================
  Coverage   88.34%   88.34%
=======================================
  Files           6        6
  Lines        1047     1047
=======================================
  Hits          925      925
  Misses        122      122
```

Continue to review full report at Codecov.
Personally, I think that Documenter is a bit overkill for this package since the exposed API is so small (meaning the extra indirection over just looking at the README is significant). Is there any concrete reason why this would be an improvement?
Granted, what prompted this PR was not knowing what one of these identifiers did. Using Documenter means that you can meaningfully cross-reference identifiers. To the point about the exposed API:
```julia
export tokenize, untokenize, Tokens
```

and

```julia
startpos(t)::Tuple{Int, Int}  # row and column where the token starts
endpos(t)::Tuple{Int, Int}    # row and column where the token ends
startbyte(t)::Int             # byte offset where the token starts
endbyte(t)::Int               # byte offset where the token ends
untokenize(t)::String         # string representation of the token
kind(t)::Token.Kind           # kind of the token
exactkind(t)::Token.Kind      # exact kind of the token
```

However, other packages are using more than just those exports and what is documented in the README:

```julia
import Tokenize.Tokens: RawToken, AbstractToken, iskeyword, isliteral, isoperator, untokenize
import Tokenize.Lexers: Lexer, peekchar, iswhitespace
```

```julia
t = CSTParser.Tokenize.Lexers.next_token(ts)
```

```julia
while !Tokenize.Lexers.eof(ts)
```

A lot of these have docstrings already. Some of them might be missing a docstring, or have a self-documenting name. Some of them should be rewritten to use the exposed interface (for example, the `next_token` and `eof` calls above).
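A minimal sketch of such a rewrite, assuming the accessors listed above are reachable through the exported `Tokens` module (illustrative code, not taken from those packages):

```julia
using Tokenize

# The lexer returned by `tokenize` is iterable, so consumers do not need
# to drive it manually with Lexers.next_token / Lexers.eof.
for t in tokenize("x + 1")
    println(Tokens.kind(t), " at ", Tokens.startpos(t), " => ", repr(untokenize(t)))
end
```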
> the exposed API

The one that is documented.

> other packages are using more than just those exports

They use internals of the package that have been found by looking at the source code. Standard caveats apply.
You mean e.g. the functions that have docstrings?
No, I mean the ones that are in the documentation (the README). There can be a discussion about exposing more inside Tokenize.jl as public, but I just want to be clear that anything that is not listed in the documentation is considered internal and subject to change.
Okay, perfect. There are already docstrings on not-exposed functions. Why are they there? And assuming that we're not going to remove them, why not generate the "documentation" (not the README) that displays them nicely?

It's been discussed elsewhere a few times that in the Julia ecosystem there's often a conflation among:

1. having a docstring (being documented),
2. being part of the public API,
3. being exported.
Is that fair to say? And if so, this PR is just about 1, so it's not immediately relevant that this package has a small exposed interface (2 and 3).

EDIT: I re-ordered 1-3 to reflect the nesting relationship that I expect. 3 should be a subset of 2, which should be a subset of 1. To me, that means anything can be documented without consequence. I don't know of a way in Julia itself ("in band") to distinguish between 1 and 2. I think it's fine if this package uses the README ("out of band") to delineate items in 2.
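A small sketch of the 1-vs-2 distinction (hypothetical code, not from Tokenize.jl): a docstring can be attached to any function, exported or not, and nothing in the language itself marks whether the function is public.

```julia
module Demo

export exposed

"An exported, documented function: 1, 2 and 3."
exposed() = internal()

"A documented helper; the code alone cannot say whether it is public: 1, perhaps 2."
internal() = 42

end

# Both docstrings are retrievable the same way, in band:
@doc Demo.exposed
@doc Demo.internal
```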
I did not add a documentation badge to the README since there isn't much documentation yet. But with this scaffolding in place, it will be easier to add some documentation gradually.
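For reference, a minimal sketch of what such scaffolding typically looks like, assuming the conventional `docs/` layout (file names and options are illustrative, not necessarily this PR's exact configuration):

```julia
# docs/make.jl — illustrative Documenter setup, not necessarily what this PR contains.
using Documenter
using Tokenize

makedocs(
    sitename = "Tokenize.jl",
    modules  = [Tokenize],             # Documenter warns about docstrings not included in the pages
    pages    = ["Home" => "index.md"],
)
```

Pages under `docs/src/` can then splice docstrings in with `@docs` blocks and link identifiers with `@ref`, which is what makes the cross-referencing mentioned above possible.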