Use a more standard lexer/parser architecture. The current implementation already has a similar `CommentScanner`/`TODOScanner` split, but it could be cleaner.

Some thoughts:
Don't use regex to parse TODO comments. Instead, have a lexer generate lexemes that a parser can then assemble into full TODOs.
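As a rough sketch of what that split could look like (the package and names here are hypothetical, not an existing API), the lexer would emit typed lexemes and the parser would collapse a comment-start / TODO-label / text run into a single TODO:

```go
package todolex

// lexemeType enumerates the kinds of lexemes the lexer could emit.
// These names are illustrative only.
type lexemeType int

const (
	lexemeCommentStart lexemeType = iota // e.g. "//", "/*", "#"
	lexemeTodoLabel                      // e.g. "TODO", "FIXME"
	lexemeText                           // free-form comment text
	lexemeCommentEnd                     // e.g. "*/" or end of line
	lexemeEOF
)

// lexeme is a single unit of lexer output; the parser assembles
// consecutive lexemes into complete TODO entries.
type lexeme struct {
	typ  lexemeType
	val  string
	line int
}
```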
Rob Pike's idea of having states be functions was neat, but I'm not sure I like it when a state has to hold data. Instead, maybe make the state a simple interface with a `Run` method, which would let states hold data more easily:
```go
type state interface {
	Run() state
}
```
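For example, a toy state machine under this interface might look like the following. The `countdown` state is purely illustrative, but it shows how a state can carry its own data and how a loop drives the machine until a state returns nil:

```go
package main

import "fmt"

// state performs one step of work and returns the next state,
// or nil when there is nothing left to do.
type state interface {
	Run() state
}

// countdown is a toy state that holds its own data (a counter).
type countdown struct{ n int }

func (c *countdown) Run() state {
	fmt.Println("n =", c.n)
	if c.n == 0 {
		return nil
	}
	return &countdown{n: c.n - 1}
}

func main() {
	// Drive the machine until a state returns nil.
	var s state = &countdown{n: 3}
	for s != nil {
		s = s.Run()
	}
}
```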
Consider building a reusable lexer/parser package using Go generics.
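One possible shape for that, purely as a sketch and not a committed API (package and type names are made up), parameterizes the token kind so scanners with different token sets can reuse the same machinery:

```go
package lexparse

// Token is parameterized on its kind so the same lexer machinery can be
// reused by scanners with different token sets.
type Token[K comparable] struct {
	Kind K
	Val  string
	Pos  int
}

// State is a lexing state; Run performs one step and returns the next
// state, or nil when lexing is finished.
type State[K comparable] interface {
	Run(l *Lexer[K]) State[K]
}

// Lexer holds the input and the tokens emitted so far.
type Lexer[K comparable] struct {
	Input  string
	Pos    int
	Tokens []Token[K]
}

// Emit records a token at the current position.
func (l *Lexer[K]) Emit(kind K, val string) {
	l.Tokens = append(l.Tokens, Token[K]{Kind: kind, Val: val, Pos: l.Pos})
}

// Run drives states until one returns nil.
func (l *Lexer[K]) Run(start State[K]) {
	for s := start; s != nil; {
		s = s.Run(l)
	}
}
```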
I can also perhaps just read from a byte reader and check whether individual bytes match the starting characters for comments, strings, etc. Pretty much all languages use ASCII characters for these delimiters, and ASCII characters are single bytes. I can then scan to the end of the line (or to the end of a multi-line comment) to get the comment bytes and convert only those to UTF-8, so I don't have to decode every byte in every file.
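A minimal sketch of that idea, covering only line comments and assuming a single ASCII marker such as "//" (multi-line comments would need a second marker and a scan for the closing delimiter):

```go
package main

import (
	"bufio"
	"bytes"
	"errors"
	"fmt"
	"io"
	"strings"
)

// scanLineComments reads raw bytes and looks for an ASCII comment marker.
// Only the matched comment span is converted to a string, so the bulk of
// the file is never decoded.
func scanLineComments(r *bufio.Reader, marker []byte) ([]string, error) {
	var comments []string
	for {
		line, err := r.ReadBytes('\n')
		if i := bytes.Index(line, marker); i >= 0 {
			comments = append(comments, string(bytes.TrimRight(line[i:], "\r\n")))
		}
		if err != nil {
			if errors.Is(err, io.EOF) {
				return comments, nil
			}
			return comments, err
		}
	}
}

func main() {
	src := "x := 1 // TODO: fix this\ny := 2\n"
	got, _ := scanLineComments(bufio.NewReader(strings.NewReader(src)), []byte("//"))
	fmt.Println(got) // [// TODO: fix this]
}
```

A real scanner would also need to skip string literals so a marker inside a string isn't treated as a comment, which is where the state machine above comes back in.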
Rob Pike gave a good talk on the subject:
https://www.youtube.com/watch?v=HxaD_trXwRE
https://go.dev/talks/2011/lex.slide