
Status tags #55

Open
tbreloff opened this issue Feb 2, 2016 · 8 comments

@tbreloff (Contributor) commented Feb 2, 2016

Continuing the discussion from a884fe9. Here are some proposed formats:

Example 1. + SGP4.jl {status_tag}:: Julia wrapper for the SGP4 satellite propagation model.

Example 2. + SGP4.jl :: Julia wrapper for the SGP4 satellite propagation model. {status_tag}

Example 3. + SGP4.jl :: Julia wrapper for the SGP4 satellite propagation model. {STATUS_TAG_CAPS_ON}

I think example 2 is best. The question is what is best to put in the brackets. I think there are maybe 3 categories that we could consider tracking:

  • Usability (does it do what it says it does? is it easy to figure out? good documentation? should a 3rd party bother with it, or is it really only intended to be used by the package author?)
  • Robustness (is it well tested? are there lots of bugs? can it be used in production environments?)
  • Activity (is it well maintained? are issues/PRs responded to? does it keep up with new Julia releases?)

I think that a 1-5 numeric scale (5 is best) for each category would go a long way to describe the state of a package, and would help users quickly narrow down a package search when browsing the lists.

Here's a sample (I'll create a PR once we decide on a format):

+ [Blox.jl](https://github.com/tbreloff/Blox.jl) :: Views of concatenated AbstractArrays in Julia. {Usable: 2, Robust: 2, Active: 1}
...
+ [QuickStructs.jl](https://github.com/tbreloff/QuickStructs.jl) :: Several data structures with goals of O(1) for important operations. {Usable: 5, Robust: 4, Active: 1}
...
+ [OnlineAI.jl](https://github.com/tbreloff/OnlineAI.jl) :: Machine learning for sequential/streaming data. {Usable: 3, Robust: 3, Active: 3}
...
+ [VisualRegressionTests.jl](https://github.com/tbreloff/VisualRegressionTests.jl) :: Automated integrated regression tests for graphics libraries. {Usable: 3, Robust: 3, Active: 1}
...
+ [AtariAlgos.jl](https://github.com/tbreloff/AtariAlgos.jl) :: Models/algorithms for use with the Arcade Learning Environment (ALE). {Usable: 5, Robust: 4, Active: 2}
...
+ [Plots.jl](https://github.com/tbreloff/Plots.jl) :: An API/interface and wrapper that sits above other plotting packages (backends) and gives the user simple, consistent, and flexible plotting commands. {Usable: 5, Robust: 4, Active: 5}
...
+ [Qwt.jl](https://github.com/tbreloff/Qwt.jl) :: 2D plotting, drawing, and GUIs using Qwt and Qt. {Usable: 4, Robust: 3, Active: 1}
...
+ [Unums.jl](https://github.com/tbreloff/Unums.jl) :: Unum (Universal Number) types and operations. {Usable: 1, Robust: 1, Active: 1}
...
+ [CTechCommon.jl](https://github.com/tbreloff/CTechCommon.jl) :: Some functionality to be shared among packages. {Usable: 4, Robust: 4, Active: 1}
...
+ [Qt5.jl](https://github.com/tbreloff/Qt5.jl) :: A wrapper around the C++ library `Qt5`. {Usable: 1, Robust: 1, Active: 1}
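
To make the tag format concrete, here's a minimal sketch of how an entry in the Example 2 format could be parsed programmatically (purely illustrative; `parse_entry` is a hypothetical helper name, not existing tooling):

```julia
# Hypothetical parser for the proposed "Example 2" entry format:
#   + [Name.jl](url) :: Description. {Usable: 2, Robust: 2, Active: 1}
function parse_entry(line::AbstractString)
    m = match(r"^\+ \[(?<name>[^\]]+)\]\((?<url>[^)]+)\) :: (?<desc>.*?)\s*\{(?<tags>[^}]*)\}\s*$", line)
    m === nothing && return nothing          # not a tagged entry
    tags = Dict{String,Int}()
    for t in eachmatch(r"(\w+):\s*(\d+)", m[:tags])
        tags[t[1]] = parse(Int, t[2])        # e.g. "Usable" => 2
    end
    (name = m[:name], url = m[:url], desc = m[:desc], tags = tags)
end

# parse_entry("+ [Blox.jl](https://github.com/tbreloff/Blox.jl) :: Views of concatenated AbstractArrays in Julia. {Usable: 2, Robust: 2, Active: 1}")
# returns a named tuple whose .tags is Dict("Usable" => 2, "Robust" => 2, "Active" => 1)
```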

Let me know what you think!

@svaksha (Owner) commented Feb 2, 2016

@tbreloff looks good. We should also have a legend that explains these tags and the rating scale, else people other than the author won't grok them :)
Feel free to ask other authors on the users/dev list to tag their packages. For those that don't want to, it's fine by me.

@svaksha (Owner) commented Feb 2, 2016

I'll push the legend and scale tags a bit later today. Thanks.

@svaksha (Owner) commented Feb 2, 2016

How about adding another tag for licenses? Sometimes people simply dump code sans any license, which, strictly speaking, is interpreted as non-free. Any code repository without a software license is basically not freely re-usable, except in private - technically you cannot fork it, etc. GitHub had a handy link somewhere but I can't seem to find it.
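
For instance (entry and values purely illustrative), the tag could simply grow a License field:

+ SGP4.jl :: Julia wrapper for the SGP4 satellite propagation model. {Usable: 4, Robust: 3, Active: 2, License: MIT}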

With regards to the tags, can we replace the term 'Robust' with either 'Tests' or 'QA' (or simply 'Quality'), which conveys the message more directly, imho?

@svaksha (Owner) commented Feb 3, 2016

Please see this sub-section (https://github.com/svaksha/Julia.jl#status) that I updated with our discussion.
I thought about it some more, and the Usability and Activity tags seem to overlap as far as the features go. What do you think?

@tbreloff (Contributor, Author) commented Feb 3, 2016

Ah, thanks for adding this section. It seems like you switched some of my notes among the categories, so yes, they seem to overlap now.

My thinking with the 3 categories is this:

Usable: can I practically use this package today? (i.e. I can figure out how to use the repo and it actually does something useful)
Robust: can I depend on this package for mission-critical projects? (i.e. it is well tested)
Active: can I expect this repo to improve in the future and keep up with changes in the language? (i.e. the package author(s) have time to maintain and improve it, or there's an expectation of someone continuing that work, and a reasonable expectation of someone responding to issues and PRs)

Do you like this separation?


@svaksha (Owner) commented Feb 3, 2016

On Wed, Feb 3, 2016 at 12:56 AM, Tom Breloff [email protected] wrote:

> Ah, thanks for adding this section. It seems like you switched some of my notes among the categories, so yes, they seem to overlap now.

On first read, some questions sounded similar, but the explanation below clarifies it.

> My thinking with the 3 categories is this:
>
> Usable: can I practically use this package today? (i.e. I can figure out how to use the repo and it actually does something useful)

Would package stability and release cycles be taken into account?

> Robust: can I depend on this package for mission-critical projects? (i.e. it is well tested)

By well-tested, do you refer to:

1. the unit tests for the package, or
2. it being tested and maintained across different Julia releases, or
3. it being tested and maintained on various platforms (Linux/Win/OSX)?

> Active: can I expect this repo to improve in the future and keep up with changes in the language? (i.e. the package author(s) have time to maintain and improve it, or there's an expectation of someone continuing that work, and a reasonable expectation of someone responding to issues and PRs)

I suspect this will be upbeat, but the ground reality is that it's a dismal figure for non-org-maintained packages. Oh well, we can only try to get an honest shot at understanding the expectations, I suppose.

> Do you like this separation?

Much more clear, thanks. Hopefully my questions are geared towards fine-tuning the tag grading.

SVAKSHA ॥ http://about.me/svaksha

@tbreloff (Contributor, Author) commented Feb 3, 2016

> Would package stability and release cycles be taken into account?

Hmm... In my head this falls more into the "Active" category, but I suppose if it falls behind too much it is eventually "unusable". Just a thought... it would be nice to store the date the tag was updated, as that's helpful to know how usable a package might be...

> 1. it being tested and maintained across different Julia releases, or
> 2. it being tested and maintained on various platforms (Linux/Win/OSX)?

These are either the "Active" category, or maybe a 4th category yet-to-be-named? When I think "Robust", I think "I can trust this thing to behave as expected... no surprises". It might only work on one system with specific versions of every library, but it will work. My background is in algorithmic trading. I sometimes kept systems unchanged for many months to be absolutely sure I wasn't going to be surprised (and lose lots of money in the process).

I suppose the Active category could be split into "keep it working" vs "add features", as some projects will not improve, but it takes active maintenance to keep up with changes to Julia itself. However, I think it's asking a bit much for that much specificity. You can assume that a 5 means it'll work in the future, and a 1 means it won't.

> Oh well, we can only try to get an honest shot at understanding the expectations

Exactly... it's ok if most of these repos get a 1 in the active category. The important thing is that someone is able to look through the list and know immediately that some package probably won't suit their needs.

I've had the thought that all of this info belongs in a database (or json, csv, etc) so that one could possibly filter/sort the lists. It might be possible to programmatically scrape the existing markdown to put all this in one place. If I get a chance in the next couple of days I might take a pass at that. A bonus is that we could populate a Google doc with the table and let people fill in tags and make notes without going through the PR process.
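
As a rough sketch of what I mean (purely illustrative: it assumes the tag format proposed above, reuses the hypothetical `parse_entry` helper from earlier, and the file/output names are made up):

```julia
# Hypothetical scraper: walk the list's markdown files, pull out tagged
# entries via parse_entry (sketched above), and dump them to a CSV that
# can be filtered/sorted in a spreadsheet or loaded into a database.
function scrape_to_csv(mdfiles, out::AbstractString)
    open(out, "w") do io
        println(io, "name,url,usable,robust,active")
        for file in mdfiles, line in eachline(file)
            e = parse_entry(line)
            e === nothing && continue        # skip untagged lines
            println(io, join([e.name, e.url,
                              get(e.tags, "Usable", ""),
                              get(e.tags, "Robust", ""),
                              get(e.tags, "Active", "")], ","))
        end
    end
end

# e.g. scrape every top-level .md file in a checkout of the repo:
scrape_to_csv(filter(f -> endswith(f, ".md"), readdir("."; join=true)), "packages.csv")
```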

@svaksha (Owner) commented Feb 10, 2016

Hi,

Sorry, I've been busy and still am, but quickly...

On Wed, Feb 3, 2016 at 1:46 AM, Tom Breloff [email protected] wrote:

> Hmm... In my head this falls more into the "Active" category, but I suppose if it falls behind too much it is eventually "unusable". Just a thought... it would be nice to store the date the tag was updated, as that's helpful to know how usable a package might be...

Do we know if package authors are willing to do this in such detail?

> These are either the "Active" category, or maybe a 4th category yet-to-be-named? When I think "Robust", I think "I can trust this thing to behave as expected... no surprises". It might only work on one system with specific versions of every library, but it will work. My background is in algorithmic trading. I sometimes kept systems unchanged for many months to be absolutely sure I wasn't going to be surprised (and lose lots of money in the process).

> I suppose the Active category could be split into "keep it working" vs "add features", as some projects will not improve, but it takes active maintenance to keep up with changes to Julia itself. However, I think it's asking a bit much for that much specificity. You can assume that a 5 means it'll work in the future, and a 1 means it won't.

> Oh well, we can only try to get an honest shot at understanding the expectations

IMO, it will never be exact. Software changes all the time (as it should, else we will be out of jobs), so it's harder to unofficially label packages with too much specific detail unless the author or a core dev is willing to provide the status.

> Exactly... it's ok if most of these repos get a 1 in the active category. The important thing is that someone is able to look through the list and know immediately that some package probably won't suit their needs.

> I've had the thought that all of this info belongs in a database (or json, csv, etc) so that one could possibly filter/sort the lists. It might be possible to programmatically scrape the existing markdown to put all this in one place. If I get a chance in the next couple of days I might take a pass at that. A bonus is that we could populate a Google doc with the table and let people fill in tags and make notes without going through the PR process.

Please feel free to go ahead with this. If you announce it on the users list, authors who wish to tag their package can provide the status data too.

Best, SVAKSHA ॥ http://about.me/svaksha
