Add Contributing.md file & edit related files #11

Open · 4 tasks
jackiekazil opened this issue Nov 16, 2022 · 4 comments

Comments

@jackiekazil
Member

jackiekazil commented Nov 16, 2022

We need to add a contributing.md file to this repo. This is an extension of this ticket: projectmesa/mesa#963 (comment), but it can be broken out separately.

AC:

  • A contributing.md file that helps people contribute.
  • Update the main Mesa docs, as well as contributing.md in the main mesa & mesa-geo repos, to remove anything that might be confusing and to add whatever is needed for clarity.
  • Include the outcome of the discussion on pinning model dependencies - Examples: To pin mesa version or not to pin? mesa#1530
  • Add some text about how a README for an example should be formatted and what it should include.
@wang-boyu
Member

Perhaps the contributing.md file could be different from those in Mesa & Mesa-Geo? As mentioned in #61, both AnyLogic and Stella have two types of example models: one from the official team, and the other from the community.

We could encourage the community to contribute their Mesa models to this repository (maybe from a class of students as part of their homework assignments) by lowering the bar for code quality (e.g., algorithmic complexity doesn't have to be optimized), so long as the models run and produce the expected behaviors. We could require that the documentation contain certain mandatory sections (e.g., a short summary, what the agents are, etc.).
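As a sketch of what that minimal bar could look like in practice, a community example might only need to pass a smoke test along these lines. The `WalkerModel` here is a hypothetical stand-in written in plain Python for illustration, not an actual model in this repository, and the specific assertions are assumptions about what "runs and produces expected behaviors" might mean:

```python
# Hypothetical smoke test showing a minimal acceptance bar for a
# community-contributed model: it constructs, steps, and keeps its
# state valid. "WalkerModel" is a toy stand-in, not a real example.
import random


class WalkerModel:
    """Toy stand-in: agents random-walk on a width x height torus."""

    def __init__(self, n_agents, width=10, height=10, seed=1):
        self.random = random.Random(seed)
        self.width, self.height = width, height
        self.positions = [
            (self.random.randrange(width), self.random.randrange(height))
            for _ in range(n_agents)
        ]
        self.steps = 0

    def step(self):
        # Each agent moves one cell in a random cardinal direction,
        # wrapping around the torus edges.
        moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
        self.positions = [
            ((x + dx) % self.width, (y + dy) % self.height)
            for (x, y) in self.positions
            for (dx, dy) in [self.random.choice(moves)]
        ]
        self.steps += 1


def test_model_runs():
    """Minimal bar: the model runs without error and stays in a sane state."""
    model = WalkerModel(n_agents=5)
    for _ in range(50):
        model.step()
    assert model.steps == 50
    assert len(model.positions) == 5
    assert all(0 <= x < 10 and 0 <= y < 10 for x, y in model.positions)
```

A test this small is cheap for a student contributor to write, yet it gives CI something concrete to run on every community example.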

@jackiekazil
Member Author

@wang-boyu I like that idea a lot!
Do you have any thoughts on marking / flagging officially "endorsed" ones?

And on that note, I will also throw in another thought -- what are the purposes of our examples?
The initial purpose was to demonstrate features. Now I would need help to easily tell you where to find a feature.

Thinking out loud - maybe we should list our user goals, then try to figure out the solutions for those.
One goal your post brings up -- given a minimal set of requirements, a user's model is accepted, while "official" examples are held to a more rigorous level of acceptance.

@wang-boyu
Member

Do you have any thoughts on marking / flag officially "endorsed" ones?

We could simply put them into different folders? For instance, a community or 3rdparty folder in this repository for contributed models, and a team_mesa or official folder for our own. I am terrible at naming things.

what are the purposes of our examples?

Demonstrating features is definitely a primary goal of having examples. Prospective users may be attracted by the breadth of examples we have and impressed by what Mesa can offer.

Another possible purpose is to illustrate how models can be developed for certain use cases. If someone wants to develop a specific model, they can download a relevant example and start directly from there.

Yet another purpose could be to promote community (and especially educational) engagement? In this discussion projectmesa/mesa#1577 I mentioned some university courses and project assignments. Maybe what students manage to achieve in these classes could also contribute to our project in the form of community examples, if, of course, we are open to this and provide explicit instructions on how to do so.

@rht
Contributor

rht commented Nov 24, 2023

A relevant read would be https://www.inet.ox.ac.uk/files/JEL-v2.0.pdf, in "Section IV I Challenge and Opportunity: How to Create ABM Community Models?".

I'd say that, in addition to demonstrating features, the repo should harbor classic examples that have been peer reviewed. The Sugarscape {G1, M, T} example has had eyeballs from @tpike3, Rob Axtell, and students from Complexity Explorer, and is a sound starting point for people, analogous to how people use scipy.integrate.odeint knowing it has had numerous bugfixes (the most recent crash-causing one was fixed in 2018). Additionally, this would be training data for language models.

Regarding the peer review process, one model would be how Scholarpedia does it, e.g. a wiki article on the Game of Life co-written by John Conway himself. But I find this problematic, as it is an appeal to authority. I don't have the capacity to ascertain the soundness of projectmesa/mesa#1057, because I haven't implemented it myself, nor have I run the code under the various conditions that could be written as tests confirming the findings in the paper. Tests should be the way to decentralize the audit/review process, in that, soon, machines could write them.
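To make the idea of finding-confirming tests concrete, here is an illustrative sketch in plain Python. The wealth-exchange model, the Gini helper, and the 0.3 threshold are all assumptions for demonstration, not taken from projectmesa/mesa#1057 or any specific paper; the point is only that a qualitative claim ("inequality emerges from random exchange") can be encoded as an assertion anyone, or any machine, can re-run:

```python
# Illustrative "finding-confirming" test: repeated random wealth
# exchange drives the Gini coefficient up from zero. The model and
# the 0.3 threshold are assumptions for demonstration only.
import random


def gini(wealth):
    """Gini coefficient of a list of non-negative wealths."""
    sorted_w = sorted(wealth)
    n = len(sorted_w)
    cum = sum((i + 1) * w for i, w in enumerate(sorted_w))
    return (2 * cum) / (n * sum(sorted_w)) - (n + 1) / n


def run_exchange_model(n_agents=100, n_steps=200, seed=0):
    """Each step, every agent with wealth gives one unit to a random agent."""
    rng = random.Random(seed)
    wealth = [1] * n_agents
    for _ in range(n_steps):
        for i in range(n_agents):
            if wealth[i] > 0:
                j = rng.randrange(n_agents)
                wealth[i] -= 1
                wealth[j] += 1
    return wealth


def test_inequality_emerges():
    """Encode the qualitative finding as a reproducible assertion."""
    assert gini([1] * 100) < 1e-9  # start: perfect equality
    wealth = run_exchange_model()
    assert sum(wealth) == 100      # exchange conserves total wealth
    assert gini(wealth) > 0.3      # end: substantial inequality
```

A reviewer who distrusts the claim doesn't have to trust the author; they rerun the test, vary the seed or parameters, and see whether the assertion still holds.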
