Add a troubleshooting page and add a menu item in menu => help
#99
Comments
Now there is an issue regarding the design of the label scheme. There are several design options for the labels:
Personally, I lean towards option 2, which is …
I have questions about the prerequisite
Why not query the various labels separately to collect all related issues? Suppose we have 50 labels (which would be too many to manage, in my opinion); we could make 50 requests to construct the catalog.
This is a method similar to option 4, where the number of requests made during a full collection depends on both the number of labels and the number of issues (with a maximum of 100 issues returned per request). We may need to perform a full collection in various scenarios, such as multiple runs during a deployment (e.g., getStaticProps and getStaticPaths for different pages, which could potentially be optimized), or when triggering index uploads. On the other hand, if we base the solution on the "troubleshoot" label, the number of requests made during a full collection depends only on the number of issues; I expect it to be around 1 to 3 requests. By adopting this approach, we keep enough remaining quota to avoid exceeding GitHub's rate limits during frequent deployments.
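For illustration, here is a minimal sketch of what a full collection based on a single label could look like. The repo path, label name, and helper names are assumptions, not the project's actual code:

```ts
// Hypothetical sketch: collect every issue carrying a single "troubleshoot" label.
// With per_page=100, a full collection should normally take 1-3 requests.
const REPO = 'nervosnetwork/neuron' // assumed data source
const LABEL = 'troubleshoot'        // assumed label name

export interface Issue {
  number: number
  title: string
  body: string | null
  labels: { name: string }[]
}

export async function fetchTroubleshootIssues(token?: string): Promise<Issue[]> {
  const issues: Issue[] = []
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://api.github.com/repos/${REPO}/issues?labels=${LABEL}&state=open&per_page=100&page=${page}`,
      { headers: token ? { Authorization: `Bearer ${token}` } : {} }
    )
    if (!res.ok) throw new Error(`GitHub API error: ${res.status}`)
    const batch: Issue[] = await res.json()
    issues.push(...batch)
    if (batch.length < 100) break // last page reached
  }
  return issues
}
```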
There is an important point that I forgot to mention:
100 issues/topics are quite enough in our case, I believe. And supposing the issue list is fixed, the count of requests to cover the catalog won't fluctuate too much as long as the labels don't overlap too heavily; it's in the range of 1 to 4 requests. Besides, the catalog could be cached to avoid requesting the issue list repeatedly. With the cache, we only need to update the issue list on a label change event and rebuild the catalog without re-scraping the issue list.
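As a rough sketch of that caching idea (reusing the hypothetical Issue type and fetchTroubleshootIssues helper from the snippet above; none of this is the project's real code):

```ts
import { Issue, fetchTroubleshootIssues } from './issues' // hypothetical module from the previous sketch

// label name -> issues under that label
type Catalog = Map<string, Issue[]>

let cachedCatalog: Catalog | null = null

export async function getCatalog(): Promise<Catalog> {
  if (cachedCatalog) return cachedCatalog
  const issues = await fetchTroubleshootIssues()
  const catalog: Catalog = new Map()
  for (const issue of issues) {
    for (const { name } of issue.labels) {
      if (!catalog.has(name)) catalog.set(name, [])
      catalog.get(name)!.push(issue)
    }
  }
  cachedCatalog = catalog
  return catalog
}

// Dropped only when a label change event arrives, so the issue list
// is not re-fetched on every catalog read.
export function invalidateCatalog(): void {
  cachedCatalog = null
}
```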
We won't have so many articles, and the requests could be reduced by Incremental Static Regeneration (https://nextjs.org/docs/pages/building-your-application/data-fetching/incremental-static-regeneration). We can use Algolia to trigger page generation progressively.
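For reference, ISR in the pages router boils down to returning a revalidate interval from getStaticProps; the data source, import path, and interval below are illustrative only:

```ts
import type { GetStaticProps } from 'next'
import { fetchTroubleshootIssues } from '../utils/issues' // hypothetical helper from the earlier sketch

export const getStaticProps: GetStaticProps = async () => {
  const issues = await fetchTroubleshootIssues()
  return {
    props: { issues },
    // Re-generate the page in the background at most once per hour,
    // instead of fetching from GitHub on every request.
    revalidate: 60 * 60,
  }
}
```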
IMO, we won't have too many issues to collect, and full-text scraping is enough.
Considering the possibility of having multiple submenus, there would likely be more than just a few labels.
I agree with your assessment. That's exactly why we need to consider the frequency of requests. If we use webhook + storage, we don't need to worry about this issue since it would generate only a minimal number of requests.
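A sketch of what the webhook side could look like as a Next.js API route (event filtering is simplified, signature verification is omitted, and invalidateCatalog is the hypothetical helper from the caching sketch above):

```ts
import type { NextApiRequest, NextApiResponse } from 'next'
import { invalidateCatalog } from '../../utils/catalog' // hypothetical module from the caching sketch

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method !== 'POST') return res.status(405).end()

  // GitHub sends an "issues" event with actions such as "labeled" / "unlabeled".
  const event = req.headers['x-github-event']
  if (event === 'issues' && ['labeled', 'unlabeled'].includes(req.body?.action)) {
    invalidateCatalog() // refresh the cached catalog on label changes only
  }
  return res.status(200).json({ ok: true })
}
```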
My plan is to use storage only when our number of issues becomes large enough to necessitate it.
I have implemented ISR, but considering that this project will be deployed together with the official website project, we still need to consider the possibility of hitting request limits during frequent deployments.
I would suggest starting from the simple assumption that "we have limited articles to update", and considering the rate limit only when requests are rejected because of it.
Alright, let's consider this issue once we encounter request rejections. @Keith-CY So do we use a single label, e.g. the "troubleshoot" label mentioned above?
You've mentioned
What if we label this article with …
In fact, I'd prefer to move …
Currently, a simple demo has been implemented: https://neuron-troubleshooting.vercel.app/
Next, it is necessary to improve the data source (issues in https://github.com/nervosnetwork/neuron) for troubleshooting and assign them specific labels. This will require the participation of @Keith-CY and @Danie0918.
This is the currently proposed menu and label structure: https://github.com/Magickbase/neuron-troubleshooting/blob/a170da4eb3758f4b04b19b99cf6fecfa81ce508b/src/utils/posts.ts#L12-L39
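For readers who don't follow the link, such a menu/label mapping might look roughly like the sketch below; the real definition lives in src/utils/posts.ts at the linked commit, and the entries here are made up:

```ts
interface MenuItem {
  title: string         // text shown in the menu
  label: string         // GitHub label that selects issues for this entry
  children?: MenuItem[] // optional submenus
}

// Illustrative entries only; see the linked posts.ts for the proposed structure.
const menu: MenuItem[] = [
  {
    title: 'Troubleshooting',
    label: 'troubleshoot',
    children: [
      { title: 'Installation', label: 'troubleshoot:installation' },
      { title: 'Synchronization', label: 'troubleshoot:sync' },
    ],
  },
]
```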
Since we have not received a reply from the DocSearch team, I plan to refer to https://github.com/algolia/docsearch-scraper to write a TypeScript version of the DocSearch indexer. Alternatively, we could set up a machine to run docsearch-scraper periodically. The advantage of implementing it in TypeScript is that we can avoid the crawler part and do not need an additional server. Instead, we can provide the markdown-to-HTML results directly during deployment to quickly create the index.
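A minimal sketch of that deployment-time indexing idea, assuming the algoliasearch v4 client; the index name, record shape, and environment variables are placeholders:

```ts
import algoliasearch from 'algoliasearch'

interface SearchRecord {
  objectID: string // e.g. the issue number or page slug
  title: string
  content: string  // plain text extracted from the markdown -> HTML result
  url: string
}

export async function uploadIndex(records: SearchRecord[]): Promise<void> {
  const client = algoliasearch(process.env.ALGOLIA_APP_ID!, process.env.ALGOLIA_ADMIN_KEY!)
  const index = client.initIndex('neuron-troubleshooting') // placeholder index name
  // saveObjects upserts by objectID, so re-running this at deploy time
  // keeps the search index in sync without crawling the site.
  await index.saveObjects(records)
}
```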
What's the problem with scraping the page? I remember that Algolia will scan pages daily to update the database once an API token is set.
The DocSearch team needs to approve our application before they can provide us with this service. Otherwise, we will need to deploy it ourselves or implement the indexer on our own. |
BTW, their application processing procedure is not public and there is no way to check the progress except for waiting for their email reply. Perhaps we can consider using a self-deployed Docker version for a few weeks during this waiting period, and if it still hasn't been approved, then we can consider implementing it in TypeScript? |
The DocSearch application has been approved and is now integrated. I have implemented the mobile version of the article details page and now need to implement the mobile version of the other pages as well. https://neuron.magickbase.com/ (see the attached recording: Recording.2023-08-31.110557.mp4)
Yes, this may be an issue that gets dealt with last. i18n can be handled at the end by translating all content at once with some tools, and the search UI may take more time with lower returns, so it can also be postponed.
A Chinese version of the Beginner's Guide has been completed, please check it out. The content of the newbie guide is categorized into three categories: About Neuron, Usage Tutorials, and Advanced Functions. If there are no problems, I will upload the English version as a new issue later for the service to grab. URL: [Google Docs]
Please authorize us to add comments |
Already adapted. |
Basically it's completed for now. There are still some known minor issues:
I think these issues should not block the launch and can be fixed afterwards.
The FAQ document has been polished (see FAQ). @Danie0918 @WhiteMinds @Keith-CY
The issues have been created as required here.
@Sven-TBD The initial version of the menu hierarchy was as follows: Now it seems that a new Document is to be added under FAQ. |
Here are the labels used in the FAQ:
Okay, currently the …
Is this the latest testing website: https://neuron.magickbase.com/ ?
Bug issues link: #339
add a troubleshooting page like https://support.google.com/chrome/?p=help&ctx=menu#topic=7439538, preferably supported by https://www.algolia.com/
add an entry to the troubleshooting page in menu => help

A troubleshooting page is desired because many users don't visit GitHub. We've received many FAQs since we added contact us in menu => help, and many of them are pinned in the issue list. But users are not familiar with GitHub, so they still seek help from us directly. A troubleshooting page supported by Algolia should ameliorate this. The page will be an entry point for Neuron in the future, including the download and user manual.