[Feature request] Code examples for each pipeline in the docs #134
100% agree! I've been working to add example code when I make example applications. In fact, there's already discussion on Discord about this (cc @KiterWork).
👍 Good plan :)
Sure thing :) Here's the link to the demo, and the code for it is here.
This is a bug with HF's doc builder, which I raised 2 weeks ago (huggingface/doc-builder#376). The current workaround is to copy the URL, close the tab, and open in a new tab. Hopefully we can fix this soon too.
In addition to a standard URL, you should also be able to use a blob URL or a data URI.
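To illustrate, something along these lines should work (a rough sketch assuming the `@xenova/transformers` package; the pipeline, model, and inputs are just illustrative, not confirmed from this thread):

```js
import { pipeline } from '@xenova/transformers';

// Illustrative pipeline; any image-based pipeline should accept the same input forms.
const captioner = await pipeline('image-to-text');

// 1. A standard URL (subject to CORS in the browser)
const fromUrl = await captioner('https://example.com/photo.jpg');

// 2. A blob URL, e.g. created from a File selected via <input type="file">
const blobUrl = URL.createObjectURL(file); // assumes `file` is a File/Blob you already have
const fromBlobUrl = await captioner(blobUrl);

// 3. A data URI
const dataUri = 'data:image/png;base64,...'; // base64-encoded image bytes go here
const fromDataUri = await captioner(dataUri);
```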
0ecf960 Fixes the "jump to content" issue. 👍
* Add example code for zero-shot image classification
* Add example code for text classification pipeline
* Fix links to custom usage from pipelines docs. Reported on Discord: https://discord.com/channels/879548962464493619/1142943169068154950/1142943169068154950
* Fix relative links
* Rename .mdx -> .md. GitHub recently changed how mdx files are displayed, breaking a lot of the formatting, so we just use .md now (same as transformers)
* Add example code for token classification pipeline
* Add example code for fill-mask pipeline
* Add text2text and summarization pipeline examples
* Add example code for image segmentation pipeline
* Remove redundant `@extends Pipeline`
* Add example code for image-to-text pipeline
* Cleanup example code outputs
* Cleanup JSDoc
* Cleanup pipeline example code
* Update codegen example
Finally got around to adding code examples for each pipeline! 👍 So, closing the issue! Link to docs: https://huggingface.co/docs/transformers.js/api/pipelines
@xenova Nice!! Can a link be added to each example in the "Supported tasks/models" table? The existing links in the first column point to a page with Python code examples, and I think the actual pipelines doc page itself is a bit intimidating for newbies to navigate. The support table is a nice "starting point" for people to explore the library, I think. I can submit a pull request with this if you like.

Alternatively (or in addition to the above), it'd be very cool if the existing links - e.g. https://huggingface.co/tasks/conversational - had a toggle/tab on the code snippets to switch from Python to JS.
Yeah, this is unfortunately due to how the JSDocs are auto-generated 😅
That would be appreciated! 🤗
I think this would be a good idea later down the line when the library has support for all tasks 😇 ... hopefully soon!
Reason for request
I've spent the last 15 minutes trying to find out how to use the `zero-shot-image-classification` pipeline, and have come to the conclusion that there should be simple example snippets for each pipeline. This is probably already on the roadmap, but there wasn't an issue for it, so I'm adding this just in case.

I think the best way to do this would be to add another column to the "Supported tasks/models" table which links to a simple "getting started" code snippet for each task. Ideally, there would also be a list of the Hugging Face models that are supported.
Additional context
This is likely not necessary, but regarding my specific search, just to give you some perspective (on the user experience of a 1x developer 😅): I first found the task via the "Supported tasks/models" table, but it only gives a Python example.
I wasn't sure how to pass in the images (e.g. URL vs. blob), or how the named parameters map to transformers.js's style (my default assumption is that all Python named parameters should go in an object/POJO as the final parameter, after the positional parameters).
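For what it's worth, here's a quick sketch of that assumption (the option name below simply mirrors the Python kwarg and is a guess on my part, not a confirmed part of the transformers.js API):

```js
import { pipeline } from '@xenova/transformers';

// Python (from the Hugging Face task page):
//   classifier(image, candidate_labels=["cat", "dog"], hypothesis_template="a photo of {}")

// Assumed transformers.js equivalent: positional arguments stay positional,
// and the remaining Python keyword arguments collapse into a trailing options object (POJO).
const classifier = await pipeline('zero-shot-image-classification');
const output = await classifier('https://example.com/photo.jpg', ['cat', 'dog'], {
  hypothesis_template: 'a photo of {}', // assumed option name, mirroring the Python kwarg
});
```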
So then I thought: ah, I'll just go to the web demo and inspect the code. But I couldn't find a link to the web demo in the GitHub repo anymore (can this be added back?). While Googling around, I did manage to find this example of zero-shot text classification, which was helpful.
So I eventually managed to find this page, but the `._call(images, candidate_labels, options)` signature is not clickable, so I'm still not sure what the `images` array (I assume it's an array) should contain. My next step is to guess (try blob, URL), otherwise I'll dig into the code here on GitHub.

Edit: So, it does indeed take a URL, as long as CORS headers allow, of course.
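For anyone else who lands here, this is roughly the call that ended up working (the model name is just an illustrative choice; any CORS-accessible image URL should do):

```js
import { pipeline } from '@xenova/transformers';

// Create the zero-shot image classification pipeline.
const classifier = await pipeline(
  'zero-shot-image-classification',
  'Xenova/clip-vit-base-patch32', // illustrative model choice
);

// `images` can be a single URL (or an array of URLs), as long as CORS headers allow it.
const output = await classifier('https://example.com/cat.jpg', ['cat', 'dog', 'bird']);
// => e.g. [{ label: 'cat', score: 0.98 }, { label: 'dog', score: 0.01 }, ...]
```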