Replies: 3 comments · 2 replies
-
The unit test should be self-explanatory for this case. If you run Tabby locally with repository context enabled, you can find the final prompt fed to the LLM inside the completion events.
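For example, something along these lines can dump those prompts from the event logs. The default path ~/.tabby/events and the JSON field names below are assumptions; inspect one raw line of your own log to confirm them before relying on this:

```python
import json
import pathlib

# Assumed default location of Tabby's completion event logs (JSON lines);
# adjust if your installation writes them elsewhere.
EVENTS_DIR = pathlib.Path.home() / ".tabby" / "events"

for log_file in sorted(EVENTS_DIR.glob("*.json")):
    for line in log_file.read_text().splitlines():
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue
        # Field names are guesses; print a raw `event` once to verify.
        completion = event.get("event", {}).get("completion")
        if completion and "prompt" in completion:
            print(completion["prompt"])
            print("-" * 40)
```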
-
Hello there, thanks for your reply. Will do. ZD
-
Hi, following the description above, I found the demo prompt template. Q1: it says that the prefix is only the code before the cursor, but during my test I found that some repo-level code is also added to the fim_prefix text; e.g., the code below is what I copied from it:
# import torch
# from transformers import (
# AutoModelForCausalLM,
# AutoTokenizer,
# HfArgumentParser,
# Trainer,
# TrainingArguments,
# )
# from datasets import Dataset, load_dataset
#
#
# class ConstantLengthDataset:
# """
# Iterable dataset that returns constant length chunks of tokens from stream of text files.
# Args:
from torch.utils.data import IterableDataset
from torch.utils.data.sampler import Sampler
class EvalDataset(IterableDataset):
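My current understanding (please correct me) is that retrieved repository snippets are wrapped as line comments in the file's language and prepended to the code before the cursor, which would explain the #-commented block above. Here is a minimal sketch of that assumption; LINE_COMMENT and build_prefix are my own illustrative names, not Tabby's actual API:

```python
# Hypothetical sketch of how repo-level snippets could end up in fim_prefix;
# Tabby's real logic lives in Rust (completion_prompt.rs).
LINE_COMMENT = {"python": "#", "rust": "//", "typescript": "//"}

def build_prefix(language: str, snippets: list[str], code_before_cursor: str) -> str:
    comment = LINE_COMMENT.get(language, "//")
    lines = []
    for snippet in snippets:
        # Comment out every snippet line so the model reads it as context
        # rather than as code to continue.
        lines.extend(f"{comment} {line}".rstrip() for line in snippet.splitlines())
        lines.append(comment)  # blank comment line between snippets
    lines.append(code_before_cursor)
    return "\n".join(lines)
```

Under that assumption, the commented imports and docstring fragment above are repository context, and only the trailing from torch... lines are the real code before my cursor.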
-
Hello, community,
From the source code, the language and the code snippet are the two major inputs used to build the prompt. I have trouble reading Rust code, so I am looking for help with the format of the prompt sent to the model; in particular, I want to check how language is used in building the prompt.
Related code block:
tabby/crates/tabby/src/services/completion/completion_prompt.rs, line 48 at commit 99d49a9
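Here is my rough guess at the shape of the final prompt, assuming a StarCoder-style FIM template; the token names vary per model, and build_prompt below is a hypothetical stand-in for the Rust function referenced above, not Tabby's actual code:

```python
# Guess at the final prompt shape; not Tabby's actual implementation.
FIM_TEMPLATE = "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

def build_prompt(prefix: str, suffix: str) -> str:
    # In this reading, `language` never appears in the template itself; it
    # would only affect how repository snippets are commented into the
    # prefix (see the earlier comment in this thread).
    return FIM_TEMPLATE.format(prefix=prefix, suffix=suffix)
```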
Any help will be appreciated!
Thanks,
ZD