
google-t5/t5-small - Hugging Face
With T5, we propose reframing all NLP tasks into a unified text-to-text format where the input and output are always text strings, in contrast to BERT-style models that can only output either a class label or a span of the input.
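A minimal sketch of what this unified format means in practice, using the Transformers pipeline API: the task is selected purely by a plain-text prefix in the input string (the prefixes below are ones documented for the original T5 checkpoints; the example sentences are illustrative):

from transformers import pipeline

# One model, one interface: text in, text out. The task prefix picks the task.
t5 = pipeline("text2text-generation", model="google-t5/t5-small")

print(t5("translate English to German: The house is wonderful."))
print(t5("summarize: The tower is 324 metres tall, about the same height as an 81-storey building, and the tallest structure in Paris."))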
google/flan-t5-small - Hugging Face
Flan-T5 is fine-tuned on a large corpus of text data that was not filtered for explicit content or assessed for existing biases. As a result, the model is potentially vulnerable to generating similarly inappropriate content or replicating biases inherent in the underlying data.
optimum/t5-small - Hugging Face
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks, in which each task is converted into a text-to-text format. For more information, please take a look at the original paper.
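Because this repository ships ONNX weights, it is typically consumed through Optimum's ONNX Runtime classes rather than plain Transformers; a hedged loading sketch, assuming the optimum[onnxruntime] extra is installed (the input sentence is illustrative):

from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")  # runs on ONNX Runtime

inputs = tokenizer("translate English to French: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))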
T5 - Hugging Face
>>> from transformers import T5Tokenizer, T5ForConditionalGeneration
>>> import torch
>>> tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-small")
>>> model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-small")
>>> # the following 2 hyperparameters are task-specific
>>> max_source_length = 512
>>> max_target_length = …
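The excerpt is truncated at max_target_length; a plausible continuation in the same style as the supervised-training pattern in the Transformers T5 docs (the value 128 and the English-German example pair are assumptions, not part of the original excerpt):

>>> max_target_length = 128  # assumed value; the excerpt above is cut off here
>>> input_sequence = "translate English to German: The house is wonderful."  # illustrative
>>> target_sequence = "Das Haus ist wunderbar."  # illustrative
>>> encoding = tokenizer(input_sequence, max_length=max_source_length, truncation=True, return_tensors="pt")
>>> target_encoding = tokenizer(text_target=target_sequence, max_length=max_target_length, truncation=True, return_tensors="pt")
>>> labels = target_encoding.input_ids
>>> # pad token ids in the labels are replaced by -100 so the loss ignores them
>>> labels[labels == tokenizer.pad_token_id] = -100
>>> loss = model(input_ids=encoding.input_ids, attention_mask=encoding.attention_mask, labels=labels).loss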
amazon/chronos-t5-small - Hugging Face
November 27, 2024 · Chronos-T5 (Small) 🚀 Update Feb 14, 2025: Chronos-Bolt & original Chronos models are now available on Amazon SageMaker JumpStart! Check out the tutorial notebook to learn how to deploy Chronos endpoints for production use in a few lines of code.
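Chronos-T5 repurposes the T5 architecture for time-series forecasting by tokenizing scaled and quantized series values. A minimal usage sketch with the chronos-forecasting package, following the pattern shown on the model card (the context values and prediction length are illustrative):

import torch
from chronos import ChronosPipeline  # pip install chronos-forecasting

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.bfloat16,
)

# Historical observations of the series to forecast (illustrative numbers).
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])

# Sample-based forecast for the next 12 steps; shape: [num_series, num_samples, 12].
forecast = pipeline.predict(context, prediction_length=12)
print(forecast.shape)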
paust/pko-t5-small - Hugging Face
pko-t5-small Source Code. pko-t5 is a T5 v1.1 model trained exclusively on Korean data. To tokenize Korean it uses BBPE, which has no OOV problem, instead of SentencePiece; the Korean training data includes Namuwiki, Wikipedia, the Modu Corpus, and more.
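A hedged loading sketch: since the checkpoint is a standard T5 v1.1 model on the Hub, the Auto classes should resolve both the BBPE tokenizer and the model (the Korean example sentence is illustrative):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("paust/pko-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("paust/pko-t5-small")

# BBPE works at the byte level, so any Korean string encodes without OOV tokens.
ids = tokenizer("한국어 전용 데이터로 학습한 모델입니다.", return_tensors="pt").input_ids
print(tokenizer.decode(ids[0], skip_special_tokens=True))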
google-t5/t5-small at main - Hugging Face
Tags: t5, text2text-generation, summarization, text-generation-inference. arXiv: 8 papers. License: apache-2.0. 15 contributors; history: 32 commits; latest commit: "Add ONNX weights" (df1b051, almost 2 years ago).
google/t5-efficient-tiny - Hugging Face
T5-Efficient-TINY is a variation of Google's original T5 following the T5 model architecture. It is a pretrained-only checkpoint and was released with the paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers by Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler.
google/t5-efficient-small-el16 - Hugging Face
T5-Efficient-SMALL-EL16 is a variation of Google's original T5 following the T5 model architecture. It is a pretrained-only checkpoint and was released with the paper Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers by Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, and Donald Metzler.
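Since both efficient variants are pretrained-only checkpoints, they must be fine-tuned on a downstream task before they are practically useful. A minimal sketch of one supervised step (the same code works for google/t5-efficient-small-el16; the task and text pair are placeholders, not from the model cards):

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny")

# One supervised training step: labels are the tokenized target text.
enc = tokenizer("summarize: studies have shown that owning a dog is good for you", return_tensors="pt")
labels = tokenizer(text_target="studies show owning a dog is good for you", return_tensors="pt").input_ids
loss = model(**enc, labels=labels).loss
loss.backward()  # plug into an optimizer or Trainer for an actual fine-tuning loop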