BLOOMZ
BLOOMZ is the BigScience Workshop's multitask fine-tuned variant of BLOOM, trained on the xP3 multitask mixture to improve zero-shot task generalization across languages.
Publisher: BigScience Workshop
Type: Multitask Fine-tuned Language Model

COYO-700M is a large-scale dataset of 747 million image-text pairs, developed by Kakao Brain.
Publisher: Kakao Brain
Type: Dataset

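COYO-700M is distributed via the Hugging Face Hub as captions paired with image URLs. A minimal sketch of streaming a few records with the datasets library, assuming the dataset ID kakaobrain/coyo-700m and the url/text column names (check the Hub for the exact schema):

    # Stream a few COYO-700M records without downloading the full dataset.
    # The dataset ID and column names below are assumptions; verify on the Hub.
    from datasets import load_dataset

    ds = load_dataset("kakaobrain/coyo-700m", split="train", streaming=True)
    for i, sample in enumerate(ds):
        print(sample["url"], "->", sample["text"])
        if i == 4:
            break
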
Meta developed and released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. The fine-tuned models, called Llama-2-Chat, are optimized for dialogue use cases; they outperform open-source chat models on most benchmarks Meta tested and, in Meta's human evaluations for helpfulness and safety, are on par with some popular closed-source models such as ChatGPT and PaLM.
Publisher: Meta
Type: Large Language Model

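A minimal sketch of running a Llama-2-Chat model with Hugging Face transformers, assuming access to the gated meta-llama/Llama-2-7b-chat-hf checkpoint has been granted:

    # Generate a chat completion with Llama-2-7b-chat. Requires accepting
    # Meta's license on the Hugging Face Hub (the checkpoints are gated).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="auto"
    )

    # Llama-2-Chat expects its [INST] ... [/INST] prompt format.
    prompt = "[INST] What parameter sizes does Llama 2 come in? [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
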
Over 250 billion web pages spanning 17 years; a free and open crawl corpus maintained since 2007.
Publisher: not provided
Type: Dataset

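This entry appears to describe Common Crawl, which publishes its crawls as WARC archives. A minimal sketch of iterating the records in one WARC file with the warcio library (the file path is a placeholder for a segment fetched from the public buckets):

    # Iterate HTTP responses in a single WARC file.
    # "segment.warc.gz" is a placeholder path.
    from warcio.archiveiterator import ArchiveIterator

    with open("segment.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                url = record.rec_headers.get_header("WARC-Target-URI")
                body = record.content_stream().read()
                print(url, len(body), "bytes")
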
The Large-scale Artificial Intelligence Open Network (LAION), the non-profit organization behind open image-text datasets such as LAION-400M.
Publisher: not provided
Type: not provided

LLaVA used GPT-4 to generate multimodal instruction-following data, yielding 158,000 unique samples: 58,000 conversations, 23,000 detailed descriptions, and 77,000 complex reasoning samples.
Publisher: OpenAI
Type: Multi-Modal LLM

400 million image-text pairs collected from various sources on the internet, used by OpenAI to train CLIP.
Publisher: OpenAI
Type: Dataset

This is the 70B-parameter variant of Llama 2.
Publisher: Meta
Type: Large Language Model

CC3M (Conceptual Captions 3 Million) is a large-scale image-caption dataset designed for training and evaluating image captioning models.
Publisher: not provided
Type: Dataset

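CC3M is distributed as TSV files pairing each caption with an image URL rather than as image files. A minimal sketch of fetching a few images from such a TSV, assuming the (caption, url) column order of the released splits and a placeholder file path:

    # Download a handful of CC3M images from a released TSV.
    # Column order (caption, url) and the file path are assumptions.
    import csv
    import requests

    with open("cc3m_train.tsv", newline="") as f:
        for i, (caption, url) in enumerate(csv.reader(f, delimiter="\t")):
            try:
                img = requests.get(url, timeout=10).content
                with open(f"img_{i}.jpg", "wb") as out:
                    out.write(img)
            except requests.RequestException:
                continue  # dead links are common in web-scraped datasets
            if i == 9:
                break
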
208,214 pathology image-text pairs collected from pathology-related Twitter hashtags and passed through strict data-quality filtering; used to train PLIP.
Publisher: not provided
Type: Dataset

70K user-shared ChatGPT conversations were used for fine-tuning, with GPT-4 used for a preliminary evaluation benchmarking the result against ChatGPT and Google Bard.
Publisher: LMSYS Org
Type: Chatbot

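The user-shared conversations are typically distributed as JSON in the ShareGPT format. An illustrative record is sketched below; the from/value field names follow the widely used community convention, not an official schema:

    # Illustrative ShareGPT-style record of the kind used for fine-tuning.
    conversation = {
        "id": "example-0001",
        "conversations": [
            {"from": "human", "value": "Explain beam search in one paragraph."},
            {"from": "gpt", "value": "Beam search keeps the k most probable..."},
        ],
    }

    # A trainer would typically flatten the turns into a single prompt string.
    prompt = "\n".join(
        f'{turn["from"]}: {turn["value"]}' for turn in conversation["conversations"]
    )
    print(prompt)
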
Science QA is a dataset designed for evaluating and training models on answering science questions.
Publisher: not provided
Type: Dataset

The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset.
Publisher: not provided
Type: Dataset

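A minimal sketch of reading COCO captions with the official pycocotools API, assuming the standard annotation file layout:

    # Load COCO caption annotations and print the captions for one image.
    # Assumes the standard "annotations/captions_val2017.json" layout.
    from pycocotools.coco import COCO

    coco = COCO("annotations/captions_val2017.json")
    img_id = coco.getImgIds()[0]
    ann_ids = coco.getAnnIds(imgIds=img_id)
    for ann in coco.loadAnns(ann_ids):
        print(ann["caption"])
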
The LAION-400M dataset is an open dataset of 400 million image-text pairs, with NSFW content filtered out based on CLIP embeddings.
Publisher: not provided
Type: Dataset

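A minimal sketch of the kind of CLIP-embedding filter LAION describes: score each image-text pair by CLIP cosine similarity and drop low-scoring pairs. The 0.3 threshold mirrors LAION-400M's reported relevance cutoff (NSFW filtering adds a classifier on top of the same embeddings); the pair list is placeholder data:

    # Keep image-text pairs whose CLIP cosine similarity clears a threshold.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    pairs = [("cat.jpg", "a photo of a cat")]  # placeholder data
    for path, text in pairs:
        inputs = processor(
            text=[text], images=Image.open(path), return_tensors="pt", padding=True
        )
        with torch.no_grad():
            out = model(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        sim = (img @ txt.T).item()
        if sim >= 0.3:  # LAION-400M's reported CLIP-similarity cutoff
            print(f"keep {path!r} ({sim:.3f})")
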
This is the 7B-parameter variant of Llama 2.
Publisher: Meta
Type: Large Language Model