Unified Model Records

bloomz-1b1

Type: model

Publisher: BigScience Workshop | Released: 2022-11-17 | Version: 1.0.0

Metadata

General information.

name: bloomz-1b1
version: 1.0.0
publisher: BigScience Workshop
release date: 2022-11-17
model type: Multitask Fine-tuned Language Model

Relations

Relationship graph for bloomz-1b1 (graph not reproduced here).

Intended Use

  • Natural language processing tasks, including but not limited to translation, sentiment analysis, and question answering.
  • Cross-lingual understanding and generation tasks.
  • Instruction-based prompt generation for a wide range of languages.
  • Zero-shot and few-shot learning applications (see the prompting sketch after this list).
  • Exploratory data analysis and research in multilingual language model capabilities.
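
As a concrete illustration of the zero-shot use cases above, the following is a minimal sketch of prompting the model through the Hugging Face transformers library. It assumes the checkpoint is published on the Hugging Face Hub as bigscience/bloomz-1b1 and that the transformers and torch packages are installed; the prompt text is an arbitrary example, not an official one.

```python
# Minimal zero-shot prompting sketch (assumes the Hub checkpoint id
# "bigscience/bloomz-1b1"; the prompt is illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-1b1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Instruction-style prompt; ending the input clearly helps the model treat
# it as a task to answer rather than text to continue.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding is sufficient for a short zero-shot answer.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```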

Factors

  • Language support and proficiency across a broad spectrum of languages.
  • The clarity and specificity of instruction prompts.
  • Model scalability and performance across different sizes from 300M to 176B parameters.
  • Generalization abilities to unseen tasks and languages.
  • Accessibility and ease of use for researchers and developers with different levels of resources.

Evaluation Data

  • A diverse set of evaluation tasks covering coreference resolution, natural language inference, sentence completion, and program synthesis across multiple languages.
  • Datasets from the Winogrande, ANLI, XNLI, and HumanEval evaluations, allowing for an extensive assessment of model performance in both seen and unseen languages.
  • Validation and test splits are used from the respective datasets to ensure unbiased evaluation.
  • Multilingual task evaluation employing prompts in both English and the respective native languages to gauge cross-lingual transfer capabilities.
  • Benchmarking against existing models such as XGLM, T0, and GPT to understand the competitive landscape.

Training Data

  • The model is fine-tuned on the xP3 dataset, promoting a wide coverage of tasks and languages (a loading sketch follows this list).
  • Incorporation of code and programming languages alongside natural languages to enhance the model's versatility.
  • Draws on datasets such as BIG-bench, ROOTS, and a subset of the mC4 corpus to provide rich, diverse linguistic and task coverage.
  • Finetuning on the xP3, xP3mt, and P3 datasets to enable cross-lingual generalization and effective prompt-based task performance.
  • Leverages pretrained language models (BLOOM, mT5) across various sizes as starting points for targeted task learning.
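
To make the finetuning data description above more tangible, below is a small sketch of inspecting one prompted xP3 example with the Hugging Face datasets library. It assumes the mixture is published on the Hub as bigscience/xP3 with per-language configurations; the "en" configuration name and the "inputs"/"targets" field names are assumptions made for illustration, and the data is streamed to avoid a full download.

```python
# Sketch: peek at one prompted example from xP3 (the configuration name "en"
# and the "inputs"/"targets" field names are assumptions).
from datasets import load_dataset

stream = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
example = next(iter(stream))

# Each record pairs a natural-language prompt with its target answer.
print(example["inputs"])
print(example["targets"])
```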

Additional Information

  • The project is conducted under the BigScience initiative, allowing for open collaboration and research.
  • Models are released under RAIL and Apache 2.0 licenses for wide accessibility and use.
  • Fine-tuned models exhibit a bias towards short answers, which affects performance on generative tasks.
  • Language contamination analysis in the pretraining corpus shows unintentional learning from 'unseen' languages.
  • Recommendations include using a specific prompting format and considering model size according to task requirements.

Recommendations

  • Employment of early stopping during finetuning, addition of long-output tasks, and forcing of a minimum generation length at inference for improved generative task performance (see the sketch after this list).
  • Fine-tuning with both English and machine-translated multilingual prompts for enhanced cross-lingual abilities.
  • Utilization of the model in research to explore and expand the boundaries of zero-shot learning across languages.
  • Adoption of ethical and fair use practices, considering the model's broad linguistic capabilities.
  • Engagement with the BigScience community for collaborative research and development efforts.
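
As a sketch of the minimum-generation-length recommendation above: at inference time, the short-answer bias can be counteracted by forcing the decoder to emit a minimum number of new tokens. The example below uses the min_new_tokens argument of transformers' generate(); the checkpoint id, prompt, and specific token counts are assumptions chosen for illustration.

```python
# Sketch: force a minimum generation length to counteract the short-answer
# bias on open-ended generative tasks (token counts are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigscience/bloomz-1b1"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Write a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    min_new_tokens=64,   # do not stop before 64 generated tokens
    max_new_tokens=256,  # hard upper bound on generation length
    do_sample=True,      # sampling suits open-ended generation
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```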