Unified Model Records

Llama 2 13B

Type: model

Tags: opensource Publisher: Meta Released: 2023-07-19 v.1.0.0

Model details

Blackbox External Model Access
Capabilities demonstration
Capabilities description -
Centralized model documentation
Evaluation of capabilities
External model access protocol
External reproducibility of capabilities evaluation
External reproducibility of intentional harm evaluation -
External reproducibility of mitigations evaluation -
External reproducibility of trustworthiness evaluation -
External reproducibility of unintentional harm evaluation
Full external model access
Inference compute evaluation -
Inference duration evaluation
Input modality
Intentional harm evaluation -
Limitations demonstration -
Limitations description
Mitigations demonstration
Mitigations description
Mitigations evaluation
Model architecture
Asset license
Model components
Model size
Output modality
Risks demonstration
Risks description
Third party capabilities evaluation -
Third party evaluation of limitations
Third party mitigations evaluation -
Third party risks evaluation -
Trustworthiness evaluation -
Unintentional harm evaluation

Intended use

  • Llama 2 is intended for commercial and research use in English.
  • Tuned models are intended for assistant-like chat.
  • Pretrained models can be adapted for a variety of natural language generation tasks.
  • Developers may fine-tune Llama 2 models for languages beyond English provided they comply with the Llama 2 Community License and the Acceptable Use Policy.
  • Use in any manner that violates applicable laws or regulations is out-of-scope.

Dependencies

Metrics

No metrics specified.

Environmental

Source: https://github.com/meta-llama/llama/blob/main/MODEL_CARD.md#hardware-and-software

Carbon emitted (tCO2eq): 1

Energy usage: 1 w

Compute usage: 0

Ethical considerations

  • Llama 2's potential outputs cannot be predicted in advance, and it may sometimes produce inaccurate, biased or other objectionable responses.
  • Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB; estimated total emissions were offset by Meta’s sustainability program.
  • Pretraining data includes a mix of publicly available online sources without user data from Meta's products or services.
  • Mitigation efforts such as supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) were utilized to align the models with human preferences for helpfulness and safety.
  • Red teaming and further analyses are conducted to continuously improve and understand model limitations and safety.

Recommendations

  • Before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to specific applications.
  • Consult the Responsible Use Guide available on Meta AI's website.
  • Regular updating and fine-tuning with newer data and community feedback is recommended to improve model safety and effectiveness.
  • Consider language variations and cultural contexts when adapting Llama 2 models for languages beyond English.
  • Stay informed about updates to model versions and licenses.