FlagAlpha/Atom-7B

Tags: Question Answering · Transformers · Safetensors · Chinese · English · llama · text-generation · custom_code · text-generation-inference

Instructions for using FlagAlpha/Atom-7B with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use FlagAlpha/Atom-7B with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline

    # Atom-7B is a Llama-style causal language model (tagged text-generation),
    # so the matching pipeline task is "text-generation", not "question-answering"
    pipe = pipeline("text-generation", model="FlagAlpha/Atom-7B", trust_remote_code=True)

    # Load model directly
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained("FlagAlpha/Atom-7B", trust_remote_code=True)
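    A minimal generation sketch following the direct-load snippet above; the Chinese prompt and the sampling settings are illustrative assumptions, not values from the model card:

    import torch

    # Illustrative prompt (assumption); the model is tagged for Chinese and English
    prompt = "请用中文介绍一下大语言模型。"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Sampling parameters here are assumptions, not the model card's recommendation
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))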
  • Notebooks
  • Google Colab
  • Kaggle

Community

The prompt asks for output in Chinese, but the model sometimes answers in English.

#3 opened over 2 years ago by wangruiai2023

When will Atom-13B be open-sourced?

#2 opened over 2 years ago by yuyijiong

Why legacy tokenizer?

#1 opened over 2 years ago by yuyijiong