Instructions for using Seriki/Lmlm with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use Seriki/Lmlm with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Seriki/Lmlm",
    filename="gpt-oss-safeguard-120b-MXFP4-00001-of-00002.gguf",
)

# `messages` must be a list of role/content dicts; the upstream card left the
# input example undefined, so this prompt is illustrative:
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify this text against your safety policy."}]
)
print(response["choices"][0]["message"]["content"])
```
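llama-cpp-python can also stream the reply token by token; a minimal sketch reusing the `llm` object above (the chunk layout follows the library's OpenAI-style streaming schema):

```python
# Stream the completion chunk by chunk instead of waiting for the full reply.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Classify this text against your safety policy."}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:  # the first chunk only carries the role
        print(delta["content"], end="", flush=True)
```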
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Seriki/Lmlm with llama.cpp:
Install from Homebrew (macOS/Linux)
```
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Seriki/Lmlm

# Run inference directly in the terminal:
llama-cli -hf Seriki/Lmlm
```
Install from WinGet (Windows)
```
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Seriki/Lmlm

# Run inference directly in the terminal:
llama-cli -hf Seriki/Lmlm
```
Use pre-built binary
```
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Seriki/Lmlm

# Run inference directly in the terminal:
./llama-cli -hf Seriki/Lmlm
```
Build from source code
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Seriki/Lmlm

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Seriki/Lmlm
```
Use Docker
```
docker model run hf.co/Seriki/Lmlm
```
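Whichever install path you pick, llama-server exposes an OpenAI-compatible API (port 8080 by default), so any OpenAI client can talk to it. A minimal sketch with the `openai` Python package; the API key is a placeholder the server ignores:

```python
# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default; no real key is needed.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="Seriki/Lmlm",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

This is the same local endpoint the Pi and Hermes Agent sections below point at.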
- LM Studio
- Jan
- Ollama
How to use Seriki/Lmlm with Ollama:
```
ollama run hf.co/Seriki/Lmlm
```
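For scripted use, the same model can be called through the `ollama` Python package against the local daemon; a minimal sketch (the model id matches the `ollama run` target above):

```python
# pip install ollama
import ollama

# Talks to the local Ollama daemon; the model must have been pulled or run once.
response = ollama.chat(
    model="hf.co/Seriki/Lmlm",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["message"]["content"])
```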
- Unsloth Studio
How to use Seriki/Lmlm with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Seriki/Lmlm to start chatting
```
Install Unsloth Studio (Windows)
```
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Seriki/Lmlm to start chatting
```
Use Hugging Face Spaces for Unsloth
```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Seriki/Lmlm to start chatting
```
- Pi
How to use Seriki/Lmlm with Pi:
Start the llama.cpp server
```
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Seriki/Lmlm
```
Configure the model in Pi
```
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add to ~/.pi/agent/models.json:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Seriki/Lmlm" }
      ]
    }
  }
}
```

Run Pi
```
# Start Pi in your project directory:
pi
```
- Hermes Agent
How to use Seriki/Lmlm with Hermes Agent:
Start the llama.cpp server
```
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Seriki/Lmlm
```
Configure Hermes
```
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Seriki/Lmlm
```
Run Hermes
```
hermes
```
- Docker Model Runner
How to use Seriki/Lmlm with Docker Model Runner:
```
docker model run hf.co/Seriki/Lmlm
```
- Lemonade
How to use Seriki/Lmlm with Lemonade:
Pull the model
```
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Seriki/Lmlm
```
Run and chat with the model
```
lemonade run user.Lmlm-{{QUANT_TAG}}
```

List all available models
```
lemonade list
```
Model metadata

```yaml
base_model: openai/gpt-oss-safeguard-120b
license: apache-2.0
tags:
  - gguf
datasets:
  - markov-ai/computer-use-large
  - qubuhub/LMLM-pretrain-dwiki6.1M
language:
  - en
pipeline_tag: any-to-any
library_name: fastai, adapter-transformers, nlp, mlx, lmlm, allenlp, lmkm, llama, gpt
```
💫 Community Model> gpt-oss-safeguard-120b by openai
👾 LM Studio Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on Discord.
Use in LM Studio with gpt-oss-safeguard.
Model creator: openai
Original model: gpt-oss-safeguard-120b
GGUF quantization: provided by LM Studio team using llama.cpp release b6866
gpt-oss-safeguard-120b
gpt-oss-safeguard-120b is a safety reasoning model by OpenAI, built upon their original gpt-oss release. With these models you can classify text content against safety policies that you provide and perform a suite of foundational safety tasks. These models are intended for safety use cases; for other applications, we recommend using gpt-oss.
This 120b variant is designed for production, general-purpose, high-reasoning use cases and fits on a single H100 GPU (117B parameters, 5.1B of them active).
The model is released under the permissive Apache 2.0 license and features configurable reasoning effort (low, medium, or high), so users can balance output quality against latency. It offers full chain-of-thought visibility to support easier debugging and increase trust, though this output is not intended for end users.
The model supports a context length of 131k tokens.
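Since the model classifies content against a policy you supply, a typical call places the policy (plus a reasoning-effort line, following the gpt-oss chat format) in the system message and the content to classify in the user turn. A minimal sketch against a local OpenAI-compatible server such as llama-server on its default port; the policy text and labels are illustrative:

```python
# pip install openai
from openai import OpenAI

# Local llama-server endpoint; the API key is a placeholder the server ignores.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

# Illustrative policy; in practice you provide your own rules and labels.
policy = (
    "Policy: label the content VIOLATES if it promotes spam or scams, "
    "otherwise label it SAFE. Reply with one label and a short rationale."
)

response = client.chat.completions.create(
    model="Seriki/Lmlm",
    messages=[
        {"role": "system", "content": "Reasoning: high\n\n" + policy},
        {"role": "user", "content": "You won a free prize! Click this link now."},
    ],
)
print(response.choices[0].message.content)
```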
Special thanks
🙏 Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.
Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.