---
language:
- en
license: apache-2.0
base_model: alpindale/Mistral-7B-v0.2-hf
datasets:
- cognitivecomputations/dolphin
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- jondurbin/airoboros-2.2.1
- teknium/openhermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
model-index:
- name: dolphin-2.8-mistral-7b-v02
  results:
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: openai_humaneval
    metrics:
    - type: pass@1
      value: 0.469
      name: pass@1
      verified: false
pipeline_tag: text-generation
---
# hyperspaceai/hyperEngine

This model was converted to MLX format from [`cognitivecomputations/dolphin-2.8-mistral-7b-v02`](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) using mlx-lm version **0.9.0**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02) for more details on the model.

## Use with mlx

First install the `mlx-lm` package:
```bash
pip install mlx-lm
```

Then load the model and generate text from Python:
```python
from mlx_lm import load, generate

# Download the converted weights and tokenizer from the Hub and load them
model, tokenizer = load("hyperspaceai/hyperEngine")

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
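Per the original model card, Dolphin models are trained with the ChatML prompt format, so instruction-style prompts work best when wrapped in ChatML markers. The model's tokenizer should apply this automatically via its chat template; the sketch below builds the prompt explicitly to show the format (the system message text is an illustrative placeholder, not prescribed by the model card):

```python
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML-formatted prompt for a single-turn exchange.

    This mirrors what tokenizer.apply_chat_template would produce for
    ChatML-trained models; shown explicitly for illustration.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Hypothetical system message for illustration
prompt = chatml_prompt("You are Dolphin, a helpful assistant.", "hello")
```

The resulting string can be passed as the `prompt` argument to `generate` in place of the bare `"hello"` above.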