Add models and naming convention to API docs and project description
- README.md +2 -0
- docs/generated/slidedeckai.cli.rst +1 -1
- docs/generated/slidedeckai.core.SlideDeckAI.rst +1 -0
- docs/index.rst +1 -0
- docs/models.md +39 -0
README.md
CHANGED

@@ -87,6 +87,8 @@ SlideDeck AI stands out by supporting a wide array of LLMs from several online p
 
 Most supported service providers also offer generous free usage tiers, meaning you can often start building without immediate billing concerns.
 
+Model names in SlideDeck AI are specified in the `[code]model-name` format. They begin with a two-character prefix code in square brackets that indicates the provider, for example, `[oa]` for OpenAI, `[gg]` for Google Gemini, and so on. The model name follows the code, for example, `gemini-2.0-flash` or `gpt-4o`. So, to use Google Gemini 2.0 Flash Lite, the model name would be `[gg]gemini-2.0-flash-lite`.
+
 Based on several experiments, SlideDeck AI generally recommends the use of Gemini Flash and GPT-4o to generate the best-quality slide decks.
 
 The supported LLMs offer different styles of content generation. Use one of the following LLMs along with relevant API keys/access tokens, as appropriate, to create the content of the slide deck:
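The `[code]model-name` convention added above can be illustrated with a short Python sketch. `parse_model_name` is a hypothetical helper written for this example, not part of the SlideDeck AI API:

```python
def parse_model_name(name: str) -> tuple[str, str]:
    """Split a '[code]model-name' string into (provider_code, model_name).

    For example, '[gg]gemini-2.0-flash-lite' -> ('gg', 'gemini-2.0-flash-lite').
    """
    if not name.startswith("[") or "]" not in name:
        raise ValueError(f"expected '[code]model-name' format, got: {name!r}")
    # Drop the leading '[' and split on the first ']'.
    code, _, model = name[1:].partition("]")
    return code, model


print(parse_model_name("[gg]gemini-2.0-flash-lite"))  # ('gg', 'gemini-2.0-flash-lite')
```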
docs/generated/slidedeckai.cli.rst
CHANGED

@@ -2,7 +2,7 @@
 ===============
 ===================================
 
-.. currentmodule:: slidedeckai
+.. currentmodule:: slidedeckai
 
 .. automodule:: slidedeckai.cli
    :noindex:
docs/generated/slidedeckai.core.SlideDeckAI.rst
CHANGED

@@ -17,6 +17,7 @@ slidedeckai.core.SlideDeckAI
    ~SlideDeckAI.generate
    ~SlideDeckAI.reset
    ~SlideDeckAI.revise
+   ~SlideDeckAI.set_model
    ~SlideDeckAI.set_template
 
docs/index.rst
CHANGED

@@ -12,6 +12,7 @@ Please select a section below or choose a version in the bottom-left corner.
 
    installation.md
    usage.md
+   models.md
 
 .. toctree::
    :maxdepth: 2
docs/models.md
ADDED

@@ -0,0 +1,39 @@
+# Models
+
+This section provides an overview of the large language models (LLMs) supported by SlideDeck AI for generating slide decks. SlideDeck AI leverages various LLMs to create high-quality presentations based on user inputs.
+
+## Naming Convention
+
+SlideDeck AI uses LiteLLM. However, the models here follow a different naming syntax. For example, to use Google Gemini 2.0 Flash Lite in SlideDeck AI, the model name would be `[gg]gemini-2.0-flash-lite`. The SlideDeck AI app takes care of this automatically when users choose a model; however, when using the Python API, this naming convention needs to be followed.
+
+In particular, model names in SlideDeck AI are specified in the `[code]model-name` format.
+- The two-character prefix code in square brackets indicates the provider, for example, `[oa]` for OpenAI, `[gg]` for Google Gemini, and so on.
+- Following the code, the model name is specified, for example, `gemini-2.0-flash` or `gpt-4o`.
+
+Note that not every LLM may be suitable for slide generation tasks. SlideDeck AI generally works best with the Gemini models. Some models generate only short content, so it is recommended to try a few different models and see which one works best for your specific use case.
+
+## Supported Models
+
+SlideDeck AI supports the following online LLMs:
+
+| LLM | Provider (code) | Requires API key | Characteristics |
+|:---|:---|:---|:---|
+| Claude Haiku 4.5 | Anthropic (`an`) | Mandatory; [get here](https://platform.claude.com/settings/keys) | Faster, detailed |
+| Gemini 2.0 Flash | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Faster, longer content |
+| Gemini 2.0 Flash Lite | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Fastest, longer content |
+| Gemini 2.5 Flash | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Faster, longer content |
+| Gemini 2.5 Flash Lite | Google Gemini API (`gg`) | Mandatory; [get here](https://aistudio.google.com/apikey) | Fastest, longer content |
+| GPT-4.1-mini | OpenAI (`oa`) | Mandatory; [get here](https://platform.openai.com/settings/organization/api-keys) | Faster, medium content |
+| GPT-4.1-nano | OpenAI (`oa`) | Mandatory; [get here](https://platform.openai.com/settings/organization/api-keys) | Faster, shorter content |
+| GPT-5 | OpenAI (`oa`) | Mandatory; [get here](https://platform.openai.com/settings/organization/api-keys) | Slow, shorter content |
+| GPT | Azure OpenAI (`az`) | Mandatory; [get here](https://ai.azure.com/resource/playground). NOTE: You need to have your subscription/billing set up | Faster, longer content |
+| Command R+ | Cohere (`co`) | Mandatory; [get here](https://dashboard.cohere.com/api-keys) | Shorter, simpler content |
+| Gemini-2.0-flash-001 | OpenRouter (`or`) | Mandatory; [get here](https://openrouter.ai/settings/keys) | Faster, longer content |
+| GPT-3.5 Turbo | OpenRouter (`or`) | Mandatory; [get here](https://openrouter.ai/settings/keys) | Faster, longer content |
+| DeepSeek-V3.1-Terminus | SambaNova (`sn`) | Mandatory; [get here](https://cloud.sambanova.ai/apis) | Fast, detailed content |
+| Llama-3.3-Swallow-70B-Instruct-v0.4 | SambaNova (`sn`) | Mandatory; [get here](https://cloud.sambanova.ai/apis) | Fast, shorter |
+| DeepSeek V3-0324 | Together AI (`to`) | Mandatory; [get here](https://api.together.ai/settings/api-keys) | Slower, medium-length |
+| Llama 3.3 70B Instruct Turbo | Together AI (`to`) | Mandatory; [get here](https://api.together.ai/settings/api-keys) | Slower, detailed |
+| Llama 3.1 8B Instruct Turbo 128K | Together AI (`to`) | Mandatory; [get here](https://api.together.ai/settings/api-keys) | Faster, shorter |
+
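The provider codes listed in the table above can be turned into a small lookup, e.g. to report which provider a given model string targets. This is a hypothetical sketch for this page only — the mapping is transcribed from the table, not imported from SlideDeck AI:

```python
# Provider codes as documented in the "Supported Models" table (an assumption
# that this transcription is complete; it is not the library's own mapping).
PROVIDER_CODES = {
    "an": "Anthropic",
    "gg": "Google Gemini API",
    "oa": "OpenAI",
    "az": "Azure OpenAI",
    "co": "Cohere",
    "or": "OpenRouter",
    "sn": "SambaNova",
    "to": "Together AI",
}


def provider_of(model_name: str) -> str:
    """Return the provider for a '[code]model-name' string, e.g. '[oa]gpt-4o'."""
    if not model_name.startswith("[") or "]" not in model_name:
        raise ValueError(f"expected '[code]model-name' format, got: {model_name!r}")
    code = model_name[1:model_name.index("]")]
    try:
        return PROVIDER_CODES[code]
    except KeyError:
        raise ValueError(f"unknown provider code {code!r} in {model_name!r}") from None


print(provider_of("[oa]gpt-4o"))  # OpenAI
```

Such a lookup also doubles as cheap validation: an unknown prefix code fails fast with a clear error instead of being passed through to a provider request.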