| text | id | metadata | __index_level_0__ |
|---|---|---|---|
[
{
"question": "Which of the following best describes a Large Language Model (LLM)?",
"answer_a": "A model specializing in language recognition",
"answer_b": "A massive neural network that understands and generates human language",
"answer_c": "A model exclusively used for language data tasks like summarization or classification",
"answer_d": "A rule-based chatbot used for conversations",
"correct_answer": "B"
}
]
|
agents-course/quiz/data/unit_1.json/0
|
{
"file_path": "agents-course/quiz/data/unit_1.json",
"repo_id": "agents-course",
"token_count": 154
}
| 0
|
# Build Your Own Pokémon Battle Agent
Now that you’ve explored the potential and limitations of Agentic AI in games, it’s time to get hands-on. In this section, you’ll **build your very own AI Agent to battle in Pokémon-style turn-based combat**, using everything you’ve learned throughout the course.
We’ll break the system into four key building blocks:
- **Poke-env:** A Python library designed to train rule-based or reinforcement learning Pokémon bots.
- **Pokémon Showdown:** An online battle simulator where your agent will fight.
- **LLMAgentBase:** A custom Python class we’ve built to connect your LLM with the Poke-env battle environment.
- **TemplateAgent:** A starter template you’ll complete to create your own unique battle agent.
Let’s explore each of these components in more detail.
## 🧠 Poke-env

[Poke-env](https://github.com/hsahovic/poke-env) is a Python interface originally built for training reinforcement learning bots by [Haris Sahovic](https://huggingface.co/hsahovic), but we’ve repurposed it for Agentic AI.
It allows your agent to interact with Pokémon Showdown through a simple API.
It provides a `Player` class from which your Agent will inherit, covering everything needed to communicate with the graphical interface.
**Documentation**: [poke-env.readthedocs.io](https://poke-env.readthedocs.io/en/stable/)
**Repository**: [github.com/hsahovic/poke-env](https://github.com/hsahovic/poke-env)
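Before layering an LLM on top, it helps to see the raw Poke-env API in action. Here is a minimal rule-based bot in the spirit of the max-damage example from the Poke-env docs (a sketch, assuming the current stable `poke_env` API):
```python
from poke_env.player import Player


class MaxDamagePlayer(Player):
    """A simple rule-based bot: always picks the highest base-power move."""

    def choose_move(self, battle):
        if battle.available_moves:
            # Pick the attack with the highest base power
            best_move = max(battle.available_moves, key=lambda move: move.base_power)
            return self.create_order(best_move)
        # No attacks available: fall back to a random switch/default action
        return self.choose_random_move(battle)
```
Your LLM-driven agent will inherit from the same `Player` class; the only difference is that the decision logic will come from a model instead of a hand-written rule.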
## ⚔️ Pokémon Showdown
[Pokémon Showdown](https://pokemonshowdown.com/) is an [open-source](https://github.com/smogon/Pokemon-Showdown) battle simulator where your agent will play live Pokémon battles.
It provides a full interface to simulate and display battles in real time. In our challenge, your bot will act just like a human player, choosing moves turn by turn.
We’ve deployed a server that all participants will use to battle. Let’s see who builds the best AI battle Agent!
**Repository**: [github.com/smogon/Pokemon-Showdown](https://github.com/smogon/Pokemon-Showdown)
**Website**: [pokemonshowdown.com](https://pokemonshowdown.com/)
## 🔌 LLMAgentBase
`LLMAgentBase` is a Python class that extends the `Player` class from **Poke-env**.
It serves as the bridge between your **LLM** and the **Pokémon battle simulator**, handling input/output formatting and maintaining battle context.
This base agent provides a set of tools (defined in `STANDARD_TOOL_SCHEMA`) to interact with the environment, including:
- `choose_move`: for selecting an attack during battle
- `choose_switch`: for switching Pokémon
The LLM should use these tools to make decisions during a match.
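For orientation, a `choose_move` entry in such a schema could plausibly look like the following OpenAI-style function definition. This is purely illustrative, not the actual `STANDARD_TOOL_SCHEMA` source; see the full source linked at the end of this section for the real definitions:
```python
# Hypothetical shape of one STANDARD_TOOL_SCHEMA entry (illustration only)
choose_move_schema = {
    "name": "choose_move",
    "description": "Select one of the available moves for your active Pokemon this turn.",
    "parameters": {
        "type": "object",
        "properties": {
            "move_name": {
                "type": "string",
                "description": "The exact ID of the move to use, e.g. 'thunderbolt'.",
            }
        },
        "required": ["move_name"],
    },
}
```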
### 🧠 Core Logic
- `choose_move(battle: Battle)`: This is the main method invoked each turn. It takes a `Battle` object and returns an action string based on the LLM’s output.
### 🔧 Key Internal Methods
- `_format_battle_state(battle)`: Converts the current battle state into a string, making it suitable for sending to the LLM.
- `_find_move_by_name(battle, move_name)`: Finds a move by name, used in LLM responses that call `choose_move`.
- `_find_pokemon_by_name(battle, pokemon_name)`: Locates a specific Pokémon to switch into, based on the LLM’s switch command.
- `_get_llm_decision(battle_state)`: This method is abstract in the base class. You’ll need to implement it in your own agent (see next section), where you define how to query the LLM and parse its response.
Here’s an excerpt showing how that decision-making works:
```python
STANDARD_TOOL_SCHEMA = {
    "choose_move": {
        ...
    },
    "choose_switch": {
        ...
    },
}

class LLMAgentBase(Player):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.standard_tools = STANDARD_TOOL_SCHEMA
        self.battle_history = []

    def _format_battle_state(self, battle: Battle) -> str:
        active_pkmn = battle.active_pokemon
        active_pkmn_info = f"Your active Pokemon: {active_pkmn.species} " \
                           f"(Type: {'/'.join(map(str, active_pkmn.types))}) " \
                           f"HP: {active_pkmn.current_hp_fraction * 100:.1f}% " \
                           f"Status: {active_pkmn.status.name if active_pkmn.status else 'None'} " \
                           f"Boosts: {active_pkmn.boosts}"

        opponent_pkmn = battle.opponent_active_pokemon
        opp_info_str = "Unknown"
        if opponent_pkmn:
            opp_info_str = f"{opponent_pkmn.species} " \
                           f"(Type: {'/'.join(map(str, opponent_pkmn.types))}) " \
                           f"HP: {opponent_pkmn.current_hp_fraction * 100:.1f}% " \
                           f"Status: {opponent_pkmn.status.name if opponent_pkmn.status else 'None'} " \
                           f"Boosts: {opponent_pkmn.boosts}"
        opponent_pkmn_info = f"Opponent's active Pokemon: {opp_info_str}"

        available_moves_info = "Available moves:\n"
        if battle.available_moves:
            available_moves_info += "\n".join(
                [f"- {move.id} (Type: {move.type}, BP: {move.base_power}, Acc: {move.accuracy}, PP: {move.current_pp}/{move.max_pp}, Cat: {move.category.name})"
                 for move in battle.available_moves]
            )
        else:
            available_moves_info += "- None (Must switch or Struggle)"

        available_switches_info = "Available switches:\n"
        if battle.available_switches:
            available_switches_info += "\n".join(
                [f"- {pkmn.species} (HP: {pkmn.current_hp_fraction * 100:.1f}%, Status: {pkmn.status.name if pkmn.status else 'None'})"
                 for pkmn in battle.available_switches]
            )
        else:
            available_switches_info += "- None"

        state_str = f"{active_pkmn_info}\n" \
                    f"{opponent_pkmn_info}\n\n" \
                    f"{available_moves_info}\n\n" \
                    f"{available_switches_info}\n\n" \
                    f"Weather: {battle.weather}\n" \
                    f"Terrains: {battle.fields}\n" \
                    f"Your Side Conditions: {battle.side_conditions}\n" \
                    f"Opponent Side Conditions: {battle.opponent_side_conditions}"
        return state_str.strip()

    def _find_move_by_name(self, battle: Battle, move_name: str) -> Optional[Move]:
        normalized_name = normalize_name(move_name)
        # Prioritize exact ID match
        for move in battle.available_moves:
            if move.id == normalized_name:
                return move
        # Fallback: check display name (less reliable)
        for move in battle.available_moves:
            if move.name.lower() == move_name.lower():
                print(f"Warning: Matched move by display name '{move.name}' instead of ID '{move.id}'. Input was '{move_name}'.")
                return move
        return None

    def _find_pokemon_by_name(self, battle: Battle, pokemon_name: str) -> Optional[Pokemon]:
        normalized_name = normalize_name(pokemon_name)
        for pkmn in battle.available_switches:
            # Normalize the species name for comparison
            if normalize_name(pkmn.species) == normalized_name:
                return pkmn
        return None

    async def choose_move(self, battle: Battle) -> str:
        battle_state_str = self._format_battle_state(battle)
        decision_result = await self._get_llm_decision(battle_state_str)
        print(decision_result)
        decision = decision_result.get("decision")
        error_message = decision_result.get("error")
        action_taken = False
        fallback_reason = ""

        if decision:
            function_name = decision.get("name")
            args = decision.get("arguments", {})
            if function_name == "choose_move":
                move_name = args.get("move_name")
                if move_name:
                    chosen_move = self._find_move_by_name(battle, move_name)
                    if chosen_move and chosen_move in battle.available_moves:
                        action_taken = True
                        chat_msg = f"AI Decision: Using move '{chosen_move.id}'."
                        print(chat_msg)
                        return self.create_order(chosen_move)
                    else:
                        fallback_reason = f"LLM chose unavailable/invalid move '{move_name}'."
                else:
                    fallback_reason = "LLM 'choose_move' called without 'move_name'."
            elif function_name == "choose_switch":
                pokemon_name = args.get("pokemon_name")
                if pokemon_name:
                    chosen_switch = self._find_pokemon_by_name(battle, pokemon_name)
                    if chosen_switch and chosen_switch in battle.available_switches:
                        action_taken = True
                        chat_msg = f"AI Decision: Switching to '{chosen_switch.species}'."
                        print(chat_msg)
                        return self.create_order(chosen_switch)
                    else:
                        fallback_reason = f"LLM chose unavailable/invalid switch '{pokemon_name}'."
                else:
                    fallback_reason = "LLM 'choose_switch' called without 'pokemon_name'."
            else:
                fallback_reason = f"LLM called unknown function '{function_name}'."

        if not action_taken:
            if not fallback_reason:
                if error_message:
                    fallback_reason = f"API Error: {error_message}"
                elif decision is None:
                    fallback_reason = "LLM did not provide a valid function call."
                else:
                    fallback_reason = "Unknown error processing LLM decision."
            print(f"Warning: {fallback_reason} Choosing random action.")
            if battle.available_moves or battle.available_switches:
                return self.choose_random_move(battle)
            else:
                print("AI Fallback: No moves or switches available. Using Struggle/Default.")
                return self.choose_default_move(battle)

    async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]:
        raise NotImplementedError("Subclasses must implement _get_llm_decision")
```
**Full source code**: [agents.py](https://huggingface.co/spaces/Jofthomas/twitch_streaming/blob/main/agents.py)
## 🧪 TemplateAgent
Now comes the fun part! With LLMAgentBase as your foundation, it’s time to implement your own agent, with your own strategy to climb the leaderboard.
You’ll start from this template and build your own logic. We’ve also provided three [complete examples](https://huggingface.co/spaces/Jofthomas/twitch_streaming/blob/main/agents.py) using **OpenAI**, **Mistral**, and **Gemini** models to guide you.
Here’s a simplified version of the template:
```python
class TemplateAgent(LLMAgentBase):
    """Uses Template AI API for decisions."""

    def __init__(self, api_key: str = None, model: str = "model-name", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model
        self.template_client = TemplateModelProvider(api_key=...)
        self.template_tools = list(self.standard_tools.values())

    async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]:
        """Sends state to the LLM and gets back the function call decision."""
        system_prompt = (
            "You are a ..."
        )
        user_prompt = f"..."

        try:
            response = await self.template_client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt},
                ],
            )
            message = response.choices[0].message
            return {"decision": {"name": function_name, "arguments": arguments}}
        except Exception as e:
            print(f"Unexpected error during call: {e}")
            return {"error": f"Unexpected error: {e}"}
```
This code won’t run out of the box; it’s a blueprint for your custom logic.
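To make the blueprint concrete, here is one possible way to fill it in using the official `openai` Python SDK. It assumes your `STANDARD_TOOL_SCHEMA` values follow the OpenAI function-calling format, which may not match the actual course source; adapt the client setup and parsing to whichever provider you use:
```python
import json
from typing import Any, Dict

from openai import AsyncOpenAI


class OpenAITemplateAgent(LLMAgentBase):
    """A possible concrete agent backed by OpenAI's chat completions API."""

    def __init__(self, api_key: str = None, model: str = "gpt-4o-mini", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model
        self.client = AsyncOpenAI(api_key=api_key)
        # Assumption: STANDARD_TOOL_SCHEMA values are OpenAI-style function schemas
        self.tools = [{"type": "function", "function": f} for f in self.standard_tools.values()]

    async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]:
        """Send the battle state to the LLM and parse the tool call it returns."""
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": "You are a competitive Pokemon battle AI. Choose exactly one tool call each turn."},
                    {"role": "user", "content": battle_state},
                ],
                tools=self.tools,
                tool_choice="auto",
            )
            message = response.choices[0].message
            if not message.tool_calls:
                return {"error": "LLM returned no tool call."}
            call = message.tool_calls[0]
            # Arguments arrive as a JSON string, e.g. '{"move_name": "thunderbolt"}'
            return {"decision": {"name": call.function.name, "arguments": json.loads(call.function.arguments)}}
        except Exception as e:
            return {"error": f"Unexpected error: {e}"}
```
The same structure transfers to Mistral or Gemini clients: only the client construction and the tool-call parsing change.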
With all the pieces ready, it’s your turn to build a competitive agent. In the next section, we’ll show how to deploy your agent to our server and battle others in real-time.
Let the battle begin! 🔥
|
agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx/0
|
{
"file_path": "agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx",
"repo_id": "agents-course",
"token_count": 5276
}
| 1
|
# Introduction to Agents
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/>
Welcome to this first unit, where **you'll build a solid foundation in the fundamentals of AI Agents**, including:
- **Understanding Agents**
- What is an Agent, and how does it work?
- How do Agents make decisions using reasoning and planning?
- **The Role of LLMs (Large Language Models) in Agents**
- How LLMs serve as the “brain” behind an Agent.
- How LLMs structure conversations via the Messages system.
- **Tools and Actions**
- How Agents use external tools to interact with the environment.
- How to build and integrate tools for your Agent.
- **The Agent Workflow:**
- *Think* → *Act* → *Observe*.
After exploring these topics, **you’ll build your first Agent** using `smolagents`!
Your Agent, named Alfred, will handle a simple task and demonstrate how to apply these concepts in practice.
You’ll even learn how to **publish your Agent on Hugging Face Spaces**, so you can share it with friends and colleagues.
Finally, at the end of this Unit, you'll take a quiz. Pass it, and you'll **earn your first course certification**: the 🎓 Certificate of Fundamentals of Agents.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/>
This Unit is your **essential starting point**, laying the groundwork for understanding Agents before you move on to more advanced topics.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/>
It's a big unit, so **take your time** and don’t hesitate to come back to these sections from time to time.
Ready? Let’s dive in! 🚀
|
agents-course/units/en/unit1/introduction.mdx/0
|
{
"file_path": "agents-course/units/en/unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 530
}
| 2
|
# Test Your Understanding of LangGraph
Let's test your understanding of `LangGraph` with a quick quiz! This will help reinforce the key concepts we've covered so far.
This is an optional quiz and it's not graded.
### Q1: What is the primary purpose of LangGraph?
Which statement best describes what LangGraph is designed for?
<Question
choices={[
{
text: "A framework to build control flows for applications containing LLMs",
explain: "LangGraph is specifically designed to help build and manage the control flow of applications that use LLMs.",
correct: true
},
{
text: "A library that provides interfaces to interact with different LLM models",
explain: "This better describes LangChain's role, which provides standard interfaces for model interaction. LangGraph focuses on control flow.",
},
{
text: "An Agent library for tool calling",
explain: "While LangGraph works with agents, the main purpose of langGraph is 'Ochestration'.",
}
]}
/>
---
### Q2: In the context of the "Control vs Freedom" trade-off, where does LangGraph stand?
Which statement best characterizes LangGraph's approach to agent design?
<Question
choices={[
{
text: "LangGraph maximizes freedom, allowing LLMs to make all decisions independently",
explain: "LangGraph actually focuses more on control than freedom, providing structure for LLM workflows.",
},
{
text: "LangGraph provides strong control over execution flow while still leveraging LLM capabilities for decision making",
explain: "LangGraph shines when you need control over your agent's execution, providing predictable behavior through structured workflows.",
correct: true
},
]}
/>
---
### Q3: What role does State play in LangGraph?
Choose the most accurate description of State in LangGraph.
<Question
choices={[
{
text: "State is the latest generation from the LLM",
explain: "State is a user-defined class in LangGraph, not LLM generated. It's fields are user defined, the values can be LLM filled",
},
{
text: "State is only used to track errors during execution",
explain: "State has a much broader purpose than just error tracking. But that's still usefull.",
},
{
text: "State represents the information that flows through your agent application",
explain: "State is central to LangGraph and contains all the information needed for decision-making between steps. You provide the fields than you need to compute and the nodes can alter the values to decide on a branching.",
correct: true
},
{
text: "State is only relevant when working with external APIs",
explain: "State is fundamental to all LangGraph applications, not just those working with external APIs.",
}
]}
/>
### Q4: What is a Conditional Edge in LangGraph?
Select the most accurate description.
<Question
choices={[
{
text: "An edge that determines which node to execute next based on evaluating a condition",
explain: "Conditional edges allow your graph to make dynamic routing decisions based on the current state, creating branching logic in your workflow.",
correct: true
},
{
text: "An edge that is only followed when a specific condition occurs",
explain: "Conditional edges control the flow of the application on it's outputs, not on the input.",
},
{
text: "An edge that requires user confirmation before proceeding",
explain: "Conditional edges are based on programmatic conditions, not user interaction requirements.",
}
]}
/>
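If the conditional-edge idea still feels abstract, here is a minimal, self-contained sketch (a generic illustration assuming current `langgraph` APIs, separate from the quiz itself):
```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    value: int


def inspect_value(state: State) -> State:
    return state  # a real node would compute something and update the state


def route(state: State) -> str:
    # The condition is evaluated against the current state to pick the next node
    return "big" if state["value"] > 10 else "small"


builder = StateGraph(State)
builder.add_node("inspect", inspect_value)
builder.add_node("big", lambda state: state)
builder.add_node("small", lambda state: state)
builder.add_edge(START, "inspect")
# Conditional edge: `route` decides whether execution continues at "big" or "small"
builder.add_conditional_edges("inspect", route)
builder.add_edge("big", END)
builder.add_edge("small", END)

graph = builder.compile()
print(graph.invoke({"value": 42}))  # flows through the "big" branch
```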
---
### Q5: How does LangGraph help address the hallucination problem in LLMs?
Choose the best answer.
<Question
choices={[
{
text: "LangGraph eliminates hallucinations entirely by limiting LLM responses",
explain: "No framework can completely eliminate hallucinations from LLMs, LangGraph is no exception.",
},
{
text: "LangGraph provides structured workflows that can validate and verify LLM outputs",
explain: "By creating structured workflows with validation steps, verification nodes, and error handling paths, LangGraph helps reduce the impact of hallucinations.",
correct: true
},
{
text: "LangGraph has no effect on hallucinations",
explain: "LangGraph's structured approach to workflows can help significantly in mitigating hallucinations at the cost of speed.",
}
]}
/>
Congratulations on completing the quiz! 🎉 If you missed any questions, consider reviewing the previous sections to strengthen your understanding. Next, we'll explore more advanced features of LangGraph and see how to build more complex agent workflows.
|
agents-course/units/en/unit2/langgraph/quiz1.mdx/0
|
{
"file_path": "agents-course/units/en/unit2/langgraph/quiz1.mdx",
"repo_id": "agents-course",
"token_count": 1169
}
| 3
|
<CourseFloatingBanner
classNames="absolute z-10 right-0 top-0"
notebooks={[
{label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb"},
]}
askForHelpUrl="http://hf.co/join/discord" />
# Multi-Agent Systems
Multi-agent systems enable **specialized agents to collaborate on complex tasks**, improving modularity, scalability, and robustness. Instead of relying on a single agent, tasks are distributed among agents with distinct capabilities.
In **smolagents**, different agents can be combined to generate Python code, call external tools, perform web searches, and more. By orchestrating these agents, we can create powerful workflows.
A typical setup might include:
- A **Manager Agent** for task delegation
- A **Code Interpreter Agent** for code execution
- A **Web Search Agent** for information retrieval
The diagram below illustrates a simple multi-agent architecture where a **Manager Agent** coordinates a **Code Interpreter Tool** and a **Web Search Agent**, which in turn utilizes tools like the `DuckDuckGoSearchTool` and `VisitWebpageTool` to gather relevant information.
<img src="https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQXj6dbR4XUGQg6zEYasTF393KjeSDGnDJKNxzj8I_7hLW5IOSmP9CH9hv_NL-d94d4DVNg84p1EnK4qlIj5hGClySWbadT-6OdsrL02MI8sFOOVkciw8zx8kaNspxnrJQE0fXKtjBMMs3JA-MpgOQwftIE9Bzj14w-cMznI_39E9Z3p0uFoA?type=png" style='background: white;'>
## Multi-Agent Systems in Action
A multi-agent system consists of multiple specialized agents working together under the coordination of an **Orchestrator Agent**. This approach enables complex workflows by distributing tasks among agents with distinct roles.
For example, a **Multi-Agent RAG system** can integrate:
- A **Web Agent** for browsing the internet.
- A **Retriever Agent** for fetching information from knowledge bases.
- An **Image Generation Agent** for producing visuals.
All of these agents operate under an orchestrator that manages task delegation and interaction.
## Solving a complex task with a multi-agent hierarchy
<Tip>
You can follow the code in <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb" target="_blank">this notebook</a> that you can run using Google Colab.
</Tip>
The reception is approaching! With your help, Alfred is now nearly finished with the preparations.
But now there's a problem: the Batmobile has disappeared. Alfred needs to find a replacement, and find it quickly.
Fortunately, a few biopics have been done on Bruce Wayne's life, so maybe Alfred could get a car left behind on one of the movie sets, and re-engineer it up to modern standards, which certainly would include a full self-driving option.
But this could be anywhere in the filming locations around the world - which could be numerous.
So Alfred wants your help. Could you build an agent able to solve this task?
> 👉 Find all Batman filming locations in the world, calculate the time to transfer there via cargo plane, and represent them on a map, with a color varying by cargo plane transfer time. Also represent some supercar factories with the same cargo plane transfer time.
Let's build this!
This example needs some additional packages, so let's install them first:
```bash
pip install 'smolagents[litellm]' plotly geopandas shapely kaleido -q
```
### We first make a tool to get the cargo plane transfer time.
```python
import math
from typing import Optional, Tuple

from smolagents import tool


@tool
def calculate_cargo_travel_time(
    origin_coords: Tuple[float, float],
    destination_coords: Tuple[float, float],
    cruising_speed_kmh: Optional[float] = 750.0,  # Average speed for cargo planes
) -> float:
    """
    Calculate the travel time for a cargo plane between two points on Earth using great-circle distance.

    Args:
        origin_coords: Tuple of (latitude, longitude) for the starting point
        destination_coords: Tuple of (latitude, longitude) for the destination
        cruising_speed_kmh: Optional cruising speed in km/h (defaults to 750 km/h for typical cargo planes)

    Returns:
        float: The estimated travel time in hours

    Example:
        >>> # Chicago (41.8781° N, 87.6298° W) to Sydney (33.8688° S, 151.2093° E)
        >>> result = calculate_cargo_travel_time((41.8781, -87.6298), (-33.8688, 151.2093))
    """

    def to_radians(degrees: float) -> float:
        return degrees * (math.pi / 180)

    # Extract coordinates
    lat1, lon1 = map(to_radians, origin_coords)
    lat2, lon2 = map(to_radians, destination_coords)

    # Earth's radius in kilometers
    EARTH_RADIUS_KM = 6371.0

    # Calculate great-circle distance using the haversine formula
    dlon = lon2 - lon1
    dlat = lat2 - lat1
    a = (
        math.sin(dlat / 2) ** 2
        + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    )
    c = 2 * math.asin(math.sqrt(a))
    distance = EARTH_RADIUS_KM * c

    # Add 10% to account for non-direct routes and air traffic controls
    actual_distance = distance * 1.1

    # Calculate flight time
    # Add 1 hour for takeoff and landing procedures
    flight_time = (actual_distance / cruising_speed_kmh) + 1.0

    # Format the results
    return round(flight_time, 2)


print(calculate_cargo_travel_time((41.8781, -87.6298), (-33.8688, 151.2093)))
```
### Setting up the agent
For the model provider, we use Together AI, one of the new [inference providers on the Hub](https://huggingface.co/blog/inference-providers)!
The GoogleSearchTool uses the [Serper API](https://serper.dev) to search the web, so it requires either setting the env variable `SERPAPI_API_KEY` and passing `provider="serpapi"`, or setting `SERPER_API_KEY` and passing `provider="serper"`.
If you don't have any Serp API provider set up, you can use `DuckDuckGoSearchTool` instead, but beware that it has a rate limit.
```python
import os
from PIL import Image
from smolagents import CodeAgent, GoogleSearchTool, InferenceClientModel, VisitWebpageTool
model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct", provider="together")
```
We can start by creating a simple agent to serve as a baseline and give us a basic report.
```python
task = """Find all Batman filming locations in the world, calculate the time to transfer via cargo plane to here (we're in Gotham, 40.7128° N, 74.0060° W), and return them to me as a pandas dataframe.
Also give me some supercar factories with the same cargo plane transfer time."""
```
```python
agent = CodeAgent(
    model=model,
    tools=[GoogleSearchTool("serper"), VisitWebpageTool(), calculate_cargo_travel_time],
    additional_authorized_imports=["pandas"],
    max_steps=20,
)
```
```python
result = agent.run(task)
```
```python
result
```
In our case, it generates this output:
```python
| | Location | Travel Time to Gotham (hours) |
|--|------------------------------------------------------|------------------------------|
| 0 | Necropolis Cemetery, Glasgow, Scotland, UK | 8.60 |
| 1 | St. George's Hall, Liverpool, England, UK | 8.81 |
| 2 | Two Temple Place, London, England, UK | 9.17 |
| 3 | Wollaton Hall, Nottingham, England, UK | 9.00 |
| 4 | Knebworth House, Knebworth, Hertfordshire, UK | 9.15 |
| 5 | Acton Lane Power Station, Acton Lane, Acton, UK | 9.16 |
| 6 | Queensboro Bridge, New York City, USA | 1.01 |
| 7 | Wall Street, New York City, USA | 1.00 |
| 8 | Mehrangarh Fort, Jodhpur, Rajasthan, India | 18.34 |
| 9 | Turda Gorge, Turda, Romania | 11.89 |
| 10 | Chicago, USA | 2.68 |
| 11 | Hong Kong, China | 19.99 |
| 12 | Cardington Studios, Northamptonshire, UK | 9.10 |
| 13 | Warner Bros. Leavesden Studios, Hertfordshire, UK | 9.13 |
| 14 | Westwood, Los Angeles, CA, USA | 6.79 |
| 15 | Woking, UK (McLaren) | 9.13 |
```
We could already improve this a bit by throwing in some dedicated planning steps, and adding more prompting.
Planning steps allow the agent to think ahead and plan its next steps, which can be useful for more complex tasks.
```python
agent.planning_interval = 4
detailed_report = agent.run(f"""
You're an expert analyst. You make comprehensive reports after visiting many websites.
Don't hesitate to search for many queries at once in a for loop.
For each data point that you find, visit the source url to confirm numbers.
{task}
""")
print(detailed_report)
```
```python
detailed_report
```
In our case, it generates this output:
```python
| | Location | Travel Time (hours) |
|--|--------------------------------------------------|---------------------|
| 0 | Bridge of Sighs, Glasgow Necropolis, Glasgow, UK | 8.6 |
| 1 | Wishart Street, Glasgow, Scotland, UK | 8.6 |
```
Thanks to these quick changes, we obtained a much more concise report by simply providing our agent with a detailed prompt and giving it planning capabilities!
The model's context window is quickly filling up. So **if we ask our agent to combine the results of detailed search with another, it will be slower and quickly ramp up tokens and costs**.
➡️ We need to improve the structure of our system.
### ✌️ Splitting the task between two agents
Multi-agent structures allow us to separate memories between different sub-tasks, with two great benefits:
- Each agent is more focused on its core task, thus more performant
- Separating memories reduces the count of input tokens at each step, thus reducing latency and cost.
Let's create a team with a dedicated web search agent, managed by another agent.
The manager agent should have plotting capabilities to write its final report: so let us give it access to additional imports, including `plotly`, and `geopandas` + `shapely` for spatial plotting.
```python
model = InferenceClientModel(
    "Qwen/Qwen2.5-Coder-32B-Instruct", provider="together", max_tokens=8096
)

web_agent = CodeAgent(
    model=model,
    tools=[
        GoogleSearchTool(provider="serper"),
        VisitWebpageTool(),
        calculate_cargo_travel_time,
    ],
    name="web_agent",
    description="Browses the web to find information",
    verbosity_level=0,
    max_steps=10,
)
```
The manager agent will need to do some mental heavy lifting.
So we give it the stronger model [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), and add a `planning_interval` to the mix.
```python
from smolagents.utils import encode_image_base64, make_image_url
from smolagents import OpenAIServerModel


def check_reasoning_and_plot(final_answer, agent_memory):
    multimodal_model = OpenAIServerModel("gpt-4o", max_tokens=8096)
    filepath = "saved_map.png"
    assert os.path.exists(filepath), "Make sure to save the plot under saved_map.png!"
    image = Image.open(filepath)
    prompt = (
        f"Here is a user-given task and the agent steps: {agent_memory.get_succinct_steps()}. Now here is the plot that was made."
        "Please check that the reasoning process and plot are correct: do they correctly answer the given task?"
        "First list reasons why yes/no, then write your final decision: PASS in caps lock if it is satisfactory, FAIL if it is not."
        "Don't be harsh: if the plot mostly solves the task, it should pass."
        "To pass, a plot should be made using px.scatter_map and not any other method (scatter_map looks nicer)."
    )
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt,
                },
                {
                    "type": "image_url",
                    "image_url": {"url": make_image_url(encode_image_base64(image))},
                },
            ],
        }
    ]
    output = multimodal_model(messages).content
    print("Feedback: ", output)
    if "FAIL" in output:
        raise Exception(output)
    return True


manager_agent = CodeAgent(
    model=InferenceClientModel("deepseek-ai/DeepSeek-R1", provider="together", max_tokens=8096),
    tools=[calculate_cargo_travel_time],
    managed_agents=[web_agent],
    additional_authorized_imports=[
        "geopandas",
        "plotly",
        "shapely",
        "json",
        "pandas",
        "numpy",
    ],
    planning_interval=5,
    verbosity_level=2,
    final_answer_checks=[check_reasoning_and_plot],
    max_steps=15,
)
```
Let us inspect what this team looks like:
```python
manager_agent.visualize()
```
This will generate something like this, helping us understand the structure and relationship between agents and tools used:
```python
CodeAgent | deepseek-ai/DeepSeek-R1
├── ✅ Authorized imports: ['geopandas', 'plotly', 'shapely', 'json', 'pandas', 'numpy']
├── 🛠️ Tools:
│ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
│ ┃ Name ┃ Description ┃ Arguments ┃
│ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ │ calculate_cargo_travel_time │ Calculate the travel time for a cargo │ origin_coords (`array`): Tuple of │
│ │ │ plane between two points on Earth │ (latitude, longitude) for the │
│ │ │ using great-circle distance. │ starting point │
│ │ │ │ destination_coords (`array`): Tuple │
│ │ │ │ of (latitude, longitude) for the │
│ │ │ │ destination │
│ │ │ │ cruising_speed_kmh (`number`): │
│ │ │ │ Optional cruising speed in km/h │
│ │ │ │ (defaults to 750 km/h for typical │
│ │ │ │ cargo planes) │
│ │ final_answer │ Provides a final answer to the given │ answer (`any`): The final answer to │
│ │ │ problem. │ the problem │
│ └─────────────────────────────┴───────────────────────────────────────┴───────────────────────────────────────┘
└── 🤖 Managed agents:
└── web_agent | CodeAgent | Qwen/Qwen2.5-Coder-32B-Instruct
├── ✅ Authorized imports: []
├── 📝 Description: Browses the web to find information
└── 🛠️ Tools:
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Name ┃ Description ┃ Arguments ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ web_search │ Performs a google web search for │ query (`string`): The search │
│ │ your query then returns a string │ query to perform. │
│ │ of the top search results. │ filter_year (`integer`): │
│ │ │ Optionally restrict results to a │
│ │ │ certain year │
│ visit_webpage │ Visits a webpage at the given url │ url (`string`): The url of the │
│ │ and reads its content as a │ webpage to visit. │
│ │ markdown string. Use this to │ │
│ │ browse webpages. │ │
│ calculate_cargo_travel_time │ Calculate the travel time for a │ origin_coords (`array`): Tuple of │
│ │ cargo plane between two points on │ (latitude, longitude) for the │
│ │ Earth using great-circle │ starting point │
│ │ distance. │ destination_coords (`array`): │
│ │ │ Tuple of (latitude, longitude) │
│ │ │ for the destination │
│ │ │ cruising_speed_kmh (`number`): │
│ │ │ Optional cruising speed in km/h │
│ │ │ (defaults to 750 km/h for typical │
│ │ │ cargo planes) │
│ final_answer │ Provides a final answer to the │ answer (`any`): The final answer │
│ │ given problem. │ to the problem │
└─────────────────────────────┴───────────────────────────────────┴───────────────────────────────────┘
```
```python
manager_agent.run("""
Find all Batman filming locations in the world, calculate the time to transfer via cargo plane to here (we're in Gotham, 40.7128° N, 74.0060° W).
Also give me some supercar factories with the same cargo plane transfer time. You need at least 6 points in total.
Represent this as spatial map of the world, with the locations represented as scatter points with a color that depends on the travel time, and save it to saved_map.png!
Here's an example of how to plot and return a map:
import plotly.express as px
df = px.data.carshare()
fig = px.scatter_map(df, lat="centroid_lat", lon="centroid_lon", text="name", color="peak_hour", size=100,
color_continuous_scale=px.colors.sequential.Magma, size_max=15, zoom=1)
fig.show()
fig.write_image("saved_image.png")
final_answer(fig)
Never try to process strings using code: when you have a string to read, just print it and you'll see it.
""")
```
I don't know how that went in your run, but in mine, the manager agent skilfully divided the tasks given to the web agent into `1. Search for Batman filming locations` and `2. Find supercar factories`, before aggregating the lists and plotting the map.
Let's see what the map looks like by inspecting it directly from the agent state:
```python
manager_agent.python_executor.state["fig"]
```
This will output the map:

## Resources
- [Multi-Agent Systems](https://huggingface.co/docs/smolagents/main/en/examples/multiagents) – Overview of multi-agent systems.
- [What is Agentic RAG?](https://weaviate.io/blog/what-is-agentic-rag) – Introduction to Agentic RAG.
- [Multi-Agent RAG System 🤖🤝🤖 Recipe](https://huggingface.co/learn/cookbook/multiagent_rag_system) – Step-by-step guide to building a multi-agent RAG system.
|
agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx/0
|
{
"file_path": "agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx",
"repo_id": "agents-course",
"token_count": 9133
}
| 4
|
# Conclusion
**Congratulations on finishing the Agents Course!**
Through perseverance and dedication, you’ve built a solid foundation in the world of AI Agents.
But finishing this course is **not the end of your journey**. It’s just the beginning: don’t hesitate to explore the next section where we share curated resources to help you continue learning, including advanced topics like **MCPs** and beyond.
**Thank you** for being part of this course. **We hope you liked this course as much as we loved writing it**.
And don’t forget: **Keep Learning, Stay Awesome 🤗**
|
agents-course/units/en/unit4/conclusion.mdx/0
|
{
"file_path": "agents-course/units/en/unit4/conclusion.mdx",
"repo_id": "agents-course",
"token_count": 142
}
| 5
|
# From LLMs to AI Agents
We learned in the [first unit](https://huggingface.co/learn/agents-course/unit1/introduction) of the course that AI Agents are capable of planning and making decisions.
And while LLMs have enabled more natural interactions with NPCs, Agentic AI goes a step further by allowing characters to make decisions, plan actions, and adapt to changing environments.
To illustrate the difference, think of a classic RPG NPC:
- With an LLM: the NPC might answer your questions in a more natural, varied way. It's great for dialogue, but the NPC remains static; it won't act unless you do something first.
- With Agentic AI: the NPC can decide to go for help, set a trap, or avoid you entirely, even if you're not interacting with it directly.
This small shift changes everything. We are moving from scripted responders to autonomous actors within the game world.
It means NPCs can now interact directly with their environment through goal-directed behaviors, ultimately leading to more dynamic and unpredictable gameplay.
Agentic AI empowers NPCs with:
- **Autonomy**: Making independent decisions based on the game state.
- **Adaptability**: Adjusting strategies in response to player actions.
- **Persistence**: Remembering past interactions to inform future behavior.
This transforms NPCs from reactive entities (reacting to your inputs) into proactive participants in the game world, opening the door to innovative gameplay.
## The big limitation of Agents: **they're slow** (for now)
However, let's not get too optimistic just yet. Despite its potential, Agentic AI currently faces challenges in real-time applications.
The reasoning and planning processes can introduce latency, making it less suitable for fast-paced games like *Doom* or *Super Mario Bros.*
Take the example of [_Claude Plays Pokémon_](https://www.twitch.tv/claudeplayspokemon). If you consider the number of tokens needed to **think**, plus the tokens needed to **act**, it becomes clear that we would need entirely different decoding strategies to make real-time gaming feasible.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/claude-plays-pokemon.png" alt="Claude plays Pokémon"/>
Most games need to run at around 30 FPS, which means a real-time AI agent would need to act 30 times per second, something that is currently not feasible with today's agentic LLMs.
However, turn-based games like *Pokémon* are ideal candidates, since they give the AI plenty of time to deliberate and make strategic decisions.
That's why in the next section, you'll build your own AI Agent to battle in Pokémon-style turn-based combat, and even challenge it yourself. Let's get to work!
|
agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx/0
|
{
"file_path": "agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx",
"repo_id": "agents-course",
"token_count": 1073
}
| 6
|
# Observe: Integrating Feedback to Reflect and Adapt
Observations are **how an Agent perceives the consequences of its actions**.
They provide crucial information that feeds the Agent's thought process and guides future actions.
They are **signals from the environment**, whether data from an API, error messages, or system logs, that guide the next thought cycle.
In the observation phase, the agent:
- **Collects Feedback:** Receives data or confirmation that its action was successful (or not).
- **Appends Results:** Integrates the new information into its existing context, effectively updating its memory.
- **Adapts its Strategy:** Uses this updated context to refine subsequent thoughts and actions.
For example, if a weather API returns the data *"partly cloudy, 15°C, 60% humidity"*, this observation is appended to the agent's memory (at the end of the prompt).
The Agent then uses it to decide whether additional information is needed or whether it is ready to provide a final answer.
This **iterative incorporation of feedback ensures the agent stays dynamically aligned with its goals**, constantly learning and adjusting based on real-world results.
These observations **can take many forms**, from reading webpage text to monitoring a robot arm's position. They can be seen as Tool "logs" that provide textual feedback on the Action's execution.

| Type of Observation | Example |
|---------------------|---------------------------------------------------------------------------|
| System Feedback | Error messages, success notifications, status codes |
| Data Changes | Database updates, file-system modifications, state changes |
| Environmental Data | Sensor readings, system metrics, resource usage |
| Response Analysis | API responses, query results, computation outputs |
| Time-Based Events | Deadlines reached, scheduled tasks completed |
## How Are the Results Appended?
After performing an action, the framework follows these steps in order (see the sketch after this list):
1. **Parse the action** to identify the function(s) to call and the argument(s) to use.
2. **Execute the action.**
3. **Append the result** as an **Observation**.
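As a rough sketch in plain Python (every name here, `parse_action`, `agent_loop`, `fake_llm`, is a hypothetical placeholder, not a specific framework's API), those three steps look like this:
```python
from typing import Callable, Dict, Optional, Tuple


def parse_action(reply: str) -> Optional[Tuple[str, str]]:
    """Toy parser: recognizes replies like 'Action: tool_name(argument)'."""
    if reply.startswith("Action:"):
        name, arg = reply[len("Action:"):].strip().split("(", 1)
        return name.strip(), arg.rstrip(")")
    return None  # no action requested


def agent_loop(llm: Callable[[str], str], tools: Dict[str, Callable[[str], str]],
               prompt: str, max_turns: int = 5) -> str:
    context = prompt
    for _ in range(max_turns):
        reply = llm(context)          # the model proposes an action or a final answer
        action = parse_action(reply)  # 1. Parse the action (function + arguments)
        if action is None:
            return reply              # no action: treat the reply as the final answer
        name, arg = action
        result = tools[name](arg)     # 2. Execute the action
        context += f"\nObservation: {result}"  # 3. Append the result as an Observation
    return context


# Toy run: a fake "LLM" that calls the weather tool once, then answers.
turn = {"n": 0}
def fake_llm(ctx: str) -> str:
    turn["n"] += 1
    return "Action: weather(Paris)" if turn["n"] == 1 else "Final answer: partly cloudy, 15°C"

print(agent_loop(fake_llm,
                 {"weather": lambda city: f"partly cloudy, 15°C, 60% humidity in {city}"},
                 "What's the weather in Paris?"))
```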
---
We have now covered the Agent's Thought-Action-Observation Cycle.
If some aspects still seem a bit fuzzy, don't worry: we'll revisit and deepen these concepts in future Units.
Now, it's time to put your knowledge into practice by coding your first Agent!
|
agents-course/units/es/unit1/observations.mdx/0
|
{
"file_path": "agents-course/units/es/unit1/observations.mdx",
"repo_id": "agents-course",
"token_count": 1208
}
| 7
|
# Table of Contents
This LlamaIndex framework section is part of unit 2 of the course. You can access unit 2 on LlamaIndex on hf.co/learn <a href="https://hf.co/learn/agents-course/unit2/llama-index/introduction">here</a>
| Title | Description |
| --- | --- |
| [Introduction](introduction.mdx) | Introduction to LlamaIndex |
| [LlamaHub](llama-hub.mdx) | LlamaHub: a registry of integrations, agents, and tools |
| [Components](components.mdx) | Components: the building blocks of workflows |
| [Tools](tools.mdx) | Tools: how to build tools in LlamaIndex |
| [Quiz 1](quiz1.mdx) | Quiz 1 |
| [Agents](agents.mdx) | Agents: how to build agents in LlamaIndex |
| [Workflows](workflows.mdx) | Workflows: a sequence of steps and events, composed of components, that execute in order |
| [Quiz 2](quiz2.mdx) | Quiz 2 |
| [Conclusion](conclusion.mdx) | Conclusion |
|
agents-course/units/es/unit2/llama-index/README.md/0
|
{
"file_path": "agents-course/units/es/unit2/llama-index/README.md",
"repo_id": "agents-course",
"token_count": 382
}
| 8
|
# Small Quiz (ungraded) [[quiz2]]
It's time to test your understanding of the *Code Agents*, *Tool Calling Agents*, and *Tools* sections. This quiz is optional and not graded.
---
### Q1: What is the key difference between creating a tool with the `@tool` decorator versus creating a subclass of `Tool` in smolagents?
Which statement best describes the distinction between these two approaches to defining tools?
<Question
choices={[
{
text: "Using the <code>@tool</code> decorator is mandatory for retrieval-based tools, while subclasses of <code>Tool</code> are only for text-generation tasks",
explain: "Both approaches can be used for any kind of tool, including retrieval-based or text-generation ones.",
},
{
text: "The <code>@tool</code> decorator is recommended for simple function-based tools, while subclasses of <code>Tool</code> offer more flexibility for complex functionality or custom metadata",
explain: "This is correct. The decorator approach is simpler, but subclassing allows more customized behavior.",
correct: true
},
{
text: "<code>@tool</code> can only be used in multi-agent systems, while creating a subclass of <code>Tool</code> is for single-agent scenarios",
explain: "All agents (single or multiple) can use either approach to define tools; there is no such restriction.",
},
{
text: "Decorating a function with <code>@tool</code> replaces the need for a docstring, while subclasses must not include docstrings",
explain: "Both methods benefit from clear docstrings. The decorator doesn't replace them, and a subclass can also have docstrings.",
}
]}
/>
---
### Q2: How does a CodeAgent handle multi-step tasks using the ReAct (Reason + Act) approach?
Which statement correctly describes how the CodeAgent executes a series of steps to solve a task?
<Question
choices={[
{
text: "It passes each step to a different agent in a multi-agent system, then combines the results",
explain: "Although multi-agent systems can distribute tasks, the CodeAgent on its own can handle multiple steps using ReAct.",
},
{
text: "It stores every action in JSON for easy parsing before executing them all at once",
explain: "This behavior matches ToolCallingAgent's JSON-based approach, not CodeAgent's.",
},
{
text: "It cycles between writing internal thoughts, generating Python code, executing the code, and logging the results until it reaches a final answer",
explain: "Correct. This describes the ReAct pattern CodeAgent uses, including iterative reasoning and code execution.",
correct: true
},
{
text: "It relies on a vision module to validate code output before moving on to the next step",
explain: "Vision capabilities are supported in smolagents, but they are not a default requirement for CodeAgent or the ReAct approach.",
}
]}
/>
---
### Q3: Which of the following is a primary advantage of sharing a tool on the Hugging Face Hub?
Select the best reason why a developer might upload and share their custom tool.
<Question
choices={[
{
text: "It automatically integrates the tool with a MultiStepAgent for retrieval-augmented generation",
explain: "Sharing a tool doesn't automatically set up retrieval or multi-step logic. It just makes the tool available.",
},
{
text: "It allows others to discover, reuse, and integrate your tool into their smolagents without extra setup",
explain: "Yes. Sharing on the Hub makes tools accessible for anyone (including yourself) to download and reuse quickly.",
correct: true
},
{
text: "It guarantees that only CodeAgents can invoke the tool while ToolCallingAgents cannot",
explain: "Both CodeAgents and ToolCallingAgents can invoke shared tools. There is no restriction by agent type.",
},
{
text: "It converts your tool into a fully vision-capable function for image processing",
explain: "Sharing tools doesn't alter a tool's functionality or automatically add vision capabilities.",
}
]}
/>
---
### Q4: ToolCallingAgent differs from CodeAgent in how it executes actions. Which statement is correct?
Choose the option that accurately describes how ToolCallingAgent works.
<Question
choices={[
{
text: "ToolCallingAgent is only compatible with multi-agent systems, while CodeAgent can run on its own",
explain: "Either agent can be used alone or as part of a multi-agent system.",
},
{
text: "ToolCallingAgent delegates all reasoning to a separate retrieval agent, then returns a final answer",
explain: "ToolCallingAgent still uses a main LLM for reasoning; it doesn't rely solely on retrieval agents.",
},
{
text: "ToolCallingAgent generates JSON instructions specifying tool calls and arguments, which are then parsed and executed",
explain: "This is correct. ToolCallingAgent uses the JSON approach to define tool calls.",
correct: true
},
{
text: "ToolCallingAgent is only intended for single-step tasks and automatically stops after calling one tool",
explain: "ToolCallingAgent can perform multiple steps if needed, just like CodeAgent.",
}
]}
/>
---
### Q5: What is included in smolagents' default toolbox, and why might you use it?
Which statement best captures the purpose and contents of the default toolbox in smolagents?
<Question
choices={[
{
text: "It provides a set of commonly used tools such as DuckDuckGo search, the PythonInterpreterTool, and a final-answer tool for quick prototyping",
explain: "Correct. The default toolbox contains these ready-made tools for easy integration when building agents.",
correct: true
},
{
text: "It only supports vision-based tasks such as image classification or OCR by default",
explain: "Although smolagents can integrate vision-based features, the default toolbox is not exclusively vision-oriented.",
},
{
text: "It is intended solely for multi-agent systems and is incompatible with a single CodeAgent",
explain: "The default toolbox can be used by any agent type, in single- and multi-agent setups alike.",
},
{
text: "It adds advanced retrieval-based functionality for large-scale question answering from a vector store",
explain: "While you can build retrieval tools, the default toolbox doesn't automatically provide advanced RAG features.",
}
]}
/>
---
Congratulations on completing this quiz! 🎉 If any question tripped you up, review the *Code Agents*, *Tool Calling Agents*, or *Tools* sections to strengthen your understanding. If you did well, you're well on your way to building robust applications with smolagents!
|
agents-course/units/es/unit2/smolagents/quiz2.mdx/0
|
{
"file_path": "agents-course/units/es/unit2/smolagents/quiz2.mdx",
"repo_id": "agents-course",
"token_count": 2768
}
| 9
|
# Claim Your Certificate 🎓
If you scored **above 30%, congratulations! 👏 You are now eligible to claim your official certificate.**
Follow the steps below to receive it:
1. Visit the [certificate page](https://huggingface.co/spaces/agents-course/Unit4-Final-Certificate).
2. **Sign in** with your Hugging Face account using the button provided.
3. **Enter your full name**. This is the name that will appear on your certificate.
4. Click **"Get My Certificate"** to verify your score and download your certificate.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit4/congrats.png" alt="Congratulations!" />
Once you have your certificate, feel free to:
- Add it to your **LinkedIn profile** 🧑💼
- Share it on **X**, **Bluesky**, etc. 🎉
**Don't forget to tag [@huggingface](https://huggingface.co/huggingface). We'll be super proud and would love to cheer you on! 🤗**
|
agents-course/units/es/unit4/get-your-certificate.mdx/0
|
{
"file_path": "agents-course/units/es/unit4/get-your-certificate.mdx",
"repo_id": "agents-course",
"token_count": 387
}
| 10
|
# Introduction
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/pokemon_thumbnail.png" alt="Bonus Unit 3 AI in Games"/>
🎶 I wanna be the very best... 🎶
Welcome to this **bonus unit**, where you'll explore the exciting intersection of **Agents and video games**! 🎮🤖
Imagine a game where non-player characters (NPCs) don't just follow scripted lines, but instead hold dynamic conversations, adapt to your strategies, and evolve as the story unfolds. This is the power of combining **LLMs and agentic behavior in games**: it opens the door to **emergent storytelling and gameplay like never before**.
In this bonus unit, you will:
- Learn how to build an AI Agent that can engage in **Pokémon-style turn-based battles**
- Play against it, or even challenge other agents online
We've already seen [some](https://www.anthropic.com/research/visible-extended-thinking) [examples](https://www.twitch.tv/gemini_plays_pokemon) from the AI community of playing Pokémon with LLMs. In this unit you'll learn how to replicate that with your own agent, using the ideas you've learned throughout the course.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/claude-plays-pokemon.png" alt="Claude plays Pokémon"/>
## Want to go further?
- 🎓 **Master LLMs in games**: Dive deeper into game development with our full [Machine Learning for Games Course](https://hf.co/learn/ml-games-course).
- 📘 **Get the Playbook**: Discover insights, ideas, and practical tips in Thomas Simonini's [AI Playbook for Game Developers](https://thomassimonini.substack.com/), where the future of intelligent game design is explored.
But before we build, let's look at how LLMs are already being used in games with **four real-world examples**.
|
agents-course/units/fr/bonus-unit3/introduction.mdx/0
|
{
"file_path": "agents-course/units/fr/bonus-unit3/introduction.mdx",
"repo_id": "agents-course",
"token_count": 757
}
| 11
|
# Quick Quiz 1 [[quiz1]]
---
### Q1: What is an Agent?
Which of the following best describes an AI Agent?
<Question
choices={[
{
text: "A system that only processes static text and never interacts with its environment.",
explain: "An Agent must be able to take actions and interact with its environment.",
},
{
text: "A model that can reason, plan, and use tools to interact with its environment to achieve a specific goal.",
explain: "This definition captures the essential characteristics of an Agent.",
correct: true
},
{
text: "A chatbot that answers questions without any ability to perform actions.",
explain: "Such a chatbot lacks the ability to act, which is what distinguishes it from an Agent.",
},
{
text: "A digital encyclopedia that provides information but cannot perform tasks.",
explain: "An Agent actively interacts with its environment rather than merely providing static information.",
}
]}
/>
---
### Q2: What is the role of planning in an Agent?
Why does an agent need to plan before taking action?
<Question
choices={[
{
text: "To memorize previous interactions.",
explain: "Planning is about determining future actions, not storing past interactions.",
},
{
text: "To decide on the sequence of actions and select the appropriate tools needed to fulfill the user's request.",
explain: "Planning helps the agent determine the best steps and tools to use to complete a task.",
correct: true
},
{
text: "To generate random actions without any purpose.",
explain: "Planning ensures the agent's actions are intentional, not random.",
},
{
text: "To translate text without any further reasoning.",
explain: "Planning is about structuring actions, not merely converting text.",
}
]}
/>
---
### Q3: How do tools enhance an agent's capabilities?
Why are tools essential for an agent?
<Question
choices={[
{
text: "Tools are redundant components that do not affect the agent's performance.",
explain: "Tools extend an agent's capabilities by allowing it to perform actions beyond text generation.",
},
{
text: "Tools give the agent the ability to perform actions that a text-generation model cannot accomplish natively, such as making coffee or generating images.",
explain: "Tools allow agents to interact with the real world and accomplish tasks.",
correct: true
},
{
text: "Tools are used only to store memory.",
explain: "Tools are primarily for performing actions, not merely storing data.",
},
{
text: "Tools limit the Agent to text-based responses only.",
explain: "On the contrary, tools allow agents to go beyond text-based responses.",
}
]}
/>
---
### Q4: What is the main difference between actions and tools?
What is the key distinction between actions and tools?
<Question
choices={[
{
text: "Actions are the steps the Agent takes, while tools are external resources the agent can use to carry out those actions.",
explain: "Actions represent higher-level objectives, while tools are specific functions the Agent can invoke.",
correct: true
},
{
text: "Actions and Tools are the same thing and can be used interchangeably.",
explain: "No, actions are goals or tasks, while tools are specific utilities the agent uses to achieve them.",
},
{
text: "Tools are general-purpose, while actions are reserved for physical interactions only.",
explain: "Not necessarily. Actions can involve both digital and physical tasks.",
},
{
text: "Actions require LLMs, while tools do not.",
explain: "While LLMs help determine actions, actions themselves do not depend on LLMs.",
}
]}
/>
---
### Q5: What role do Large Language Models (LLMs) play in agents?
How do LLMs contribute to an agent's functionality?
<Question
choices={[
{
text: "LLMs are used as static databases that store information without processing inputs.",
explain: "LLMs actively process text inputs and generate responses rather than merely storing information.",
},
{
text: "LLMs serve as the agent's reasoning 'brain', processing text inputs to understand instructions and plan actions.",
explain: "LLMs enable the agent to interpret, plan, and decide on next steps.",
correct: true
},
{
text: "LLMs are only used for image processing, not text.",
explain: "LLMs work primarily with text, though they can sometimes interact with multimodal inputs.",
},
{
text: "LLMs are not used at all.",
explain: "LLMs are an essential component of modern agents.",
}
]}
/>
---
### Q6: Which of the following examples best illustrates an agent?
Which real-world example best demonstrates an agent in action?
<Question
choices={[
{
text: "A static FAQ page on a website.",
explain: "A static FAQ page does not interact dynamically with users or perform any actions.",
},
{
text: "A virtual assistant like Siri or Alexa, able to understand voice commands, reason about them, and perform tasks such as setting reminders or sending messages.",
explain: "This example combines reasoning, planning, and interaction with the environment.",
correct: true
},
{
text: "A basic calculator that performs arithmetic operations.",
explain: "A calculator follows fixed rules without reasoning or planning, so it is not an agent.",
},
{
text: "A video game NPC that follows a set of pre-programmed responses.",
explain: "Unless the NPC can reason, plan, and use tools, it does not function as an agent.",
}
]}
/>
---
Congratulations on finishing this Quiz 🥳! If some items escaped you, take the time to re-read the chapter to reinforce your knowledge. If you passed, you're ready to dive deeper into the "brain of agents": LLMs.
|
agents-course/units/fr/unit1/quiz1.mdx/0
|
{
"file_path": "agents-course/units/fr/unit1/quiz1.mdx",
"repo_id": "agents-course",
"token_count": 2303
}
| 12
|
# Using Agents in LlamaIndex
Remember Alfred, our helpful butler agent from earlier? Well, he's about to get an upgrade!
Now that we understand the tools available in LlamaIndex, we can give him new capabilities to serve us better.
But before we move on, let's remind ourselves what makes an agent like Alfred tick.
Back in Unit 1, we learned that:
> An agent is a system that leverages an AI model to interact with its environment in order to achieve a user-defined objective. It combines reasoning, planning, and the execution of actions (often via external tools) to fulfill tasks.
LlamaIndex supports **three main types of reasoning agents**:
![Agents](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/agents.png)
1. `Function Calling Agents` - These work with models that can call specific functions.
2. `ReAct Agents` - These can work with any model that exposes a chat or text endpoint, and can handle complex reasoning tasks.
3. `Advanced Custom Agents` - These use more complex methods to deal with more complex tasks and workflows.
<Tip>Find more information on advanced agents in <a href="https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/agent/workflow/base_agent.py"><i>BaseWorkflowAgent</i></a>.</Tip>
## Initialising Agents
<Tip>
You can follow the code in <a href="https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/llama-index/agents.ipynb" target="_blank">this notebook</a>, which you can run using Google Colab.
</Tip>
To create an agent, we start by providing it with a **set of functions/tools that define its capabilities**.
Let's look at how to create an agent with some basic tools. As of this writing, the agent will automatically use the function calling API (if available), or a standard ReAct agent loop.
LLMs that support a tools/functions API are relatively new, but they provide a powerful way to call tools, avoiding the need for specific prompting and letting the LLM construct tool calls based on the provided schemas.
ReAct agents are also good at complex reasoning tasks and can work with any LLM that offers chat or text completion capabilities. They are more verbose, and show the reasoning behind the actions they take.
```python
from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI
from llama_index.core.agent.workflow import AgentWorkflow
from llama_index.core.tools import FunctionTool

# define a sample Tool -- type annotations, function names, and docstrings are all included in the parsed schemas!
def multiply(a: int, b: int) -> int:
    """Multiplies two integers and returns the resulting integer"""
    return a * b

# initialize the llm
llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct")

# initialize the agent
agent = AgentWorkflow.from_tools_or_functions(
    [FunctionTool.from_defaults(multiply)],
    llm=llm
)
```
**Agents are stateless by default**; remembering past interactions is opt-in using a `Context` object.
This might be useful if you want to use an agent that needs to remember previous interactions, like a chatbot that maintains context across multiple messages, or a task manager that needs to track progress over time.
```python
# stateless
response = await agent.run("What is 2 times 2?")

# remembering state
from llama_index.core.workflow import Context

ctx = Context(agent)
response = await agent.run("My name is Bob.", ctx=ctx)
response = await agent.run("What was my name again?", ctx=ctx)
```
You'll notice that agents in `LlamaIndex` are async, because they use Python's `await` operator. If you are new to async code in Python, or need a refresher, LlamaIndex has an [excellent guide on the topic](https://docs.llamaindex.ai/en/stable/getting_started/async_python/).
Now that we've got the basics, let's take a look at how we can use more complex tools in our agents.
## Creating RAG Agents with QueryEngineTools
**Agentic RAG is a powerful way to use agents to answer questions about your data.** We can pass various tools to Alfred to help him answer questions.
However, instead of automatically answering the question on top of documents, Alfred can decide to use any other tool or flow to answer it.
![Agentic RAG](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/agentic-rag.png)
It is easy to **wrap a `QueryEngine` as a tool** for an agent.
When doing so, we need to **define a name and a description**. The LLM will use this information to use the tool correctly.
Let's see how to load a `QueryEngineTool` using the `QueryEngine` we created in the [components section](components).
```python
from llama_index.core.tools import QueryEngineTool

query_engine = index.as_query_engine(llm=llm, similarity_top_k=3) # as shown in the Components in LlamaIndex section

query_engine_tool = QueryEngineTool.from_defaults(
    query_engine=query_engine,
    name="name",
    description="a specific description",
    return_direct=False,
)
query_engine_agent = AgentWorkflow.from_tools_or_functions(
    [query_engine_tool],
    llm=llm,
    system_prompt="You are a helpful assistant that has access to a database containing persona descriptions."
)
```
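As a quick usage sketch (the query below is illustrative, and remember that `name` and `description` above are placeholders you should replace with meaningful values), you can run this agent the same way as before:

```python
# Hypothetical query against the persona database wrapped above.
response = await query_engine_agent.run(
    "Search the database for 'science fiction' and return some persona descriptions"
)
```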
## Creating Multi-agent systems
The `AgentWorkflow` class also directly supports multi-agent systems. By giving each agent a name and a description, the system maintains a single active speaker, with each agent having the ability to hand off to another agent.
By narrowing the scope of each agent, we can help increase their general accuracy when responding to user messages.
**Agents in LlamaIndex can also directly be used as tools** for other agents, for more complex and custom scenarios.
```python
from llama_index.core.agent.workflow import (
    AgentWorkflow,
    FunctionAgent,
    ReActAgent,
)

# Define some tools
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

def subtract(a: int, b: int) -> int:
    """Subtract two numbers."""
    return a - b

# Create agent configs
# NOTE: we can use FunctionAgent or ReActAgent here.
# FunctionAgent works for LLMs with a function calling API.
# ReActAgent works for any LLM.
calculator_agent = ReActAgent(
    name="calculator",
    description="Performs basic arithmetic operations",
    system_prompt="You are a calculator assistant. Use your tools for any math operation.",
    tools=[add, subtract],
    llm=llm,
)

query_agent = ReActAgent(
    name="info_lookup",
    description="Looks up information about XYZ",
    system_prompt="Use your tool to query a RAG system to answer information about XYZ",
    tools=[query_engine_tool],
    llm=llm
)

# Create and run the workflow
agent = AgentWorkflow(
    agents=[calculator_agent, query_agent], root_agent="calculator"
)

# Run the system
response = await agent.run(user_msg="Can you add 5 and 3?")
```
<Tip>Haven't learned enough yet? There is a lot more to discover about agents and tools in LlamaIndex in the <a href="https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/">basic introduction to <i>AgentWorkflow</i></a> or the <a href="https://docs.llamaindex.ai/en/stable/understanding/agent/">agent learning guide</a>, where you can read more about streaming, context serialization, and human-in-the-loop!</Tip>
Now that we understand the basics of agents and tools in LlamaIndex, let's see how we can use LlamaIndex to **create configurable and manageable workflows!**
|
agents-course/units/fr/unit2/llama-index/agents.mdx/0
|
{
"file_path": "agents-course/units/fr/unit2/llama-index/agents.mdx",
"repo_id": "agents-course",
"token_count": 2938
}
| 13
|
<CourseFloatingBanner
classNames="absolute z-10 right-0 top-0"
notebooks={[
{label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/smolagents/retrieval_agents.ipynb"},
]}
askForHelpUrl="http://hf.co/join/discord" />
# Building Agentic RAG Systems
<Tip>
You can follow the code in <a href="https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/smolagents/retrieval_agents.ipynb" target="_blank">this notebook</a>, which you can run using Google Colab.
</Tip>
Retrieval Augmented Generation (RAG) systems combine the capabilities of data retrieval and generation models to provide context-aware responses. For example, a user's query is passed to a search engine, and the retrieved results are given to the LLM along with the query. The model then generates a response based on the query and the retrieved information.
Agentic RAG extends traditional RAG systems by **combining autonomous agents with dynamic knowledge retrieval**.
While traditional RAG systems use an LLM to answer queries based on retrieved data, agentic RAG **enables intelligent control of both the retrieval and generation processes**, improving efficiency and accuracy.
Traditional RAG systems face key limitations, such as **relying on a single retrieval step** and focusing on direct semantic similarity with the user's query, which can overlook relevant information.
Agentic RAG addresses these issues by allowing the agent to autonomously formulate search queries, critique the retrieved results, and conduct multiple retrieval steps for a more tailored and comprehensive output.
## Basic Retrieval with DuckDuckGo
Let's build a simple agent that can search the web using DuckDuckGo. This agent will retrieve information and synthesize responses to answer queries. With agentic RAG, Alfred's agent can:
* Search for the latest superhero party trends
* Refine the results to include luxury elements
* Synthesize the information into a complete plan
Here's how Alfred's agent can achieve this:
```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

# Initialize the search tool
search_tool = DuckDuckGoSearchTool()

# Initialize the model
model = InferenceClientModel()

agent = CodeAgent(
    model=model,
    tools=[search_tool],
)

# Example usage
response = agent.run(
    "Search for luxury superhero-themed party ideas, including decorations, entertainment, and catering."
)
print(response)
```
The agent follows this process:
1. **Analyzes the request:** Identifies the key elements of the query: luxury superhero-themed party planning, with a focus on decor, entertainment, and catering.
2. **Performs retrieval:** Leverages DuckDuckGo to search for the most relevant and up-to-date information, making sure it aligns with Alfred's preferences for a luxurious event.
3. **Synthesizes the information:** After gathering the results, the agent processes them into a cohesive, actionable plan for Alfred, covering all aspects of the party.
4. **Stores for future reference:** Stores the retrieved information for easy access when planning future events, optimizing efficiency in subsequent tasks.
## Custom Knowledge Base Tool
For specialized tasks, a custom knowledge base can be invaluable. Let's create a tool that queries a vector database of technical documentation or specialized knowledge. Using semantic search, the agent can find the most relevant information for Alfred's needs.
A vector database stores numerical representations (embeddings) of text or other data, created by machine learning models. It enables semantic search by identifying similar meanings in a high-dimensional space.
This approach combines predefined knowledge with semantic search to provide context-aware solutions for event planning. With access to specialized knowledge, Alfred can perfect every detail of the party.
In this example, we'll create a tool that retrieves party planning ideas from a custom knowledge base. We'll use a BM25 retriever to search the knowledge base and return the top results, and `RecursiveCharacterTextSplitter` to split the documents into smaller chunks for more efficient search.
```python
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from smolagents import Tool
from langchain_community.retrievers import BM25Retriever
from smolagents import CodeAgent, InferenceClientModel

class PartyPlanningRetrieverTool(Tool):
    name = "party_planning_retriever"
    description = "Uses semantic search to retrieve relevant party planning ideas for Alfred's superhero-themed party at Wayne Manor."
    inputs = {
        "query": {
            "type": "string",
            "description": "The query to perform. This should be related to party planning or superhero themes.",
        }
    }
    output_type = "string"

    def __init__(self, docs, **kwargs):
        super().__init__(**kwargs)
        self.retriever = BM25Retriever.from_documents(
            docs, k=5  # Retrieve the top 5 documents
        )

    def forward(self, query: str) -> str:
        assert isinstance(query, str), "Your search query must be a string"
        docs = self.retriever.invoke(
            query,
        )
        return "\nRetrieved ideas:\n" + "".join(
            [
                f"\n\n===== Idea {str(i)} =====\n" + doc.page_content
                for i, doc in enumerate(docs)
            ]
        )

# Simulate a knowledge base about party planning
party_ideas = [
    {"text": "A superhero-themed masquerade ball with luxury decor, including gold accents and velvet curtains.", "source": "Party Ideas 1"},
    {"text": "Hire a professional DJ who can play themed music for superheroes like Batman and Wonder Woman.", "source": "Entertainment Ideas"},
    {"text": "For catering, serve dishes named after superheroes, like 'The Hulk's Green Smoothie' and 'Iron Man's Power Steak.'", "source": "Catering Ideas"},
    {"text": "Decorate with iconic superhero logos and projections of Gotham and other superhero cities around the venue.", "source": "Decoration Ideas"},
    {"text": "Interactive VR experiences where guests can engage in superhero simulations or themed games.", "source": "Entertainment Ideas"}
]

source_docs = [
    Document(page_content=doc["text"], metadata={"source": doc["source"]})
    for doc in party_ideas
]

# Split the documents into smaller chunks for more efficient search
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    add_start_index=True,
    strip_whitespace=True,
    separators=["\n\n", "\n", ".", " ", ""],
)
docs_processed = text_splitter.split_documents(source_docs)

# Create the retriever tool
party_planning_retriever = PartyPlanningRetrieverTool(docs_processed)

# Initialize the agent
agent = CodeAgent(tools=[party_planning_retriever], model=InferenceClientModel())

# Example usage
response = agent.run(
    "Find ideas for a luxury superhero-themed party, including entertainment, catering, and decoration options."
)

print(response)
```
This enhanced agent can:
1. First check the documentation for relevant information
2. Combine insights from the knowledge base
3. Maintain conversation context in memory
## Enhanced Retrieval Capabilities
When building agentic RAG systems, the agent can employ sophisticated strategies such as:
1. **Query reformulation:** Instead of using the raw user query, the agent can craft optimized search terms that better match the target documents (see the sketch right after this list)
2. **Query decomposition:** Instead of using the user query directly, if it contains multiple pieces of information to look up, it can be decomposed into multiple queries
3. **Query expansion:** Similar to query reformulation, but done multiple times to phrase the query in several ways and run them all
4. **Reranking:** Using [Cross-Encoders](https://huggingface.co/models?pipeline_tag=text-ranking&sort=trending) to assign more comprehensive semantic relevance scores between retrieved documents and the query
5. **Multi-step retrieval:** The agent can perform multiple searches, using the initial results to inform subsequent queries
6. **Source integration:** Information can be combined from multiple sources, such as web search and local documentation
7. **Result validation:** Retrieved content can be analyzed for relevance and accuracy before being included in responses
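As a minimal illustration of the first strategy, here is a hedged sketch of query reformulation. The `reformulate_query` helper, the prompt wording, and the model name are illustrative assumptions, not part of smolagents; the retriever is the `party_planning_retriever` defined above:

```python
from huggingface_hub import InferenceClient

# Assumes a valid Hugging Face token in your environment; the model is an example choice.
client = InferenceClient(model="Qwen/Qwen2.5-Coder-32B-Instruct")

def reformulate_query(user_query: str) -> str:
    """Hypothetical helper: ask an LLM to rewrite a question into keyword-style search terms."""
    response = client.chat_completion(
        messages=[{
            "role": "user",
            "content": f"Rewrite this question as a short keyword search query: {user_query}",
        }],
        max_tokens=30,
    )
    return response.choices[0].message.content

# Reformulate first, then retrieve with the tool defined earlier.
search_terms = reformulate_query("What fancy things could we do at a Batman-style gala?")
print(party_planning_retriever.forward(search_terms))
```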
Effective agentic RAG systems require careful consideration of several key aspects. The agent **should select between available tools based on the query type and context**. Memory systems help maintain conversation history and avoid repetitive retrievals. Fallback strategies ensure the system can still provide value even when primary retrieval methods fail. Additionally, implementing validation steps helps ensure the accuracy and relevance of the retrieved information.
## Resources
- [Agentic RAG: turbocharge your RAG with query reformulation and self-query! 🚀](https://huggingface.co/learn/cookbook/agent_rag) - Recipe for developing an agentic RAG system using `smolagents`.
|
agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx/0
|
{
"file_path": "agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx",
"repo_id": "agents-course",
"token_count": 3755
}
| 14
|
# Introduction to the Final Unit [[introduction]]
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit4/thumbnail.jpg" alt="AI Agents Course thumbnail" width="100%"/>
Welcome to the final unit of the course! 🎉
So far, you've **built a strong foundation in AI agents**, from understanding their components to creating your own. With this knowledge, you're now ready to **build powerful agents** and stay up-to-date with the latest advancements in this fast-evolving field.
This unit is all about applying what you've learned. It's your **final hands-on project**, and completing it is your ticket to earning the **course certificate**.
## What's the challenge?
You'll create your own agent and **evaluate its performance using a subset of the [GAIA benchmark](https://huggingface.co/spaces/gaia-benchmark/leaderboard)**.
To successfully complete the course, your agent needs to score **30% or higher** on the benchmark. Achieve that, and you'll earn your **Certificate of Completion**, officially recognizing your expertise. 🏅
Additionally, see how you stack up against your peers! A dedicated **[Student Leaderboard](https://huggingface.co/spaces/agents-course/Students_leaderboard)** is available for you to submit your scores and see the community's progress.
> **🚨 Heads Up: Advanced and Hands-On Unit**
>
> Please be aware that this unit takes a more practical approach. Success in this section will require **more advanced coding knowledge** and relies on you navigating tasks with **less explicit guidance** than in earlier parts of the course.
Sound exciting? Let's get started! 🚀
|
agents-course/units/fr/unit4/introduction.mdx/0
|
{
"file_path": "agents-course/units/fr/unit4/introduction.mdx",
"repo_id": "agents-course",
"token_count": 648
}
| 15
|
# Self-Check! (Updated) [[quiz2]]
What?! Another quiz? We know... 😅 But don't worry! This quiz is here to help you **solidify the key concepts you've just learned**.
This quiz covers Large Language Models (LLMs), message systems, and tools: the essential building blocks for understanding and building AI agents.
### Q1: Which of the following best describes an AI tool? [[q1-which-of-the-following-best-describes-an-ai-tool]]
<Question
choices={[
  {
    text: "A process that only generates text responses",
    explain: "",
  },
  {
    text: "An executable process or external API that lets the agent perform specific tasks and interact with external environments",
    explain: "Tools are functions that allow agents to perform specific tasks and interact with external environments.",
    correct: true
  },
  {
    text: "A feature that stores the agent's conversations",
    explain: "",
  }
]}
/>
---
### Q2: How do AI agents use tools as a form of "acting" in an environment? [[q2-how-do-ai-agents-use-tools-as-a-form-of-acting-in-an-environment]]
<Question
choices={[
  {
    text: "By passively waiting for user commands",
    explain: "",
  },
  {
    text: "By only using pre-programmed responses",
    explain: "",
  },
  {
    text: "By asking the LLM to generate tool-invocation code when appropriate, and running the tools on behalf of the model",
    explain: "Agents can invoke tools and use the resulting information to plan and re-plan.",
    correct: true
  }
]}
/>
---
### Q3: What is a Large Language Model (LLM)? [[q3-what-is-a-large-language-model-llm]]
<Question
choices={[
  {
    text: "A simple chatbot that provides predefined answers",
    explain: "",
  },
  {
    text: "A deep learning model trained on vast amounts of text data that can understand and generate human-like language",
    explain: "",
    correct: true
  },
  {
    text: "A rule-based AI that strictly follows predefined instructions",
    explain: "",
  }
]}
/>
---
### Q4: Which of the following best describes the role of special tokens in LLMs? [[q4-which-of-the-following-best-describes-the-role-of-special-tokens-in-llms]]
<Question
choices={[
  {
    text: "Words added to the model's vocabulary to improve text generation quality",
    explain: "",
  },
  {
    text: "They mark the end of a sequence (EOS) or separate different message roles in chat models",
    explain: "",
    correct: true
  },
  {
    text: "Tokens inserted at random to increase response diversity",
    explain: "",
  }
]}
/>
---
### Q5: How do AI chat models process user messages internally? [[q5-how-do-ai-chat-models-process-user-messages-internally]]
<Question
choices={[
  {
    text: "They interpret user messages directly as structured commands, without any transformation",
    explain: "",
  },
  {
    text: "They convert system, user, and assistant messages into a single structured prompt",
    explain: "",
    correct: true
  },
  {
    text: "They generate responses at random based on previous conversations",
    explain: "",
  }
]}
/>
---
Got it? Great! Now let's **walk through the complete agent flow and start building your own AI agent!**
|
agents-course/units/ko/unit1/quiz2.mdx/0
|
{
"file_path": "agents-course/units/ko/unit1/quiz2.mdx",
"repo_id": "agents-course",
"token_count": 2638
}
| 16
|
# What are LLMs?
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-1.jpg" alt="Unit 1 planning"/>
In the previous section, we learned that every agent needs **an AI Model at its core**, and that LLMs are the most common type of AI model used for this purpose.
Now we will learn what LLMs are and how they power agents.
This section offers a concise technical explanation of the use of LLMs. If you want to dive deeper, check out our <a href="https://huggingface.co/learn/nlp-course/chapter1/1" target="_blank">free Natural Language Processing course</a>.
## What is a Large Language Model?
A Large Language Model (LLM) is a type of AI model that excels at **understanding and generating human language**. They are trained on vast amounts of text data, which allows them to learn patterns, structure, and even nuance in language. These models typically consist of many millions of parameters.
Most LLMs nowadays are **built on the Transformer architecture**, a deep learning architecture based on the "Attention" algorithm, which started to attract significant interest after the release of BERT from Google in 2018.
<figure>
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/transformer.jpg" alt="Transformer"/>
<figcaption>The original Transformer architecture looked like this, with an encoder on the left and a decoder on the right.
</figcaption>
</figure>
There are 3 types of Transformers:
1. **Encoders**
An encoder-based Transformer takes text (or other data) as input and outputs a dense vector representation (or embedding) of that text.
- **Example**: BERT from Google
- **Use cases**: Text classification, semantic search, Named Entity Recognition (NER)
- **Typical size**: Millions of parameters
2. **Decoders**
A decoder-based Transformer focuses **on generating new tokens to complete a sequence, one token at a time**.
- **Example**: Llama from Meta
- **Use cases**: Text generation, chatbots, code generation
- **Typical size**: Billions (in the US sense, i.e., 10^9) of parameters
3. **Seq2Seq (Encoder–Decoder)**
A sequence-to-sequence Transformer combines an encoder and a decoder. The encoder first processes the input sequence into a contextual representation, then the decoder generates the output sequence.
- **Example**: T5, BART
- **Use cases**: Translation, summarization, paraphrasing
- **Typical size**: Millions of parameters
Although Large Language Models come in various shapes, LLMs are typically decoder-based models with billions of parameters. Here are some of the most well-known LLMs:
| **Model** | **Provider** |
|-----------------------------------|-------------------------------------------|
| **Deepseek-R1** | DeepSeek |
| **GPT4** | OpenAI |
| **Llama 3** | Meta (Facebook AI Research) |
| **SmolLM2** | Hugging Face |
| **Gemma** | Google |
| **Mistral** | Mistral |
The underlying principle of an LLM is simple yet highly effective: **its objective is to predict the next token, given a sequence of previous tokens**. A "token" is the unit of information an LLM works with. You can think of a "token" as a "word", but for efficiency reasons LLMs don't use whole words.
For example, while English has an estimated 600,000 words, an LLM might have a vocabulary of around 32,000 tokens (as is the case with Llama 2). Tokenization often works on sub-word units that can be combined.
For instance, consider how the tokens "interest" and "ing" can be combined to form "interesting", or "ed" can be appended to form "interested."
You can experiment with different tokenizers in the interactive demo below:
<iframe
src="https://agents-course-the-tokenizer-playground.static.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
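If you prefer to poke at tokenization from code, here is a minimal sketch using the `transformers` library; the SmolLM2 checkpoint is just one possible choice, and the exact sub-word split depends on the tokenizer:

```python
from transformers import AutoTokenizer

# Load the tokenizer of an open model (any Hub checkpoint would work here).
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

print(tokenizer.tokenize("interesting"))  # sub-word pieces, e.g. something like ['interest', 'ing']
print(tokenizer.encode("interesting"))    # the corresponding integer token ids
print(len(tokenizer))                     # vocabulary size
```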
Each LLM has some **special tokens** specific to the model. The LLM uses these tokens to open and close the structured components of its generation. For example, to indicate the start or end of a sequence, message, or response. Moreover, the input prompts that we pass to the model are also structured with special tokens. The most important of those is the **End of Sequence** token (EOS).
The forms of special tokens are highly diverse across model providers.
The table below illustrates this diversity of special tokens.
<table>
<thead>
<tr>
<th><strong>Model</strong></th>
<th><strong>Provider</strong></th>
<th><strong>EOS Token</strong></th>
<th><strong>Functionality</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>GPT4</strong></td>
<td>OpenAI</td>
<td><code><|endoftext|></code></td>
<td>End of message text</td>
</tr>
<tr>
<td><strong>Llama 3</strong></td>
<td>Meta (Facebook AI Research)</td>
<td><code><|eot_id|></code></td>
<td>End of sequence</td>
</tr>
<tr>
<td><strong>Deepseek-R1</strong></td>
<td>DeepSeek</td>
<td><code><|end_of_sentence|></code></td>
<td>End of message text</td>
</tr>
<tr>
<td><strong>SmolLM2</strong></td>
<td>Hugging Face</td>
<td><code><|im_end|></code></td>
<td>End of instruction or message</td>
</tr>
<tr>
<td><strong>Gemma</strong></td>
<td>Google</td>
<td><code><end_of_turn></code></td>
<td>End of conversation turn</td>
</tr>
</tbody>
</table>
<Tip>
We do not expect you to memorize these special tokens, but it is important to appreciate their diversity and the role they play in the text generation of LLMs. If you want to know more about special tokens, you can check out the configuration of the model in its repository on the Hugging Face Hub. For example, you can find the special tokens of the SmolLM2 model in its <a href="https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct/blob/main/tokenizer_config.json">tokenizer_config.json</a>.
</Tip>
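If you want to see these special tokens appear in a real prompt, one option is to render a chat message with a model's chat template; a small sketch, again assuming the SmolLM2 checkpoint mentioned above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
messages = [{"role": "user", "content": "What is the capital of France?"}]

# Render the structured prompt as a plain string instead of token ids.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # the output should contain role markers such as <|im_start|> and <|im_end|>
```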
## Understanding next token prediction
LLMs are said to be **autoregressive**, meaning that **the output from one pass becomes the input for the next one**. This loop continues until the model predicts the next token to be the EOS token, at which point the model can stop.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AutoregressionSchema.gif" alt="Visual of the autoregressive decoding process" width="60%">
In other words, an LLM will decode text until it reaches the EOS. But what happens during a single decoding loop?
While the full process can be quite technical for the purpose of learning about agents, here's a brief overview:
- Once the input text is **tokenized**, the model computes a representation of the sequence that captures information about the meaning and the position of each token in the input sequence.
- This representation goes into the model, which outputs scores that rank the likelihood of each token in its vocabulary being the next one in the sequence.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/DecodingFinal.gif" alt="Visual of the decoding process" width="60%">
Based on these scores, we have multiple strategies to select the tokens to complete the sentence.
- The easiest decoding strategy would be to always take the token with the maximum score.
You can interact with the decoding process yourself with SmolLM2 in this Space (remember, it decodes until reaching an **EOS** token, which is **<|im_end|>** for this model):
<iframe
src="https://agents-course-decoding-visualizer.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
- But there are more advanced decoding strategies. For example, *beam search* explores multiple candidate sequences to find the one with the maximum total score, even if some individual tokens have lower scores.
<iframe
src="https://agents-course-beam-search-visualizer.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
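To make the greedy strategy concrete, here is a toy decoding loop in plain Python. The "model" is just a hand-written score table, not a real LLM, but the control flow (score, pick the argmax, append, repeat until EOS) is the same:

```python
# Toy "language model": next-token scores conditioned on the last token only.
SCORES = {
    "is": {"Paris": 0.90, "France": 0.05, "<EOS>": 0.05},
    "Paris": {"<EOS>": 0.95, "Paris": 0.05},
}

def greedy_decode(tokens, max_len=10):
    while tokens[-1] != "<EOS>" and len(tokens) < max_len:
        scores = SCORES.get(tokens[-1], {"<EOS>": 1.0})
        tokens.append(max(scores, key=scores.get))  # greedy: always take the argmax
    return tokens

print(greedy_decode(["The", "capital", "of", "France", "is"]))
# ['The', 'capital', 'of', 'France', 'is', 'Paris', '<EOS>']
```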
If you want to know more about decoding, you can take a look at the [NLP course](https://huggingface.co/learn/nlp-course).
## Attention is all you need
A key aspect of the Transformer architecture is **Attention**. When predicting the next word, not every word in a sentence is equally important; words like "France" and "capital" in the sentence *"The capital of France is ..."* carry the most meaning.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AttentionSceneFinal.gif" alt="Visual of the Attention mechanism" width="60%">
This process of identifying the most relevant words to predict the next token has proven to be incredibly effective.
Although the basic principle of LLMs, predicting the next token, has remained consistent since GPT-2, there have been significant advancements in scaling neural networks and making the attention mechanism work for longer and longer sequences.
If you have interacted with LLMs, you're probably familiar with the term *context length*, which refers to the maximum number of tokens the LLM can process, and the maximum _attention span_ it has.
## Prompting the LLM is important
Considering that the only job of an LLM is to predict the next token by looking at every input token, and to choose which tokens are "important", the wording of your input sequence matters a great deal.
The input sequence you provide an LLM is called a _prompt_. Careful design of the prompt makes it easier **to guide the generation of the LLM toward the desired output**.
## How are LLMs trained?
LLMs are trained on large datasets of text, where they learn to predict the next word in a sequence through a self-supervised or masked language modeling objective.
From this unsupervised learning, the model learns the structure of the language and **underlying patterns in text, allowing the model to generalize to unseen data**.
After this initial _pre-training_, LLMs can be fine-tuned on a supervised learning objective to perform specific tasks. For example, some models are trained for conversational structures or tool usage, while others focus on classification or code generation.
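As a rough illustration of the self-supervised objective, here is a minimal PyTorch sketch of the next-token prediction loss; random tensors stand in for a real model and real data:

```python
import torch
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
token_ids = torch.randint(0, vocab_size, (1, seq_len))  # a "sentence" of token ids
logits = torch.randn(1, seq_len, vocab_size)            # stand-in for the model's output

# The target for position i is the token at position i + 1: the model is
# scored on how well it predicts the *next* token at every position.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    token_ids[:, 1:].reshape(-1),
)
print(loss.item())
```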
## How can I use LLMs?
You have two main options:
1. **Run locally** (if you have sufficient hardware).
2. **Use a Cloud/API** (e.g., via the Hugging Face Serverless Inference API).
Throughout this course, we will primarily use models via APIs on the Hugging Face Hub. Later on, we will explore how to run these models locally on your hardware.
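For the API option, a minimal sketch with `huggingface_hub` might look like this; the model name is just an example, and it is a gated checkpoint (you need to request access, as noted at the end of this page):

```python
from huggingface_hub import InferenceClient

# Assumes a valid Hugging Face token in your environment.
client = InferenceClient(model="meta-llama/Llama-3.2-3B-Instruct")

# Plain next-token-style completion: the model continues the prompt.
output = client.text_generation("The capital of France is", max_new_tokens=20)
print(output)
```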
## How are LLMs used in AI Agents?
LLMs are a key component of AI agents, **providing the foundation for understanding and generating human language**.
They can interpret user instructions, maintain context in conversations, define a plan, and decide which tools to use.
We will explore these steps in more detail in this Unit, but for now, what you need to understand is that the LLM is **the brain of the agent**.
---
That was a lot of information! We've covered the basics of what LLMs are, how they function, and their role in powering AI agents.
If you'd like to dive even deeper into the fascinating world of language models and natural language processing, don't hesitate to check out our <a href="https://huggingface.co/learn/nlp-course/chapter1/1" target="_blank">free NLP course</a>.
Now that we understand how LLMs work, it's time to see **how LLMs structure their generations in a conversational context**.
To run <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit1/dummy_agent_library.ipynb" target="_blank">this notebook</a>, **you need a Hugging Face token** that you can get from <a href="https://hf.co/settings/tokens" target="_blank">https://hf.co/settings/tokens</a>.
For more information on how to run Jupyter notebooks, check out <a href="https://huggingface.co/docs/hub/notebooks">Jupyter Notebooks on the Hugging Face Hub</a>.
You also need to request access to <a href="https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct" target="_blank">the Meta Llama models</a>.
|
agents-course/units/ru-RU/unit1/what-are-llms.mdx/0
|
{
"file_path": "agents-course/units/ru-RU/unit1/what-are-llms.mdx",
"repo_id": "agents-course",
"token_count": 11630
}
| 17
|
# Introduction to Agents
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/>
Welcome to this first unit, where **you'll build a solid foundation in the fundamentals of AI agents**, including:
- **Understanding Agents**
  - What is an agent, and how does it work?
  - How do agents make decisions using reasoning and planning?
- **The Role of Large Language Models (LLMs) in Agents**
  - How LLMs serve as the "brain" of an agent.
  - How LLMs structure conversations via the Messages system.
- **Tools and Actions**
  - How agents use external Tools to interact with the environment.
  - How to build and integrate Tools for your agent.
- **The Agent Workflow:**
  - *Thought* → *Action* → *Observation*.
After exploring these topics, **you'll build your first agent** using `smolagents`!
Your agent, named Alfred, will handle a simple task and demonstrate how to apply these concepts in practice.
You'll even learn how to **publish your agent on Hugging Face Spaces** so you can share it with friends and colleagues.
At the end of this unit, you'll take a short quiz. Pass it, and you'll **earn your first certification**: the 🎓 Certificate of Fundamentals of Agents.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/>
This is an **essential starting point**, laying the foundation for understanding agents before you move on to more advanced topics.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/>
It's a big unit, so **take your time** and don't hesitate to come back to these sections whenever needed.
Ready? Let's dive in! 🚀
|
agents-course/units/vi/unit1/introduction.mdx/0
|
{
"file_path": "agents-course/units/vi/unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 1317
}
| 18
|
# Introduction
![Bonus Unit 1 Thumbnail](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit1/thumbnail.jpg)
Welcome to this first **Bonus Unit**, where you'll learn to **fine-tune a Large Language Model (LLM) for function calling**.
In the world of LLMs, function calling is quickly becoming a *must-know* technique.
The idea is that, rather than relying only on prompt-based approaches like we did in Unit 1, function calling trains your model **to take actions and interpret observations during its training phase**, making your AI more robust.
> **When should I do this Bonus Unit?**
>
> This section is **optional** and more advanced than Unit 1, so don't hesitate to do it now or revisit it once your knowledge has improved through the rest of this course.
>
> But don't worry: this Bonus Unit is designed to include all the information you need, so we'll walk you through every core concept of fine-tuning a model for function calling, even if you haven't yet learned the inner workings of fine-tuning.
The best way for you to be able to follow this Bonus Unit is to:
1. Know how to fine-tune an LLM with Transformers; if that's not the case yet, [check this out](https://huggingface.co/learn/nlp-course/chapter3/1?fw=pt)
2. Know how to use `SFTTrainer` to fine-tune a model; to learn more, [check this documentation](https://huggingface.co/learn/nlp-course/en/chapter11/1)
---
## What You'll Learn
1. **Function Calling**
How modern LLMs structure their conversations effectively, letting them trigger **Tools**.
2. **LoRA (Low-Rank Adaptation)**
A **lightweight and efficient** fine-tuning method that cuts down on computational and storage overhead. LoRA makes training large models *faster, cheaper, and easier* to deploy (see the sketch right after this list).
3. **The Thought → Act → Observe Cycle in Function Calling models**
A simple but powerful approach for structuring how your model decides when (and how) to call functions, track intermediate steps, and interpret the results coming back from external tools or APIs.
4. **New Special Tokens**
We'll introduce **special markers** that help the model distinguish between:
- Internal "chain-of-thought" reasoning
- Outgoing function calls
- Responses coming back from external tools
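To make the LoRA item above concrete, here is a minimal, hedged sketch using the `peft` library. The base checkpoint and the `target_modules` names are assumptions for illustration (they vary by model family), not the exact configuration used later in this unit:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; swap in whichever checkpoint you plan to fine-tune.
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names vary per model
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of the weights are trainable
```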
---
By the end of this Bonus Unit, you'll be able to:
- **Understand** the inner workings of APIs when it comes to tools.
- **Fine-tune** a model using the LoRA technique.
- **Implement** and **modify** the Thought → Act → Observe cycle to create robust and maintainable function-calling workflows.
- **Design and use** special tokens to seamlessly separate the model's internal reasoning from its external actions.
And you'll **have your own fine-tuned model to do function calling.** 🔥
Let's dive into **function calling**!
|
agents-course/units/zh-CN/bonus-unit1/introduction.mdx/0
|
{
"file_path": "agents-course/units/zh-CN/bonus-unit1/introduction.mdx",
"repo_id": "agents-course",
"token_count": 1875
}
| 19
|
# Unit 1 Quiz
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub4DONE.jpg" alt="Unit 1 planning"/>
Congratulations on finishing the first unit! Let's test your understanding of the key concepts covered so far.
Once you pass the quiz, move on to the next section to claim your certificate.
Good luck!
## Quiz
This is an interactive quiz, hosted in a Hugging Face Space. It walks you through a series of multiple-choice questions to test your understanding of the key concepts covered in this unit. Once you've completed the quiz, you'll be able to see your score and a breakdown of the correct answers.
One important note: **don't forget to click Submit after passing, otherwise your quiz score will not be saved!**
<iframe
src="https://agents-course-unit-1-quiz.hf.space"
frameborder="0"
width="850"
height="450"
></iframe>
You can also access the quiz here 👉 [click here](https://huggingface.co/spaces/agents-course/unit_1_quiz)
## Get Your Certificate
Congratulations on passing the quiz! **You can now get your certificate of completion 🎓**
After successfully completing the assessment, a unit completion certificate will be generated for you. You can download and share it as official proof of your progress through the course.
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub5DONE.jpg" alt="Unit 1 planning"/>
Once you have your certificate, you can add it to your LinkedIn profile 🧑‍💼 or share it on X, Bluesky, and other platforms. **If you tag @huggingface, we'd be honored and would love to congratulate you**! 🤗
|
agents-course/units/zh-CN/unit1/final-quiz.mdx/0
|
{
"file_path": "agents-course/units/zh-CN/unit1/final-quiz.mdx",
"repo_id": "agents-course",
"token_count": 958
}
| 20
|
# Welcome to the world of `LangGraph`
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/LangGraph.png" alt="Unit 2.3 Thumbnail"/>
Welcome to the next stage of your journey! In this section, you'll learn how to build applications using the [`LangGraph`](https://github.com/langchain-ai/langgraph) framework, which helps you structure and orchestrate complex LLM workflows.
`LangGraph` is a framework that helps you build **production-ready** applications by giving you **control** over the flow of your agent.
## Module Overview
In this unit, you will explore:
### 1️⃣ [What is LangGraph, and when to use it?](./when_to_use_langgraph)
### 2️⃣ [Building Blocks of LangGraph](./building_blocks)
### 3️⃣ [Alfred, the mail sorting butler](./first_graph)
### 4️⃣ [Alfred, the document analysis agent](./document_analysis_agent)
### 5️⃣ [Quick Quiz](./quizz1)
<Tip warning={true}>
The examples in this section require access to a powerful LLM/VLM model. We ran them using the GPT-4o API, because it has the best compatibility with LangGraph.
</Tip>
By the end of this unit, you'll be equipped to build robust, organized, and production-ready applications!
That said, this section is only an introduction to LangGraph; more advanced topics can be covered through LangChain academy's free course: [Introduction to LangGraph](https://academy.langchain.com/courses/intro-to-langgraph)
Let's get started!
## Resources
- [LangGraph Agents](https://langchain-ai.github.io/langgraph/) - Examples of LangGraph agents
- [LangChain academy](https://academy.langchain.com/courses/intro-to-langgraph) - A full course on LangGraph from LangChain
|
agents-course/units/zh-CN/unit2/langgraph/introduction.mdx/0
|
{
"file_path": "agents-course/units/zh-CN/unit2/langgraph/introduction.mdx",
"repo_id": "agents-course",
"token_count": 983
}
| 21
|
# Introduction to `smolagents`
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/thumbnail.jpg" alt="Unit 2.1 Thumbnail"/>
Welcome to this module, where you'll learn **how to build effective agents** using the [`smolagents`](https://github.com/huggingface/smolagents) library, which provides a lightweight framework for creating capable AI agents.
`smolagents` is a Hugging Face library; as such, we would appreciate your support by **starring** the smolagents [`repository`](https://github.com/huggingface/smolagents):
<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/star_smolagents.gif" alt="staring smolagents"/>
## Module Overview
This module provides a comprehensive overview of the key concepts and practical strategies for building intelligent agents using `smolagents`.
With so many open-source frameworks available, it's essential to understand the components and capabilities that make `smolagents` a useful option, or to determine when another solution might be a better fit.
We'll explore critical agent types, including code agents designed for software development tasks, tool calling agents for creating modular, function-driven workflows, and retrieval agents that access and synthesize information.
Additionally, we'll cover the orchestration of multiple agents, as well as the integration of vision capabilities and web browsing, which unlock new possibilities for dynamic and context-aware applications.
In this unit, Alfred, the agent from Unit 1, makes his return. This time, he's using the `smolagents` framework for his internal workings. Together, we'll explore the key concepts behind this framework as Alfred tackles various tasks. Alfred is organizing a party at Wayne Manor while the Wayne family 🦇 is away, and he has plenty to do. Come along as we showcase his journey and how he handles these tasks with `smolagents`!
<Tip>
In this unit, you'll learn to build AI agents with the `smolagents` library. Your agents will be able to search for data, execute code, and interact with web pages. You'll also learn how to combine multiple agents to create more powerful systems.
</Tip>
![Alfred the agent](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/this-is-alfred.jpg)
## Contents
During this unit on `smolagents`, we cover:
### 1️⃣ [Why use smolagents](./why_use_smolagents)
`smolagents` is one of the many open-source agent frameworks available for application development. Alternative options include `LlamaIndex` and `LangGraph`, which are also covered in other modules in this course. `smolagents` offers several key features that might make it a great fit for specific use cases, but we should always consider all options when selecting a framework. We'll explore the advantages and drawbacks of using `smolagents`, helping you make an informed decision based on your project's requirements.
### 2️⃣ [CodeAgents](./code_agents)
`CodeAgents` are the primary type of agent in `smolagents`. Instead of generating JSON or text, these agents produce Python code to perform actions. This module explores their purpose, functionality, and how they work, along with hands-on examples to showcase their capabilities.
### 3️⃣ [ToolCallingAgents](./tool_calling_agents)
`ToolCallingAgents` are the second type of agent supported by `smolagents`. Unlike `CodeAgents`, which generate Python code, these agents rely on JSON/text blobs that the system must parse and interpret to execute actions. This module covers their functionality, the key differences from `CodeAgents`, and provides examples to illustrate their usage.
### 4️⃣ [Tools](./tools)
As we saw in Unit 1, tools are functions that an LLM can use within an agentic system, and they act as the essential building blocks of agent behavior. This module covers how to create tools, their structure, and different implementation methods using the `Tool` class or the `@tool` decorator. You'll also learn about the default toolbox, how to share tools with the community, and how to load community-contributed tools for use in your agents.
### 5️⃣ [Retrieval Agents](./retrieval_agents)
Retrieval agents allow models to access knowledge bases, making it possible to search, synthesize, and retrieve information from multiple sources. They leverage vector stores for efficient retrieval and implement **Retrieval-Augmented Generation (RAG)** patterns. These agents are particularly useful for integrating web search with custom knowledge bases while maintaining conversation context through memory systems. This module explores implementation strategies, including fallback mechanisms for robust information retrieval.
### 6️⃣ [Multi-Agent Systems](./multi_agent_systems)
Orchestrating multiple agents effectively is crucial for building powerful multi-agent systems. By combining agents with different capabilities, such as a web search agent with a code execution agent, you can create more sophisticated solutions. This module focuses on designing, implementing, and managing multi-agent systems to maximize efficiency and reliability.
### 7️⃣ [Vision and Browser agents](./vision_agents)
Vision agents extend traditional agent capabilities by incorporating **Vision-Language Models (VLMs)**, enabling them to process and interpret visual information. This module explores how to design and integrate VLM-powered agents, unlocking advanced functionality like image-based reasoning, visual data analysis, and multimodal interactions. We will also use vision agents to build a browser agent that can browse the web and extract information from it.
## Resources
- [smolagents Documentation](https://huggingface.co/docs/smolagents) - Official docs for the smolagents library
- [Building Effective Agents](https://www.anthropic.com/research/building-effective-agents) - Research paper on agent architectures
- [Agent Guidelines](https://huggingface.co/docs/smolagents/tutorials/building_good_agents) - Best practices for building reliable agents
- [LangGraph Agents](https://langchain-ai.github.io/langgraph/) - Additional examples of agent implementations
- [Function Calling Guide](https://platform.openai.com/docs/guides/function-calling) - Understanding function calling in LLMs
- [RAG Best Practices](https://www.pinecone.io/learn/retrieval-augmented-generation/) - Guide to implementing effective RAG
|
agents-course/units/zh-CN/unit2/smolagents/introduction.mdx/0
|
{
"file_path": "agents-course/units/zh-CN/unit2/smolagents/introduction.mdx",
"repo_id": "agents-course",
"token_count": 3994
}
| 22
|
# What Now? Which Topics Should I Learn?
Agentic AI is a rapidly evolving field, and understanding foundational protocols is essential for building intelligent, autonomous systems.
Two important standards you should get familiar with are:
- The **Model Context Protocol (MCP)**
- The **Agent-to-Agent Protocol (A2A)**
## 🔌 Model Context Protocol (MCP)
Anthropic's **Model Context Protocol (MCP)** is an open standard that enables AI models to securely and seamlessly **connect with external tools, data sources, and applications**, making agents more capable and autonomous.
Think of MCP as a **universal adapter**, like a USB-C port, that allows AI models to plug into various digital environments **without needing custom integration for each one**.
MCP is rapidly gaining industry traction, and major companies such as OpenAI and Google are beginning to adopt it.
📚 Learn more:
- [Anthropic's official announcement and documentation](https://www.anthropic.com/news/model-context-protocol)
- [MCP - Wikipedia](https://en.wikipedia.org/wiki/Model_Context_Protocol)
- [MCP - Blog](https://huggingface.co/blog/Kseniase/mcp)
## 🤝 Agent-to-Agent (A2A) Protocol
Google has developed the **Agent-to-Agent (A2A) protocol** as a complement to Anthropic's Model Context Protocol (MCP).
While MCP connects agents to external tools, **A2A connects agents to each other**, paving the way for collaborative multi-agent systems that can work together to solve complex problems.
📚 Dive deeper into A2A:
- [Google's A2A announcement](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
|
agents-course/units/zh-CN/unit4/additional-readings.mdx/0
|
{
"file_path": "agents-course/units/zh-CN/unit4/additional-readings.mdx",
"repo_id": "agents-course",
"token_count": 962
}
| 23
|
# Porting a custom kernel
|
candle/candle-book/src/cuda/porting.md/0
|
{
"file_path": "candle/candle-book/src/cuda/porting.md",
"repo_id": "candle",
"token_count": 7
}
| 24
|
//! # A simplified example, in Rust, of training a neural network and then using it, based on the Candle framework by Hugging Face.
//! Author: Evgeny Igumnov 2023 igumnovnsk@gmail.com
//! This program implements a neural network to predict the winner of the second round of an election based on the results of the first round.
//!
//! ## Key points:
//!
//! A multilayer perceptron with two hidden layers is used. The first hidden layer has 4 neurons, the second has 2 neurons.
//! The input is a vector of 2 numbers: the percentage of votes for the first and second candidates in the first round.
//! The output is the number 0 or 1, where 1 means that the first candidate will win the second round, and 0 means that he will lose.
//! For training, samples with real data on the results of the first and second rounds of different elections are used.
//! The model is trained by backpropagation using gradient descent and the cross-entropy loss function.
//! The model parameters (neuron weights) are initialized randomly, then optimized during training.
//! After training, the model is tested on a held-out sample to evaluate its accuracy.
//! If the accuracy on the test set is below 100%, the model is considered underfit and the training process is repeated.
//! In this way, the neural network learns to find hidden relationships between the results of the first and second rounds of voting in order to make predictions on new data.
#[rustfmt::skip]
mod tests {
use candle::{DType, Result, Tensor, D, Device};
use candle_nn::{loss, ops, Linear, Module, VarBuilder, VarMap, Optimizer};
// ANCHOR: book_training_simplified1
const VOTE_DIM: usize = 2;
const RESULTS: usize = 1;
const EPOCHS: usize = 10;
const LAYER1_OUT_SIZE: usize = 4;
const LAYER2_OUT_SIZE: usize = 2;
const LEARNING_RATE: f64 = 0.05;
#[derive(Clone)]
pub struct Dataset {
pub train_votes: Tensor,
pub train_results: Tensor,
pub test_votes: Tensor,
pub test_results: Tensor,
}
struct MultiLevelPerceptron {
ln1: Linear,
ln2: Linear,
ln3: Linear,
}
impl MultiLevelPerceptron {
fn new(vs: VarBuilder) -> Result<Self> {
let ln1 = candle_nn::linear(VOTE_DIM, LAYER1_OUT_SIZE, vs.pp("ln1"))?;
let ln2 = candle_nn::linear(LAYER1_OUT_SIZE, LAYER2_OUT_SIZE, vs.pp("ln2"))?;
let ln3 = candle_nn::linear(LAYER2_OUT_SIZE, RESULTS + 1, vs.pp("ln3"))?;
Ok(Self { ln1, ln2, ln3 })
}
fn forward(&self, xs: &Tensor) -> Result<Tensor> {
let xs = self.ln1.forward(xs)?;
let xs = xs.relu()?;
let xs = self.ln2.forward(&xs)?;
let xs = xs.relu()?;
self.ln3.forward(&xs)
}
}
// ANCHOR_END: book_training_simplified1
// ANCHOR: book_training_simplified3
#[tokio::test]
async fn simplified() -> anyhow::Result<()> {
let dev = Device::cuda_if_available(0)?;
let train_votes_vec: Vec<u32> = vec![
15, 10,
10, 15,
5, 12,
30, 20,
16, 12,
13, 25,
6, 14,
31, 21,
];
let train_votes_tensor = Tensor::from_vec(train_votes_vec.clone(), (train_votes_vec.len() / VOTE_DIM, VOTE_DIM), &dev)?.to_dtype(DType::F32)?;
let train_results_vec: Vec<u32> = vec![
1,
0,
0,
1,
1,
0,
0,
1,
];
let train_results_tensor = Tensor::from_vec(train_results_vec, train_votes_vec.len() / VOTE_DIM, &dev)?;
let test_votes_vec: Vec<u32> = vec![
13, 9,
8, 14,
3, 10,
];
let test_votes_tensor = Tensor::from_vec(test_votes_vec.clone(), (test_votes_vec.len() / VOTE_DIM, VOTE_DIM), &dev)?.to_dtype(DType::F32)?;
let test_results_vec: Vec<u32> = vec![
1,
0,
0,
];
let test_results_tensor = Tensor::from_vec(test_results_vec.clone(), test_results_vec.len(), &dev)?;
let m = Dataset {
train_votes: train_votes_tensor,
train_results: train_results_tensor,
test_votes: test_votes_tensor,
test_results: test_results_tensor,
};
let trained_model: MultiLevelPerceptron;
loop {
println!("Trying to train neural network.");
match train(m.clone(), &dev) {
Ok(model) => {
trained_model = model;
break;
},
Err(e) => {
println!("Error: {}", e);
continue;
}
}
}
let real_world_votes: Vec<u32> = vec![
13, 22,
];
let tensor_test_votes = Tensor::from_vec(real_world_votes.clone(), (1, VOTE_DIM), &dev)?.to_dtype(DType::F32)?;
let final_result = trained_model.forward(&tensor_test_votes)?;
let result = final_result
.argmax(D::Minus1)?
.to_dtype(DType::F32)?
.get(0).map(|x| x.to_scalar::<f32>())??;
println!("real_life_votes: {:?}", real_world_votes);
println!("neural_network_prediction_result: {:?}", result);
Ok(())
}
// ANCHOR_END: book_training_simplified3
// ANCHOR: book_training_simplified2
fn train(m: Dataset, dev: &Device) -> anyhow::Result<MultiLevelPerceptron> {
let train_results = m.train_results.to_device(dev)?;
let train_votes = m.train_votes.to_device(dev)?;
let varmap = VarMap::new();
let vs = VarBuilder::from_varmap(&varmap, DType::F32, dev);
let model = MultiLevelPerceptron::new(vs.clone())?;
let mut sgd = candle_nn::SGD::new(varmap.all_vars(), LEARNING_RATE)?;
let test_votes = m.test_votes.to_device(dev)?;
let test_results = m.test_results.to_device(dev)?;
let mut final_accuracy: f32 = 0.0;
for epoch in 1..EPOCHS + 1 {
let logits = model.forward(&train_votes)?;
let log_sm = ops::log_softmax(&logits, D::Minus1)?;
let loss = loss::nll(&log_sm, &train_results)?;
sgd.backward_step(&loss)?;
let test_logits = model.forward(&test_votes)?;
let sum_ok = test_logits
.argmax(D::Minus1)?
.eq(&test_results)?
.to_dtype(DType::F32)?
.sum_all()?
.to_scalar::<f32>()?;
let test_accuracy = sum_ok / test_results.dims1()? as f32;
final_accuracy = 100. * test_accuracy;
println!("Epoch: {epoch:3} Train loss: {:8.5} Test accuracy: {:5.2}%",
loss.to_scalar::<f32>()?,
final_accuracy
);
if final_accuracy == 100.0 {
break;
}
}
if final_accuracy < 100.0 {
Err(anyhow::Error::msg("The model is not trained well enough."))
} else {
Ok(model)
}
}
// ANCHOR_END: book_training_simplified2
}
|
candle/candle-book/src/simplified.rs/0
|
{
"file_path": "candle/candle-book/src/simplified.rs",
"repo_id": "candle",
"token_count": 2903
}
| 25
|
use crate::benchmarks::{BenchDevice, BenchDeviceHandler};
use candle_core::{
quantized::{self, GgmlDType, QMatMul},
Device, Module, Tensor,
};
use criterion::{black_box, criterion_group, Criterion, Throughput};
use std::time::Instant;
fn run(matmul: &QMatMul, x: &Tensor) {
matmul.forward(x).unwrap();
}
fn run_bench(c: &mut Criterion, device: &Device, dtype: GgmlDType) {
let b = 1;
let m = 1;
let n = 1024;
let k = 1024;
let lhs = (0..(m * k))
.map(|v| v as f32 / (m * k) as f32)
.collect::<Vec<_>>();
let rhs = (0..(k * n))
.map(|v| v as f32 / (n * k) as f32)
.collect::<Vec<_>>();
let lhs = Tensor::from_slice(&lhs, (m, k), device).unwrap();
let rhs = Tensor::from_slice(&rhs, (k, n), device).unwrap();
let qtensor = quantized::QTensor::quantize(&rhs.t().unwrap(), dtype).unwrap();
let matmul = quantized::QMatMul::from_qtensor(qtensor).unwrap();
let flops = b * m * n * k;
let mut group = c.benchmark_group(device.bench_name(format!("qmatmul_{:?}", dtype)));
group.sample_size(200);
group.throughput(Throughput::Bytes(flops as u64));
group.bench_function("iter", move |b| {
b.iter_custom(|iters| {
let start = Instant::now();
for _i in 0..iters {
run(black_box(&matmul), black_box(&lhs));
}
device.sync().unwrap();
start.elapsed()
})
});
group.finish();
}
fn criterion_benchmark(c: &mut Criterion) {
let handler = BenchDeviceHandler::new().unwrap();
for device in handler.devices {
for dtype in [
GgmlDType::F32,
GgmlDType::F16,
GgmlDType::Q4_0,
GgmlDType::Q4_1,
GgmlDType::Q5_0,
GgmlDType::Q5_1,
GgmlDType::Q8_0,
GgmlDType::Q2K,
GgmlDType::Q3K,
GgmlDType::Q4K,
GgmlDType::Q5K,
GgmlDType::Q6K,
] {
run_bench(c, &device, dtype);
}
}
}
criterion_group!(benches, criterion_benchmark);
|
candle/candle-core/benches/benchmarks/qmatmul.rs/0
|
{
"file_path": "candle/candle-core/benches/benchmarks/qmatmul.rs",
"repo_id": "candle",
"token_count": 1085
}
| 26
|
pub trait VecOps: num_traits::NumAssign + Copy {
fn min(self, rhs: Self) -> Self;
fn max(self, rhs: Self) -> Self;
/// Dot-product of two vectors.
///
/// # Safety
///
    /// The lengths of `lhs` and `rhs` have to be at least `len`. `res` has to point to a valid
/// element.
#[inline(always)]
unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
*res = Self::zero();
for i in 0..len {
*res += *lhs.add(i) * *rhs.add(i)
}
}
/// Sum of all elements in a vector.
///
/// # Safety
///
/// The length of `xs` must be at least `len`. `res` has to point to a valid
/// element.
#[inline(always)]
unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) {
*res = Self::zero();
for i in 0..len {
*res += *xs.add(i)
}
}
/// Maximum element in a non-empty vector.
///
/// # Safety
///
/// The length of `xs` must be at least `len` and positive. `res` has to point to a valid
/// element.
#[inline(always)]
unsafe fn vec_reduce_max(xs: *const Self, res: *mut Self, len: usize) {
*res = *xs;
for i in 1..len {
*res = (*res).max(*xs.add(i))
}
}
/// Minimum element in a non-empty vector.
///
/// # Safety
///
/// The length of `xs` must be at least `len` and positive. `res` has to point to a valid
/// element.
#[inline(always)]
unsafe fn vec_reduce_min(xs: *const Self, res: *mut Self, len: usize) {
*res = *xs;
for i in 1..len {
*res = (*res).min(*xs.add(i))
}
}
}
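// The trait defaults above are scalar fallbacks; the impls below override the
// hot paths (dot products, sums) with the vectorized kernels from the parent
// module for the dtypes that have them.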
impl VecOps for f32 {
#[inline(always)]
fn min(self, other: Self) -> Self {
Self::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
Self::max(self, other)
}
#[inline(always)]
unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
super::vec_dot_f32(lhs, rhs, res, len)
}
#[inline(always)]
unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) {
super::vec_sum(xs, res, len)
}
}
impl VecOps for half::f16 {
#[inline(always)]
fn min(self, other: Self) -> Self {
Self::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
Self::max(self, other)
}
#[inline(always)]
unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
let mut res_f32 = 0f32;
super::vec_dot_f16(lhs, rhs, &mut res_f32, len);
*res = half::f16::from_f32(res_f32);
}
}
impl VecOps for f64 {
#[inline(always)]
fn min(self, other: Self) -> Self {
Self::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
Self::max(self, other)
}
}
impl VecOps for half::bf16 {
#[inline(always)]
fn min(self, other: Self) -> Self {
Self::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
Self::max(self, other)
}
#[inline(always)]
unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) {
let mut res_f32 = 0f32;
super::vec_dot_bf16(lhs, rhs, &mut res_f32, len);
*res = half::bf16::from_f32(res_f32);
}
}
impl VecOps for u8 {
#[inline(always)]
fn min(self, other: Self) -> Self {
<Self as Ord>::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
<Self as Ord>::max(self, other)
}
}
impl VecOps for u32 {
#[inline(always)]
fn min(self, other: Self) -> Self {
<Self as Ord>::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
<Self as Ord>::max(self, other)
}
}
impl VecOps for i64 {
#[inline(always)]
fn min(self, other: Self) -> Self {
<Self as Ord>::min(self, other)
}
#[inline(always)]
fn max(self, other: Self) -> Self {
<Self as Ord>::max(self, other)
}
}
#[inline(always)]
pub fn par_for_each(n_threads: usize, func: impl Fn(usize) + Send + Sync) {
if n_threads == 1 {
func(0)
} else {
rayon::scope(|s| {
for thread_idx in 0..n_threads {
let func = &func;
s.spawn(move |_| func(thread_idx));
}
})
}
}
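// Strided variant of the helper above: thread `t` handles indices t,
// t + n_threads, t + 2 * n_threads, ... Note that the parallel branch strides
// from `thread_idx`, so it effectively assumes `lo == 0`.
// Illustrative call (hypothetical `items`/`process`):
// par_range(0, items.len(), 8, |i| process(&items[i]));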
#[inline(always)]
pub fn par_range(lo: usize, up: usize, n_threads: usize, func: impl Fn(usize) + Send + Sync) {
if n_threads == 1 {
for i in lo..up {
func(i)
}
} else {
rayon::scope(|s| {
for thread_idx in 0..n_threads {
let func = &func;
s.spawn(move |_| {
for i in (thread_idx..up).step_by(n_threads) {
func(i)
}
});
}
})
}
}
|
candle/candle-core/src/cpu/kernels.rs/0
|
{
"file_path": "candle/candle-core/src/cpu/kernels.rs",
"repo_id": "candle",
"token_count": 2456
}
| 27
|
#![allow(dead_code)]
use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT};
use crate::{CpuStorage, DType, Error, Layout, Result, Shape};
#[derive(Debug, Clone)]
pub struct MetalDevice;
#[derive(Debug)]
pub struct MetalStorage;
#[derive(thiserror::Error, Debug)]
pub enum MetalError {
#[error("{0}")]
Message(String),
}
impl From<String> for MetalError {
fn from(e: String) -> Self {
MetalError::Message(e)
}
}
macro_rules! fail {
() => {
unimplemented!("metal support has not been enabled, add `metal` feature to enable.")
};
}
impl crate::backend::BackendStorage for MetalStorage {
type Device = MetalDevice;
fn try_clone(&self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn dtype(&self) -> DType {
fail!()
}
fn device(&self) -> &Self::Device {
fail!()
}
fn const_set(&mut self, _: crate::scalar::Scalar, _: &Layout) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn to_cpu_storage(&self) -> Result<CpuStorage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn affine(&self, _: &Layout, _: f64, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn powf(&self, _: &Layout, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn elu(&self, _: &Layout, _: f64) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn reduce_op(&self, _: ReduceOp, _: &Layout, _: &[usize]) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn cmp(&self, _: CmpOp, _: &Self, _: &Layout, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn to_dtype(&self, _: &Layout, _: DType) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn unary_impl<B: UnaryOpT>(&self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn binary_impl<B: BinaryOpT>(&self, _: &Self, _: &Layout, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn where_cond(&self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn conv1d(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &crate::conv::ParamsConv1D,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn conv_transpose1d(
&self,
_l: &Layout,
_kernel: &Self,
_kernel_l: &Layout,
_params: &crate::conv::ParamsConvTranspose1D,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn conv2d(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &crate::conv::ParamsConv2D,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn conv_transpose2d(
&self,
_l: &Layout,
_kernel: &Self,
_kernel_l: &Layout,
_params: &crate::conv::ParamsConvTranspose2D,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn index_select(&self, _: &Self, _: &Layout, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn gather(&self, _: &Layout, _: &Self, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn scatter_set(
&mut self,
_: &Layout,
_: &Self,
_: &Layout,
_: &Self,
_: &Layout,
_: usize,
) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn scatter_add_set(
&mut self,
_: &Layout,
_: &Self,
_: &Layout,
_: &Self,
_: &Layout,
_: usize,
) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn index_add(
&self,
_: &Layout,
_: &Self,
_: &Layout,
_: &Self,
_: &Layout,
_: usize,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn matmul(
&self,
_: &Self,
_: (usize, usize, usize, usize),
_: &Layout,
_: &Layout,
) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn copy_strided_src(&self, _: &mut Self, _: usize, _: &Layout) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn copy2d(
&self,
_: &mut Self,
_: usize,
_: usize,
_: usize,
_: usize,
_: usize,
_: usize,
) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn avg_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn max_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn upsample_nearest1d(&self, _: &Layout, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn upsample_nearest2d(&self, _: &Layout, _: usize, _: usize) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
}
impl crate::backend::BackendDevice for MetalDevice {
type Storage = MetalStorage;
fn new(_: usize) -> Result<Self> {
Err(Error::NotCompiledWithMetalSupport)
}
fn set_seed(&self, _: u64) -> Result<()> {
Err(Error::NotCompiledWithMetalSupport)
}
fn location(&self) -> crate::DeviceLocation {
fail!()
}
fn same_device(&self, _: &Self) -> bool {
fail!()
}
fn zeros_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
unsafe fn alloc_uninit(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn storage_from_slice<T: crate::WithDType>(&self, _: &[T]) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn storage_from_cpu_storage(&self, _: &CpuStorage) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn storage_from_cpu_storage_owned(&self, _: CpuStorage) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn rand_uniform(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn rand_normal(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> {
Err(Error::NotCompiledWithMetalSupport)
}
fn synchronize(&self) -> Result<()> {
Ok(())
}
}
|
candle/candle-core/src/dummy_metal_backend.rs/0
|
{
"file_path": "candle/candle-core/src/dummy_metal_backend.rs",
"repo_id": "candle",
"token_count": 3182
}
| 28
|
//! Support for the [GGUF file format](https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md).
//!
use super::{GgmlDType, QTensor};
use crate::{Context, Device, Result};
use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt};
use std::collections::HashMap;
pub const DEFAULT_ALIGNMENT: u64 = 32;
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Magic {
Gguf,
}
impl TryFrom<u32> for Magic {
type Error = crate::Error;
fn try_from(value: u32) -> Result<Self> {
let magic = match value {
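            // Both byte orders of the ASCII tag "GGUF": 0x46554747 is the
            // little-endian reading, 0x47475546 the byte-swapped variant.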
0x46554747 | 0x47475546 => Self::Gguf,
_ => crate::bail!("unknown magic 0x{value:08x}"),
};
Ok(magic)
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum VersionedMagic {
GgufV1,
GgufV2,
GgufV3,
}
impl VersionedMagic {
fn read<R: std::io::Read>(reader: &mut R) -> Result<Self> {
let magic = reader.read_u32::<LittleEndian>()?;
let magic = Magic::try_from(magic)?;
let version = reader.read_u32::<LittleEndian>()?;
let versioned_magic = match (magic, version) {
(Magic::Gguf, 1) => Self::GgufV1,
(Magic::Gguf, 2) => Self::GgufV2,
(Magic::Gguf, 3) => Self::GgufV3,
_ => crate::bail!("gguf: unsupported magic/version {magic:?}/{version}"),
};
Ok(versioned_magic)
}
}
#[derive(Debug)]
pub struct TensorInfo {
pub ggml_dtype: GgmlDType,
pub shape: crate::Shape,
pub offset: u64,
}
impl TensorInfo {
pub fn read<R: std::io::Seek + std::io::Read>(
&self,
reader: &mut R,
tensor_data_offset: u64,
device: &Device,
) -> Result<QTensor> {
let tensor_elems = self.shape.elem_count();
let block_size = self.ggml_dtype.block_size();
if tensor_elems % block_size != 0 {
crate::bail!(
"the number of elements {tensor_elems} is not divisible by the block size {block_size}"
)
}
let size_in_bytes = tensor_elems / block_size * self.ggml_dtype.type_size();
let mut raw_data = vec![0u8; size_in_bytes];
reader.seek(std::io::SeekFrom::Start(tensor_data_offset + self.offset))?;
reader.read_exact(&mut raw_data)?;
super::ggml_file::qtensor_from_ggml(
self.ggml_dtype,
&raw_data,
self.shape.dims().to_vec(),
device,
)
}
}
#[derive(Debug)]
pub struct Content {
pub magic: VersionedMagic,
pub metadata: HashMap<String, Value>,
pub tensor_infos: HashMap<String, TensorInfo>,
pub tensor_data_offset: u64,
}
fn read_string<R: std::io::Read>(reader: &mut R, magic: &VersionedMagic) -> Result<String> {
let len = match magic {
VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize,
VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => {
reader.read_u64::<LittleEndian>()? as usize
}
};
let mut v = vec![0u8; len];
reader.read_exact(&mut v)?;
    // GGUF strings are supposed to be non-null-terminated, but in practice some
    // files append trailing null bytes, so strip them.
while let Some(0) = v.last() {
v.pop();
}
    // GGUF strings are supposed to be valid UTF-8, but some files contain
    // invalid sequences, so decode lossily.
Ok(String::from_utf8_lossy(&v).into_owned())
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum ValueType {
    // The value is an 8-bit unsigned integer.
    U8,
    // The value is an 8-bit signed integer.
    I8,
// The value is a 16-bit unsigned little-endian integer.
U16,
// The value is a 16-bit signed little-endian integer.
I16,
// The value is a 32-bit unsigned little-endian integer.
U32,
// The value is a 32-bit signed little-endian integer.
I32,
// The value is a 64-bit unsigned little-endian integer.
U64,
// The value is a 64-bit signed little-endian integer.
I64,
// The value is a 32-bit IEEE754 floating point number.
F32,
// The value is a 64-bit IEEE754 floating point number.
F64,
// The value is a boolean.
// 1-byte value where 0 is false and 1 is true.
// Anything else is invalid, and should be treated as either the model being invalid or the reader being buggy.
Bool,
// The value is a UTF-8 non-null-terminated string, with length prepended.
String,
// The value is an array of other values, with the length and type prepended.
// Arrays can be nested, and the length of the array is the number of elements in the array, not the number of bytes.
Array,
}
#[derive(Debug, Clone)]
pub enum Value {
U8(u8),
I8(i8),
U16(u16),
I16(i16),
U32(u32),
I32(i32),
U64(u64),
I64(i64),
F32(f32),
F64(f64),
Bool(bool),
String(String),
Array(Vec<Value>),
}
impl Value {
pub fn value_type(&self) -> ValueType {
match self {
Self::U8(_) => ValueType::U8,
Self::I8(_) => ValueType::I8,
Self::U16(_) => ValueType::U16,
Self::I16(_) => ValueType::I16,
Self::U32(_) => ValueType::U32,
Self::I32(_) => ValueType::I32,
Self::U64(_) => ValueType::U64,
Self::I64(_) => ValueType::I64,
Self::F32(_) => ValueType::F32,
Self::F64(_) => ValueType::F64,
Self::Bool(_) => ValueType::Bool,
Self::String(_) => ValueType::String,
Self::Array(_) => ValueType::Array,
}
}
pub fn to_u8(&self) -> Result<u8> {
match self {
Self::U8(v) => Ok(*v),
v => crate::bail!("not a u8 {v:?}"),
}
}
pub fn to_i8(&self) -> Result<i8> {
match self {
Self::I8(v) => Ok(*v),
v => crate::bail!("not a i8 {v:?}"),
}
}
pub fn to_u16(&self) -> Result<u16> {
match self {
Self::U16(v) => Ok(*v),
v => crate::bail!("not a u16 {v:?}"),
}
}
pub fn to_i16(&self) -> Result<i16> {
match self {
Self::I16(v) => Ok(*v),
v => crate::bail!("not a i16 {v:?}"),
}
}
pub fn to_u32(&self) -> Result<u32> {
match self {
Self::U32(v) => Ok(*v),
v => crate::bail!("not a u32 {v:?}"),
}
}
pub fn to_i32(&self) -> Result<i32> {
match self {
Self::I32(v) => Ok(*v),
v => crate::bail!("not a i32 {v:?}"),
}
}
/// This will also automatically upcast any integral types which will not truncate.
pub fn to_u64(&self) -> Result<u64> {
match self {
Self::U64(v) => Ok(*v),
// Autoupcast cases here
Self::U8(v) => Ok(*v as u64),
Self::U16(v) => Ok(*v as u64),
Self::U32(v) => Ok(*v as u64),
Self::Bool(v) => Ok(*v as u64),
v => crate::bail!("not a u64 or upcastable to u64 {v:?}"),
}
}
pub fn to_i64(&self) -> Result<i64> {
match self {
Self::I64(v) => Ok(*v),
v => crate::bail!("not a i64 {v:?}"),
}
}
pub fn to_f32(&self) -> Result<f32> {
match self {
Self::F32(v) => Ok(*v),
v => crate::bail!("not a f32 {v:?}"),
}
}
pub fn to_f64(&self) -> Result<f64> {
match self {
Self::F64(v) => Ok(*v),
v => crate::bail!("not a f64 {v:?}"),
}
}
pub fn to_bool(&self) -> Result<bool> {
match self {
Self::Bool(v) => Ok(*v),
v => crate::bail!("not a bool {v:?}"),
}
}
pub fn to_vec(&self) -> Result<&Vec<Value>> {
match self {
Self::Array(v) => Ok(v),
v => crate::bail!("not a vec {v:?}"),
}
}
pub fn to_string(&self) -> Result<&String> {
match self {
Self::String(v) => Ok(v),
v => crate::bail!("not a string {v:?}"),
}
}
fn read<R: std::io::Read>(
reader: &mut R,
value_type: ValueType,
magic: &VersionedMagic,
) -> Result<Self> {
let v = match value_type {
ValueType::U8 => Self::U8(reader.read_u8()?),
ValueType::I8 => Self::I8(reader.read_i8()?),
ValueType::U16 => Self::U16(reader.read_u16::<LittleEndian>()?),
ValueType::I16 => Self::I16(reader.read_i16::<LittleEndian>()?),
ValueType::U32 => Self::U32(reader.read_u32::<LittleEndian>()?),
ValueType::I32 => Self::I32(reader.read_i32::<LittleEndian>()?),
ValueType::U64 => Self::U64(reader.read_u64::<LittleEndian>()?),
ValueType::I64 => Self::I64(reader.read_i64::<LittleEndian>()?),
ValueType::F32 => Self::F32(reader.read_f32::<LittleEndian>()?),
ValueType::F64 => Self::F64(reader.read_f64::<LittleEndian>()?),
ValueType::Bool => match reader.read_u8()? {
0 => Self::Bool(false),
1 => Self::Bool(true),
b => crate::bail!("unexpected bool value {b}"),
},
ValueType::String => Self::String(read_string(reader, magic)?),
ValueType::Array => {
let value_type = reader.read_u32::<LittleEndian>()?;
let value_type = ValueType::from_u32(value_type)?;
let len = match magic {
VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize,
VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => {
reader.read_u64::<LittleEndian>()? as usize
}
};
let mut vs = Vec::with_capacity(len);
for _ in 0..len {
vs.push(Value::read(reader, value_type, magic)?)
}
Self::Array(vs)
}
};
Ok(v)
}
fn write<W: std::io::Write>(&self, w: &mut W) -> Result<()> {
match self {
&Self::U8(v) => w.write_u8(v)?,
&Self::I8(v) => w.write_i8(v)?,
&Self::U16(v) => w.write_u16::<LittleEndian>(v)?,
&Self::I16(v) => w.write_i16::<LittleEndian>(v)?,
&Self::U32(v) => w.write_u32::<LittleEndian>(v)?,
&Self::I32(v) => w.write_i32::<LittleEndian>(v)?,
&Self::U64(v) => w.write_u64::<LittleEndian>(v)?,
&Self::I64(v) => w.write_i64::<LittleEndian>(v)?,
&Self::F32(v) => w.write_f32::<LittleEndian>(v)?,
&Self::F64(v) => w.write_f64::<LittleEndian>(v)?,
&Self::Bool(v) => w.write_u8(u8::from(v))?,
Self::String(v) => write_string(w, v.as_str())?,
Self::Array(v) => {
// The `Value` type does not enforce that all the values in an Array have the same
// type.
let value_type = if v.is_empty() {
// Doesn't matter, the array is empty.
ValueType::U32
} else {
let value_type: std::collections::HashSet<_> =
v.iter().map(|elem| elem.value_type()).collect();
if value_type.len() != 1 {
crate::bail!("multiple value-types in the same array {value_type:?}")
}
value_type.into_iter().next().context("empty value_type")?
};
w.write_u32::<LittleEndian>(value_type.to_u32())?;
w.write_u64::<LittleEndian>(v.len() as u64)?;
for elem in v.iter() {
elem.write(w)?
}
}
}
Ok(())
}
}
impl ValueType {
fn from_u32(v: u32) -> Result<Self> {
let v = match v {
0 => Self::U8,
1 => Self::I8,
2 => Self::U16,
3 => Self::I16,
4 => Self::U32,
5 => Self::I32,
6 => Self::F32,
7 => Self::Bool,
8 => Self::String,
9 => Self::Array,
10 => Self::U64,
11 => Self::I64,
12 => Self::F64,
v => crate::bail!("unrecognized value-type {v:#08x}"),
};
Ok(v)
}
fn to_u32(self) -> u32 {
match self {
Self::U8 => 0,
Self::I8 => 1,
Self::U16 => 2,
Self::I16 => 3,
Self::U32 => 4,
Self::I32 => 5,
Self::F32 => 6,
Self::Bool => 7,
Self::String => 8,
Self::Array => 9,
Self::U64 => 10,
Self::I64 => 11,
Self::F64 => 12,
}
}
}
impl Content {
pub fn read<R: std::io::Seek + std::io::Read>(reader: &mut R) -> Result<Self> {
let magic = VersionedMagic::read(reader)?;
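        // Header layout after magic + version: tensor count, then the number of
        // metadata key/value pairs; both are u32 in GGUF v1 and u64 in v2/v3.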
let tensor_count = match magic {
VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize,
VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => {
reader.read_u64::<LittleEndian>()? as usize
}
};
let metadata_kv_count = match magic {
VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize,
VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => {
reader.read_u64::<LittleEndian>()? as usize
}
};
let mut metadata = HashMap::new();
for _idx in 0..metadata_kv_count {
let key = read_string(reader, &magic)?;
let value_type = reader.read_u32::<LittleEndian>()?;
let value_type = ValueType::from_u32(value_type)?;
let value = Value::read(reader, value_type, &magic)?;
metadata.insert(key, value);
}
let mut tensor_infos = HashMap::new();
for _idx in 0..tensor_count {
let tensor_name = read_string(reader, &magic)?;
let n_dimensions = reader.read_u32::<LittleEndian>()?;
let mut dimensions: Vec<usize> = match magic {
VersionedMagic::GgufV1 => {
let mut dimensions = vec![0; n_dimensions as usize];
reader.read_u32_into::<LittleEndian>(&mut dimensions)?;
dimensions.into_iter().map(|c| c as usize).collect()
}
VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => {
let mut dimensions = vec![0; n_dimensions as usize];
reader.read_u64_into::<LittleEndian>(&mut dimensions)?;
dimensions.into_iter().map(|c| c as usize).collect()
}
};
dimensions.reverse();
let ggml_dtype = reader.read_u32::<LittleEndian>()?;
let ggml_dtype = GgmlDType::from_u32(ggml_dtype)?;
let offset = reader.read_u64::<LittleEndian>()?;
tensor_infos.insert(
tensor_name,
TensorInfo {
shape: crate::Shape::from(dimensions),
offset,
ggml_dtype,
},
);
}
let position = reader.stream_position()?;
let alignment = match metadata.get("general.alignment") {
Some(Value::U8(v)) => *v as u64,
Some(Value::U16(v)) => *v as u64,
Some(Value::U32(v)) => *v as u64,
Some(Value::I8(v)) if *v >= 0 => *v as u64,
Some(Value::I16(v)) if *v >= 0 => *v as u64,
Some(Value::I32(v)) if *v >= 0 => *v as u64,
_ => DEFAULT_ALIGNMENT,
};
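        // Tensor data begins at the next multiple of `alignment` at or after
        // the current stream position.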
let tensor_data_offset = position.div_ceil(alignment) * alignment;
Ok(Self {
magic,
metadata,
tensor_infos,
tensor_data_offset,
})
}
pub fn tensor<R: std::io::Seek + std::io::Read>(
&self,
reader: &mut R,
name: &str,
device: &Device,
) -> Result<QTensor> {
let tensor_info = match self.tensor_infos.get(name) {
Some(tensor_info) => tensor_info,
None => crate::bail!("cannot find tensor info for {name}"),
};
tensor_info.read(reader, self.tensor_data_offset, device)
}
}
fn write_string<W: std::io::Write>(w: &mut W, str: &str) -> Result<()> {
let bytes = str.as_bytes();
w.write_u64::<LittleEndian>(bytes.len() as u64)?;
w.write_all(bytes)?;
Ok(())
}
pub fn write<W: std::io::Seek + std::io::Write>(
w: &mut W,
metadata: &[(&str, &Value)],
tensors: &[(&str, &QTensor)],
) -> Result<()> {
w.write_u32::<LittleEndian>(0x46554747)?;
w.write_u32::<LittleEndian>(2)?; // version 2.
w.write_u64::<LittleEndian>(tensors.len() as u64)?;
w.write_u64::<LittleEndian>(metadata.len() as u64)?;
for (name, value) in metadata.iter() {
write_string(w, name)?;
w.write_u32::<LittleEndian>(value.value_type().to_u32())?;
value.write(w)?;
}
let mut offset = 0usize;
let mut offsets = Vec::with_capacity(tensors.len());
for (name, tensor) in tensors.iter() {
write_string(w, name)?;
let dims = tensor.shape().dims();
w.write_u32::<LittleEndian>(dims.len() as u32)?;
for &dim in dims.iter().rev() {
w.write_u64::<LittleEndian>(dim as u64)?;
}
w.write_u32::<LittleEndian>(tensor.dtype().to_u32())?;
w.write_u64::<LittleEndian>(offset as u64)?;
offsets.push(offset);
let size_in_bytes = tensor.storage_size_in_bytes();
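        // Bytes needed to reach the next multiple of 32 (0..=31); tensor data
        // is padded to the default 32-byte GGUF alignment.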
let padding = 31 - (31 + size_in_bytes) % 32;
offset += size_in_bytes + padding;
}
let pos = w.stream_position()? as usize;
let padding = 31 - (31 + pos) % 32;
w.write_all(&vec![0u8; padding])?;
let tensor_start_pos = w.stream_position()? as usize;
for (offset, (_name, tensor)) in offsets.iter().zip(tensors.iter()) {
let pos = w.stream_position()? as usize;
if tensor_start_pos + offset != pos {
crate::bail!(
"internal error, unexpected current position {tensor_start_pos} {offset} {pos}"
)
}
let data = tensor.data()?;
let size_in_bytes = data.len();
w.write_all(&data)?;
let padding = 31 - (31 + size_in_bytes) % 32;
w.write_all(&vec![0u8; padding])?;
}
Ok(())
}
|
candle/candle-core/src/quantized/gguf_file.rs/0
|
{
"file_path": "candle/candle-core/src/quantized/gguf_file.rs",
"repo_id": "candle",
"token_count": 9550
}
| 29
|
use crate::{Result, Tensor};
#[macro_export]
macro_rules! test_device {
    // TODO: Switch to generating the trailing test-name arguments automatically once concat_idents is
// stable. https://github.com/rust-lang/rust/issues/29599
($fn_name: ident, $test_cpu: ident, $test_cuda: ident, $test_metal: ident) => {
#[test]
fn $test_cpu() -> Result<()> {
$fn_name(&Device::Cpu)
}
#[cfg(feature = "cuda")]
#[test]
fn $test_cuda() -> Result<()> {
$fn_name(&Device::new_cuda(0)?)
}
#[cfg(feature = "metal")]
#[test]
fn $test_metal() -> Result<()> {
$fn_name(&Device::new_metal(0)?)
}
};
}
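// Illustrative invocation (hypothetical test fn `my_test(&Device) -> Result<()>`):
// test_device!(my_test, my_test_cpu, my_test_cuda, my_test_metal);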
pub fn assert_tensor_eq(t1: &Tensor, t2: &Tensor) -> Result<()> {
assert_eq!(t1.shape(), t2.shape());
// Default U8 may not be large enough to hold the sum (`t.sum_all` defaults to the dtype of `t`)
let eq_tensor = t1.eq(t2)?.to_dtype(crate::DType::U32)?;
let all_equal = eq_tensor.sum_all()?;
assert_eq!(all_equal.to_scalar::<u32>()?, eq_tensor.elem_count() as u32);
Ok(())
}
pub fn to_vec0_round(t: &Tensor, digits: i32) -> Result<f32> {
let b = 10f32.powi(digits);
let t = t.to_vec0::<f32>()?;
Ok(f32::round(t * b) / b)
}
pub fn to_vec1_round(t: &Tensor, digits: i32) -> Result<Vec<f32>> {
let b = 10f32.powi(digits);
let t = t.to_vec1::<f32>()?;
let t = t.iter().map(|t| f32::round(t * b) / b).collect();
Ok(t)
}
pub fn to_vec2_round(t: &Tensor, digits: i32) -> Result<Vec<Vec<f32>>> {
let b = 10f32.powi(digits);
let t = t.to_vec2::<f32>()?;
let t = t
.iter()
.map(|t| t.iter().map(|t| f32::round(t * b) / b).collect())
.collect();
Ok(t)
}
pub fn to_vec3_round(t: &Tensor, digits: i32) -> Result<Vec<Vec<Vec<f32>>>> {
let b = 10f32.powi(digits);
let t = t.to_vec3::<f32>()?;
let t = t
.iter()
.map(|t| {
t.iter()
.map(|t| t.iter().map(|t| f32::round(t * b) / b).collect())
.collect()
})
.collect();
Ok(t)
}
|
candle/candle-core/src/test_utils.rs/0
|
{
"file_path": "candle/candle-core/src/test_utils.rs",
"repo_id": "candle",
"token_count": 1110
}
| 30
|
use candle_core::{DType, Result, Tensor};
struct TmpFile(std::path::PathBuf);
impl TmpFile {
fn create(base: &str) -> TmpFile {
let filename = std::env::temp_dir().join(format!(
"candle-{}-{}-{:?}",
base,
std::process::id(),
std::thread::current().id(),
));
TmpFile(filename)
}
}
impl std::convert::AsRef<std::path::Path> for TmpFile {
fn as_ref(&self) -> &std::path::Path {
self.0.as_path()
}
}
impl Drop for TmpFile {
fn drop(&mut self) {
std::fs::remove_file(&self.0).unwrap()
}
}
#[test]
fn npy() -> Result<()> {
let npy = Tensor::read_npy("tests/test.npy")?;
assert_eq!(
npy.to_dtype(DType::U8)?.to_vec1::<u8>()?,
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
);
Ok(())
}
#[test]
fn npz() -> Result<()> {
let npz = Tensor::read_npz("tests/test.npz")?;
assert_eq!(npz.len(), 2);
assert_eq!(npz[0].0, "x");
assert_eq!(npz[1].0, "x_plus_one");
assert_eq!(
npz[1].1.to_dtype(DType::U8)?.to_vec1::<u8>()?,
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
);
Ok(())
}
#[test]
fn safetensors() -> Result<()> {
use candle_core::safetensors::Load;
let tmp_file = TmpFile::create("st");
let t = Tensor::arange(0f32, 24f32, &candle_core::Device::Cpu)?;
t.save_safetensors("t", &tmp_file)?;
// Load from file.
let st = candle_core::safetensors::load(&tmp_file, &candle_core::Device::Cpu)?;
let t2 = st.get("t").unwrap();
let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?;
assert_eq!(diff, 0f32);
// Load from bytes.
let bytes = std::fs::read(tmp_file)?;
let st = candle_core::safetensors::SliceSafetensors::new(&bytes)?;
let t2 = st.get("t").unwrap().load(&candle_core::Device::Cpu);
let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?;
assert_eq!(diff, 0f32);
Ok(())
}
|
candle/candle-core/tests/serialization_tests.rs/0
|
{
"file_path": "candle/candle-core/tests/serialization_tests.rs",
"repo_id": "candle",
"token_count": 981
}
| 31
|
use candle::Tensor;
pub struct Dataset {
pub train_images: Tensor,
pub train_labels: Tensor,
pub test_images: Tensor,
pub test_labels: Tensor,
pub labels: usize,
}
pub mod cifar;
pub mod fashion_mnist;
pub mod mnist;
|
candle/candle-datasets/src/vision/mod.rs/0
|
{
"file_path": "candle/candle-datasets/src/vision/mod.rs",
"repo_id": "candle",
"token_count": 100
}
| 32
|
# candle-chinese-clip
Contrastive Language-Image Pre-Training (CLIP) is an architecture trained on
pairs of images and their associated texts. This variant is trained on Chinese
text instead of English.
## Running on CPU
```bash
$ cargo run --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --cpu --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛"
> Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg
>
> 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛
> 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
> 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛
> 2025-03-25T19:22:01.325183Z INFO chinese_clip:
>
> Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg
>
> 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛
> 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
> 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛
```
## Running on Metal
```bash
$ cargo run --features metal --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛"
> Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg
>
> 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛
> 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
> 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛
> 2025-03-25T19:22:01.325183Z INFO chinese_clip:
>
> Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg
>
> 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛
> 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片
> 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛
```
|
candle/candle-examples/examples/chinese_clip/README.md/0
|
{
"file_path": "candle/candle-examples/examples/chinese_clip/README.md",
"repo_id": "candle",
"token_count": 1129
}
| 33
|
pub const LAYERNORM_KERNELS: &str = include_str!(concat!(env!("OUT_DIR"), "/layernorm_kernels.ptx"));
|
candle/candle-examples/examples/custom-ops/cuda_kernels.rs/0
|
{
"file_path": "candle/candle-examples/examples/custom-ops/cuda_kernels.rs",
"repo_id": "candle",
"token_count": 44
}
| 34
|
#[cfg(feature = "mkl")]
extern crate intel_mkl_src;
#[cfg(feature = "accelerate")]
extern crate accelerate_src;
use candle_transformers::models::distilbert::{
Config, DistilBertForMaskedLM, DistilBertModel, DTYPE,
};
use anyhow::{Context, Error as E, Result};
use candle::{Device, Tensor};
use candle_nn::VarBuilder;
use clap::{Parser, ValueEnum};
use hf_hub::{api::sync::Api, Repo, RepoType};
use std::path::PathBuf;
use tokenizers::Tokenizer;
enum ModelType {
Masked(Box<DistilBertForMaskedLM>),
UnMasked(Box<DistilBertModel>),
}
impl ModelType {
fn device(&self) -> &Device {
match self {
ModelType::Masked(model) => &model.bert.device,
ModelType::UnMasked(model) => &model.device,
}
}
fn forward(&self, input_ids: &Tensor, attention_mask: &Tensor) -> Result<Tensor> {
match self {
ModelType::Masked(model) => Ok(model.forward(input_ids, attention_mask)?),
ModelType::UnMasked(model) => Ok(model.forward(input_ids, attention_mask)?),
}
}
}
#[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)]
enum Which {
#[value(name = "distilbert")]
DistilBert,
#[value(name = "distilbertformaskedlm")]
DistilbertForMaskedLM,
}
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
struct Args {
/// Run on CPU rather than on GPU.
#[arg(long)]
cpu: bool,
/// Enable tracing (generates a trace-timestamp.json file).
#[arg(long)]
tracing: bool,
#[arg(long, default_value = "distilbert")]
model: Which,
/// The model to use, check out available models: https://huggingface.co/models?library=sentence-transformers&sort=trending
#[arg(long)]
model_id: Option<String>,
/// Revision or branch
#[arg(long)]
revision: Option<String>,
/// When set, compute embeddings for this prompt.
#[arg(long)]
prompt: String,
/// Use the pytorch weights rather than the safetensors ones
#[arg(long)]
use_pth: bool,
/// The number of times to run the prompt.
#[arg(long, default_value = "1")]
n: usize,
/// Number of top predictions to show for each mask
#[arg(long, default_value = "5")]
top_k: usize,
}
impl Args {
fn build_model_and_tokenizer(&self) -> Result<(ModelType, Tokenizer)> {
let device = candle_examples::device(self.cpu)?;
let (model_id, revision) = self.resolve_model_and_revision();
let (config_path, tokenizer_path, weights_path) =
self.download_model_files(&model_id, &revision)?;
let config = std::fs::read_to_string(config_path)?;
let config: Config = serde_json::from_str(&config)?;
let tokenizer = Tokenizer::from_file(tokenizer_path).map_err(E::msg)?;
let vb = self.load_variables(&weights_path, &device)?;
let model = self.create_model(&config, vb)?;
Ok((model, tokenizer))
}
fn resolve_model_and_revision(&self) -> (String, String) {
let default_model = "distilbert-base-uncased".to_string();
let default_revision = "main".to_string();
match (self.model_id.clone(), self.revision.clone()) {
(Some(model_id), Some(revision)) => (model_id, revision),
(Some(model_id), None) => (model_id, default_revision),
(None, Some(revision)) => (default_model, revision),
(None, None) => (default_model, default_revision),
}
}
fn download_model_files(
&self,
model_id: &str,
revision: &str,
) -> Result<(PathBuf, PathBuf, PathBuf)> {
let repo = Repo::with_revision(model_id.to_string(), RepoType::Model, revision.to_string());
let api = Api::new()?;
let api = api.repo(repo);
let config = api.get("config.json")?;
let tokenizer = api.get("tokenizer.json")?;
let weights = if self.use_pth {
api.get("pytorch_model.bin")?
} else {
api.get("model.safetensors")?
};
Ok((config, tokenizer, weights))
}
fn load_variables(&self, weights_path: &PathBuf, device: &Device) -> Result<VarBuilder<'_>> {
if self.use_pth {
Ok(VarBuilder::from_pth(weights_path, DTYPE, device)?)
} else {
Ok(unsafe { VarBuilder::from_mmaped_safetensors(&[weights_path], DTYPE, device)? })
}
}
fn create_model(&self, config: &Config, vb: VarBuilder) -> Result<ModelType> {
match self.model {
Which::DistilbertForMaskedLM => Ok(ModelType::Masked(
DistilBertForMaskedLM::load(vb, config)?.into(),
)),
Which::DistilBert => Ok(ModelType::UnMasked(
DistilBertModel::load(vb, config)?.into(),
)),
}
}
}
fn main() -> Result<()> {
let args = Args::parse();
let _guard = setup_tracing(&args);
let (model, tokenizer) = args.build_model_and_tokenizer()?;
let device = model.device();
let (token_ids, mask) = prepare_inputs(&args, &tokenizer, device)?;
let output = model.forward(&token_ids, &mask)?;
process_output(&model, &output, &token_ids, &tokenizer, &args)?;
Ok(())
}
fn setup_tracing(args: &Args) -> Option<impl Drop> {
if args.tracing {
use tracing_chrome::ChromeLayerBuilder;
use tracing_subscriber::prelude::*;
println!("tracing...");
let (chrome_layer, guard) = ChromeLayerBuilder::new().build();
tracing_subscriber::registry().with(chrome_layer).init();
Some(guard)
} else {
None
}
}
fn prepare_inputs(args: &Args, tokenizer: &Tokenizer, device: &Device) -> Result<(Tensor, Tensor)> {
let mut binding = tokenizer.clone();
let tokenizer_configured = binding
.with_padding(None)
.with_truncation(None)
.map_err(E::msg)?;
let tokens = tokenizer_configured
.encode(args.prompt.clone(), true)
.map_err(E::msg)?
.get_ids()
.to_vec();
let token_ids = Tensor::new(&tokens[..], device)?.unsqueeze(0)?;
let mask = match args.model {
Which::DistilbertForMaskedLM => attention_mask_maskedlm(tokenizer, &args.prompt, device)?,
Which::DistilBert => attention_mask(tokens.len(), device)?,
};
println!("token_ids: {:?}", token_ids.to_vec2::<u32>()?);
Ok((token_ids, mask))
}
fn process_output(
model: &ModelType,
output: &Tensor,
token_ids: &Tensor,
tokenizer: &Tokenizer,
args: &Args,
) -> Result<()> {
match model {
ModelType::UnMasked(_) => {
println!("embeddings");
println!("{output}");
}
ModelType::Masked(_) => {
process_masked_output(output, token_ids, tokenizer, args)?;
}
}
Ok(())
}
fn process_masked_output(
output: &Tensor,
token_ids: &Tensor,
tokenizer: &Tokenizer,
args: &Args,
) -> Result<()> {
let input_ids_vec = token_ids.to_vec2::<u32>()?;
let mask_token_id = tokenizer
.token_to_id("[MASK]")
.context("Mask token, \"[MASK]\", not found in tokenizer.")?;
println!("\nInput: {}", args.prompt);
for (token_idx, &token_id) in input_ids_vec[0].iter().enumerate() {
if token_id == mask_token_id {
println!("Predictions for [MASK] at position {token_idx}:");
let pos_logits = output.get(0)?.get(token_idx)?;
let probs = candle_nn::ops::softmax(&pos_logits, 0)?;
let (top_values, top_indices) = get_top_k(&probs, args.top_k)?;
let values = top_values.to_vec1::<f32>()?;
let indices = top_indices.to_vec1::<u32>()?;
for (i, (&token_id, &prob)) in indices.iter().zip(values.iter()).enumerate() {
let token = tokenizer.decode(&[token_id], false).map_err(E::msg)?;
println!(
" {}: {:15} (probability: {:.2}%)",
i + 1,
token,
prob * 100.0
);
}
}
}
Ok(())
}
fn get_top_k(tensor: &Tensor, k: usize) -> Result<(Tensor, Tensor)> {
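    // Naive host-side top-k: copy the values out, sort (value, index) pairs in
    // descending order, and keep the first k (k is clamped to the tensor size).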
let n = tensor.dims().iter().product::<usize>();
let k = std::cmp::min(k, n);
let values = tensor.to_vec1::<f32>()?;
let mut value_indices: Vec<(f32, usize)> = values
.into_iter()
.enumerate()
.map(|(idx, val)| (val, idx))
.collect();
value_indices.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap_or(std::cmp::Ordering::Equal));
let top_k_values: Vec<f32> = value_indices.iter().take(k).map(|(val, _)| *val).collect();
let top_k_indices: Vec<u32> = value_indices
.iter()
.take(k)
.map(|(_, idx)| *idx as u32)
.collect();
let device = tensor.device();
let top_values = Tensor::from_vec(top_k_values, (k,), device)?;
let top_indices = Tensor::from_vec(top_k_indices, (k,), device)?;
Ok((top_values, top_indices))
}
fn attention_mask(size: usize, device: &Device) -> Result<Tensor> {
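    // Build a (size, size) mask with 1 strictly above the diagonal (j > i) and
    // 0 elsewhere.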
let mask: Vec<_> = (0..size)
.flat_map(|i| (0..size).map(move |j| u8::from(j > i)))
.collect();
Ok(Tensor::from_slice(&mask, (size, size), device)?)
}
fn attention_mask_maskedlm(tokenizer: &Tokenizer, input: &str, device: &Device) -> Result<Tensor> {
let tokens = tokenizer.encode(input, true).map_err(E::msg)?;
let seq_len = tokens.get_attention_mask().to_vec().len();
let mask_token_id = tokenizer
.token_to_id("[MASK]")
.context("Mask token, \"[MASK]\", not found in tokenizer.")?;
let mut attention_mask_vec = Vec::with_capacity(seq_len * seq_len);
let ids = tokens.get_ids();
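    // Every row of the (seq_len, seq_len) mask is identical: 1 at [MASK]
    // positions, 0 everywhere else.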
for _ in 0..seq_len {
for id in ids.iter() {
let mask_value = if id == &mask_token_id { 1u8 } else { 0u8 };
attention_mask_vec.push(mask_value);
}
}
let shape = (1, 1, seq_len, seq_len);
let mask = Tensor::from_vec(attention_mask_vec, shape, device)?;
Ok(mask)
}
|
candle/candle-examples/examples/distilbert/main.rs/0
|
{
"file_path": "candle/candle-examples/examples/distilbert/main.rs",
"repo_id": "candle",
"token_count": 4559
}
| 35