AI & ML interests

Open science and open source

jorgemunozl 
posted an update 16 days ago
Test

I know that it was buggy, OMG
louisbrulenaudet 
posted an update 6 months ago
Supercharge Apple’s Shortcuts using Cloudflare Workers and Gemini within minutes (and for free, up to 1,500 requests per day) ☁️✨

Hello everyone! Last week, while experimenting for fun, I created an API that makes it easy to call AI models (in this case, Google's) from the Shortcuts app, so I can analyze data from my apps and make the most of it thanks to the generative capabilities of advanced models.

It costs me nothing, and I think it might be good to share it so that others can build on it.

In README.md, you will find everything you need to get started and put your own microservice into production, which you can call from the app’s HTTP request features.

All you need is a free Cloudflare account and an API key from Google's AI Studio.

Feel free to take a look and get back to me if you encounter any problems during deployment.

Here is the GitHub repo where you can find all the source code and run it on your own: https://github.com/louisbrulenaudet/genai-api
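
If you want to poke at your deployed Worker outside of Shortcuts, a quick test from Python could look like this (the endpoint URL and payload field below are illustrative placeholders, not the project's documented contract; the README has the real details):
import requests

# Placeholder Worker URL and request body - adapt them to what the README specifies.
WORKER_URL = "https://your-worker.your-subdomain.workers.dev/"

response = requests.post(
    WORKER_URL,
    json={"prompt": "Summarize today's calendar events."},  # assumed field name
    timeout=30,
)
response.raise_for_status()
print(response.json())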
louisbrulenaudet 
posted an update 6 months ago
Although more and more code editors are aligning with the AGENTS.md standard, some still use their own naming conventions, which can make it difficult to maintain separate configuration files when several people work on the same project with different agents.

Bodyboard addresses this by generating canonical instructions for code helpers from a single AGENTS.md file, thereby streamlining the production of adapter outputs for Gemini CLI, Copilot, Cline, Claude, Rules, Windsurf, and OpenAI Codex integrations.

You just have to:
npm install -g bodyboard

Then run, at the root of your project:
bodyboard all

Link to npm: https://www.npmjs.com/package/bodyboard
Link to the GitHub repo: https://github.com/louisbrulenaudet/bodyboard

It's a very simple project, but it addresses certain issues I've encountered, so why not make it available to everyone...

If you have other ideas for adapters to create, feel free to open a PR on the GitHub repo.
louisbrulenaudet 
posted an update 7 months ago
Because hackathons are often the starting point for many AI projects, I've created a Python-backend template incorporating my feedback to streamline collaboration and urgent deployments 🏎️

Within a year, I had the opportunity to participate in hackathons organized by Mistral, OpenAI, and DeepMind. This GitHub template is structured around several fundamental building blocks and recommendations I offer developers eager to take part in their first hackathon, whether as part of a team or individually. Its emphasis is on rapid setup and deployment through:
- uv as a package manager, simplifying usage via a series of pre-configured make commands.
- FastAPI for API management, structured in a modular architecture designed to minimize branch conflicts during merges to main branches (with minimal health-check and ping routes to verify that Docker runs properly and that the backend is reachable on the local network; see the sketch after this list).
- Pydantic for validation and type handling, which simplifies debugging and enhances understanding of data objects.
- A set of custom instructions tailored for agents (Cline and GitHub Copilot), aimed at improving overall comprehension of the application and optimizing the vibe-coding experience.
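
For reference, health-check and ping routes of that kind boil down to a few lines of FastAPI (a minimal sketch, not necessarily the template's exact code):
from fastapi import FastAPI

app = FastAPI()

@app.get("/health")
async def health() -> dict[str, str]:
    # Lightweight route used by CI and teammates to confirm the container is up.
    return {"status": "ok"}

@app.get("/ping")
async def ping() -> dict[str, str]:
    # Connectivity check from other machines on the local network.
    return {"message": "pong"}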

This template ships with unit tests (all passing, with 100% test coverage) and a minimal CI file ensuring that the FastAPI application runs correctly. Thus, merging code that breaks the server into production becomes impossible ⛔️

In general, I would reiterate an essential piece of advice: your two main adversaries are branch conflicts (particularly when the same file is modified concurrently within a brief period, especially if your architecture isn't built for scalability) and deployment issues under time pressure ⏱️

Link to GitHub: https://github.com/louisbrulenaudet/hackathon-backend

Simply issue these commands and you can ship your code at the speed of light:
make init
make dev
zamal 
posted an update 8 months ago
Hey all
Finally, it's happening: DeepGit Lite is back, now running on CPU-only devices. Smartly search across GitHub, spin up conversational agents in the background, and have grounded conversations with repositories.
Try it out now!!!! zamal/DeepGit
louisbrulenaudet 
posted an update 8 months ago
🌐 Clinical Trials Dataset now available on Hugging Face! 🧬

I’ve just released a comprehensive, ML-ready dataset featuring 500,000+ clinical trial records sourced directly from ClinicalTrials.gov for biomedical NLP, healthcare analytics, and clinical research applications 🤗

I wanted to produce the most complete and up-to-date dump with all raw data partially flattened to simplify extraction, self-querying and processing.

Do you have any ideas about what we can do with it? Using descriptions to enhance specialized embedding models?

louisbrulenaudet/clinical-trials
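
To get a feel for the records, streaming a few of them with the datasets library should be enough (the "train" split name is an assumption; check the dataset card if it differs):
from datasets import load_dataset

# Stream the flattened records without downloading the full dump up front.
dataset = load_dataset("louisbrulenaudet/clinical-trials", split="train", streaming=True)

for i, record in enumerate(dataset):
    print(record)
    if i == 2:
        break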
zamal 
posted an update 8 months ago
Say hallo to GermaNER 💪– a lightweight, high-accuracy NER model for German texts, powered by XLM-RoBERTa + LoRA adapters!
⚡ Fast, efficient, and open-source – perfect for tagging names, places & orgs in real-world German data.
Try it now on Hugging Face 👉 fau/GermaNER
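
A quick way to try it from Python, assuming the checkpoint loads with the standard token-classification pipeline:
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces back into whole entities.
ner = pipeline("token-classification", model="fau/GermaNER", aggregation_strategy="simple")

print(ner("Angela Merkel besuchte das Robert Koch-Institut in Berlin."))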
zamal 
posted an update 8 months ago
🚀 Videoxity is live on Hugging Face! 🎞️
A powerful, modular toolkit for intelligent video manipulation and scene editing.

With Videoxity, you can:
🖼️ Auto-caption keyframes with BLIP
🧠 Filter scenes using natural language (e.g. “remove dog scenes”)
✂️ Seamlessly trim videos with FFmpeg
📊 Generate frame-based summaries

Powered by Groq LLM + LangChain, OpenCV, BLIP, and SentenceTransformers, Videoxity bridges vision and language to give developers full control over video content.
🔧 Built for developers. Feedback welcome!

👉 Try it out here: fau/videoxity
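
To give a feel for the captioning step, here is a rough sketch of captioning a single frame with OpenCV and BLIP (an illustration of the idea, not Videoxity's internal code):
import cv2
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Grab one frame from the video and convert BGR (OpenCV) to RGB (PIL).
capture = cv2.VideoCapture("input.mp4")
ok, frame = capture.read()
capture.release()

if ok:
    image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    inputs = processor(images=image, return_tensors="pt")
    caption_ids = model.generate(**inputs, max_new_tokens=30)
    print(processor.decode(caption_ids[0], skip_special_tokens=True))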
zamal 
posted an update 11 months ago
🚀 DeepGit Lite is live! 🔍✨

Hey folks!
Just launched DeepGit Lite — a lighter version of DeepGit with fewer components under the hood.
It won’t perform quite like the full powerhouse, but it’s great for a quick peek and first-hand feel! ⚙️👀

Give it a spin and tell us what you think!
👉 Try it here https://huggingface.co/spaces/zamal/DeepGit-lite
#opensource #DeepGit #gradio #githubresearch
zamal 
posted an update 11 months ago
DeepGit: Your GitHub Gold Digger! 💰🚀
Hey Hugging Face gang! Meet DeepGit, my open-source sidekick that rips through GitHub to snag repos that fit you. Done with dead-end searches? Me too. Built it with LangGraph and some dope tricks:
Embeddings grab the good stuff (HF magic, baby!)
Re-ranking nails the best picks
Snoops docs, code, and buzz in one slick flow
Drops a clean list of hidden gems 💎

Unearth that sneaky ML lib or Python gem—run python app.py or langgraph dev and boom! Peek it at https://github.com/zamalali/DeepGit. Fork it, tweak it, love it—Docker’s in, HF vibes are strong. Drop a 🌟 or a crazy idea—I’m pumped to jam with you all! 🪂
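
Not DeepGit's actual pipeline, but the embed-then-rerank idea fits in a few lines (the model names below are common defaults, not necessarily what DeepGit ships with):
from sentence_transformers import CrossEncoder, SentenceTransformer, util

query = "lightweight Python library for time-series anomaly detection"
repo_summaries = {
    "repoA": "Streaming anomaly detection for sensor data in pure Python.",
    "repoB": "A heavyweight distributed ETL framework for Spark clusters.",
}

# Stage 1: embed the query and repo summaries, keep the closest candidates.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
scores = util.cos_sim(embedder.encode(query), embedder.encode(list(repo_summaries.values())))[0]
candidates = [name for name, score in zip(repo_summaries, scores) if score > 0.2]

# Stage 2: re-rank the candidates with a cross-encoder for a sharper ordering.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
reranked = reranker.predict([(query, repo_summaries[name]) for name in candidates])
print(sorted(zip(candidates, reranked), key=lambda pair: pair[1], reverse=True))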
louisbrulenaudet 
posted an update 11 months ago
I’ve just released logfire-callback on PyPI, designed to facilitate monitoring of Hugging Face Transformer training loops using Pydantic Logfire 🤗

The callback automatically logs the training start (with configuration parameters), periodic metrics, and training completion ⏱️

Install the package using pip:
pip install logfire-callback

First, ensure you have a Logfire API token and set it as an environment variable:
export LOGFIRE_TOKEN=your_logfire_token

Then use the callback in your training code:
from transformers import Trainer, TrainingArguments
from logfire_callback import LogfireCallback

# Initialize your model, dataset, etc.

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    # ... other training arguments
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[LogfireCallback()]  # Add the Logfire callback here
)

trainer.train()

If you have any feedback, please reach out at @louisbrulenaudet
zamal 
posted an update 12 months ago
🚀 ftBoost is LIVE – Stop Struggling with Fine-Tuning Data!

Alright folks, if you’re tired of manually crafting fine-tuning datasets, ftBoost is here to do the heavy lifting. One-click, LangChain-Groq-powered data augmentation that scales your training data in OpenAI, Gemini, Mistral, and LLaMA formats—automatically.

🔥 What’s inside?
✅ Smart Augmentations – Paraphrasing, back translation, synonym swapping & synthetic noise.
✅ No more JSONL headaches – Auto-formats everything for OpenAI, Gemini, Mistral & LLaMA.
✅ Custom tuning – Adjust similarity, diversity, and fluency in real-time.
✅ Upload, generate, download – That’s it.

⚡ If you’re fine-tuning LLMs, this will save you hours.

🚀 Try it now: 👉 zamal/Finetune-Boost

🌟 Give us a star on GitHub!

Let me know what you think & how it boosts your workflow! 🔥
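
For reference, a single record in the OpenAI chat fine-tuning JSONL format looks roughly like this (an illustration of the target format, not ftBoost's literal output):
import json

record = {
    "messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
    ]
}

# Each line of the .jsonl file is one such JSON object.
print(json.dumps(record, ensure_ascii=False))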
louisbrulenaudet 
posted an update 12 months ago
I am pleased to introduce my first project built upon Hugging Face’s smolagents framework, integrated with Alpaca for financial market analysis automation 🦙🤗

The project implements technical indicators such as the Relative Strength Index (RSI) and Bollinger Bands to provide momentum and volatility analysis. Market data is retrieved through the Alpaca API, enabling access to historical price information across various timeframes.

AI-powered insights are generated using Hugging Face’s inference API, facilitating the analysis of market trends through natural language processing with DuckDuckGo search integration for real-time sentiment analysis based on financial news 🦆

Link to the GitHub project: https://github.com/louisbrulenaudet/agentic-market-tool
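
As a reminder of what the RSI boils down to, here is a minimal pandas sketch (illustrative only; the repo's implementation and the Alpaca data plumbing may differ):
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    # Wilder-style RSI: smoothed average gains vs. smoothed average losses.
    delta = close.diff()
    gains = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    losses = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gains / losses)

prices = pd.Series([100, 101, 102, 101, 103, 104, 103, 105, 106, 105, 107, 108, 107, 109, 110.0])
print(rsi(prices).tail())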

zamal 
posted an update about 1 year ago
🚀 Try Out RAG Demo! 🚀

A Hugging Face Space where you can compare DeepSeek-R1 vs Llama-3 using Stuff RAG (Retrieval-Augmented Generation)!

🔍 Upload a PDF, ask questions, and see how both models perform in real-time!

Try out now:
zamal/Deepseek-R1-vs-LLama3
zamal 
posted an update about 1 year ago
zamal/Multimodal-Chat-PDF

🚀 Introducing Chat PDF Multimodal 💬

Interact with your PDF documents like never before! 🤯
Extract text & images, then ask context-aware questions based on both. Powered by RAG techniques & multimodal LLMs. Perfect for studying, research & more! 📝👀
Try it out now!!!! ✍️

#LlavaNext #MultimodalAI #Transformers
Taylor658 
posted an update about 1 year ago
🌐 The Stanford Institute for Human-Centered AI (https://aiindex.stanford.edu/vibrancy/) has released its 2024 Global AI Vibrancy Tool, a way to explore and compare AI progress across 36 countries.

📊 It measures progress across 8 broad pillars: R&D, Responsible AI, Economy, Education, Diversity, Policy and Governance, Public Opinion, and Infrastructure. (Each of these pillars has a number of sub-indices.)

📈 As a whole, it is not surprising that the USA was at the top in terms of overall score as of 2023 (AI investment activity is a large part of the economic pillar, for example, and that drives much of the overall USA ranking). But drilling into more STRATEGIC macro pillars like Education, Infrastructure, or R&D reveals interesting growth patterns in Asia (particularly China) and Western Europe that I suspect the 2024 metrics will bear out.

🤖 Hopefully the 2024 Global Vibrancy ranking will break out AI and ML verticals like Computer Vision, NLP, and/or the AI Agent space, as that could also give a global, macro-level indication of what is to come for AI in 2025.
Taylor658 
posted an update about 1 year ago
🤖💻 Function Calling is a key component of Agent workflows. To call functions, an LLM needs a way to interact with other systems and run code. This usually means connecting it to a runtime environment that can handle function calls, data, and security.
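
For context, the "function" an LLM calls is usually declared as nothing more than a JSON schema handed to the model, along these lines (a generic illustration, not tied to any specific model below):
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a given city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# The model replies with the function name plus arguments; the runtime executes
# the call and feeds the result back into the conversation.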

Per the Berkeley Function-Calling Leaderboard, only 2 of the top 20 models with built-in function calling are fully open source as of 17 Nov 2024 (the other 2 in the top 20 that are not closed source carry cc-by-nc-4.0 licenses).
https://gorilla.cs.berkeley.edu/leaderboard.html

The 2 Open Source Models out of the top 20 that currently support function calling are:

meetkai/functionary-medium-v3.1
Team-ACE/ToolACE-8B

This is both a huge disadvantage AND an opportunity for the Open Source community as Enterprises, Small Businesses, Government Agencies, etc. quickly adopt Agents and Agent workflows over the next few months. Open Source will have a lot of catching up to do, as Enterprises that initially build their Agent workflows on closed-source models will be hesitant to switch to an open-source alternative later.

Hopefully more open source models will support function calling in the near future.
louisbrulenaudet 
posted an update over 1 year ago
I’ve published a new dataset to simplify model merging 🤗

This dataset facilitates the search for compatible architectures for model merging with @arcee_ai’s mergekit, streamlining the automation of high-performance merge searches 📖

Dataset : louisbrulenaudet/mergekit-configs
louisbrulenaudet 
posted an update over 1 year ago
Introducing Lemone-router, a series of classification models designed to produce an optimal multi-agent system for different branches of tax law.

Trained on a base of 49k lines, comprising synthetic questions generated by GPT-4 Turbo and Llama 3.1 70B (further refined through evol-instruction tuning and manual curation) together with authority documents, these models are based on an 8-category decomposition of the classification scheme derived from the Bulletin officiel des finances publiques - impôts:

label2id = {
    "Bénéfices professionnels": 0,
    "Contrôle et contentieux": 1,
    "Dispositifs transversaux": 2,
    "Fiscalité des entreprises": 3,
    "Patrimoine et enregistrement": 4,
    "Revenus particuliers": 5,
    "Revenus patrimoniaux": 6,
    "Taxes sur la consommation": 7
}
	
id2label = {
    0: "Bénéfices professionnels",
    1: "Contrôle et contentieux",
    2: "Dispositifs transversaux",
    3: "Fiscalité des entreprises",
    4: "Patrimoine et enregistrement",
    5: "Revenus particuliers",
    6: "Revenus patrimoniaux",
    7: "Taxes sur la consommation"
}

It achieves the following results on the evaluation set:
- Loss: 0.4734
- Accuracy: 0.9191

Link to the collection: louisbrulenaudet/lemone-router-671cce21d6410f3570514762
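
A hedged usage sketch with the standard text-classification pipeline (the model ID below is a placeholder; substitute a concrete checkpoint from the collection):
from transformers import pipeline

# Placeholder model ID - pick an actual checkpoint from the Lemone-router collection.
router = pipeline("text-classification", model="louisbrulenaudet/lemone-router-placeholder")

# The predicted label maps to one of the 8 branches above, so it can route a
# question to the corresponding specialized agent.
print(router("Quel est le taux de TVA applicable à la vente de livres ?"))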