url | targets | authors | date | inputs |
|---|---|---|---|---|
https://huggingface.co/blog/convert-transformers-to-onnx | Convert Transformers to ONNX with Hugging Face Optimum | Philipp Schmid | June 22, 2022 | Hundreds of Transformers experiments and models are uploaded to the Hugging Face Hub every single day. Machine learning engineers and students conducting those experiments use a variety of frameworks like PyTorch, TensorFlow/Keras, or others. These models are already used by thousands of companies and form the foundati... |
https://huggingface.co/blog/arxiv | Hugging Face Machine Learning Demos on arXiv | Abubakar Abid, Omar Sanseviero, Pedro Cuenca | November 17, 2022 | Hugging Face Machine Learning Demos on arXiv |
https://huggingface.co/blog/safetensors-security-audit | Audit shows that safetensors is safe and ready to become the default | Nicolas Patry, Stella Biderman | May 23, 2023 | Hugging Face, in close collaboration with EleutherAI and Stability AI, has ordered an external security audit of the safetensors library, the results of which allow all three organizations to move toward making the library the default format for saved models. The full results of the security audit, performed by Trail of Bi... |
https://huggingface.co/blog/textgen-pipe-gaudi | Text-Generation Pipeline on Intel® Gaudi® 2 AI Accelerator | Siddhant Jagtap | February 29, 2024 | With the Generative AI (GenAI) revolution in full swing, text-generation with open-source transformer models like Llama 2 has become the talk of the town. AI enthusiasts as well as developers are looking to leverage the generative abilities of such models for their own use cases and applications. This article shows how... |
https://huggingface.co/blog/pollen-vision | Pollen-Vision: Unified interface for Zero-Shot vision models in robotics | Antoine Pirrone, Simon Le Goff, Rouanet, Simon Revelly | March 25, 2024 | This is a guest blog post by the Pollen Robotics team. We are the creators of Reachy, an open-source humanoid robot designed for manipulation in the real world.In the context of autonomous behaviors, the essence of a robot's usability lies in its ability to understand and interact with its environment. This understandi... |
https://huggingface.co/blog/segmoe | SegMoE: Segmind Mixture of Diffusion Experts | Yatharth Gupta, Vishnu V Jaddipal, Harish Prabhala | February 3, 2024 | SegMoE is an exciting framework for creating Mixture-of-Experts Diffusion models from scratch! SegMoE is comprehensively integrated within the Hugging Face ecosystem and comes supported with diffusers 🔥! Among the features and integrations being released today: Models on the Hub, with their model cards and licenses (Apa... |
https://huggingface.co/blog/setfit-optimum-intel | Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon | Daniel Korat, Tom Aarsen, Oren Pereg, Moshe Wasserblat, Ella Charlaix, Abirami Prabhakaran | April 3, 2024 | SetFit is a promising solution for a common modeling problem: how to deal with lack of labeled data for training. Developed with Hugging Face’s research partners at Intel Labs and the UKP Lab, SetFit is an efficient framework for few-shot fine-tuning of Sentence Transformers models. SetFit achieves high accuracy with l... |
https://huggingface.co/blog/deep-rl-ppo | Proximal Policy Optimization (PPO) | Thomas Simonini | August 5, 2022 | Unit 8 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here. ⚠️ A ne... |
https://huggingface.co/blog/fast-diffusers-coreml | Faster Stable Diffusion with Core ML on iPhone, iPad, and Mac | Pedro Cuenca | June 15, 2023 | WWDC’23 (Apple Worldwide Developers Conference) was held last week. A lot of the news focused on the Vision Pro announcement during the keynote, but there’s much more to it. Like every year, WWDC week is packed with more than 200 technical sessions that dive deep inside the upcoming features across Apple operating syst... |
https://huggingface.co/blog/paddlepaddle | Welcome PaddlePaddle to the Hugging Face Hub | PaddlePaddle | January 17, 2023 | We are happy to share an open source collaboration between Hugging Face and PaddlePaddle on a shared mission to advance and democratize AI through open source!First open sourced by Baidu in 2016, PaddlePaddle enables developers of all skill levels to adopt and implement Deep Learning at scale. As of Q4 2022, PaddlePadd... |
https://huggingface.co/blog/lcm_lora | SDXL in 4 steps with Latent Consistency LoRAs | Pedro Cuenca, Suraj Patil, Simian Luo, Daniel Gu, Yiqin Tan, Sayak Paul, Apolinário from multimodal AI art | November 9, 2023 | Latent Consistency Models (LCM) are a way to decrease the number of steps required to generate an image with Stable Diffusion (or SDXL) by distilling the original model into another version that requires fewer steps (4 to 8 instead of the original 25 to 50). Distillation is a type of training procedure that attempts to... |
https://huggingface.co/blog/train-decision-transformers | Train your first Decision Transformer | Edward Beeching, Thomas Simonini | September 8, 2022 | In a previous post, we announced the launch of Decision Transformers in the transformers library. This new technique of using a Transformer as a Decision-making model is getting increasingly popular.So today, you’ll learn to train your first Offline Decision Transformer model from scratch to make a half-cheetah run. We... |
https://huggingface.co/blog/course-launch-event | Course Launch Community Event | Sylvain Gugger | October 26, 2021 | We are excited to share that after a lot of work from the Hugging Face team, part 2 of the Hugging Face Course will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then upload the result to the Model Hub. Part 2 will focus on all the... |
https://huggingface.co/blog/ml-for-games-2 | AI for Game Development: Creating a Farming Game in 5 Days. Part 2 | Dylan Ebert | January 9, 2023 | Welcome to AI for Game Development! In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for:Art Style... |
https://huggingface.co/blog/stable-diffusion-xl-coreml | Stable Diffusion XL on Mac with Advanced Core ML Quantization | Pedro Cuenca, Orhon | July 27, 2023 | Stable Diffusion XL was released yesterday and it’s awesome. It can generate large (1024x1024) high quality images; adherence to prompts has been improved with some new tricks; it can effortlessly produce very dark or very bright images thanks to the latest research on noise schedulers; and it’s open source! The downsid... |
https://huggingface.co/blog/stable_diffusion_jax | 🧨 Stable Diffusion in JAX / Flax ! | Pedro Cuenca, Patrick von Platen | October 13, 2022 | 🤗 Hugging Face Diffusers supports Flax since version 0.5.1! This allows for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This post shows how to run inference using JAX / Flax. If you want more details about how Stable Diffusion works or want to run it on GPU, p... |
https://huggingface.co/blog/deep-rl-a2c | Advantage Actor Critic (A2C) | Thomas Simonini | July 22, 2022 | Unit 7 of the Deep Reinforcement Learning Class with Hugging Face 🤗. ⚠️ A new updated version of this article is available here 👉 https://huggingface.co/deep-rl-course/unit1/introduction This article is part of the Deep Reinforcement Learning Class, a free course from beginner to expert. Check the syllabus here. ⚠️ A ne... |
https://huggingface.co/blog/ray-tune | Hyperparameter Search with Transformers and Ray Tune | Ray Project (Anyscale) | November 2, 2020 | With cutting-edge research implementations and thousands of trained models easily accessible, the Hugging Face transformers library has become critical to the success and growth of natural language processing today. For any machine learning model to achieve good performance, users often need to implement some form of param... |
https://huggingface.co/blog/sagemaker-distributed-training-seq2seq | Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker | Philipp Schmid | April 8, 2021 | In case you missed it: on March 25th we announced a collaboration with Amazon SageMaker to make it easier to create State-of-the-Art Machine Learning models, and ship cutting-edge NLP features faster. Together with the SageMaker team, we built 🤗 Transformers optimized Deep Learning Containers to accelerate training of... |
https://huggingface.co/blog/fastai | Welcome fastai to the Hugging Face Hub | Omar Espejel | May 6, 2022 | Making neural nets uncool again... and sharing them. Few have done as much as the fast.ai ecosystem to make Deep Learning accessible. Our mission at Hugging Face is to democratize good Machine Learning. Let's make exclusivity in access to Machine Learning, including pre-trained models, a thing of the past and let's push ... |
https://huggingface.co/blog/setfit-absa | SetFitABSA: Few-Shot Aspect Based Sentiment Analysis using SetFit | Ronen Laperdon, Tom Aarsen, Lewis Tunstall, Oren Pereg, Moshe Wasserblat | December 6, 2023 | Aspect-Based Sentiment Analysis (ABSA) is the task of detecting the sentiment towards specific aspects within the text. For example, in the sentence, "This phone has a great screen, but its battery is too small", the aspect terms are "screen" and "battery" and the sentiment polarities towards them are Positive and Nega... |
https://huggingface.co/blog/ai-webtv | Building an AI WebTV | Julian Bilcke | July 17, 2023 | The AI WebTV is an experimental demo to showcase the latest advancements in automatic video and music synthesis. 👉 Watch the stream now by going to the AI WebTV Space. If you are using a mobile device, you can view the stream from the Twitch mirror. Concept: The motivation for the AI WebTV is to demo videos generated with ... |
https://huggingface.co/blog/sempre-health-eap-case-study | How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap | Hugging Face | May 19, 2022 | 👋 Hello, friends! We recently sat down with Swaraj Banerjee and Larry Zhang from Sempre Health, a startup that brings behavior-based, dynamic pricing to Healthcare. They are doing some exciting work with machine learning and are leveraging our Expert Acceleration Program to accelerate their ML roadmap.An example of ou... |
https://huggingface.co/blog/intel | Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration | Julien Simon | June 15, 2022 | The mission of Hugging Face is to democratize good machine learning and maximize its positive impact across industries and society. Not only do we strive to advance Transformer models, but we also work hard on simplifying their adoption.Today, we're excited to announce that Intel has officially joined our Hardware Part... |
https://huggingface.co/blog/sdxl_ort_inference | Accelerating SD Turbo and SDXL Turbo Inference with ONNX Runtime and Olive | Sophie Schoenmeyer, Tianlei Wu, Morgan Funtowicz | January 15, 2024 | Introduction: SD Turbo and SDXL Turbo are two fast generative text-to-image models capable of generating viable images in as little as one step, a significant improvement over the 30+ steps often required with previous Stable Diffusion models. SD Turbo is a distilled version of Stable Diffusion 2.1, and SDXL Turbo is a d... |
https://huggingface.co/blog/rocketmoney-case-study | Rocket Money x Hugging Face: Scaling Volatile ML Models in Production | Nico Kuzak, Chris Poirier | September 19, 2023 | Scaling and Maintaining ML Models in Production Without an MLOps Team: We created Rocket Money (a personal finance app formerly known as Truebill) to help users improve their financial wellbeing. Users link their bank accounts to the app, which then classifies and categorizes their transactions, identifying recurring pat... |
https://huggingface.co/blog/japanese-stable-diffusion | Japanese Stable Diffusion | Kei Sawada | October 5, 2022 | Stable Diffusion, developed by CompVis, Stability AI, and LAION, has generated a great deal of interest due to its ability to generate highly accurate images by simply entering text prompts. Stable Diffusion mainly uses the English subset LAION2B-en of the LAION-5B dataset for its training data and, as a result, requir... |
https://huggingface.co/blog/llama2 | Llama 2 is here - get it on Hugging Face | Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Lewis Tunstall | July 18, 2023 | Introduction: Llama 2 is a family of state-of-the-art open-access large language models released by Meta today, and we’re excited to fully support the launch with comprehensive integration in Hugging Face. Llama 2 is being released with a very permissive community license and is available for commercial use. The code, pr... |
https://huggingface.co/blog/password-git-deprecation | Hugging Face Hub: Important Git Authentication Changes | Sylvestre Bcht, Pierric Cistac, Simon Brandeis | August 25, 2023 | Because we are committed to improving the security of our services, we are making changes to the way you authenticate when interacting with the Hugging Face Hub through Git.Starting from October 1st, 2023, we will no longer accept passwords as a way to authenticate your command-line Git operations. Instead, we recommen... |
https://huggingface.co/blog/diffusers-turns-1 | Happy 1st anniversary 🤗 Diffusers! | Steven Liu, Sayak Paul, Pedro Cuenca | July 20, 2023 | 🤗 Diffusers is happy to celebrate its first anniversary! It has been an exciting year, and we're proud and grateful for how far we've come thanks to our community and open-source contributors. Last year, text-to-image models like DALL-E 2, Imagen, and Stable Diffusion captured the world's attention with their ability ... |
https://huggingface.co/blog/accelerate-transformers-with-inferentia2 | Accelerating Hugging Face Transformers with AWS Inferentia2 | Philipp Schmid, Julien Simon | April 17, 2023 | In the last five years, Transformer models [1] have become the de facto standard for many machine learning (ML) tasks, such as natural language processing (NLP), computer vision (CV), speech, and more. Today, many data scientists and ML engineers rely on popular transformer architectures like BERT [2], RoBERTa [3], the... |
https://huggingface.co/blog/simple_sdxl_optimizations | Exploring simple optimizations for SDXL | Sayak Paul, Steven Liu | October 24, 2023 | Stable Diffusion XL (SDXL) is the latest latent diffusion model by Stability AI for generating high-quality super realistic images. It overcomes challenges of previous Stable Diffusion models like getting hands and text right as well as spatially correct compositions. In addition, SDXL is also more context aware and re... |
https://huggingface.co/blog/falcon-180b | Spread Your Wings: Falcon 180B is here | Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Leandro von Werra, Julien Launay | September 6, 2023 | Introduction: Today, we're excited to welcome TII's Falcon 180B to Hugging Face! Falcon 180B sets a new state-of-the-art for open models. It is the largest openly available language model, with 180 billion parameters, and was trained on a massive 3.5 trillion tokens using TII's RefinedWeb dataset. This represents the long... |
https://huggingface.co/blog/bloom-inference-pytorch-scripts | Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate | Stas Bekman, Sylvain Gugger | September 16, 2022 | This article shows how to get an incredibly fast per token throughput when generating with the 176B parameter BLOOM model. As the model needs 352GB in bf16 (bfloat16) weights (176*2), the most efficient set-up is 8x80GB A100 GPUs. Also 2x8x40GB A100s or 2x8x48GB A6000 can be used. The main reason for using these GPUs is... |
https://huggingface.co/blog/elixir-bumblebee | From GPT2 to Stable Diffusion: Hugging Face arrives to the Elixir community | José Valim | December 9, 2022 | The Elixir community is glad to announce the arrival of several Neural Networks models, from GPT2 to Stable Diffusion, to Elixir. This is possible thanks to the just announced Bumblebee library, which is an implementation of Hugging Face Transformers in pure Elixir. To help anyone get started with those models, the team... |
https://huggingface.co/blog/evaluating-llm-bias | Evaluating Language Model Bias with 🤗 Evaluate | Sasha Luccioni, Margaret Mitchell, helen, Leandro von Werra, Douwe Kiela | October 24, 2022 | While the size and capabilities of large language models have drastically increased over the past couple of years, so too has the concern around biases imprinted into these models and their training data. In fact, many popular language models have been found to be biased against specific religions and genders, which ca... |
https://huggingface.co/blog/generative-ai-models-on-intel-cpu | Smaller is better: Q8-Chat, an efficient generative AI experience on Xeon | Julien Simon | May 16, 2023 | Large language models (LLMs) are taking the machine learning world by storm. Thanks to their Transformer architecture, LLMs have an uncanny ability to learn from vast amounts of unstructured data, like text, images, video, or audio. They perform very well on many task types, either extractive like text classification o... |
https://huggingface.co/blog/ethics-soc-1 | Ethics and Society Newsletter #1 | Margaret Mitchell | September 22, 2022 | Ethics and Society Newsletter #1 |
https://huggingface.co/blog/idefics | Introducing IDEFICS: An Open Reproduction of State-of-the-Art Visual Language Model | Hugo Laurençon, Daniel van Strien, Stas Bekman, Leo Tronchon, Lucile Saulnier, Thomas Wang, Siddharth Karamcheti, Amanpreet Singh, Giada Pistilli, Yacine Jernite, Victor Sanh | August 22, 2023 | We are excited to release IDEFICS (Image-aware Decoder Enhanced à la Flamingo with Interleaved Cross-attentionS), an open-access visual language model. IDEFICS is based on Flamingo, a state-of-the-art visual language model initially developed by DeepMind, which has not been released publicly. Similarly to GPT-4, the mo... |
https://huggingface.co/blog/leaderboard-livecodebench | Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs | Naman Jain, Alex Gu, Tianjun Zhang, Wen-Ding Li, King Han, Fanjia Yan, Clémentine Fourrier | April 16, 2024 | We are excited to introduce the LiveCodeBench leaderboard, based on LiveCodeBench, a new benchmark developed by researchers from UC Berkeley, MIT, and Cornell for measuring LLMs’ code generation capabilities. LiveCodeBench collects coding problems over time from various coding contest platforms, annotating problems wit... |
https://huggingface.co/blog/graphcore-getting-started | Getting Started with Hugging Face Transformers for IPUs with Optimum | Tim Santos, Julien Simon | November 30, 2021 | Transformer models have proven to be extremely efficient on a wide range of machine learning tasks, such as natural language processing, audio processing, and computer vision. However, the prediction speed of these large models can make them impractical for latency-sensitive use cases like conversational applications o... |
https://huggingface.co/blog/text-generation-inference-on-inferentia2 | Hugging Face Text Generation Inference available for AWS Inferentia2 | Philipp Schmid, David Corvoysier | February 1, 2024 | We are excited to announce the general availability of Hugging Face Text Generation Inference (TGI) on AWS Inferentia2 and Amazon SageMaker. Text Generation Inference (TGI) is a purpose-built solution for deploying and serving Large Language Models (LLMs) for production workloads at scale. TGI enables high-performance... |
https://huggingface.co/blog/gradio | Using & Mixing Hugging Face Models with Gradio 2.0 | Abubakar Abid | May 25, 2021 | Using & Mixing Hugging Face Models with Gradio 2.0 |
https://huggingface.co/blog/skops | Introducing Skops | Merve Noyan, Adrin Jalali, Benjamin Bossan | August 12, 2022 | Introducing Skops. At Hugging Face, we are working on tackling various problems in open-source machine learning, including hosting models securely and openly, and enabling reproducibility, explainability, and collaboration. We are thrilled to introduce you to our new library: Skops! With Skops, you can host your scikit-learn... |
https://huggingface.co/blog/pretraining-bert | Pre-Training BERT with Hugging Face Transformers and Habana Gaudi | Philipp Schmid | August 22, 2022 | In this tutorial, you will learn how to pre-train BERT-base from scratch using a Habana Gaudi-based DL1 instance on AWS to take advantage of the cost-performance benefits of Gaudi. We will use the Hugging Face Transformers, Optimum Habana and Datasets libraries to pre-train a BERT-base model using masked-language model... |
https://huggingface.co/blog/dialog-agents | What Makes a Dialog Agent Useful? | Nazneen Rajani, Nathan Lambert, Victor Sanh, Thomas Wolf | January 24, 2023 | The techniques behind ChatGPT: RLHF, IFT, CoT, Red teaming, and more. This article has been translated to Chinese (简体中文). A few weeks ago, ChatGPT emerged and launched the public discourse into a set of obscure acronyms: RLHF, SFT, IFT, CoT, and more, all attributed to the success of ChatGPT. What are these obscure acronym... |
https://huggingface.co/blog/leaderboard-cot | Introducing the Open Chain of Thought Leaderboard | Gregor Betz, Sebastian Cacean, Clémentine Fourrier, Kyle Richardson | April 23, 2024 | Chain-of-thought prompting is emerging as a powerful and effective design pattern for LLM-based apps and agents. The basic idea of chain-of-thought prompting is to let a model generate a step-by-step solution (“reasoning trace”) before answering a question or taking a decision. With the Open CoT Leaderboard we’re track... |
https://huggingface.co/blog/megatron-training | How to train a Language Model with Megatron-LM | Loubna Ben Allal | September 7, 2022 | Training large language models in PyTorch requires more than a simple training loop. It is usually distributed across multiple devices, with many optimization techniques needed for stable and efficient training. The Hugging Face 🤗 Accelerate library was created to support distributed training across GPUs and TPUs with very eas... |
https://huggingface.co/blog/mixtral | Welcome Mixtral - a SOTA Mixture of Experts on Hugging Face | Lewis Tunstall, Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Olivier Dehaene, Leandro von Werra, Younes Belkada | December 11, 2023 | Mixtral 8x7b is an exciting large language model released by Mistral today, which sets a new state-of-the-art for open-access models and outperforms GPT-3.5 across many benchmarks. We’re excited to support the launch with a comprehensive integration of Mixtral in the Hugging Face ecosystem 🔥! Among the features and int... |
https://huggingface.co/blog/speecht5 | Speech Synthesis, Recognition, and More With SpeechT5 | Matthijs Hollemans | February 8, 2023 | We’re happy to announce that SpeechT5 is now available in 🤗 Transformers, an open-source library that offers easy-to-use implementations of state-of-the-art machine learning models.SpeechT5 was originally described in the paper SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Micr... |
https://huggingface.co/blog/hub-duckdb | DuckDB: run SQL queries on 50,000+ datasets on the Hugging Face Hub | Steven Liu, Quentin Lhoest, Sylvain Lesage | June 7, 2023 | The Hugging Face Hub is dedicated to providing open access to datasets for everyone and giving users the tools to explore and understand them. You can find many of the datasets used to train popular large language models (LLMs) like Falcon, Dolly, MPT, and StarCoder. There are tools for addressing fairness and bias in ... |
https://huggingface.co/blog/agents-js | Introducing Agents.js: Give tools to your LLMs using JavaScript | Nathan Sarrazin | July 24, 2023 | We have recently been working on Agents.js at huggingface.js. It's a new library for giving tool access to LLMs from JavaScript in either the browser or the server. It ships with a few multi-modal tools out of the box and can easily be extended with your own tools and language models. Installation: Getting started is very... |
https://huggingface.co/blog/pytorch_block_sparse | Block Sparse Matrices for Smaller and Faster Language Models | François Lagunas | September 10, 2020 | Saving space and time, one zero at a time. In previous blog posts we introduced sparse matrices and what they could do to improve neural networks. The basic assumption is that full dense layers are often overkill and can be pruned without a significant loss in precision. In some cases sparse linear layers can even improve ... |
https://huggingface.co/blog/gptj-sagemaker | Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker | Philipp Schmid | January 11, 2022 | Almost 6 months ago to the day, EleutherAI released GPT-J 6B, an open-source alternative to OpenAI's GPT-3. GPT-J 6B is the 6-billion-parameter successor to EleutherAI's GPT-Neo family of transformer-based language models based on the GPT architecture for text generation. EleutherAI's primary goal is to train a ... |
https://huggingface.co/blog/fast-mac-diffusers | Swift 🧨Diffusers: Fast Stable Diffusion for Mac | Pedro Cuenca, Vaibhav Srivastav | February 24, 2023 | Transform your text into stunning images with ease using Diffusers for Mac, a native app powered by state-of-the-art diffusion models. It leverages a bouquet of SoTA Text-to-Image models contributed by the community to the Hugging Face Hub, and converted to Core ML for blazingly fast performance. Our latest version, 1.... |
https://huggingface.co/blog/codegemma | CodeGemma - an official Google release for code LLMs | Pedro Cuenca, Omar Sanseviero, Vaibhav Srivastav, Philipp Schmid, Mishig Davaadorj, Loubna Ben Allal | April 9, 2024 | CodeGemma is a family of open-access versions of Gemma specialized in code, and we’re excited to collaborate with Google on its release to make it as accessible as possible. 🤗 CodeGemma comes in three flavors: a 2B base model specialized in infilling and open-ended generation; a 7B base model trained with both code infill... |
https://huggingface.co/blog/fine-tune-vit | Fine-Tune ViT for Image Classification with 🤗 Transformers | Nate Raw | February 11, 2022 | Just as transformers-based models have revolutionized NLP, we're now seeing an explosion of papers applying them to all sorts of other domains. One of the most revolutionary of these was the Vision Transformer (ViT), which was introduced in June 2021 by a team of researchers at Google Brain.This paper explored how you ... |
https://huggingface.co/blog/phi2-intel-meteor-lake | A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake | Julien Simon, Ella Charlaix, Ofir Zafrir, Igor Margulis, Guy Boudoukh, Moshe Wasserblat | March 20, 2024 | Because of their impressive abilities, large language models (LLMs) require significant computing power, which is seldom available on personal computers. Consequently, we have no choice but to deploy them on powerful bespoke AI servers hosted on-premises or in the cloud. Why local LLM inference is desirable: What ... |
https://huggingface.co/blog/encrypted-llm | Towards Encrypted Large Language Models with FHE | Jordan Frery | August 2, 2023 | Large Language Models (LLM) have recently been proven as reliable tools for improving productivity in many areas such as programming, content creation, text analysis, web search, and distance learning. The Impact of Large Language Models on Users' Privacy: Despite the appeal of LLMs, privacy concerns persist surrounding u... |
https://huggingface.co/blog/run-musicgen-as-an-api | Deploy MusicGen in no time with Inference Endpoints | Vaibhav Srivastav, Merve Noyan | August 4, 2023 | MusicGen is a powerful music generation model that takes in a text prompt and an optional melody to output music. This blog post will guide you through generating music with MusicGen using Inference Endpoints. Inference Endpoints allow us to write custom inference functions called custom handlers. These are particularly ... |
https://huggingface.co/blog/ml-for-games-1 | AI for Game Development: Creating a Farming Game in 5 Days. Part 1 | Dylan Ebert | January 2, 2023 | Welcome to AI for Game Development! In this series, we'll be using AI tools to create a fully functional farming game in just 5 days. By the end of this series, you will have learned how you can incorporate a variety of AI tools into your game development workflow. I will show you how you can use AI tools for:Art Style... |
https://huggingface.co/blog/gradio-lite | Gradio-Lite: Serverless Gradio Running Entirely in Your Browser | Abubakar Abid, Yuichiro Tachibana, Ali Abdalla | October 19, 2023 | Gradio is a popular Python library for creating interactive machine learning apps. Traditionally, Gradio applications have relied on server-side infrastructure to run, which can be a hurdle for developers who need to host their applications. Enter Gradio-lite (@gradio/lite): a library that leverages Pyodide to bring Gr... |
https://huggingface.co/blog/pytorch-xla | Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training | Daniel JinYoung Sohn, Lysandre | February 9, 2021 | Training Your Favorite Transformers on Cloud TPUs using PyTorch / XLA. The PyTorch-TPU project originated as a collaborative effort between the Facebook PyTorch and Google TPU teams and officially launched at the 2019 PyTorch Developer Conference. Since then, we’ve worked with the Hugging Face team to bring first-cl... |
https://huggingface.co/blog/t2i-sdxl-adapters | Efficient Controllable Generation for SDXL with T2I-Adapters | ChongMou, Suraj Patil, Sayak Paul, Xintao Wang, hysts | September 8, 2023 | T2I-Adapter is an efficient plug-and-play model that provides extra guidance to pre-trained text-to-image models while freezing the original large text-to-image models. T2I-Adapter aligns internal knowledge in T2I models with external control signals. We can train various adapters according to different conditions and ... |
https://huggingface.co/blog/mask2former | Universal Image Segmentation with Mask2Former and OneFormer | Niels Rogge, Shivalika Singh, Alara Dirik | January 19, 2023 | This guide introduces Mask2Former and OneFormer, two state-of-the-art neural networks for image segmentation. The models are now available in 🤗 transformers, an open-source library that offers easy-to-use implementations of state-of-the-art models. Along the way, you'll learn about the difference between the various for... |
https://huggingface.co/blog/open-llm-leaderboard-drop | Open LLM Leaderboard: DROP deep dive | Clémentine Fourrier, Alex Cabrera, Stella Biderman, Nathan Habib, Thomas Wolf | December 1, 2023 | Recently, three new benchmarks were added to the Open LLM Leaderboard: Winogrande, GSM8k and DROP, using the original implementations reproduced in the EleutherAI Harness. A cursory look at the scores for DROP revealed something strange was going on, with the overwhelming majority of models scoring less than 10 out of ... |
https://huggingface.co/blog/unity-in-spaces | How to host a Unity game in a Space | Dylan Ebert | April 21, 2023 | Did you know you can host a Unity game in a Hugging Face Space? No? Well, you can!Hugging Face Spaces are an easy way to build, host, and share demos. While they are typically used for Machine Learning demos, they can also host playable Unity games. Here are some examples:HuggyFarming Game Unity API DemoHere's how you ... |
https://huggingface.co/blog/gaussian-splatting | Introduction to 3D Gaussian Splatting | Dylan Ebert | September 18, 2023 | 3D Gaussian Splatting is a rasterization technique described in 3D Gaussian Splatting for Real-Time Radiance Field Rendering that allows real-time rendering of photorealistic scenes learned from small samples of images. This article will break down how it works and what it means for the future of graphics. What is 3D ... |
https://huggingface.co/blog/researcher-dataset-sharing | Creating open machine learning datasets? Share them on the Hugging Face Hub! | Daniel van Strien | October 30, 2023 | Who is this blog post for?Are you a researcher doing data-intensive research or using machine learning as a research tool? As part of this research, you have likely created datasets for training and evaluating machine learning models, and like many researchers, you may be sharing these datasets via Google Drive, OneDri... |
https://huggingface.co/blog/cnil | Hugging Face Selected for the French Data Protection Agency Enhanced Support Program | Yacine Jernite, Julien Chaumond, Anna Tordjmann, Ima Bello | May 15, 2023 | Hugging Face Selected for the French Data Protection Agency Enhanced Support Program |
https://huggingface.co/blog/fhe-endpoints | Running Privacy-Preserving Inferences on Hugging Face Endpoints | Benoit Chevallier-Mames | April 16, 2024 | This is a guest blog post by the Zama team. Zama is an open source cryptography company building state-of-the-art FHE solutions for blockchain and AI.Eighteen months ago, Zama started Concrete ML, a privacy-preserving ML framework with bindings to traditional ML frameworks such as scikit-learn, ONNX, PyTorch, and Tenso... |
https://huggingface.co/blog/rwkv | Introducing RWKV - An RNN with the advantages of a transformer | BlinkDL, Harrison Vanderbyl, Sylvain Gugger, Younes Belkada | May 15, 2023 | ChatGPT and chatbot-powered applications have captured significant attention in the Natural Language Processing (NLP) domain. The community is constantly seeking strong, reliable and open-source models for their applications and use cases. The rise of these powerful models stems from the democratization and widespread ... |
https://huggingface.co/blog/habana | Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training | Susan Lansing | April 12, 2022 | Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training |
https://huggingface.co/blog/text-to-webapp | Making a web app generator with open ML models | Julian Bilcke | July 3, 2023 | As more code generation models become publicly available, it is now possible to do text-to-web and even text-to-app in ways that we couldn't imagine before.This tutorial presents a direct approach to AI web content generation by streaming and rendering the content all in one go.Try the live demo here! → Webapp FactoryU... |
https://huggingface.co/blog/aivsai | Introducing ⚔️ AI vs. AI ⚔️ a deep reinforcement learning multi-agents competition system | Carl Cochet, Thomas Simonini | February 7, 2023 | We’re excited to introduce a new tool we created: ⚔️ AI vs. AI ⚔️, a deep reinforcement learning multi-agents competition system.This tool, hosted on Spaces, allows us to create multi-agent competitions. It is composed of three elements:A Space with a matchmaking algorithm that runs the model fights using a background ... |
https://huggingface.co/blog/pricing-update | Introducing our new pricing | Simon Brandeis, Pierric Cistac | November 8, 2022 | Introducing our new pricing |
https://huggingface.co/blog/trl-peft | Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU | Edward Beeching, Younes Belkada, Leandro von Werra, Sourab Mangrulkar, Lewis Tunstall, Kashif Rasul | March 9, 2023 | We are excited to officially release the integration of trl with peft to make Large Language Model (LLM) fine-tuning with Reinforcement Learning more accessible to anyone! In this post, we explain why this is a competitive alternative to existing fine-tuning approaches. Note peft is a general tool that can be applied t... |
https://huggingface.co/blog/bert-inferentia-sagemaker | Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia | Philipp Schmid | March 16, 2022 | notebook: sagemaker/18_inferentia_inferenceThe adoption of BERT and Transformers continues to grow. Transformer-based models are now not only achieving state-of-the-art performance in Natural Language Processing but also for Computer Vision, Speech, and Time-Series. 💬 🖼 🎤 ⏳Companies are now slowly moving from the ex... |
https://huggingface.co/blog/websight | From screenshots to HTML code: Introducing the WebSight dataset | Hugo Laurençon, Leo Tronchon, Victor Sanh | March 15, 2024 | In the world of web development, turning designs into functional websites usually involves a lot of coding and careful testing. What if we could simplify this process, making it possible to convert web designs into working websites more easily and quickly? WebSight is a new dataset that aims at building AI systems capa... |
https://huggingface.co/blog/accelerate-library | Introducing 🤗 Accelerate | Sylvain Gugger | April 16, 2021 | 🤗 AccelerateRun your raw PyTorch training scripts on any kind of device.Most high-level libraries above PyTorch provide support for distributed training and mixed precision, but the abstractions they introduce require a user to learn a new API if they want to customize the underlying training loop. 🤗 Accelerate was cr... |
https://huggingface.co/blog/habana-gaudi-2-bloom | Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator | Régis Pierrard | March 28, 2023 | This article will show you how to easily deploy large language models with hundreds of billions of parameters like BLOOM on Habana® Gaudi®2 using 🤗 Optimum Habana, which is the bridge between Gaudi2 and the 🤗 Transformers library. As demonstrated in the benchmark presented in this post, this will enable you to run in... |
https://huggingface.co/blog/unity-asr | AI Speech Recognition in Unity | Dylan Ebert | June 2, 2023 | IntroductionThis tutorial guides you through the process of implementing state-of-the-art Speech Recognition in your Unity game using the Hugging Face Unity API. This feature can be used for giving commands, speaking to an NPC, improving accessibility, or any other functionality where converting spoken words to text ma... |
https://huggingface.co/blog/train-optimize-sd-intel | Optimizing Stable Diffusion for Intel CPUs with NNCF and 🤗 Optimum | Alexander, Yury Gorbachev, Helena, Sayak Paul, Ella Charlaix | May 25, 2023 | Latent Diffusion models are game changers when it comes to solving text-to-image generation problems. Stable Diffusion is one of the most famous examples that got wide adoption in the community and industry. The idea behind the Stable Diffusion model is simple and compelling: you generate an image from a noise vector i... |
https://huggingface.co/blog/idefics2 | Introducing Idefics2: A Powerful 8B Vision-Language Model for the community | Leo Tronchon, Hugo Laurençon, Victor Sanh | April 15, 2024 | We are excited to release Idefics2, a general multimodal model that takes as input arbitrary sequences of texts and images, and generates text responses. It can answer questions about images, describe visual content, create stories grounded in multiple images, extract information from documents, and perform basic arith... |
https://huggingface.co/blog/fl-with-flower | Federated Learning using Hugging Face and Flower | Charles Beauville | March 27, 2023 | This tutorial will show how to leverage Hugging Face to federate the training of language models over multiple clients using Flower. More specifically, we will fine-tune a pre-trained Transformer model (distilBERT) for sequence classification over a dataset of IMDB ratings. The end goal is to detect if a movie rating i... |
https://huggingface.co/blog/simple-considerations | 🚧 Simple considerations for simple people building fancy neural networks | Victor Sanh | February 25, 2021 | Photo by Henry & Co. on UnsplashAs machine learning continues penetrating all aspects of the industry, neural networks have never been so hyped. For instance, models like GPT-3 have been all over social media in the past few weeks and continue to make headlines outside of tech news outlets with fear-mongering titles.An... |
https://huggingface.co/blog/autonlp-prodigy | Active Learning with AutoNLP and Prodigy | Abhishek Thakur | December 23, 2021 | Active learning in the context of Machine Learning is a process in which you iteratively add labeled data, retrain a model and serve it to the end user. It is an endless process and requires human interaction for labeling/creating the data. In this article, we will discuss how to use AutoNLP and Prodigy to build an act... |
https://huggingface.co/blog/education | Introducing Hugging Face for Education 🤗 | Violette Lepercq | April 25, 2022 | Given that machine learning will make up the overwhelming majority of software development and that non-technical people will be exposed to AI systems more and more, one of the main challenges of AI is adapting and enhancing employee skills. It is also becoming necessary to support teaching staff in proactively taking ... |
https://huggingface.co/blog/getting-started-habana | Getting Started with Transformers on Habana Gaudi | Julien Simon | April 26, 2022 | A couple of weeks ago, we've had the pleasure to announce that Habana Labs and Hugging Face would partner to accelerate Transformer model training.Habana Gaudi accelerators deliver up to 40% better price performance for training machine learning models compared to the latest GPU-based Amazon EC2 instances. We are super... |
https://huggingface.co/blog/unity-api | How to Install and Use the Hugging Face Unity API | Dylan Ebert | May 1, 2023 | The Hugging Face Unity API is an easy-to-use integration of the Hugging Face Inference API, allowing developers to access and use Hugging Face AI models in their Unity projects. In this blog post, we'll walk through the steps to install and use the Hugging Face Unity API.InstallationOpen your Unity projectGo to Window ... |
https://huggingface.co/blog/searching-the-hub | Supercharged Searching on the Hugging Face Hub | Zachary Mueller | January 25, 2022 | The huggingface_hub library is a lightweight interface that provides a programmatic approach to exploring the hosting endpoints Hugging Face provides: models, datasets, and Spaces.Up until now, searching on the Hub through this interface was tricky to pull off, and there were many aspects of it a user had to "just know... |
https://huggingface.co/blog/asr-diarization | Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints | Sergei Petrov, Vaibhav Srivastav, Pedro Cuenca, Philipp Schmid | May 1, 2024 | Whisper is one of the best open source speech recognition models and definitely the one most widely used. Hugging Face Inference Endpoints make it very easy to deploy any Whisper model out of the box. However, if you’d like tointroduce additional features, like a diarization pipeline to identify speakers, or assisted g... |
https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api | Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API | Philipp Schmid | June 3, 2021 | In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inference time with a large language model - a technique known as Few-Shot Learning. In ... |
https://huggingface.co/blog/gradio-joins-hf | Gradio is joining Hugging Face! | Abubakar Abid | December 21, 2021 | Gradio is joining Hugging Face! |
https://huggingface.co/blog/databricks-case-study | Databricks ❤️ Hugging Face: up to 40% faster training and tuning of Large Language Models | Ali Ghodsi, Maddie Dawson | April 26, 2023 | Generative AI has been taking the world by storm. As the data and AI company, we have been on this journey with the release of the open source large language model Dolly, as well as the internally crowdsourced dataset licensed for research and commercial use that we used to fine-tune it, the databricks-dolly-15k. Both ... |
https://huggingface.co/blog/chat-templates | Chat Templates | Matthew Carrigan | October 3, 2023 | A spectre is haunting chat models - the spectre of incorrect formatting!tl;drChat models have been trained with very different formats for converting conversations into a single tokenizable string. Using a format different from the format a model was trained with will usually cause severe, silent performance degradatio... |
https://huggingface.co/blog/lora-adapters-dynamic-loading | Goodbye cold boot - how we made LoRA Inference 300% faster | raphael g | December 5, 2023 | tl;dr: We swap the Stable Diffusion LoRA adapters per user request, while keeping the base model warm allowing fast LoRA inference across multiple users. You can experience this by browsing our LoRA catalogue and playing with the inference widget.In this blog we will go in detail over how we achieved that. We've been a... |
https://huggingface.co/blog/dpo-trl | Fine-tune Llama 2 with DPO | Kashif Rasul, Younes Belkada, Leandro von Werra | August 8, 2023 | IntroductionReinforcement Learning from Human Feedback (RLHF) has become the de facto last training step of LLMs such as GPT-4 or Claude to ensure that the language model's outputs are aligned with human expectations such as chattiness or safety features. However, it brings some of the complexity of RL into NLP: we nee... |
https://huggingface.co/blog/ethics-soc-3 | Ethics and Society Newsletter #3: Ethical Openness at Hugging Face | Irene Solaiman, Giada Pistilli, Nima Boscarino, Yacine Jernite, Elizabeth Allendorf, Margaret Mitchell, Sasha Luccioni | March 30, 2023 | Mission: Open and Good MLIn our mission to democratize good machine learning (ML), we examine how supporting ML community work also empowers examining and preventing possible harms. Open development and science decentralizes power so that many people can collectively work on AI that reflects their needs and values. Whi... |
https://huggingface.co/blog/dreambooth | Training Stable Diffusion with Dreambooth using 🧨 Diffusers | Suraj Patil, Pedro Cuenca, Valentine Kozin | November 7, 2022 | Dreambooth is a technique to teach new concepts to Stable Diffusion using a specialized form of fine-tuning. Some people have been using it with a few of their photos to place themselves in fantastic situations, while others are using it to incorporate new styles. 🧨 Diffusers provides a Dreambooth training script. It ... |