Dmitrii Kostakov PRO
kostakoff
AI & ML interests
MLOps
Recent Activity
posted an update about 22 hours ago
My home lab for AI models - llmlaba v1
After I began learning MLOps, I realized I needed some kind of home lab: there are a lot of GPUs I need to learn how to set up and test.
So I spent some time researching which platform I could buy or build.
My requirements were:
- Limited budget
- Power supply 1 kW or higher
- A few PCIe slots, so more than one GPU can be installed
- Zero maintenance cost: I don't want to spend a lot of time or money maintaining the lab hardware, except for the GPUs
I chose the Intel Mac Pro 7.1:
- Acceptable prices on eBay
- Excellent cooling
- 1.4 kW power supply
- 7 PCIe slots
- Zero maintenance: I don't need to do anything with the Mac Pro hardware; it just works
- Classic UEFI boot loader
It requires a bit of OS preparation:
1. Install Ubuntu 24.04 (the standard PC ISO image works)
2. Set up T2 drivers
```bash
sudo apt install -y dkms linux-headers-$(uname -r) applesmc-t2 apple-bce lm-sensors
```
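To confirm the T2 modules are loaded and the sensors are readable, a generic lm-sensors check like the one below should do (the module names are my assumption, not part of the original guide):
```bash
# Check that the T2 kernel modules are loaded (names may differ slightly)
lsmod | grep -E 'apple_bce|applesmc'
# Detect available sensors and print temperatures/fan speeds
sudo sensors-detect --auto
sensors
```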
3. Install t2fanrd to manage the fans manually (configured via /etc/t2fand.conf): https://wiki.t2linux.org/guides/fan/
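The fan daemon reads /etc/t2fand.conf. The snippet below is only a rough sketch of what such a config could look like; the section and key names are assumptions, so verify them against the t2linux fan guide linked above:
```bash
# Hypothetical per-fan config -- key names are assumptions, check the wiki
sudo tee /etc/t2fand.conf <<'EOF'
[Fan1]
low_temp=55
high_temp=75
speed_curve=linear
EOF
# Restart the daemon to pick up the config (service name assumed)
sudo systemctl restart t2fanrd
```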
4. Fix the PCIe BAR allocation: add pci=realloc to GRUB_CMDLINE_LINUX_DEFAULT so the Linux kernel properly initializes server GPUs that lack a Graphics Output Protocol display output
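The GRUB change itself is a standard edit of /etc/default/grub followed by regenerating the config, roughly like this:
```bash
# Append pci=realloc to the default kernel command line
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 pci=realloc"/' /etc/default/grub
sudo update-grub
# After a reboot, confirm the flag is active and the BARs were assigned
cat /proc/cmdline
sudo dmesg | grep -i 'BAR '
```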
5. Install NVIDIA GPU driver:
```bash
sudo apt install nvidia-driver-570
```
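Once the driver is in place, a quick generic check that every card is visible:
```bash
# List all GPUs the NVIDIA driver can see
nvidia-smi -L
# Show per-GPU utilization, memory and temperature
nvidia-smi
```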
And it works!
I was able to run a server-grade Nvidia Tesla P100 (it required a DIY air duct) as well as consumer Nvidia Titan X, Titan V, and GTX 1080 cards on the old Mac Pro 7.1, even three in parallel.
https://huggingface.co/llmlaba
liked a model 12 days ago
XiaomiMiMo/MiMo-7B-Base
liked a model 12 days ago
tiiuae/falcon-40b