dep#0002: You could have just redirected the welcome to a new channel yk
JL#1976: Let's keep all the discussions in this general chat, so #🙏︱welcome channel serves as a landing page and rules page if we get more users.
ZestyLemonade#1012: **Welcome to #💬︱general**
This is the start of the #💬︱general channel.
JL#1976... |
db0798#7460: This is identical with the vocab.json of the Stable Diffusion 1.5 base model https://huggingface.co/runwayml/stable-diffusion-v1-5/raw/main/tokenizer/vocab.json
db0798#7460: I think the genre names probably come from somewhere else
dep#0002: @seth (sorry if too many pings) I am trying to make my own audio ... |
alfredw#2036: what's the training set?
『komorebi』#3903: vaporwave, chopped and screwed, free folk, experimental rock, art pop, etc?
『komorebi』#3903: don't think the ai knows those genres too well
dep#0002: gptchat:
```python
def image_from_spectrogram(spectrogram: np.ndarray, max_volume: float = 50, power_for_image: fl... |
# Invert
data = 255 - data
# Flip Y and add a single channel
data = data[::-1, :, None]
# Convert to an image
return Image.fromarray(data.astype(np.uint8))
```
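For context, the inverse mapping (spectrogram image back to amplitudes) that the rest of this thread relies on can be sketched like this. It assumes the truncated part of the snippet above rescales by `max_volume` and applies a `power_for_image` curve, as in the public riffusion code; treat it as an approximation, not the exact implementation:

```python
import numpy as np
from PIL import Image

def spectrogram_from_image(image: Image.Image, max_volume: float = 50, power_for_image: float = 0.25) -> np.ndarray:
    # approximate inverse of the snippet above
    data = np.array(image).astype(np.float64)
    if data.ndim == 3:               # drop the channel axis if present
        data = data[:, :, 0]
    data = data[::-1, :]             # un-flip Y
    data = 255 - data                # un-invert
    data = data / 255                # back to 0..1
    data = np.power(data, 1 / power_for_image)  # undo the power curve
    return data * max_volume         # restore the amplitude scale
```

Quantizing to 8 bits loses some amplitude resolution, which is one source of the artifacts discussed later in the thread.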
dep#0002: anyways I will see what I can do
dep#0002: this is basically audio2audio
『komorebi』#3903: could we help increase the dataset
dep#0002: If they relea... |
Slynk#7009: omg I've been dying for something audio related to happen with all this AI hype.
『komorebi』#3903: so that way the ai can endlessly churn out music i like >:D
my playlist would probably be too smoll for it though
Thistle Cat#9883: Hi what's happened?
Paulinux#8579: What I'm doing wrongly? I have URL like thi... |
hayk#0058: Yeah if you open an issue on github we will aim to get to it soon!
AmbientArtstyles#1406: Hey @ZestyLemonade, I'm writing an article about sound design (sfx for games/movies) and Riffusion, can I use your sentience.wav clip in it?
AmbientArtstyles#1406: I so want to collaborate on training the algorithm with... |
dep#0002: thats what I was trying
Jack Julian#8888: Crazy stuff yall
im a musician myself, and seeing this is both interesting and 'worrying'. Love how you thought this out and put it to work.
April#5244: was hoping to gen using automatic1111's sd webui and perhaps finetune the model using dreambooth. but I feel like ... |
this might be a stupid question. but what was this trained on?
April#5244: I'm also curious about the dataset tbh
April#5244: also managed a small success: converting from wav file to spectrogram and back is working perfectly, and I have a working ckpt that can generate the spectrogram images. Next is to make a finetu... |
April#5244: ```
def spectrogram_image_from_mp3(mp3_bytes: io.BytesIO, max_volume: float = 50, power_for_image: float = 0.25) -> Image.Image:
"""
Generate a spectrogram image from an MP3 file.
"""
# Load MP3 file into AudioSegment object
audio = pydub.AudioSegment.from_mp3(mp3_bytes)
# Convert to mono and set frame rat... |
# Convert to WAV and save as BytesIO object
wav_bytes = io.BytesIO()
audio.export(wav_bytes, format="wav")
wav_bytes.seek(0)
# Generate spectrogram image from WAV file
return spectrogram_image_from_wav(wav_bytes, max_volume=max_volume, power_for_image=power_for_image)
```
```
# Open MP3 file
with open('music.mp3', 'rb... |
# Save image to file
image.save('restoredinput.png')
```
April#5244: add this function and those lines at the bottom for mp3 and cutting first 5 seconds, seems to work great
a_robot_kicker#7014: any idea what could be happening here?
```
ERROR:server:Exception on /run_inference/ [POST]
Traceback (most recent call last... |
File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\matth\miniconda3\envs\ldm\lib\site-packages\flask\app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
Fil... |
```
April#5244: this is the img2wav script I'm using https://cdn.discordapp.com/attachments/1053081177772261386/1053166079804981268/audio.py
April#5244: I'm actually not using any other code lol
April#5244: so idk why/how to fix any issues with the riffusion ui stuff
dep#0002: What error did you get
dep#0002: I didnt g... |
April#5244: "clip.wav" is the 5 second clip from the original mp3 that's used for conversion. the image is the converted spectrum from the mp3. and output.wav is the reconverted song from the image
April#5244: scripts used https://cdn.discordapp.com/attachments/1053081177772261386/1053176008141963364/audio2spectro.py,h... |
dep#0002: I prob will also make it into a bot
dep#0002: (riffusion)
dep#0002: although I've also heard that another dev is also working on one
dep#0002: u know lopho from sail?
dep#0002: the decentralized training server
dep#0002: I might ask him tomorrow
dep#0002: he knows a lot about this stuff
dep#0002: he rewrote t... |
dep#0002: og https://cdn.discordapp.com/attachments/1053081177772261386/1053183641699753984/invadercrop.wav
dep#0002: fixed https://cdn.discordapp.com/attachments/1053081177772261386/1053184852154916894/recompiled.mp3
April#5244: got it pretty close https://cdn.discordapp.com/attachments/1053081177772261386/10531893081... |
dep#0002: maybe the first finetune
April#5244: I wonder if there's a way to just have the whole song in the image 🤔
April#5244: I guess it'd have to be a larger image...
vai#0872: any way to fine tune this model?
Milano#2460: hi All! are you aware of https://www.isik.dev/posts/Technoset.html ?
Milano#2460: Technoset i... |
JeniaJitsev#1332: Great work, folks, very impressive! I am scientific lead and co-founder of LAION, datasets of which are used to train original image based stable diffusion. Very nice to see such a cool twist for getting spectrogram based training running. We would be very much interested to cooperate on that and scal... |
https://github.com/chavinlo/riffusion-manipulation
JL#1976: Let's create a post and get that pinned.
JL#1976: **Official website:**
https://riffusion.com/
**Technical explanation:**
https://www.riffusion.com/about
**Riffusion App Github:**
https://github.com/hmartiro/riffusion-app
**Riffusion Inference Server Github... |
@seth
@hayk
**HackerNews thread:**
https://news.ycombinator.com/item?id=33999162
**Subreddit:**
https://reddit.com/r/riffusion
**Riffusion manipulation tools from @dep :**
https://github.com/chavinlo/riffusion-manipulation
**Riffusion extension for AUTOMATIC1111 Web UI**:
https://github.com/enlyth/sd-webui-riffusio... |
**Notebook:**
https://colab.research.google.com/gist/mdc202002/411d8077c3c5bd34d7c9bf244a1c240e/riffusion_music2music.ipynb
**
Huggingface Riffusion demo:**
https://huggingface.co/spaces/anzorq/riffusion-demo
(pm me if any new resources have to be added or there are any errors in current listings)
JL#1976: https://tec... |
JL#1976: Feel free to share the best stuff you get in #🤘︱share-riffs
Eclipstic#9066: i can only say this:
this is cool as hell
HD#1311: what's the song
dep#0002: https://www.youtube.com/watch?v=jezqbMVqcLk
dep#0002: I am messing with the script rn so I can train a whole model on him
dep#0002: another sample
dep#0002... |
dep#0002: I've been discussing it with another dev, and it could help to use "channels for 24bit amp"
dep#0002: power 0.75 https://cdn.discordapp.com/attachments/1053081177772261386/1053421209599082496/planet_girl_rebuild.wav
dep#0002: power 0.1 (earrape) https://cdn.discordapp.com/attachments/1053081177772261386/1053... |
『komorebi』#3903: oh ok
how to convert images to spectrogram
also i have to import 5 second stuff right? nothing longer?
IgnizHerz#2097: https://github.com/chavinlo/riffusion-manipulation
April#5244: I posted code that does this earlier. though it seems a link to something better was just posted?
April#5244: and yes, ha... |
dep#0002: I can help you train one if needed
dep#0002: he just logged off .-.
IgnizHerz#2097: sweet, I did this with my very rusty python code. Only thing is dealing with getting to the end and having like 2 seconds or something leftover; guess you'll have to add empty noise.
dep#0002: yeah empty noise should do
April#5244:... |
dep#0002: if you need help I can join the vc and explain it to you in depth
oliveoil2222#2222: and thats all in command line right
dep#0002: yes
dep#0002: (unfortunately)
oliveoil2222#2222: yup
hayk#0058: I haven't tried it beyond a bit in the automatic1111 ui. Sometimes the longer results got repetitive, but there's a lot t... |
dep#0002: to generate
dep#0002: the images
dep#0002: in the webui
dep#0002: @April
April#5244: whatever you want it to generate?
dep#0002: You used Automatic's webUI to generate spectrograms, right?
April#5244: yeah
IgnizHerz#2097: usually the prompt depends on what you intend to get the result as, jazz for jazz and suc... |
dep#0002: no "Spectrogram of Electro Pop" then
IgnizHerz#2097: no written rules, can run the same seed with different prompts to see what does best
April#5244: no. riffusion generates spectrograms by default
IgnizHerz#2097: but uh it generates without having to say it yeah
IgnizHerz#2097: I imagine this fancy interpolat... |
matteo101man#6162: Ah I see, thanks
matteo101man#6162: Shoot let me know if you figure it out
matteo101man#6162: they do have interpolating prompt scripts and seed travel but I wouldn't know if that would work or if you could somehow frankenstein that with the webui riffusion thing
April#5244: https://cdn.discordapp.c... |
dep#0002: soon https://cdn.discordapp.com/attachments/1053081177772261386/1053554055764512878/image.png
dep#0002: https://cdn.discordapp.com/attachments/1053081177772261386/1053554170227064893/Snail_s_House____waiting_for_you_in_snowing_city._chunk_2.png
April#5244: testing seed travel https://cdn.discordapp.com/attac... |
wav3 = wav_bytes_from_spectrogram_image(img)
sound3 = pydub.AudioSegment.from_wav(wav3[0])
img = Image.open("test/00004.png")
wav4 = wav_bytes_from_spectrogram_image(img)
sound4 = pydub.AudioSegment.from_wav(wav4[0])
sound = sound0+sound1+sound2+sound3+sound4
mp3_bytes = io.BytesIO()
sound.export(mp3_bytes, format="mp... |
April#5244: the issue is there's no smoothing between clips :\
April#5244: I imagine a higher amount of interpolating steps would smooth it out a bit, but it'd also make the song longer...
matteo101man#6162: what gpu do you have
April#5244: my gpu is bad lol. 1660ti
matteo101man#6162: dang
April#5244: I can do like one... |
April#5244: warped sound is just due to low txt2img sampling steps I think
April#5244: I'm running 20 steps each since it's fast though it kinda gets the best results with like 70+
April#5244: also this interpolation script is nice since it stitches the spectrogram images together automatically so I can just throw that... |
April#5244: tbh I don't know what settings riffusion uses by default 🤷♀️
matteo101man#6162: nah it's just interesting never really messed with the scripts i was talking about, just kinda downloaded them and read about their function
matteo101man#6162: ah isee why
matteo101man#6162: how do i execute this code within a... |
parser = argparse.ArgumentParser()
parser.add_argument("filename", help="the file to process")
args = parser.parse_args()
# The filename is stored in the `filename` attribute of the `args` object
filename = args.filename
img = Image.open(filename)
wav = wav_bytes_from_spectrogram_image(img)
write_bytesio_to_file("musi... |
April#5244: here I fixed the commenting https://cdn.discordapp.com/attachments/1053081177772261386/1053559902334881823/audio.py
matteo101man#6162: uh, how do i define specific file names like if i have a folder of things named say "image (1)"
April#5244: just run this and have your files in the "test" folder next to it... |
April#5244: yeah the script I posted is a python script 🙂
matteo101man#6162: right
matteo101man#6162: so say
matteo101man#6162: i took that code verbatim
matteo101man#6162: and put it into notepad++ and saved it as a .py and ran it in a folder with images from 00000 to 00019 and a test folder in that same folder
April... |
April#5244: make sure it's either all spaces or all tabs and not a mix lol
matteo101man#6162: yeah fixed tht
matteo101man#6162: that*
matteo101man#6162: now no module named numpy
matteo101man#6162: ill look into that
April#5244: pip install numpy 🙂
matteo101man#6162: yep
matteo101man#6162: and for PIL
matteo101man#616... |
import numpy as np
from PIL import Image
import pydub
from scipy.io import wavfile
import torch
import torchaudio
import argparse
```
April#5244: are the imports for the file
matteo101man#6162: halfway there
April#5244: torch, argparse, scipy, pydub, pillow, numpy
April#5244: I think io and typing are python defaults?... |
matteo101man#6162: quite a lot of things
April#5244: this is why I do python stuff manually. I had torch installed already thanks to the whole auto webui stuff
April#5244: then again I actually code in python 🤷♀️
matteo101man#6162: yeah i'm definitely missing something
April#5244: https://pytorch.org/get-started/loca... |
matteo101man#6162: did not help lol
oliveoil2222#2222: would love to see if dall-e 2 style clip variations could happen but I guess spectrogram2spectrogram is the closest one can get for now
matteo101man#6162: i get frozen solve when trying to install cuda on anaconda then it gets stuck on solving environment
matteo101... |
db0798#7460: Thanks, I'll try this later
matteo101man#6162: @April after taking all the time just to get that darn thing working i see what you mean
matteo101man#6162: it doesnt really interpolate all that well
matteo101man#6162: also increasing to max steps doesn't really make it much better the skipping is still obvi... |
IDDQD#9118: Yeah, thanks anyhow !
IDDQD#9118: can't wait for this to get optimized n stuff. Would like to think that there's a lot that can be done on that front
Jay#0152: https://colab.research.google.com/gist/mdc202002/411d8077c3c5bd34d7c9bf244a1c240e/riffusion_music2music.ipynb
Jay#0152: finally, enjoy everyone!
Jay#... |
dep#0002: I'm not a musician but I do tinker a lot with the models and tech
At this point you can do txt2img to attempt to generate some new seeds
You can also use img2img to generate other variants of the same "beat"(?) or tempo
dep#0002: I posted some on #🤘︱share-riffs
dep#0002: Using songs that you like, convert ... |
dep#0002: never used pycharm btw
Nikuson#6709: I get absolutely nothing. just the terminal outputs “python” in response and nothing happens
dep#0002: have you tried running it directly on bash or cli
dep#0002: because on every instance I have ran it I never got that error
Nikuson#6709: just nothing happens, I don't th... |
dep#0002: and try running it
dep#0002: tell me what you get
Nikuson#6709: D:\Python\StableVoice\venv\lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarnin... |
About to call spectogram_image_from_file function
Loading Audio File
Process finished with exit code 1
dep#0002: you dont have ffmpeg installed
dep#0002: thats why
dep#0002: in what OS are you windows or linux
Nikuson#6709: Win
dep#0002: https://www.wikihow.com/Install-FFmpeg-on-Windows
dep#0002: a bit hard
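pydub just looks for `ffmpeg` on the PATH, so a quick way to check whether the install worked (before re-running the script) is:

```python
import shutil

def ffmpeg_available() -> bool:
    # pydub resolves "ffmpeg" via the PATH, same as this check
    return shutil.which("ffmpeg") is not None
```

If it returns False after installing, you can also point pydub at the binary directly with `pydub.AudioSegment.converter = r"C:\path\to\ffmpeg.exe"` (the path here is only an example).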
ryan_helsi... |
Nikuson#6709: nothing has changed after installation
Nikuson#6709: although it looks like windows doesn't see ffmpeg. Strange, I did everything according to the instructions
Nikuson#6709: I fully installed it and the operating system even began to see ffmpeg, but the error is the same
dep#0002: At this point I suggest ... |
IgnizHerz#2097: got an example of what your spectrogram looked like?
April#5244: example outpainting spectro + converted song https://cdn.discordapp.com/attachments/1053081177772261386/1053814603445960824/music.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1053814603890561054/test.png
April#5244: this ... |
Nikuson#6709: 😢
IgnizHerz#2097: like uh with the built-in clip interrogator?
April#5244: yeah
April#5244: ? how I started it?
IgnizHerz#2097: `a black and white photo of a square area with a pattern on it` not sure if it'd help to be fair
IgnizHerz#2097: something tells me every graph looks this way
April#5244: 🤷♀️
... |
IgnizHerz#2097: I'd just like to keep the effective "style transfer" halfway consistent
IgnizHerz#2097: on the stuff I've been doing
IgnizHerz#2097: between two chunks of 5 seconds one will be quiet and then the next starts much louder
IgnizHerz#2097: harder to fix than popping or silence
matteo101man#6162: Ah maybe I’... |
db0798#7460: I think a 5 second clip could be a seed for making a loop. In the output of OpenAI Jukebox, there are usually some 5 to 10 second bits that sound great when they are looped but instead of looping, OpenAI Jukebox moves on to something unrelated, and the incoherence makes the output sound bad. So just loopin... |
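Looping a short clip cleanly is mostly about avoiding clicks at the seam; a minimal numpy sketch (my own, not from OpenAI Jukebox or riffusion) crossfades a few samples at each join:

```python
import numpy as np

def loop_with_crossfade(samples: np.ndarray, times: int, fade: int = 1000) -> np.ndarray:
    # repeat a clip, crossfading `fade` samples at each seam to avoid clicks
    ramp = np.linspace(0.0, 1.0, fade)
    out = samples.astype(np.float64)
    for _ in range(times - 1):
        nxt = samples.astype(np.float64)
        # fade the tail of what we have into the head of the next repeat
        out[-fade:] = out[-fade:] * ramp[::-1] + nxt[:fade] * ramp
        out = np.concatenate([out, nxt[fade:]])
    return out
```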
IgnizHerz#2097: same way with images
Nikuson#6709: no, I really like them. At least until I ran them and there were no problems 😅
IgnizHerz#2097: its how outpaint on images can keep a general idea and even add to them
IgnizHerz#2097: I mean you can just keep outpainting couldn't you
April#5244: I showed some outpaint... |
IgnizHerz#2097: praise the open sourcing
April#5244: I like my anonymity
April#5244: the code I posted is mostly ripped from riffusion and chatgpt anyway 🤷♀️
April#5244: I'd formally release it but it's a mess lol
April#5244: normally I try to keep proper releases to things that are actually nice 😂
Nikuson#6709: ok,... |
April#5244: with regular stable diffusion it's using the laion dataset and clip which has damn near everything you'd want. but what does riffusion have?
Jay#0152: yes you're correct.
April#5244: it's entirely possible to keep outpainting. however you can't guarantee consistency. outpainting only makes sure the connecti... |
April#5244: 18432x512
April#5244: no that's still wrong
April#5244: i'm dumb
April#5244: actually....
IgnizHerz#2097: which is applicable to music as it is to images. For both you'd want something that consists of new material but is not just random nonsense
April#5244: yes that's correct.
5s=512
180s/5s = 36
36*512 = ... |
April#5244: tldr: sticking the whole dang song into sd is unworkable
April#5244: even though that's what you'd need to do to get song structure
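Spelling out April's arithmetic: at roughly 5 seconds of audio per 512 pixels of width, a 3-minute song needs:

```python
# rough numbers from the thread: ~5 seconds of audio per 512 px of width
SECONDS_PER_512PX = 5
SONG_SECONDS = 180  # a 3-minute song

chunks = SONG_SECONDS // SECONDS_PER_512PX  # 36 five-second chunks
width_px = chunks * 512                     # full-song spectrogram width
print(chunks, width_px)  # 36 18432
```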
dep#0002: Not really, I mean sure for SD 1.5, but SD 2.0 works at 768, and we already have aspect ratio trainers
April#5244: I suppose if you clip into like song sections like vers... |
April#5244: 512 height is fine
April#5244: it's just width we need
Jay#0152: oh my god it kind of worked..... with default settings
dep#0002: About the loss, lopho and I were talking about inserting more data on the other 2 channels
Currently it uses 1 channel (B&W) and replicates across the 2 other channels
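What "replicates across the 2 other channels" means in numpy terms (shapes are illustrative); the proposal above is to carry extra amplitude data in those two channels instead of copies:

```python
import numpy as np

# a single-channel (grayscale) spectrogram, as riffusion produces
gray = np.zeros((512, 512), dtype=np.uint8)

# current behavior per the chat: the same channel copied into R, G and B
rgb = np.repeat(gray[:, :, None], 3, axis=2)
print(rgb.shape)  # (512, 512, 3)
```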
dep#0002:... |
dep#0002: Anyways
.....
April#5244: yeah so we don't need like 2048x2048 images, but rather 2048x512
dep#0002: It exists
April#5244: oh?
dep#0002: It's called bucketing
dep#0002: I just said it.....
April#5244: doing that would definitely help
April#5244: but still I think it gets expensive even past 2048x512
April#524... |
db0798#7460: I think just generating loops with Stable Diffusion and then stitching them together with some post-processing script would work better than trying to outpaint a whole song
April#5244: 5s loop isn't really that interesting to listen to though lol
April#5244: okay seems my laptop *is* able to gen a 2048x512... |
Nikuson#6709: and in what variable does the untransformed spectrogram lie here?
April#5244: ?
db0798#7460: Yes, generating a song structure the way I described is definitely more primitive compared to getting a neural network to produce a song structure. But the neural network method seems to be more difficult to do
Ap... |
a_robot_kicker#7014: another change I ended up needing to do was converting the spectrogram image to float64, otherwise it would always overflow and produce nans https://cdn.discordapp.com/attachments/1053081177772261386/1053875449765318737/image.png
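A minimal repro of the overflow class being described here: uint8 arithmetic wraps around silently, while casting to float64 first keeps the true values (this isn't the exact riffusion math, just the failure mode):

```python
import numpy as np

pixels = np.array([250, 200], dtype=np.uint8)

wrapped = pixels + np.uint8(10)        # uint8 wraps around: 250 + 10 -> 4
safe = pixels.astype(np.float64) + 10  # cast first: 250 + 10 -> 260.0
```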
a_robot_kicker#7014: but, now I'm getting output spectrograms and wav... |
April#5244: https://discord.com/channels/1053034685590143047/1053081177772261386/1053559417808900096
https://discord.com/channels/1053034685590143047/1053081177772261386/1053176008687230978
April#5244: I'm sure the riffusion-manipulation one will work fine too
matteo101man#6162: oh no
April#5244: or you can just grab a... |
noop_noob#0479: @hayk @seth Sorry for the ping. May I know what the dataset was? Or if not, maybe at least what the text in the dataset looks like? I think that maybe knowing what the data looks like could lead to better prompts.
a_robot_kicker#7014: Yeah I can't find this info anywhere and it's critical to understand ... |
Nikuson#6709: from constant problems, I can only come to the conclusion that pycharm is far from the best IDE
Twee#2335: damn this app is nuts
dep#0002: https://developer.spotify.com/documentation/web-api/reference/#/operations/get-audio-features
dep#0002: Might be really useful to use this for future finetuning
dep#00... |
Twee#2335: and im not talking about mere meta genres like pop, rock, jazz, hip hop, etc
Twee#2335: im talking about more specific subgenres and scenes
Sheppy#4289: that's important too
Twee#2335: like if i added "radiohead but vaporwave"
Sheppy#4289: lol
a_robot_kicker#7014: working on a simple tkinter local GUI, will ... |
IDDQD#9118: Neither does it do black metal :(((
XIVV#9579: or hardcore punk :((((
IDDQD#9118: Looking forward to the future iterations of this riffusion. Also would gladly contribute if anyhow possible at some point (most likely via labelling etc. since I don't possess programming prowess). This is immense.
IDDQD#9118: "a... |
*- **BEFORE:** (Original_Riffusion_Output_Sound)*
*- **AFTER:** (Mixed_Mastered_Sound)*
🙂 https://cdn.discordapp.com/attachments/1053081177772261386/1054429627482910832/Original_Riffusion_Output_Sound.wav,https://cdn.discordapp.com/attachments/1053081177772261386/1054429627810062436/Mixed_Mastered_Sound.wa... |
│7: python setup.py install │
│8: conda install ffmpeg │
│9: python whatsthis.py |```
pnuts#1013: `whatsthis.py` is the first sample in the repo
```
import os
import glob
from pydub import AudioSegment
video_dir = './samples' # Path where the ... |
AgentA1cr#8430: does Riffusion support negative prompt weights?
AgentA1cr#8430: Also, loving what this model can do. However, it seems to me that, given enough time, it will slowly (or sometimes not-so-slowly) drift away from the prompt and start doing its own thing, with a strong preference for percussion and piano.
h... |
Nikuson#6709: "Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work"
Nikuson#6709: I installed pydub via pip, but the audio backend for it is ffmpeg and I installed it according to the guide from wikiHow
a_robot_kicker#7014: alrighty, here's my fork that has a simple local tkinter gui that can read ... |
db0798#7460: It should be possible to use chavinlo's scripts from https://github.com/chavinlo/riffusion-manipulation to do conversions between audio and spectrograms, and to run Dreambooth to add new spectrograms to the Riffusion model
cravinadventure#7884: Amazing! Thank you for sharing. 🙂
nullerror#1387: thank you d... |
dep#0002: ok
nullerror#1387: https://cdn.discordapp.com/attachments/1053081177772261386/1054509685790744627/image.png
dep#0002: you mean if the repo uses the griffin-lim thing to reconstruct the audio?
nullerror#1387: https://cdn.discordapp.com/attachments/1053081177772261386/1054509803369668618/image.png
nullerror#1... |
nullerror#1387: how many images/training steps are recommended (if known)
db0798#7460: My first Dreambooth test run just finished. I used 53 5 second pieces of a chiptune for training, used 'techno' as the class prompt. The output sounded like a chiptune already after 750 steps. I think it got to overfitting territory ... |
nullerror#1387: gonna go for like 200-400
db0798#7460: That's like in the Terminator movie where Skynet travels back in time and contributes to Skynet's code
a_robot_kicker#7014: That's awesome. I'd love to try a fine tuned chip tune model
db0798#7460: I'll try again later with a larger input dataset
a_robot_kicker#7014... |
denny#1553: yeah it's super fast!
denny#1553: I've been impressed
nullerror#1387: finetuned my model but ive run into the issue of idk how to run it now lmao
nullerror#1387: i have an amd gpu so i dont think the webapp will work for me. is there any other way of running riffusion with a custom model? maybe a colab?
nul... |
db0798#7460: There are smaller versions of the Riffusion model that someone linked to on Reddit: https://www.reddit.com/r/riffusion/comments/znbo75/how_to_load_model_to_automatic1111/ . I was using the 4Gb one for training. I don't actually know what the difference is between these versions
db0798#7460: Here's a random... |
db0798#7460: For spectrum to audio I used the default settings, except I reduced maxvol from 100 to 50 because otherwise the audio started clipping
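The clipping db0798 describes is int16 saturation: WAV samples live in [-32768, 32767], so anything scaled past that gets flattened. A toy illustration:

```python
import numpy as np

# a sine wave scaled hotter than int16 can represent
samples = np.sin(np.linspace(0, 2 * np.pi, 100)) * 40000
clipped = np.clip(samples, -32768, 32767).astype(np.int16)
# the peaks are flattened at the int16 limits -> audible distortion
```

Halving maxvol keeps the peaks inside the representable range, which is why it removed the distortion.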
nullerror#1387: gotcha
nullerror#1387: i tried running spec to audio but again amd gpu so i couldnt do it
db0798#7460: It would be handy if there was a Colab version of tha... |
db0798#7460: I don't know what those images are for exactly but I think replacing the image files in that directory won't do anything unless you retrain the whole Riffusion model from scratch in the way the people who created that model did
db0798#7460: I think textual inversion might also work in place of Dreambooth b... |
JL#1976: https://arstechnica.com/information-technology/2022/12/riffusions-ai-generates-music-from-text-using-visual-sonograms/ Ars Technica article
IDDQD#9118: yes, indeed
Nikuson#6709: for one script to convert audio to image spectrogram
XIVV#9579: how do i turn that off
Edenoide#0166: Hi! I'm a windows user and I've... |
Edenoide#0166: wow
Edenoide#0166: maybe it's something wrong with the audio. I've been generating the loops with Audacity (free sound software):
Edenoide#0166: Just drag and drop a sound on it. Then select 4 beats for making a loop and delete the rest (In case you are generating 'four-to-the-floor' electronic music). D... |
Edenoide#0166: How did you avoid the 'clipping' artifacts in the second .wav?
Nikuson#6709: don't know, i just cut the audio through this service for even 5 seconds: https://mp3cut.net/
Edenoide#0166: I think riffusion only works with loops of 5.12 seconds (maybe I'm wrong). This means if you are not training your mode... |
nullerror#1387: ok checked the code can confirm it is 44.1khz
nullerror#1387: scared me for a sec
Haycoat#4808: Should I start a list of artists the model currently recognizes?
Haycoat#4808: Because there's a few that are very prominent when generating with their name
a_robot_kicker#7014: Yeah 44.1 kHz
a_robot_kicker#7... |
Twee#2335: i should make an ai-generated lo-fi hip hop livestream
Twee#2335: nobody will tell the difference
nullerror#1387: uploaded roughly 1400pics
nullerror#1387: haha twee that was what i was gonna go for here in a sec
nullerror#1387: endless lofi beats
Twee#2335: i mostly wanna do it as a critique
Twee#2335: of h... |
nullerror#1387: yeye
Semper#0669: Oh I see! Would you mind share the google collab link you used?
Twee#2335: most of my ai ideas are mostly satirical critiques of lack of creativity within culture
nullerror#1387: sure thing
Semper#0669: I am interested in that question
Semper#0669: As well
nullerror#1387: it comes with... |
pnuts#1013: install it from the official repo and run the web-app locally? at least that way you'll get continuous playback
Twee#2335: tried and ran into a lot of headaches lol
pnuts#1013: oh 🙂
Twee#2335: also storage is an issue
Twee#2335: all these models, man
Twee#2335: they eat ur hard drive up
Semper#0669: Ahah y... |
nullerror#1387: then use that github linked somewhere above called like manipulation tools for riffusion or smth to get audiotoimg for training
pnuts#1013: sure, I don't recall running into any major issues. I've got it working on 2 machines. I'm sure we can work it out
nullerror#1387: twee do u have an amd graphics ca... |
pnuts#1013: https://lambdalabs.com/ seems to be one of the cheaper options
pnuts#1013: make sure you have node/npm installed, then run `npm install` from inside the folder
Twee#2335: which folder though
Twee#2335: the node folder?
Twee#2335: i wish that was specified tbh
pnuts#1013: the git repo you cloned
Twee#2335: w... |
pnuts#1013: if it's running elsewhere, add the correct IP
nullerror#1387: thanks pnuts i’ve been looking at that and vast ai
nullerror#1387: i’ll extend my search
Twee#2335: i should have known i should have git cloned, im just still getting used to "developer unclear instructions to layman user" syndrome
Twee#2335: do u ... |
Twee#2335: oh lmao
Twee#2335: whats the checkpoint download for then
pnuts#1013: more fine-tuning perhaps? I'm pretty confident I didn't download it on the 2nd install I did.
Twee#2335: also
Twee#2335: wasnt this suppose to be in the inference folder
Twee#2335: which is a separate download
pnuts#1013: also it's 14GB I ... |
Haycoat#4808: Like if you have a spectrogram of your voice, you can convert it to the style of Avicii EDM with img2img implementation
Twee#2335: https://cdn.discordapp.com/attachments/1053081177772261386/1054806218834718791/Screenshot_2022-12-20_at_12.03.15_PM.png
pnuts#1013: you've launched the inference server too?
... |
pnuts#1013: you ran 3 commands at once
Twee#2335: i mean that usually tends to work lol
pnuts#1013: ```conda create --name riffusion-inference python=3.9
conda activate riffusion-inference
python -m pip install -r requirements.txt```
Twee#2335: one command, then the other, then the other
Haycoat#4808: What if we used t... |
a_robot_kicker#7014: In my case that indicated needing to install torch
a_robot_kicker#7014: Specifically torch audio and cuda
pnuts#1013: `pip install --no-cache-dir --ignore-installed --force-reinstall --no-warn-conflicts torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116`
Haycoat#4... |
python3 file2img.py -i INPUT_AUDIO.wav -o OUTPUT_FOLDER
But using /foldername/*.wav doesn’t work
Twee#2335: i can add spectrograms into the web app?
pnuts#1013: yes, there's a seed image folder
pnuts#1013: <https://github.com/riffusion/riffusion-inference/tree/main/seed_images>
Twee#2335: once i add a seed image, how... |
https://github.com/nikuson/trimmed
Nikuson#6709: ChatGPT generated
LAIONardo#4462: Thank you!
nullerror#1387: doesnt the riffusion manipulation thing already do this for spectrograms?
nullerror#1387: unless this is meant for smth else
Philpax#0001: hey there! apologies if this has already been asked, but is there any i... |
Edenoide#0166: Is this Linux? Has anyone been able to make it run on Windows?
denny#1553: The inference server runs fine on windows through conda
denny#1553: Haven't tried the front end but I suspect it's fine too
Edenoide#0166: through conda you say? I'm gonna give it a try then
Edenoide#0166: I had a lot of prob... |
db0798#7460: It looks more like torch is installed but Cuda isn't
Meatfucker#1381: pytorch website has a little command configuration thing on it to build you a command
Meatfucker#1381: https://pytorch.org/get-started/locally/
db0798#7460: If I remember it right, on my computer I first had to install Cuda from NVIDIA w... |
Nikuson#6709: it seems to me that only the sampler is trained in riffusion, which gives such poor quality
matteo101man#6162: Anyone know of any local mashup AIs?
nullerror#1387: rave dot dj
nullerror#1387: been around a while its okay
hayk#0058: 🤘 Hey riffusers! 🤘 @here
@seth and I have been absolutely blown away b... |
+ Attached is an awesome sample created by producer Jamison Baken incorporating outputs from Riffusion.
If you’re a software eng or musician interested in being more directly involved, feel free to send us a DM. And everyone, thanks for being here! https://cdn.discordapp.com/attachments/1053081177772261386/10549103689... |
- set up a new “***Competitions Channel***” which includes:
- “***Weekly Top 10 Leaderboard***”
- Top liked posts in 1 week. Ideally you should keep track of the weekly leaderboards so anyone could go back in time to look and see who won on any given week, in the past.
- “***All-Time Top 10 Leaderboard***”
- Top liked ... |