Request for instructions to correctly convert a model to fp8 with appropriate scaling

#39
by Pbgsgxyx - opened

Hi,

I am using some Qwen finetunes, and they all have the same issue as the ComfyUI-released Qwen model: the fp8 model was produced by directly downcasting the original fp32/fp16 weights rather than through a calibrated conversion with appropriate scaling, which results in a grid-like pattern when used with Qwen-Image-Lightning-4steps-V2.0.safetensors or Qwen-Image-Lightning-8steps-V2.0.safetensors.

Would it be possible to share the methodology or code used for the calibrated conversion with appropriate scaling? I would like to convert these finetunes properly so they can be used with your more recent Lightning LoRAs.
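
For reference, here is a minimal sketch of the general idea of scaled fp8 conversion, as opposed to a plain downcast. It assumes simple per-tensor absmax scaling into the float8_e4m3fn range and stores the scale alongside each weight; the key naming (`scale_weight`) and the choice of which tensors to convert are illustrative assumptions, not the exact convention your conversion uses, and a true calibrated conversion may additionally use activation statistics.

```python
# Sketch: per-tensor absmax-scaled fp8 conversion of a safetensors checkpoint.
# Assumptions: float8_e4m3fn target, ".weight"/".scale_weight" key naming.
import torch
from safetensors.torch import load_file, save_file

F8_MAX = 448.0  # largest finite value representable in float8_e4m3fn


def convert_to_scaled_fp8(in_path: str, out_path: str) -> None:
    state = load_file(in_path)
    out = {}
    for name, w in state.items():
        if name.endswith(".weight") and w.dtype in (
            torch.float32, torch.float16, torch.bfloat16
        ):
            w32 = w.to(torch.float32)
            # Per-tensor scale so the largest weight maps near the fp8 max,
            # instead of clipping/flushing values as a raw downcast would.
            amax = w32.abs().max().clamp(min=1e-12)
            scale = amax / F8_MAX
            out[name] = (w32 / scale).to(torch.float8_e4m3fn)
            out[name.replace(".weight", ".scale_weight")] = scale.to(torch.float32)
        else:
            out[name] = w  # leave biases, norms, etc. untouched
    save_file(out, out_path)
```

At load time the weights would then be dequantized as `weight.float() * scale_weight` (or the scale folded into the matmul), which is what preserves the dynamic range that a direct cast loses.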
