De-censored, tuned, and tuned again via Unsloth using custom in-house datasets and methods:
DavidAU/gemma-4-E4B-it-The-DECKARD-Expresso-Universe-HERETIC-UNCENSORED-Thinking
Hi David! Any chance I could persuade you to make a Heretic Uncensored character-centric/roleplaying variant?
I'm looking for a model that will stay in character and not fight about having its thinking output turned off. The goal is to use it both as a chatbot and as an aid for creative writing: bouncing ideas and story context off the persona to see how it reacts to knowledge and lore, plus prompts to generate dialogue.
I'm hopeful for these Gemma4 models, and really enjoy your work. Thanks!
Maybe in the future; at the moment I'm still learning/addressing quirks with these new Gemmas.
Google released three different arch structures here: "E", "MOE", and 31B dense.
There are also plans to create larger Gemma 4s, which may work better for specific applications, or may simply work better period.
These are in the plans for next week.
Yet it's completely incapable of keeping characters straight in RP, constantly switching which one is "you" and misgendering in second-person RP. I'll stick with 26B.