Daichi and Pascal
It's been many moons since Gemma-3 released, and the world was blessed by it not being a total dud like Llama-4. I'm just here to dump two of my newest, warmest creations: a finetune and a merge of Gemma-3-12B.
First, I trained a text-completion LoRA on top of Gemma-3-12B-Instruct. The data for this was mostly light novels (yuri, romance, fantasy, and my personal favourite, I'm in Love with the Villainess) along with the Boba Fett novels. This became the base for Pascal-12B.
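For anyone curious what that first stage roughly looks like, here's a minimal sketch using PEFT + TRL. The dataset file, LoRA settings, and hyperparameters below are illustrative placeholders, not the exact recipe used for Pascal's base.

```python
# Sketch of the text-completion LoRA stage (illustrative settings only).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL file with a single "text" column of novel excerpts.
dataset = load_dataset("json", data_files="light_novels.jsonl", split="train")

peft_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="google/gemma-3-12b-it",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="pascal-base-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=2,
        learning_rate=2e-5,
        bf16=True,
        dataset_text_field="text",  # raw completion training, no chat template
    ),
)
trainer.train()
```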
So far I'd only taught the model to complete text. On top of that text-completion-trained base, I finetuned the model with new roleplay datasets: mostly books/light novels (again) converted into turns via Gemini-Flash, plus human roleplay data from RP Guild, Giant in the Playground, etc. This created Pascal-12B. Pascal is very good at SFW roleplaying and has nice, short & sweet prose with very little slop.
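The book-to-turns conversion step looked roughly like the sketch below, assuming the google-generativeai SDK and a "gemini-1.5-flash" model name (the post only says Gemini-Flash); the prompt, chunking, and filenames are all placeholders rather than the actual data pipeline.

```python
# Rough sketch of converting novel prose into roleplay turns with Gemini-Flash.
import json
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

PROMPT = (
    "Rewrite the following novel excerpt as a two-person roleplay log. "
    "Alternate turns, keep the original prose style, and return a JSON list "
    "of objects with 'role' and 'content' fields.\n\n"
)

def chunk_text(text, size=4000):
    # Naive fixed-size chunking; a real pipeline would split on chapters/scenes.
    for i in range(0, len(text), size):
        yield text[i:i + size]

def convert_book(path, out_path):
    with open(path, encoding="utf-8") as f:
        book = f.read()
    with open(out_path, "w", encoding="utf-8") as out:
        for chunk in chunk_text(book):
            resp = model.generate_content(PROMPT + chunk)
            # Store the raw response; real data prep would parse and validate it.
            out.write(json.dumps({"raw_turns": resp.text}) + "\n")

convert_book("novel.txt", "rp_turns.jsonl")  # hypothetical filenames
```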
A problem I noticed with the model was that it lacked specific kink/trope coverage, so I merged it with The-Omega-Directive-Gemma3-12B-v1.0, an NSFW-focused finetune of Gemma-3.
The resulting model, named Daichi, keeps the same short-style responses as Pascal while being good at specific NSFW scenarios.
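The post doesn't spell out the merge recipe, but conceptually it's a weight-space merge of two finetunes that share the same base. Here's a deliberately simplified linear-merge sketch; in practice a tool like mergekit handles this properly, and the repo IDs, alpha, and model class below are placeholders/assumptions.

```python
# Simplified linear weight merge between two finetunes of the same base model.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_a_id, model_b_id, alpha=0.5, out_dir="daichi-merge"):
    # Gemma-3 multimodal checkpoints may need Gemma3ForConditionalGeneration
    # instead of AutoModelForCausalLM, depending on how the finetunes were saved.
    a = AutoModelForCausalLM.from_pretrained(model_a_id, torch_dtype=torch.bfloat16)
    b = AutoModelForCausalLM.from_pretrained(model_b_id, torch_dtype=torch.bfloat16)
    sd_a, sd_b = a.state_dict(), b.state_dict()
    merged = {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}
    a.load_state_dict(merged)
    a.save_pretrained(out_dir)

# Placeholder repo IDs; point these at the actual Pascal and Omega-Directive checkpoints.
linear_merge("Delta-Vector/Pascal-12B", "The-Omega-Directive-Gemma3-12B-v1.0")
```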
The models can be found here, along with GGUF quants: https://huggingface.co/collections/Delta-Vector/daichi-and-pascal-67fb43d24300d7e608561305
[Please note that EXL2 will not work with Gemma-3 finetunes as of now due to RoPE issues. Please use vLLM or the llama.cpp server for inference and make sure you're up to date.]
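For a quick smoke test with vLLM's offline Python API (the repo ID below is a placeholder for whichever checkpoint in the collection you grab, and a recent vLLM build is needed for Gemma-3 support):

```python
# Minimal vLLM smoke test; requires a recent vLLM release with Gemma-3 support.
from vllm import LLM, SamplingParams

llm = LLM(model="Delta-Vector/Daichi-12B",  # placeholder repo ID
          dtype="bfloat16", max_model_len=8192)
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)

messages = [{"role": "user", "content": "Stay in character as a grumpy innkeeper and greet me."}]
out = llm.chat(messages, params)
print(out[0].outputs[0].text)
```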