No reasoning?
Why does this not have reasoning while the base model does?
Hi,
The official base model (google/gemma-4-26B-A4B-it) and this model (llmfan46/gemma-4-26B-A4B-it-ultra-uncensored-heretic-GGUF) both have reasoning, but it is disabled by default on both, just like on the official base model. You can enable it if you want: in LM Studio, go to "Inference", scroll down, click on "Prompt Template", and add this at the very top:
{%- set enable_thinking = true -%}
That enables reasoning; to disable it again, simply remove the line.
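For intuition, this is the common pattern such a flag controls inside a chat template: when thinking is off, many templates prefill an empty thinking block so the model skips straight to the answer. The sketch below is hypothetical and is not the actual Gemma template; the only line you actually need to add in LM Studio is the single `set` line above.

```jinja
{#- Hypothetical sketch of the pattern, NOT the real Gemma template -#}
{%- if enable_thinking is not defined -%}
    {%- set enable_thinking = false -%}  {#- reasoning off by default -#}
{%- endif -%}
{%- if not enable_thinking -%}
    {#- Prefill an empty thinking block so the model answers directly -#}
    {{- "<think>\n\n</think>\n" -}}
{%- endif -%}
```

Adding `{%- set enable_thinking = true -%}` at the very top defines the variable before the default kicks in, so the empty-block prefill is skipped and the model is free to reason.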
You are probably thinking of lmstudio-community/gemma-4-26B-A4B-it-GGUF. That is a custom version made for LM Studio by lmstudio-community which adds an enable/disable reasoning toggle, but it is not the official version. The official version is google/gemma-4-26B-A4B-it, which does not have that toggle, and that's the one my heretic is based on. So to enable or disable reasoning, just add that single line at the very top as explained above.
You can easily (two minutes of work) add a reasoning toggle yourself, the same as the LM Studio model has. It will work for any model, heretic or base.
Basically, you create one file with Notepad named model.yaml.
Someone wrote up instructions here: https://www.reddit.com/r/LocalLLaMA/comments/1sc9s1x/tutorial_how_to_toggle_onoff_the_thinking_mode/
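For a rough idea of what that file contains, it's a small yaml that wraps the GGUF and exposes a thinking switch. The outline below is an assumption based on how LM Studio Hub yamls generally look; the key names and structure may differ, so follow the linked tutorial for the authoritative version.

```yaml
# Hypothetical outline only - the exact key names are in the linked tutorial.
model: youruser/gemma-heretic-toggle   # placeholder name for the virtual model entry
base: llmfan46/gemma-4-26B-A4B-it-ultra-uncensored-heretic-GGUF
customFields:
  - key: enableThinking                # assumed field name for the toggle
    displayName: Enable Thinking
    type: boolean
    defaultValue: false
    effects:
      - type: setJinjaVariable         # assumed: feeds the template flag
        variable: enable_thinking
```

The idea is that the toggle in the UI just sets the same `enable_thinking` template variable you would otherwise set by hand.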
Yeah, that won't work here because it's just a workaround; it's not integrated inside the GGUFs, so it's not something I can upload that lets people simply download the model and enable/disable reasoning.
The toggle you see in lmstudio-community releases is specific to their own releases; I literally cannot add it to my Hugging Face GGUF repos. The mechanism lmstudio-community uses isn't in their GGUF repos either; it lives in LM Studio's Hub, which is a separate system that community uploaders don't have access to. Even if I put a model.yaml in my Hugging Face repos, LM Studio won't read it from there; it only reads yamls from its own hub cache directory. The Reddit tutorial just instructs users to manually create the hub cache entries that LM Studio would normally create automatically for official Hub models.
If you want to enable/disable reasoning, just follow the instructions here: https://huggingface.co/llmfan46/gemma-4-26B-A4B-it-ultra-uncensored-heretic-GGUF/discussions/1#69dff308da6936595aa47469
Thanks for the response! I will try!
Too bad you can't set a custom <|channel>thought in LM Studio. Never mind, found it...
(Amazing btw. that you got all that from just one sentence!)
Cool.
I wrote it so the OP could set it up for themselves and get a reasoning toggle.
I didn't mean that you should add something to the GGUFs.
People should just know how easily they can do it themselves.
Edit:
The other part is: yes, you could create this file for that repo and put it on HF, with a note that people should copy the file into their Hub folder.
Oh sorry, I misunderstood; I thought you wanted me to implement it in my GGUFs, since you did not quote the OP in your reply.