This is definitely the type that grants wishes.
It’s just gibberish. I wrote the image description; the actual prompt is under the “full generation parameters” spoiler.
But the people making money off of all of that are mad now, hence this article.
You can’t be sued over a style, and you can’t copyright one. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.
https://www.imdb.com/title/tt16369708/
https://www.imdb.com/title/tt15054592/
The dream is dead.
This doesn’t mean you can misrepresent facts like this though. The line I quoted is misinformation, and you don’t know what you’re talking about. I’m not trying to sound so aggressive, but it’s the only way I can phrase it.
Generating an AI voice to speak the lines increases that energy cost exponentially.
TTS models are tiny in comparison to LLMs. How does this track? The biggest I could find was Orpheus-TTS, which comes in 3B/1B/400M/150M parameter sizes. And they are not using a 600-billion-parameter LLM to generate the text for Vader’s responses; that would likely be way too big. After generating the text, the speech step isn’t even a drop in the bucket.
You need to include parameter counts in your calculations. A lot of these assumptions are so wrong it borders on misinformation.
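To make that concrete, here’s a minimal back-of-envelope sketch using the common ~2 × parameters FLOPs-per-token rule of thumb for a dense transformer forward pass. The model sizes listed are illustrative assumptions, not measurements of whatever the Vader bot actually runs:

```python
# Back-of-envelope: how parameter count drives per-token compute.
# Rule of thumb for a dense transformer: forward pass ~ 2 * params FLOPs per token.
# All model sizes below are illustrative assumptions, not measurements.

MODELS = {
    "150M TTS": 150e6,
    "3B TTS":   3e9,     # largest Orpheus-TTS size
    "8B LLM":   8e9,
    "70B LLM":  70e9,
    "600B LLM": 600e9,   # the size assumed in the claim I'm responding to
}

def flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs for one generated token."""
    return 2 * params

for name, params in MODELS.items():
    print(f"{name:>9}: {flops_per_token(params):.1e} FLOPs/token")
```

The point is that per-token cost scales directly with parameter count, so comparing a 150M–3B speech model against a multi-billion-parameter text model without stating the sizes tells you nothing about the total energy.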
There are no generation parameters for this image, so we don’t actually know what the prompt was. I was all on my own for the title and description.
That’s my fault. I obviously don’t know the difference between a bison and a buffalo, and I wrote the title and image description.
30B means the model has 30 billion parameters, which is basically how big it is. MoE means it’s a Mixture of Experts. A Mixture of Experts model is a team of smaller expert networks with a router that picks a few of them for each token. In an MoE model, the 30B is the total parameter count across the whole team, but only a fraction of it runs at once. For example, Qwen3-30B-A3B spreads its ~30B total parameters across 128 experts, and the router activates 8 of them per token, so only about 3B parameters (the “A3B”) are actually used for each token.
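If it helps, here’s a toy sketch of the routing idea in plain Python/NumPy. It has nothing to do with Qwen’s actual implementation, and the numbers are made up small so it stays readable; it just shows a router scoring experts and only the top-k of them running:

```python
import numpy as np

# Toy Mixture-of-Experts layer: illustrative only, not Qwen's real code.
rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # small numbers so the example stays readable
TOP_K = 2         # experts that actually run per token
DIM = 16          # hidden size

# Each "expert" is just a tiny feed-forward weight matrix here.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = x @ router_w                 # one router score per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the chosen experts only
    # Only the chosen experts compute anything; the rest sit idle,
    # which is why "active" parameters are much smaller than the total.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
print(moe_forward(token).shape)   # (16,) -- same shape as the input
```

Total parameter count scales with the number of experts, but per-token compute scales with the top-k, which is the whole trick.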
My bad, I used the wrong GitHub link. It should be fixed now.
I’ve gotten deepseek-r1-0528-qwen3-8b to answer correctly once, but not consistently. Abliterated DeepSeek models I’ve used in the past have been able to pass the test.
It can’t answer the Tiananmen Square question. Gonna have to wait for an abliterated one.
It’s probably pretty crummy for anything outside of drum loops, foley, instrument riffs, and ambient textures, since those were the only examples they provided on the blog.
Comfyanonymous of ComfyUI.
Are you the comfy?
I don’t know.
Whoops. Let me fix that. There weren’t any generation parameters provided for this one.
Hail Satan.
I can still hear that laugh.