I just shipped multi-model support in Mathify.
Until now, Mathify ran on a single model. You typed a prompt, hit enter, and whatever came out was “the” Mathify output. That was fine for getting something working — but it wasn’t great for actually understanding what the system was doing, or why some animations turned out better than others.
So I changed that.
You can now choose which AI model generates your animations.
Right now, Mathify supports:
GPT-5.2
Gemini 2.5 Pro
Grok 3
Claude Sonnet 4.0
You can also switch models mid-conversation and directly compare how each one interprets the same prompt.
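The fan-out itself is simple to sketch. The snippet below is a rough illustration only, not Mathify's actual code: the model identifiers and the `generate()` helper are hypothetical stand-ins for calls to each provider's API.

```python
# Hypothetical sketch: run one prompt against several models and collect
# each model's output for side-by-side comparison. Model IDs and generate()
# are illustrative placeholders, not Mathify's real API.

PROMPT = "Render something really cool and sciency."

MODELS = ["gpt", "gemini", "grok", "claude"]  # placeholder identifiers

def generate(model: str, prompt: str) -> str:
    """Stand-in for a call to the model provider's API."""
    return f"[{model}] scene script for: {prompt}"

def compare(prompt: str, models: list[str]) -> dict[str, str]:
    """Run the same prompt on every model, keyed by model name."""
    return {m: generate(m, prompt) for m in models}

results = compare(PROMPT, MODELS)
for model, script in results.items():
    print(model, "->", script)
```

Keeping the outputs keyed by model is what makes the side-by-side comparison in the next section trivial: one prompt in, a dictionary of differently-minded answers out.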
Same Prompt, Different Brains
In the video I ran the exact same prompt across all four models:
“Render something really cool and sciency.”
That’s a deliberately vague prompt. It doesn’t describe a formula, a theorem, or a scene layout. It just gives a direction.
What came back was interesting.
Each model “latched onto” a different idea:
One focused on clean mathematical structure.
Another leaned toward physical intuition.
One went straight into wave-particle duality.
Another emphasized equation-driven motion and dynamics.
They’re all technically capable of producing correct math — but they clearly have different instincts about what is worth visualizing and how it should look.
That matters more than I expected.
Why This Exists
Mathify isn’t just about getting a video file out of an LLM. It’s about exploring how different models think visually about math and physics.
When you only have one model, you don’t really see that. You just tweak prompts and hope the next run is better.
With multiple models, you can:
Compare scene structure
Compare explanation style
Compare layout decisions
Compare how “physical” vs. “symbolic” the animations feel
It turns animation generation into an experiment, not just a button you press.
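The mid-conversation switch is what makes this experimental loop cheap: the conversation history stays fixed while only the model answering changes. Here is a minimal sketch of that idea, with every name hypothetical rather than taken from Mathify's codebase.

```python
# Hypothetical sketch: one shared conversation history, with the backing
# model swappable at any point. Nothing here is Mathify's real code; the
# reply string stands in for an actual provider API call.

class Conversation:
    def __init__(self, model: str):
        self.model = model
        self.history: list[tuple[str, str]] = []  # (speaker, text) pairs

    def switch_model(self, model: str) -> None:
        """Change the backing model; the history is preserved untouched."""
        self.model = model

    def ask(self, prompt: str) -> str:
        self.history.append(("user", prompt))
        reply = f"[{self.model}] response to: {prompt}"  # stand-in API call
        self.history.append((self.model, reply))
        return reply

convo = Conversation("gemini")
convo.ask("Animate a sine wave.")
convo.switch_model("claude")
convo.ask("Now make it 3D.")  # same history, different model
```

Because the second model sees the same history as the first, any difference in the resulting animation comes from the model, not from the prompt drifting between runs.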
Where This Is Going
This is an early step toward something bigger.
Eventually, I want Mathify to support:
Fine-tuned models specifically trained on math animation
Open models from the community
Model variants that specialize in different fields (calculus, physics, linear algebra, etc.)
At that point, “which model did you use?” becomes as meaningful as “which prompt did you use?”
That’s more interesting to me than just adding another checkbox in the UI.
Try It
If you want to play with it, you can try it at:
Run the same prompt on different models. See which one you like.
You’ll probably start noticing differences you can’t unsee.
Let me know what you find.
