Qwen3-Max-Thinking Drops: 36T Tokens | by SilasYee | 2 comments on Hacker News

Alibaba has officially launched Qwen3-Max-Thinking, a trillion-parameter MoE flagship LLM pretrained on 36T tokens, double the corpus of Qwen 2.5. It already matches or outperforms top-tier models such as GPT-5.2-Thinking, Claude-Opus-4.5, and Gemini 3 Pro across 19 authoritative benchmarks. Two core technical breakthroughs set it apart.

First, Adaptive Tool Calling: no manual prompts are needed; the model autonomously invokes search engines, memory tools, and code interpreters based on task demands. This reduces hallucinations and improves real-time problem-solving. For instance, coding tasks trigger automatic error-correction loops, while research tasks combine search with context synthesis.

Second, Test-Time Scaling (TTS): it outperforms standard parallel sampling by refining reasoning through iterative insights, with measurable gains on key benchmarks. GPQA rose from 90.3 to 92.8, Li...
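To make the test-time scaling contrast concrete, here is a minimal sketch of the control-flow difference between standard parallel sampling (best-of-N) and an iterative-refinement loop. This is a toy illustration, not Qwen's actual implementation: `toy_model` is a hypothetical stand-in for an LLM call that returns an answer plus a self-estimated score.

```python
import random

def toy_model(prompt: str, seed: int) -> tuple[str, float]:
    """Hypothetical stand-in for an LLM call: returns an answer and a score."""
    rng = random.Random(hash((prompt, seed)) % (2**32))
    return f"answer-{seed}", rng.random()

def parallel_sampling(prompt: str, n: int = 8) -> tuple[str, float]:
    """Best-of-N: draw n independent samples, keep the highest-scoring one.
    Each sample starts from the same prompt; no sample sees the others."""
    candidates = [toy_model(prompt, seed) for seed in range(n)]
    return max(candidates, key=lambda c: c[1])

def iterative_refinement(prompt: str, rounds: int = 8) -> tuple[str, float]:
    """Sequential scaling: each round feeds the best answer so far back into
    the prompt, so later samples can build on earlier insights instead of
    starting from scratch."""
    best = toy_model(prompt, 0)
    for r in range(1, rounds):
        refined_prompt = f"{prompt}\nPrevious best: {best[0]} ({best[1]:.2f})"
        candidate = toy_model(refined_prompt, r)
        if candidate[1] > best[1]:
            best = candidate
    return best
```

Under the same sampling budget, the refinement loop trades the independence of parallel sampling for the chance to reuse partial progress, which is the intuition behind the reported benchmark gains.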