Claude's Cycles: When an AI Solved a Problem Knuth Couldn't
Created: 2026-03-25
TL;DR
Donald Knuth published a paper documenting how Claude Opus 4.6 independently solved an open combinatorial problem: decomposing a specific digraph with n^3 vertices into three directed Hamiltonian cycles, for all odd n. Knuth had been working on the problem for weeks. Claude solved it in about an hour, over 31 explorations, exhibiting a remarkably human-like research process of reformulating, hitting dead ends, and synthesizing structural insights. Knuth then rigorously proved the construction correct and enumerated all 760 valid decompositions of its type. This isn't benchmark performance; it's genuine mathematical discovery.
The Problem, and Why It's Hard
Knuth was working on directed Hamiltonian cycles for a future volume of The Art of Computer Programming when he hit a wall. The specific structure is a directed graph (digraph) built from modular arithmetic in three dimensions.
Take all triples (x, y, z) with 0 ≤ x, y, z < n. That gives you n^3 vertices. From each vertex, draw exactly three arcs: one that increments x, one that increments y, one that increments z (all mod n). Every vertex has exactly three outgoing arcs and three incoming arcs. The question: can you partition all 3n^3 arcs into exactly three directed Hamiltonian cycles, that is, cycles that each visit every vertex exactly once?
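In Python, the setup is a few lines (the helper name `arcs` is mine, not from Knuth's paper):

```python
from itertools import product
from collections import Counter

def arcs(n):
    """All arcs of the digraph: from each triple (x, y, z), bump one coordinate mod n."""
    out = []
    for v in product(range(n), repeat=3):
        for i in range(3):                      # i = which coordinate to increment
            w = list(v)
            w[i] = (w[i] + 1) % n
            out.append((v, tuple(w)))
    return out

n = 3
A = arcs(n)
assert len(A) == 3 * n ** 3                     # three outgoing arcs per vertex
indeg = Counter(w for _, w in A)
assert all(indeg[v] == 3 for v in product(range(n), repeat=3))  # three incoming arcs too
```

The in-degree check confirms the regularity that makes a three-cycle decomposition conceivable in the first place: every vertex has exactly three arcs in and three arcs out.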
This is a natural question in combinatorics, but it hides serious depth. Here's why it's hard:
The search space is astronomical. At each vertex, you need to assign which of the three arcs belongs to which cycle. That's a permutation of {0, 1, 2} at each of n^3 vertices, giving 6^(n^3) possible assignments. For n = 3 alone, that's 6^27, on the order of 10^21. Brute force is out of the question for any meaningful n.
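The arithmetic behind that count is easy to check:

```python
# 3! = 6 ways to match a vertex's three outgoing arcs to the three cycles,
# chosen independently at each of the n^3 vertices
n = 3
assignments = 6 ** (n ** 3)
assert assignments == 6 ** 27
assert 10 ** 21 < assignments < 10 ** 22        # already ~10^21 for the smallest odd case
```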
Local choices have global consequences. Assigning an arc to a cycle at one vertex constrains what happens at the next vertex, and the next, cascading across the entire graph. A Hamiltonian cycle must visit every vertex exactly once and return to the start. One wrong assignment anywhere breaks the whole structure. This is the hallmark of NP-hard territory: easy to verify, brutal to construct.
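The "easy to verify" half of that claim is literal: checking a claimed Hamiltonian cycle takes a few lines. A sketch with illustrative names, using the n = 2 cube and a binary Gray code as the demo:

```python
from itertools import product

def is_hamiltonian_cycle(cycle, n):
    """True iff `cycle` visits every triple exactly once and each step
    increments exactly one coordinate mod n."""
    if sorted(cycle) != sorted(product(range(n), repeat=3)):
        return False                            # wrong vertex set, or a repeat
    for v, w in zip(cycle, cycle[1:] + cycle[:1]):
        diffs = sorted((w[i] - v[i]) % n for i in range(3))
        if diffs != [0, 0, 1]:                  # must bump exactly one coordinate
            return False
    return True

# demo on n = 2: a Gray code walks the cube, changing one coordinate per step
gray = [(0,0,0), (0,0,1), (0,1,1), (0,1,0), (1,1,0), (1,1,1), (1,0,1), (1,0,0)]
assert is_hamiltonian_cycle(gray, 2)
assert not is_hamiltonian_cycle([gray[0]] * 8, 2)   # repeated vertices fail
```

Verification is linear in the number of vertices; it's construction that blows up.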
The problem connects to deep combinatorial theory. This digraph is a Cayley graph of Z_n × Z_n × Z_n with the three unit vectors as generators. Decomposing it into Hamiltonian cycles touches on questions about group structure, Gray codes, and the interplay between algebraic symmetry and Hamiltonian path existence, topics Knuth has been building toward across multiple volumes of TAOCP.
To visualize the fiber decomposition that proved key to solving it: the quotient map s(x, y, z) = (x + y + z) mod n partitions the vertices into n fibers. Every arc goes from fiber s to fiber (s + 1) mod n, giving the digraph a clean layered structure.
Each fiber contains n^2 vertices (for a given s, there are n^2 triples with x + y + z ≡ s (mod n)). A Hamiltonian cycle must visit every fiber exactly n^2 times, cycling through 0, 1, ..., n - 1 over and over. The challenge is choosing which arc to follow at each vertex so the cycle closes perfectly.
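Both fiber facts are mechanical to confirm, since bumping any coordinate adds exactly 1 to x + y + z:

```python
from itertools import product
from collections import Counter

n = 5
fiber = lambda v: sum(v) % n

# each of the n fibers contains exactly n^2 vertices
sizes = Counter(fiber(v) for v in product(range(n), repeat=3))
assert all(sizes[s] == n * n for s in range(n))

# every arc climbs one fiber: incrementing any coordinate adds 1 to the sum mod n
for v in product(range(n), repeat=3):
    for i in range(3):
        w = list(v)
        w[i] = (w[i] + 1) % n
        assert fiber(w) == (fiber(v) + 1) % n
```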
Knuth had been stuck on this for weeks. His friend Filip Stappers posed the problem to Claude using Knuth's exact mathematical formulation.
The Prompting Strategy That Made It Work
Stappers didn't just paste the problem and wait. He imposed a strict self-documentation discipline: after every exploration script Claude ran, it had to immediately update a plan.md file before doing anything else. No exceptions, no starting the next exploration until the previous one was documented.
This matters more than it sounds. Knuth reports the session wasn't smooth. Stappers had to restart Claude when it hit random errors (losing some earlier results), and had to "remind Claude again and again that it was supposed to document its progress carefully." The prompting strategy was active management: (1) give the exact problem statement, (2) enforce a running log after every attempt, and (3) intervene when Claude drifted.
The forced self-documentation likely played a key role in the outcome. Without it, Claude's explorations would have been isolated attempts. With it, each failed approach fed structural insights into the next one, creating the cumulative reasoning process that ultimately produced the solution. For practitioners building AI-assisted workflows, this is a concrete takeaway: structured reflection prompts can turn a language model's tendency to explore broadly into something closer to a directed research process.
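A minimal sketch of what such a harness could look like (the function and file handling here are illustrative, not Stappers' actual setup):

```python
from pathlib import Path

def run_explorations(explorations, log=Path("plan.md")):
    """Run exploration callables one at a time; log each before the next starts."""
    for k, explore in enumerate(explorations, 1):
        result = explore()
        # document-before-continuing: the entry is written before the next attempt
        with log.open("a") as f:
            f.write(f"## Exploration {k}\n{result}\n\n")
```

The point is not the file I/O but the invariant: every attempt leaves a written trace that the next attempt can build on.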
How Claude Solved It
What makes this paper remarkable isn't just the result. It's watching the process unfold. Claude's approach over ~31 explorations mirrors how a skilled mathematician actually works:
Phase 1: Reformulation and naive attempts. Claude first recast the problem as finding a function that assigns a permutation of the three arc directions to each vertex. It tried linear/cyclic approaches (failed), then brute-force DFS over the 6^(n^3) assignment space (too slow).
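In code, that reformulation looks something like this (a sketch with names of my own; the constant assignment below reproduces the failure mode of a naive "always bump the same coordinate" approach):

```python
from itertools import product

def trace(assign, k, n, start=(0, 0, 0)):
    """Follow cycle k under `assign`, which maps each vertex to a permutation
    of (0, 1, 2): at vertex v, cycle k takes the arc in direction assign[v][k].
    Returns the closed walk from `start`, or None if it never closes."""
    walk, v = [], start
    for _ in range(n ** 3):
        walk.append(v)
        i = assign[v][k]                 # which coordinate this cycle bumps at v
        w = list(v)
        w[i] = (w[i] + 1) % n
        v = tuple(w)
        if v == start:
            return walk                  # Hamiltonian iff len(walk) == n**3
    return None

n = 3
naive = {v: (0, 1, 2) for v in product(range(n), repeat=3)}
walk = trace(naive, 0, n)
assert walk is not None and len(walk) == n   # closes after only n steps: not Hamiltonian
```

The constant assignment makes cycle 0 march along a single axis and close after n steps instead of n^3, which is exactly why purely "linear" constructions fail.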
Phase 2: Structural insight. Claude recognized the digraph as a Cayley digraph and developed a "serpentine" pattern based on modular Gray codes. This got partial results, but the residual graph after removing one cycle resisted decomposition.
Phase 3: The breakthrough. Exploration 15 introduced a fiber decomposition using the quotient map s(x, y, z) = (x + y + z) mod n. This revealed the digraph's layered structure: n fibers mapping to 0, 1, ..., n - 1. From there, Claude combined exhaustive search on small cases, simulated annealing, and coordinate-rotation symmetry to zero in on a construction where permutation choices depend only on whether x, y, and the fiber index equal 0, equal n - 1, or lie somewhere in between.
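Knuth's paper does not reproduce Claude's scripts, but the shape of a simulated-annealing pass over such assignments is easy to sketch. Everything below is a toy reconstruction under my own assumptions, scoring an assignment by how much of the graph its three cycles close up:

```python
import math
import random
from itertools import product

PERMS = [(0,1,2), (0,2,1), (1,0,2), (1,2,0), (2,0,1), (2,1,0)]

def cycle_len(assign, k, n, start=(0, 0, 0)):
    """Length of cycle k's closed walk from `start`, or 0 if it fails to close."""
    v = start
    for steps in range(1, n ** 3 + 1):
        w = list(v)
        w[assign[v][k]] = (w[assign[v][k]] + 1) % n
        v = tuple(w)
        if v == start:
            return steps
    return 0

def energy(assign, n):
    # total closed-walk length over the three cycles; the maximum 3 * n**3
    # is attained exactly by a valid decomposition
    return sum(cycle_len(assign, k, n) for k in range(3))

def anneal(n, iters=1000, seed=1):
    rng = random.Random(seed)
    verts = list(product(range(n), repeat=3))
    assign = {v: rng.choice(PERMS) for v in verts}
    e = energy(assign, n)
    best, best_e, temp = dict(assign), e, 3.0
    for _ in range(iters):
        v = rng.choice(verts)
        old, assign[v] = assign[v], rng.choice(PERMS)
        e2 = energy(assign, n)
        if e2 >= e or rng.random() < math.exp((e2 - e) / temp):
            e = e2
            if e > best_e:
                best, best_e = dict(assign), e
        else:
            assign[v] = old              # reject the downhill move
        temp = max(temp * 0.999, 1e-3)   # cool down
    return best, best_e

best, score = anneal(3, iters=200)
assert 0 <= score <= 3 * 27
```

On the real problem a loop like this only finds partial structure; the point, as in Claude's session, is that high-scoring assignments expose patterns worth turning into an exact construction.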
The entire process took about one hour.
The Construction
The final solution is elegantly compact. Cycle 0 is specified by three short rules, one per case of a three-way test, each of the form "bump one coordinate, unless a boundary condition holds, in which case bump another". Which rule applies at a vertex depends only on whether x, y, and the fiber index equal 0, equal n - 1, or lie in between.
Cycles 1 and 2 are obtained by coordinate rotation. Knuth proves the construction works because one coordinate changes only at certain boundary vertices, so all triples sharing a given value of that coordinate are visited consecutively; within each such block the cycle advances by 2 (mod n), and because n is odd, 2 is invertible mod n, so those steps sweep out every residue and the cycle closes.
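The oddness condition is doing real work in that argument: repeatedly adding 2 sweeps out all of Z_n exactly when n is odd, since then gcd(2, n) = 1. A quick check:

```python
def orbit_of_two(n):
    """Residues reachable from 0 by repeatedly adding 2 mod n."""
    seen, r = [], 0
    while r not in seen:
        seen.append(r)
        r = (r + 2) % n
    return seen

assert len(orbit_of_two(5)) == 5     # odd n: every residue is hit
assert len(orbit_of_two(7)) == 7
assert len(orbit_of_two(6)) == 3     # even n: only half the residues
```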
The Enumeration
Knuth didn't stop at verifying Claude's answer. He defined "generalizable" cycles: Hamiltonian cycles in a small base case whose structure extends to all odd n via a collapsing map. Of the 11,502 total Hamiltonian cycles in that base case:
| Category | Count |
|---|---|
| Total Hamiltonian cycles | 11,502 |
| Generalizable cycles | 996 |
| Valid "Claude-like" decompositions | 760 |
| Total decompositions | 4,554 |
Claude found one of exactly 760 valid decompositions where permutations depend only on the three-way classification of coordinates. Not bad for an hour's work.
The Even Case Remains Open
For even n, the picture is murkier. One case has been proved impossible outright. Claude found solutions for other even values but couldn't generalize; even after Stappers gave it 4 hours of compute time, it eventually resorted to constraint solvers (OR-Tools CP-SAT).
Separately, Ho Boon Suan used GPT-5.3-Codex to generate a construction for even n, tested up to n = 2000 (8 billion vertices). The pattern works but is far more complex and remains unproven.
AI and Mathematical Discovery: The Bigger Picture
Claude's result doesn't exist in a vacuum. Over the past few years, AI systems have been steadily crossing into territory that was supposed to be uniquely human: genuine mathematical discovery.
| Year | System | Discovery | Nature |
|---|---|---|---|
| 2021 | DeepMind + Oxford | New relationship between algebraic and geometric knot invariants, conjecture discovered by ML, then proven by mathematicians | New theorem |
| 2022 | AlphaTensor | Faster matrix multiplication algorithms beating decades of human research for certain matrix sizes | New algorithms |
| 2023 | FunSearch | New cap set constructions in extremal combinatorics, exceeding best known human constructions | New constructions |
| 2024 | AlphaProof + AlphaGeometry 2 | Silver medal at IMO 2024, solved 4/6 problems including the hardest (problem 6) | Competition proofs |
| 2026 | Claude Opus 4.6 | Hamiltonian cycle decomposition for all odd n, verified and proven by Knuth | Open problem solved |
What's striking is the progression: not just in results, but in how much scaffolding the AI needed.
Claude's result is different in kind. It was a general-purpose LLM, given a problem in natural language, with no specialized mathematical architecture. No formal verifier in the loop. No evolutionary search. No reinforcement learning against a proof checker. Just a language model reasoning its way through 31 explorations to a correct construction.
Terence Tao famously described LLMs (circa 2023) as like a "mediocre but not completely incompetent graduate student," useful for brainstorming but unreliable for proofs. Knuth's paper suggests the ceiling is rising fast. Claude didn't just brainstorm; it constructed, verified, and iterated to a solution that Knuth himself hadn't found.
Knuth's Reaction
This is worth quoting in full, because it's Donald Knuth:
> Shock! Shock! I learned yesterday that an open problem I'd been working on for several weeks had just been solved by Claude Opus 4.6. It seems that I'll have to revise my opinions about "generative AI" one of these days. What a joy it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving.
Knuth has historically been skeptical of generative AI. That he calls this a "dramatic advance in automatic deduction and creative problem solving" and concedes he needs to "revise my opinions" carries weight. He closes the paper with: "Dear reader, I hope you have enjoyed reading this story at least half as much as I've enjoyed writing it. We are living in very interesting times indeed."
Why This Matters
This isn't another "AI beats benchmark" story. Benchmarks measure performance on known problem types with known solutions. Here, Claude solved an open problem, one that a world-class mathematician was actively stuck on.
The process is what makes it significant:
- Reformulation: Claude didn't just search; it recast the problem into more tractable forms
- Dead ends as signal: Failed approaches (serpentine patterns, brute-force search) contributed structural insights that fed into the eventual solution
- Synthesis: The final construction combined fiber decomposition, coordinate rotation, and analysis of simulated-annealing outputs, drawing together ideas from several different failed attempts
This is closer to how mathematical research actually works than anything we've seen from AI systems. It's not benchmark performance that collapses in production; it's genuine creative problem-solving on a problem where no training signal could have existed.
Knuth frames it carefully. He doesn't claim Claude "understands" mathematics. But he documents, with full mathematical rigor, that Claude produced a correct and elegant construction through a process that looks remarkably like research. That alone is worth paying attention to.
Generate, Then Verify
There's an important detail in the workflow. Claude produced the construction. Knuth proved it correct. These are different activities, and the paper keeps them clearly separated.
After Claude's hour of exploration, Knuth writes: "Of course, a rigorous proof was still needed. And the construction of such a proof turns out to be quite interesting." He then spent pages working through the formal argument himself. Later, Kim Morrison from the Lean community formally verified Knuth's proof in Lean 4, and Knuth (at 88) appreciated it: "That's good to know, because I've been getting more error-prone lately."
This suggests a workflow that's already emerging in practice: AI generates candidate constructions or conjectures, humans (or formal proof assistants) verify them. The verification step isn't optional overhead; it's what turns an interesting output into a trusted result. For mathematical research, that might mean Lean or Coq. For software engineering, it means tests, type checkers, and code review. The pattern is the same: let the AI explore broadly, then verify rigorously.
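As a pattern, generate-then-verify is just a loop (a generic illustration, not anyone's actual tooling):

```python
def generate_then_verify(generate, verify, attempts=1000):
    """Trust nothing from the generator; return the first candidate that
    passes an independent, rigorous check."""
    for i in range(attempts):
        candidate = generate(i)
        if verify(candidate):
            return candidate      # trusted because `verify` passed, not because it was generated
    return None

# toy demo: the generator proposes integers, the verifier demands a perfect square > 1
is_square = lambda c: c > 1 and round(c ** 0.5) ** 2 == c
assert generate_then_verify(lambda i: i, is_square) == 4
```

The asymmetry is the whole design: the generator may be creative and unreliable, as long as the verifier is neither.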
The bar for what counts as reliable AI agent performance just got a lot more interesting.
References
- Claude's Cycles - Donald Knuth, Stanford CS (Feb-Mar 2026). Original paper.
- Jacques Aubert and Bernadette Schneider, "Graphes orientés indécomposables en circuits hamiltoniens," Journal of Combinatorial Theory B32 (1982), 347-349.
- Donald E. Knuth, The Art of Computer Programming, Volume 4A: Combinatorial Algorithms, Part 1 (Addison-Wesley, 2011).
- Knuth's preliminary drafts on Hamiltonian paths and cycles
- Even case solution (Python) - Ho Boon Suan via GPT-5.3-Codex
- Even case closed form (C)
- Advancing mathematics by guiding human intuition with AI - DeepMind + Oxford knot theory, Nature (2021)
- Discovering novel algorithms with AlphaTensor - DeepMind (2022)
- FunSearch: Making new discoveries in mathematical sciences using LLMs - DeepMind (2023)
- AI solves IMO problems at silver medal level - AlphaProof + AlphaGeometry 2, DeepMind (2024)
- A pilot project in universal algebra to explore new ways to collaborate and use machine assistance - Terence Tao (2024)
- Your LLM Scores 88% on Code Benchmarks. In Production, It Hits 30%. - Daita blog
- Your AI Agent Aces the Benchmark. It Still Can't Be Trusted. - Daita blog