EML Symbolic Regression
Gradient descent over random EML tree topologies. The search looks for a construction that equals the target function at all test points. These are open problems; convergence is not guaranteed.
Target function
Max tree depth
Deeper trees have more parameters. Depth 3 is a good starting point. Each random restart samples a new topology.
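To illustrate how depth controls parameter count, here is a hypothetical topology sampler. The function names and the leaf probability are assumptions for illustration, not the site's actual sampler; the point is that a binary tree of depth d has at most 2^d leaf weights, and each restart redraws the shape:

```python
import random

def sample_topology(depth, rng):
    # Hypothetical sampler: each node is either a learnable leaf
    # weight or an eml(left, right) branch; depth caps the recursion.
    # The 0.3 leaf probability is an arbitrary illustrative choice.
    if depth == 0 or rng.random() < 0.3:
        return ("leaf",)
    return ("eml",
            sample_topology(depth - 1, rng),
            sample_topology(depth - 1, rng))

def count_leaves(node):
    # Each leaf is one learnable scalar, so this counts parameters.
    if node[0] == "leaf":
        return 1
    return count_leaves(node[1]) + count_leaves(node[2])

rng = random.Random(0)
for d in (1, 2, 3, 5):
    leaves = [count_leaves(sample_topology(d, rng)) for _ in range(1000)]
    print(d, sum(leaves) / len(leaves))  # mean parameter count grows with depth
```

A full binary tree of depth d would have exactly 2^d leaves; random topologies average fewer, but the trend with depth is the same.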
How it works
1. Random tree. A random EML tree topology is sampled at the chosen depth. Every leaf is a learnable scalar weight initialized to a positive value.
2. Gradient descent. Adam optimizer minimizes MSE between tree output and target at 10 test points. Gradients flow through eml(x, y) = exp(x) − ln(y) via reverse-mode automatic differentiation.
3. Random restarts. Every 500 iterations, a new random tree is sampled. The globally best weights and loss are tracked across all restarts.
4. Strict grammar. ln(y) returns NaN if y ≤ 0. Ill-conditioned evaluations are skipped — no extended-reals shortcut. Same constraints as manual submissions.
5. Auto-submit. If loss falls below 1×10⁻¹⁰, the expression is automatically submitted to the challenge leaderboard as search-bot.
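The loop in steps 1-4 can be sketched in miniature. This is a single-node illustration only, assuming the eml primitive and the strict ln domain rule described above; the real search uses Adam over full tree topologies, so plain gradient descent on one eml(a, b) node stands in for it here:

```python
import math

def eml(x, y):
    # eml(x, y) = exp(x) - ln(y). Strict grammar: ln of a
    # non-positive argument is NaN, with no extended-reals shortcut.
    if y <= 0:
        return float("nan")
    return math.exp(x) - math.log(y)

# Fit the two leaf weights of a single eml node to a constant target.
# (Stand-in for the real setup: Adam, MSE over 10 test points.)
target = 2.0
a, b = 0.1, 0.1   # leaf weights, initialized to positive values
lr = 0.05
for _ in range(2000):
    out = eml(a, b)
    if math.isnan(out):
        # Ill-conditioned evaluation: skip, nudging b back positive.
        b = abs(b) + 1e-6
        continue
    err = out - target
    # Analytic gradients of 0.5 * err**2:
    #   d(eml)/da = exp(a),  d(eml)/db = -1/b
    a -= lr * err * math.exp(a)
    b -= lr * err * (-1.0 / b)

print(eml(a, b))  # should be very close to the target of 2.0
```

In the real search this inner loop runs over every weight of the sampled tree via reverse-mode autodiff, restarts with a fresh topology every 500 iterations, and keeps the best weights seen across all restarts.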