1. Random tree. A random EML tree topology is sampled at the chosen depth. Every leaf is a learnable scalar weight initialized to a positive value, which keeps the initial ln arguments in-domain (sampling and strict evaluation are sketched after this list).
2. Gradient descent. The Adam optimizer minimizes the MSE between the tree output and the target at 10 test points. Gradients flow through eml(x, y) = exp(x) − ln(y) via reverse-mode automatic differentiation (the training loop is sketched after this list).
3. Random restarts. Every 500 iterations, a new random tree is sampled. The globally best weights and loss are tracked across all restarts.
4. Strict grammar. ln(y) returns NaN if y ≤ 0. Ill-conditioned evaluations are skipped rather than rescued with an extended-reals shortcut, so the bot plays under the same constraints as manual submissions.
5. Auto-submit. If the loss falls below 1×10⁻¹⁰, the expression is automatically submitted to the challenge leaderboard as search-bot.
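
A minimal sketch of steps 1 and 4, assuming PyTorch for the reverse-mode autodiff. The names `sample_tree`, `evaluate`, and `leaf_weights` are illustrative, and the knob `p_stop` is an assumed way to get varied topologies at one depth; letting a leaf be the input variable `x` is also an assumption made here so the tree can vary across test points, since the section only says leaves are learnable scalars.

```python
import random

import torch

def eml(x, y):
    # The challenge primitive: eml(x, y) = exp(x) - ln(y).
    # torch.log returns NaN for y < 0 and -inf for y == 0, so the strict
    # grammar is enforced by checking results for finiteness downstream,
    # not by clamping inputs or taking an extended-reals shortcut.
    return torch.exp(x) - torch.log(y)

def sample_tree(depth, p_stop=0.2):
    # Sample a random topology up to the chosen depth. p_stop is an
    # assumed knob that lets branches terminate early, so different
    # shapes appear at the same depth setting.
    if depth == 0 or random.random() < p_stop:
        if random.random() < 0.3:
            # ASSUMPTION: an input-variable leaf, so the tree output
            # changes across test points; not stated in the section.
            return "x"
        # Learnable scalar leaf with a positive initial value, keeping
        # early ln() arguments in-domain.
        return torch.nn.Parameter(torch.rand(()) + 0.1)
    return ("eml", sample_tree(depth - 1, p_stop),
                   sample_tree(depth - 1, p_stop))

def evaluate(node, x):
    # Recursive evaluation; NaN/-inf from an out-of-domain ln()
    # propagates to the output unchanged.
    if isinstance(node, torch.nn.Parameter):
        return node
    if isinstance(node, str):
        return x
    _, left, right = node
    return eml(evaluate(left, x), evaluate(right, x))

def leaf_weights(node):
    # Collect the learnable leaves for the optimizer.
    if isinstance(node, torch.nn.Parameter):
        return [node]
    if isinstance(node, str):
        return []
    _, left, right = node
    return leaf_weights(left) + leaf_weights(right)
```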
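And a sketch of the optimization loop in steps 2, 3, and 5 under the same assumptions. The section fixes only the count of test points (10), the restart cadence (500 iterations), and the submission threshold (1×10⁻¹⁰); the stand-in target function, the hyperparameters, and the `submit` stub are placeholders, not the bot's actual code or leaderboard API.

```python
def submit(tree, weights, loss):
    # Stub for step 5: the real bot posts the expression to the challenge
    # leaderboard as search-bot; the actual submission API is not shown
    # in the section, so this only reports the event.
    print(f"auto-submit: loss {loss:.3e}")

def run_search(depth=4, restarts=200, iters=500, lr=0.05, tol=1e-10):
    # ASSUMED target and test points: only their count (10) comes from
    # the section; the function and range are illustrative.
    xs = torch.linspace(0.5, 5.0, 10)
    target = torch.sin(xs)

    best = (float("inf"), None, None)      # (loss, tree, weights)
    for _ in range(restarts):
        tree = sample_tree(depth)          # step 3: fresh topology per restart
        weights = leaf_weights(tree)
        if not weights:
            continue                       # degenerate tree, nothing to train
        opt = torch.optim.Adam(weights, lr=lr)
        for _ in range(iters):             # 500 iterations per restart
            opt.zero_grad()
            loss = torch.mean((evaluate(tree, xs) - target) ** 2)
            if not torch.isfinite(loss):
                break                      # step 4: skip ill-conditioned evaluations
            loss.backward()                # reverse-mode AD through eml()
            if loss.item() < best[0]:      # track the global best across restarts
                best = (loss.item(), tree,
                        [w.detach().clone() for w in weights])
            opt.step()
        if best[0] < tol:                  # step 5: auto-submit below 1e-10
            submit(best[1], best[2], best[0])
            break
    return best
```

Note the break on a non-finite loss: once an ln() argument goes non-positive, the gradients are NaN as well, so the cleanest strict-grammar behavior is to abandon that tree and move to the next restart rather than step on corrupted gradients.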