
ICLR 2026 Workshop: AI with Recursive Self-Improvement

Design the loops,
Prove the gains.

🗓️ April 23 to 27
Rio de Janeiro, Brazil
Alongside ICLR 2026

Submit on OpenReview

Welcome! We’re the ICLR 2026 Workshop on AI with Recursive Self-Improvement (RSI 2026), held alongside ICLR 2026 in Rio de Janeiro (April 23 to 27, 2026; workshop day April 26 or 27).

Recursive self-improvement is no longer a speculative vision; it is becoming a concrete systems problem. Across text, speech, vision, and embodied interaction, today’s models can already diagnose their failures, critique their behavior, update internal representations, and modify external tools. What’s missing is not ambition but principled methods, system designs, and evaluations that make self-improvement measurable, reliable, and deployable.

This workshop brings together researchers working on recursively self-improving AI across omni models, multimodal agents, robotics, and scientific discovery. We focus on practical advances such as critique- and reward-driven learning, test-time adaptation, experience accumulation, and governed model updates.

Call for Papers

We’re looking for methods, systems, and evaluations that move self-improving AI from promise to practice across language, speech, and vision, as well as applications in robotics and scientific discovery.

We frame contributions through six lenses: (1) what changes (parameters, world models, memory, tools and skills, architectures), (2) when changes happen (within an episode, at test time, or after deployment), (3) how changes are produced (reward or value learning, imitation, evolutionary search), (4) where systems operate (web and UI, games, robotics, science, enterprise), (5) alignment, security, and safety (long-horizon stability, regression risk), and (6) evaluation and benchmarks. We also welcome work on optimization and curricula, memory and model editing, instrumentation, and rollback.

Important Dates (AoE)

Submission Instructions

Submissions must be made through OpenReview and formatted using the ICLR conference proceedings style.

Both tracks are non-archival. Accepted tiny papers may be featured on the workshop website. The reviewing process is single-blind and managed through OpenReview; during submission, authors must disclose all sources of funding related to the research. AI-generated papers are not allowed. However, given the nature of the workshop, AI-generated artifacts used as part of the work (e.g., demos, systems, or experiments) are allowed but must be clearly disclosed, and papers must follow the Policies on Large Language Model Usage at ICLR 2026, including disclosure of LLM usage.

Awards 🏆

We will present two Best Paper Awards and several Outstanding Paper Awards, selected by the program committee.

Speakers & Panelists (alphabetical order)

Arman Cohan (Yale University)
Bang Liu (Université de Montréal / Mila)
Chelsea Finn (Stanford University)
Graham Neubig (OpenHands / Carnegie Mellon University)
Jeff Clune (UBC / DeepMind)
Matej Balog (Google DeepMind)
Yu Su (Ohio State University)
Yuandong Tian (Stealth Startup)

Committee (organizers & executors)

Mingchen Zhuge (KAUST)
Ailing Zeng (Anuttacon)
Deyao Zhu (ByteDance)
Sherry Yang (NYU / DeepMind)
Yan Hu (CUHK)
Yunzhong He (Scale)
Levi Li (Tencent)
Vikas Chandra (Meta Reality Labs)
Jürgen Schmidhuber (KAUST / IDSIA)

Sponsors

Tencent
Meta

Contact

Questions? Reach us at mczhuge@gmail.com, ailingzengzzz@gmail.com.

References

Related pages and papers on recursive self-improvement:

  1. https://people.idsia.ch/~juergen/metalearning.html
  2. GPTSwarm: Language Agents as Optimizable Graphs
  3. AlphaEvolve: A coding agent for scientific and algorithmic discovery
  4. Agent-as-a-Judge: Evaluate Agents with Agents
  5. Open-Ended Evolution of Self-Improving Agents
  6. https://people.idsia.ch/~juergen/goedelmachine.html
  7. Human-Level Coding Agent Development by an Approximation of the Optimal Self-Improving Machine