The Ex-ASE workshop aims to explore the intersection between explainability and automated software engineering. It invites researchers and practitioners to reflect on the theoretical foundations, practical applications, challenges, and ethical considerations involved in developing explainable approaches to software automation.
Submission Guidelines:
All papers must be original and not simultaneously submitted to another journal or conference. All submissions will be reviewed by at least two members of the Program Committee. Evaluation criteria include technical quality, relevance, clarity, and potential to spark discussion. EasyChair will be used to manage submissions and reviews. Submissions must follow the IEEE conference format (`\documentclass[10pt,conference]{IEEEtran}`).
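For authors new to the format, a minimal IEEEtran skeleton is sketched below. The package choices and placeholder names (e.g., `references.bib`) are illustrative assumptions; the official IEEE conference template remains the authoritative reference.

```latex
% Minimal IEEEtran skeleton (illustrative sketch, not the official template)
\documentclass[10pt,conference]{IEEEtran}
\usepackage{cite}      % IEEE-style citation handling (assumed, commonly used)
\usepackage{graphicx}  % figure inclusion (assumed, commonly used)

\begin{document}

\title{Your Paper Title}

\author{\IEEEauthorblockN{First Author}
\IEEEauthorblockA{Institution \\ first.author@example.org}}

\maketitle

\begin{abstract}
One-paragraph summary of the contribution.
\end{abstract}

\section{Introduction}
Paper body goes here.

% Assumes a BibTeX file named references.bib alongside the .tex source
\bibliographystyle{IEEEtran}
\bibliography{references}

\end{document}
```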
Stevens Institute of Technology
LLM-based agentic reasoning systems have been applied in scenarios including software engineering, explanation, and problem-solving, among many others. At the same time, many areas for improvement have been identified, and there is strong interest in strengthening their reasoning capabilities. In recent years, mechanistic interpretability has offered useful tools and inspiration. In this talk, I introduce some of our recent attempts to apply these tools and ideas to improving LLM reasoning, and share some exciting preliminary findings along this avenue.
**Note:** All accepted papers will be included in the ASE 2025 conference proceedings, which will be made available online and indexed in the ACM/IEEE Digital Library. You can view the full workshop schedule via the ASE 2025 main website.