Data-Driven Extract Method Recommendations: A Study at ING
The sound identification of refactoring opportunities is still an open problem in software engineering. Recent studies have shown the effectiveness of machine learning models in recommending methods that should undergo different refactoring operations.
In this work, we experiment with such approaches to identify methods that should undergo an Extract Method refactoring, in the context of ING, a large financial organization. More specifically, we (i) compare the code metric distributions, which are used as features by the models, between open-source and ING systems, (ii) measure the accuracy of different machine learning models in recommending Extract Method refactorings, and (iii) compare the recommendations given by the models with the opinions of ING experts.
Our results show that the feature distributions of ING systems and open-source systems are somewhat different, that machine learning models can recommend Extract Method refactorings with high accuracy, and that experts tend to agree with most of the model's recommendations.
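To make the recommendation step concrete, the sketch below shows how a metrics-based recommender of this kind could be trained: a binary classifier over method-level code metrics whose positive predictions become Extract Method candidates. The dataset file, feature names, and the choice of a Random Forest are illustrative assumptions for the sketch, not the study's actual pipeline.

# Minimal sketch (not the paper's actual pipeline): train a binary classifier
# on method-level code metrics to flag Extract Method candidates.
# The CSV file, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per method, code metrics as features and a
# label indicating whether the method historically underwent Extract Method.
data = pd.read_csv("method_metrics.csv")
features = ["loc", "cyclomatic_complexity", "num_parameters",
            "num_unique_words", "max_nesting_depth"]
X, y = data[features], data["extract_method_label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Precision/recall on held-out methods; in practice, the model's positive
# predictions would be surfaced to developers as Extract Method suggestions.
print(classification_report(y_test, model.predict(X_test)))

In a setting like the one studied, such predictions would then be reviewed by developers, which mirrors how the study compares model output with the judgement of ING experts.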
BibTeX:
@inproceedings{aniche-refactoring-recommendation-at-ing,
  author    = "David van der Leij and Jasper Binda and Robbert van Dalen and Pieter Vallen and Yaping Luo and Maurício Aniche",
  title     = "Data-Driven Extract Method Recommendations: A Study at ING",
  booktitle = "Proceedings of the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '21)",
  year      = 2021,
  doi       = "10.1145/3468264.3473927"
}