JUGE: an infrastructure for benchmarking Java unit test generators

Abstract

Researchers and practitioners have designed and implemented various automated test case generators to support effective software testing. Such generators exist for various languages (e.g., Java, C#, or Python) and various platforms (e.g., desktop, web, or mobile applications). The generators exhibit varying effectiveness and efficiency, depending on the testing goals they aim to satisfy (e.g., unit testing of libraries versus system testing of entire applications) and the underlying techniques they implement. In this context, practitioners need to be able to compare different generators to identify the one most suited to their requirements, while researchers seek to identify future research directions. This can be achieved by systematically executing large-scale evaluations of different generators. However, executing such empirical evaluations is not trivial and requires substantial effort to select appropriate benchmarks, set up the evaluation infrastructure, and collect and analyse the results. In this Software Note, we present our JUnit Generation Benchmarking Infrastructure (JUGE), which supports generators (search-based, random-based, symbolic execution, etc.) seeking to automate the production of unit tests for various purposes (validation, regression testing, fault localization, etc.). The primary goal is to reduce the overall benchmarking effort, ease the comparison of several generators, and enhance the knowledge transfer between academia and industry by standardizing the evaluation and comparison process. Since 2013, several editions of a unit testing tool competition, co-located with the Search-Based Software Testing Workshop, have taken place where JUGE was used and evolved. As a result, an increasing number of tools (over 10) from academia and industry have been evaluated on JUGE, matured over the years, and allowed the identification of future research directions. Based on the experience gained from the competitions, we discuss the expected impact of JUGE in improving the knowledge transfer on tools and approaches for test generation between academia and industry. Indeed, the JUGE infrastructure demonstrated an implementation design that is flexible enough to enable the integration of additional unit test generation tools, which is practical for developers and allows researchers to experiment with new and advanced unit testing tools and approaches.
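
The abstract highlights a design flexible enough to integrate additional generators. As a rough illustration only, a benchmarking harness of this kind might expose an adapter that each tool implements; the names below (TestGeneratorAdapter, RandomGeneratorAdapter, Demo) are hypothetical and are not the actual JUGE API, which is defined in the paper and its accompanying repository.

import java.nio.file.Path;
import java.time.Duration;
import java.util.List;

/**
 * Hypothetical adapter a unit test generator could implement to plug
 * into a benchmarking infrastructure such as JUGE (illustrative only).
 */
interface TestGeneratorAdapter {

    /** Called once with the benchmark's source, binary, and dependency paths. */
    void initialize(Path sourceDir, Path binDir, List<Path> classpath);

    /**
     * Generates JUnit tests for one class under test (CUT) within the
     * given time budget and returns the directory holding the test sources.
     */
    Path generateTests(String classUnderTest, Duration budget);
}

/** A trivial stand-in implementation used only to sketch the control flow. */
class RandomGeneratorAdapter implements TestGeneratorAdapter {

    private List<Path> classpath;

    @Override
    public void initialize(Path sourceDir, Path binDir, List<Path> classpath) {
        this.classpath = classpath; // keep dependency jars for later resolution
    }

    @Override
    public Path generateTests(String classUnderTest, Duration budget) {
        // A real tool would explore the CUT (e.g., via random invocation,
        // search, or symbolic execution) until the budget expires, writing
        // JUnit test sources into an output directory.
        Path out = Path.of("generated-tests", classUnderTest.replace('.', '/'));
        System.out.printf("Generating tests for %s within %ds%n",
                classUnderTest, budget.toSeconds());
        return out;
    }
}

public class Demo {
    public static void main(String[] args) {
        TestGeneratorAdapter tool = new RandomGeneratorAdapter();
        tool.initialize(Path.of("src"), Path.of("bin"), List.of(Path.of("lib/dep.jar")));
        Path tests = tool.generateTests("org.example.Stack", Duration.ofSeconds(120));
        System.out.println("Tests written to " + tests);
    }
}

Under this sketch, the infrastructure drives every competing generator through the same two calls, which is what makes side-by-side comparison and the reuse of benchmarks straightforward.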


This paper was published in ZHAW digitalcollection.
