On Tue, 28 Feb 2023 at 12:56, Darren Duncan <darren@darrenduncan.net> wrote:
> Regarding the thing about the automated CPAN testers infrastructure
> providing an automated "blead breaks cpan" report anyone can see, I feel
> that this idea has merit, and in theory it should be doable in a way that
> isn't too onerous.

One thing to consider is that you want to have a comparison corpus. For
instance, if we introduce deprecation warnings in a given commit, you want
to do more than just look for test failures: you want to look for any
output changes between the most recent build and some "last known good"
state. It is not as simple as running the tests and seeing what is broken,
because that doesn't address transitive breakage very well.

It also doesn't help with a module that has been broken for a long time.
Sometimes when I look into something, it now fails because of something we
recently added, but it was failing before that because of something we
not-so-recently changed. For instance, I looked at a module (the name
eludes me just now) that /was/ failing because scalar keys was removed in
a previous release, but is now failing because smartmatch is deprecated as
well.

Anyway, don't let that discourage you from coming up with something that
automates this more than it is today and from helping to improve the infra
overall. I feel like our existing cpantesters infra is in a less than
ideal state. I can't get reports out of the DB even for reports we *do*
already receive, for instance; the DB is just too slow and overloaded most
of the time. (I'm not sure what we can do about that really, except throw
money at the problem and get more/bigger databases. Maybe a company with
deep pockets like Booking.com would like to help out.)

cheers,
Yves
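
[Editor's note: the "comparison corpus" idea above could be sketched roughly
as below. This is a hypothetical illustration, not existing CPAN Testers
tooling: the function name `new_output_lines` and the sample log lines are
invented, and real tooling would capture per-distribution build/test logs
from two perl builds rather than take strings.]

```python
def new_output_lines(known_good: str, current: str) -> list[str]:
    """Return lines that appear in the current run's output but not in the
    last-known-good run's output.

    This surfaces new deprecation warnings and other output changes even
    when the test suite still passes, which a simple pass/fail check on the
    current build alone would miss.
    """
    baseline = set(known_good.splitlines())
    return [line for line in current.splitlines() if line not in baseline]


if __name__ == "__main__":
    # Hypothetical captured output from a last-known-good perl build:
    known_good = "ok 1\nok 2\nAll tests successful.\n"
    # Hypothetical output from the current blead build: tests still pass,
    # but a new deprecation warning has appeared.
    current = (
        "ok 1\n"
        "given is deprecated at t/basic.t line 10.\n"
        "ok 2\n"
        "All tests successful.\n"
    )
    for line in new_output_lines(known_good, current):
        print(line)
```

In practice you would key the corpus by distribution and perl commit, so
that a report can say "this output change first appeared at commit X"
instead of only "this distribution currently fails".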