On 2023-02-28 4:11 a.m., demerphq wrote:
> On Tue, 28 Feb 2023 at 12:56, Darren Duncan wrote:
>> Regarding the thing about the automated CPAN testers infrastructure providing
>> an automated "blead breaks cpan" report anyone can see, I feel that this idea
>> has merit, and in theory it should be doable in a way that isn't too onerous.
>
> One thing to consider is that you want to have a comparison corpus. So for
> instance if we introduce deprecation warnings in a given commit, what you want
> to do is more than just look for test failures, you want to look for any
> output changes between the most recent build and some "last known good"
> state. It is not just as simple as running the tests and finding what is
> broken, as that doesn't address the transitive breakage very well. It also
> doesn't help with a module that has been long broken. Sometimes when I look
> into something it now fails for something we recently added, but was failing
> before that for something we not-so-recently changed. For instance I looked
> at a module (the name eludes me just now) that /was/ failing because scalar
> keys was removed in a previous release, but is now failing because smartmatch
> is deprecated as well.
>
> Anyway, don't let that discourage you from coming up with something to
> automate this more than it is and helping to improve the infra overall. I
> feel like our existing cpantesters infra is in a less than ideal state. I
> can't get reports from the db for reports we *do* already receive, for
> instance; the DB is just too slow and overloaded most times. (Not sure what
> we can do about that really, except throw money at the problem and get
> more/bigger databases. Maybe a company with deep pockets like Booking.com
> would like to help out.)

I consider the minimally useful version of this idea to be one that just flags
that there is a problem in blead, where the comparison corpus is the indicators
from the otherwise most recent tested versioned Perl release. The main point of
the idea is that there is a large number of CPAN modules and it's useful to
find out quickly which ones have a problem, so that the manual work of
gathering more details only needs to be done on those modules and not the rest.
(A rough sketch of the kind of comparison I mean follows at the end of this
message.)

As for a corpus of other output useful for figuring out the problem, I would
assume that this rolling "blead" report would have all of the same details
normally associated with a CPAN testers report. So if the normal reports
already have the required details, then we're good; just keep doing that. If
they don't, then the issue you raised is orthogonal to the one I'm talking
about and can be addressed independently for its own benefits.

-- 
Darren Duncan
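
To make the minimal version concrete, here is a rough sketch of the kind of
pass/fail comparison I have in mind. The interpreter paths, the distribution
list, and the helper are all placeholders for illustration only; this is not
the actual cpantesters tooling, and a real smoker would drive this from its
own build and distribution database.

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Hypothetical interpreter locations; substitute whatever the smoke
  # environment actually provides.
  my $blead_perl   = '/opt/perl/blead/bin/perl';
  my $release_perl = '/opt/perl/5.36.0/bin/perl';

  # Placeholder list of unpacked CPAN distribution directories to check.
  my @dists = ('/smoke/work/Foo-Bar-1.00', '/smoke/work/Baz-Quux-2.34');

  for my $dist (@dists) {
      my $release_ok = test_ok($release_perl, $dist);
      my $blead_ok   = test_ok($blead_perl, $dist);

      # Only flag distributions that pass on the released perl but fail
      # on blead; modules that were already broken are left out here.
      if ($release_ok && !$blead_ok) {
          print "POSSIBLE BLEAD BREAKAGE: $dist\n";
      }
  }

  # Configure and run a distribution's test suite under the given perl,
  # returning true if the whole run exited successfully.
  sub test_ok {
      my ($perl, $dist) = @_;
      my $status = system(
          "cd $dist && $perl Makefile.PL >/dev/null 2>&1 && make test >/dev/null 2>&1"
      );
      return $status == 0;
  }

Anything flagged this way is what would get the manual follow-up I described
above; distributions that were already failing on the released perl don't show
up in this minimal report at all.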