Last modified: 2014-11-13 14:45:57 UTC
The VisualEditor and ULS MediaWiki extensions have recently been blessed with QA browser tests. We need them to run on the Contint Jenkins, triggered by Zuul whenever a patchset is submitted, as well as in the gate-and-submit pipeline. That needs several steps:
- describe the jobs in Jenkins
- add the triggers in the Zuul configuration
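For the Zuul side, a minimal sketch of what the trigger configuration could look like, assuming the Zuul v2 `layout.yaml` format used by Wikimedia CI (the job name below is hypothetical, not an existing job):

```yaml
# Sketch of a Zuul layout.yaml entry -- job name is an assumption.
projects:
  - name: mediawiki/extensions/UniversalLanguageSelector
    test:
      - 'browsertests-UniversalLanguageSelector'   # run on each submitted patchset
    gate-and-submit:
      - 'browsertests-UniversalLanguageSelector'   # run again before merge
```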
Raising priority and adding 'tracking' to the summary.
We will go with ULS first since we have good experience interacting with the i18n team, and they are in a European timezone just like Zeljko and me.
See also the Wiki page https://www.mediawiki.org/wiki/Continuous_integration/Browser_tests
The Jenkins job template in https://gerrit.wikimedia.org/r/#/c/86868/ would let us trigger tests.
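For context, a Jenkins job template of that kind is usually written in Jenkins Job Builder YAML. The sketch below shows the general shape only; the template name, node label, and build commands are assumptions, not the actual content of that Gerrit change:

```yaml
# Jenkins Job Builder sketch -- all names and commands here are assumptions.
- job-template:
    name: 'browsertests-{name}'
    node: contintLabsSlave
    builders:
      - shell: |
          bundle install
          bundle exec cucumber --format junit --out "$WORKSPACE/log/junit"
    publishers:
      - junit:
          results: 'log/junit/*.xml'
```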
Quick status update:
- ULS has browser tests being triggered, albeit a lot of tests end up being […]
- MobileFrontend has a patch with browser tests passing locally against a freshly installed wiki. Pending review / merge, then I will add the browser tests.
- VisualEditor browser tests depend on some work to safely start and stop a Parsoid daemon for each test. I am focusing this week on integrating the Parsoid daemon in Jenkins.
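The "safely start and stop a Parsoid daemon for each test" part could be sketched as a small shell wrapper like the one below. The daemon command is a stand-in (`sleep 60`), since the actual Parsoid invocation is not given in this thread; the point is the start / trap / cleanup pattern:

```shell
#!/bin/sh
# Sketch: keep a daemon running for the duration of a test run,
# then always stop it, even if the tests fail or the job is aborted.
# "sleep 60" stands in for the real Parsoid server command (an assumption).
sleep 60 &
DAEMON_PID=$!
trap 'kill "$DAEMON_PID" 2>/dev/null' EXIT   # stop the daemon on any exit

# ... the browser test suite would run here; we simulate a passing run
echo "browser tests done"
```

The `trap ... EXIT` is what makes the stop "safe": the daemon is killed whether the test step succeeds, fails, or is interrupted.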
This is no longer a top priority; we have most browser tests running twice per day instead. We will eventually manage to get them triggered from Gerrit, but there is a bit more work that needs to be done first.
(In reply to Antoine "hashar" Musso from comment #6)
> This is no more a top priority. We have most browsertests run twice per day
> instead.
>
> We will eventually managed to get them added in Gerrit, but there is a bit
> more work that needs to be done first.

What's the status of this? Are there any projects that have browser tests triggering on each change (even if non-voting)? If not, can you outline the blockers?
(In reply to Matthew Flaschen from comment #7)
<snip>
> What's the status of this? Are there any projects that have browser tests
> triggering on each change (even if non-voting)?
>
> If not, can you outline the blockers?

It is on hold, mostly because we lack the resources to do the integration and make the tests faster. Off the top of my head:
* make them faster
* make them runnable in parallel and aggregate the results
* rethink the way we set up the environment to run the browser tests; it is currently a huge shell snippet which is copy-pasted in different places

Feel free to take the lead. Zeljkof and Nik Everett would be able to assist. I can offer reviews / guidance as well.
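The last blocker, the copy-pasted environment snippet, could be addressed by factoring it into one sourceable file that every job includes. This is only a sketch; the file name and the variable names (`MEDIAWIKI_URL`, `BROWSER`) are assumptions about what the test harness reads:

```shell
#!/bin/sh
# browsertests-env.sh -- hedged sketch of a shared environment setup,
# replacing the snippet currently copy-pasted into each job.
# Variable names are assumptions about the browser-test harness.
set -eu
: "${MEDIAWIKI_URL:=http://127.0.0.1/wiki/}"   # default target wiki
: "${BROWSER:=firefox}"                        # default browser to drive
export MEDIAWIKI_URL BROWSER
echo "browser tests will target ${MEDIAWIKI_URL} using ${BROWSER}"
```

A job would then do `. ./browsertests-env.sh` instead of carrying its own copy, so a change to the setup happens in exactly one place.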