Last modified: 2014-08-13 13:56:47 UTC
Moving discussion to here for the record:

Some time ago we noted that while we have some reasonable unit tests in some repos, and some reasonable browser tests in some repos, we lack any consistent tests at the API or services level. Having such tests would be useful not only as regression tests in the test environments, but also for monitoring the availability of key services in the production environments.

UploadWizard on Commons is a case of particular note, and for UW the upload function in particular: http://commons.wikimedia.org/w/api.php?action=help&modules=upload with the "stash" parameter.

Matt Flaschen is also doing things along those lines: https://gist.github.com/mattflaschen/7904894
https://gist.github.com/mattflaschen/7904894 is not an availability test, but wikitools (https://code.google.com/p/python-wikitools/), the library it uses, is a candidate for writing such availability tests (though there are of course other possible libraries).
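As a rough illustration of what such an availability test could look like (library-agnostic, using only the Python standard library): it queries the MediaWiki API's siteinfo endpoint and treats a top-level "error" object or a missing sitename as a failure. The target URL, function names, and pass/fail criteria here are illustrative assumptions, not an agreed design.

```python
import json
import urllib.request

API_URL = "https://commons.wikimedia.org/w/api.php"  # example target wiki


def check_siteinfo(payload):
    """Return (ok, detail) for a parsed action=query&meta=siteinfo response.

    A healthy response carries query.general.sitename; an API-level
    failure carries a top-level "error" object even though the HTTP
    status may still be 200.
    """
    if "error" in payload:
        return False, payload["error"].get("code", "unknown error")
    sitename = payload.get("query", {}).get("general", {}).get("sitename")
    if not sitename:
        return False, "siteinfo missing from response"
    return True, sitename


def fetch_siteinfo(api_url=API_URL):
    """Hit the live API; only run this against a wiki you may probe."""
    url = api_url + "?action=query&meta=siteinfo&format=json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    ok, detail = check_siteinfo(fetch_siteinfo())
    print("OK" if ok else "FAIL", detail)
```

Keeping the response check in a pure function (check_siteinfo) makes the pass/fail logic unit-testable without touching the network, so the same code can back both a CI job and a production monitoring probe.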
Pywikibot has a decently large suite of tests that actually use the live API on WMF sites: https://github.com/wikimedia/pywikibot-core/tree/master/tests

That said, what's the goal? Most API modules should have proper unit tests that check their output.
Presumably it's intended as an end-to-end integration test. It would apply to cases where:
* The commit itself is not broken, or at least the Jenkins-run unit tests don't catch the problem.
* The basic "is the homepage up?" checks (e.g. http://status.wikimedia.org/) don't catch it, since the site itself still works.
* A particular API doesn't work as deployed, e.g. due to a specific production database outage. This has probably happened in some cases.
(In reply to comment #2)
> That said, what's the goal? Most API modules should have proper unit tests
> that check their output.

Two goals:

The general goal is to provide test coverage along the lines of Mike Cohn's 'test pyramid': http://www.mountaingoatsoftware.com/blog/the-forgotten-layer-of-the-test-automation-pyramid . We have at least some coverage at the top and bottom of the pyramid, but lack coverage at the middle.

The specific goal is to monitor certain services in production in a reliable way using the WMF continuous integration tools. For example, some time ago Commons was configured incorrectly and media uploads stopped working, even though there was nothing wrong with the code and UploadWizard continued to work properly in test environments.
(Made this a tracking bug for the sub-tasks, such as creating API-level tests for UploadWizard. Please report those bugs and make them block this one. For clarity: those bugs should be something like "UploadWizard API test for some specific interesting/fragile part of the code".)
Created bug 58555 to track the creation of basic API smoke tests for the Upload Wizard. Just updated that bug with my latest findings.
Chris (and others), we now have some tests that are using the API. Can this be resolved, or is there something else to be done?
Željko, which tests are you referring to?
Jenkins jobs:
https://integration.wikimedia.org/ci/job/UploadWizard-api-commons.wikimedia.beta.wmflabs.org/
https://integration.wikimedia.org/ci/job/UploadWizard-api-commons.wikimedia.org/

Job configuration:
https://github.com/wikimedia/integration-jenkins-job-builder-config/blob/cloudbees/jobs.yaml#L255-L270
https://github.com/wikimedia/integration-jenkins-job-builder-config/blob/cloudbees/job_template.yaml#L81-L126
Okay, those are bug 58555's tests, right? This is currently a tracking bug for API-level tests like these in general. I don't know whether it should be closed now (and potentially reopened if tests are wanted for other APIs) or left open.
Yes, looks like I was thinking of bug 58555. Anyway, this tracking bug does not track any other bugs, so I vote to resolve it, unless somebody adds a bug that needs to be tracked.