[webkit-reviews] review denied: [Bug 98422] [TestResultServer] Add support for non-Chromium TestExpectations files : [Attachment 167131] Patch

bugzilla-daemon at webkit.org
Wed Oct 10 11:29:26 PDT 2012


Ojan Vafai <ojan at chromium.org> has denied Zan Dobersek
<zandobersek at gmail.com>'s request for review:
Bug 98422: [TestResultServer] Add support for non-Chromium TestExpectations
files
https://bugs.webkit.org/show_bug.cgi?id=98422

Attachment 167131: Patch
https://bugs.webkit.org/attachment.cgi?id=167131&action=review

------- Additional Comments from Ojan Vafai <ojan at chromium.org>
View in context: https://bugs.webkit.org/attachment.cgi?id=167131&action=review


Thanks for working on this! This stuff is crazy complicated.

Overall, my comments are mostly about style. The only correctness concern I
have is the early "return false" in processExpectations. I've long wished that
we could avoid duplicating all the python logic for this, but this patch is a
fine interim step to keep things working until then.

What I had in mind is that we'd have some python code that would generate JSON
representing the expectations/modifiers for each test on each platform.
Something like:
{
    "fast": {
        "ChromiumLion": { "buildTypes": [...], "bugs": [...], "expectations": [...] },
        "AppleLion": {...},
        "forms": {
            "ChromiumLion": { "buildTypes": [...], "bugs": [...], "expectations": [...] },
            "AppleMountainLion": {...}
        }
    }
}

Once we have the python code generating this global expectations JSON file,
the flakiness dashboard just needs to walk the trie and doesn't need to
duplicate (and, more importantly, keep up to date with) the python logic.
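
To be concrete about what "walk this trie" could look like on the dashboard
side, here is a rough sketch. The format and the platform-vs-subdirectory
check are only illustrative (a real format would probably want an explicit
marker for platform entries rather than sniffing for an "expectations" key):

function walkExpectationsTrie(node, path, callback)
{
    Object.keys(node).forEach(function(key) {
        var value = node[key];
        // Treat entries carrying an "expectations" field as per-platform data
        // for the test/directory at |path|; recurse into everything else.
        if (value && value.expectations)
            callback(path, key, value);
        else
            walkExpectationsTrie(value, path ? path + '/' + key : key, callback);
    });
}

// Usage (expectationsJson being the generated file parsed into an object):
// walkExpectationsTrie(expectationsJson, '', function(testPath, platform, data) { ... });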

As to this patch, at a high level: I'm trying to incrementally move this code
away from being a big group of global functions toward being more
object-oriented. One place you could maybe do that here is to make the trie a
proper class.

function TestTrie()
TestTrie.prototype.addTest(test)  // instead of addTestToTrie
TestTrie.prototype.forEach(callback)  // instead of allTestsTrieTraversal
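
Fleshed out just a little (purely a sketch; the internal node representation
and the leaf marker are placeholders, not a prescription):

function TestTrie()
{
    this._trie = {};
}

TestTrie.prototype.addTest = function(test)
{
    // Create one nested object per path component and mark the leaf.
    var components = test.split('/');
    var node = this._trie;
    for (var i = 0; i < components.length - 1; i++) {
        var component = components[i];
        if (typeof node[component] != 'object')
            node[component] = {};
        node = node[component];
    }
    node[components[components.length - 1]] = true;
};

TestTrie.prototype.forEach = function(callback)
{
    // Depth-first walk calling callback(fullTestPath) for every test, like
    // allTestsTrieTraversal does today (startingTriePath handling omitted).
    function traverse(node, path)
    {
        if (node === true) {
            callback(path);
            return;
        }
        for (var component in node)
            traverse(node[component], path ? path + '/' + component : component);
    }
    traverse(this._trie, '');
};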

> Tools/TestResultServer/static-dashboards/dashboard_base.js:566
> +    (function requestExpectationsFile() {

Why not request all the expectations files in parallel? Won't that improve
dashboard load time? I think it would also simplify the code a bit.
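
Rough sketch of what I mean (the helper names here, requestExpectationsFile,
processExpectations and the "done loading" hook, are placeholders for whatever
the dashboard actually uses; the point is just the counter):

    var remaining = platformsToFetch.length;
    platformsToFetch.forEach(function(platform) {
        // Fire all requests up front; only clear g_waitingOnExpectations
        // once the last response has been processed.
        requestExpectationsFile(platform, function(expectationsText) {
            processExpectations(platform, expectationsText);
            if (!--remaining) {
                g_waitingOnExpectations = false;
                // ...then kick whatever the "finished loading" path is.
            }
        });
    });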

> Tools/TestResultServer/static-dashboards/dashboard_base.js:569
> +	       logTime('requestExpectationsFiles: ', start);

Nit: here and elsewhere. I've been trying to cut back on the amount of stuff we
always spew to the console. Logging these things is not much more useful than
what you can get automatically with modern dev tools (e.g. network panel +
profiler).

> Tools/TestResultServer/static-dashboards/dashboard_base.js:622
> +    g_waitingOnExpectations = isLayoutTestResults() && !isTreeMap();

lol. whoops!

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:283
> +function platformsTreeTraversal(callback)

I find the non-verbness of this name confusing. How about
traversePlatformsTree?

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:287
> +	   for (var i = 0; i < platformsList.length; i++) {

Nit: How about using platformsList.forEach(...)?
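
That is, something like:

    platformsList.forEach(function(platform) {
        // ...same body as the current for loop...
    });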

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:332
> +function determineWKPlatform(builderName, basePlatform) {

Nit: curly on new line

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:439
> +function allTestsTrieTraversal(callback, startingTriePath) {

Ditto: How about traverseAllTestsTrie?

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:663
> +    for (var i = 1; i < platformsList.length; i++)

Nit: use forEach

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:692
> +	       for (var i = 0; i < subPlatformsList.length; i++)

Nit: use forEach

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:861
> +	   if (platform.platformModifierUnions) {
> +	       processExpectationsForPlatform(platformName, g_expectationsByPlatform[platformName]);
> +	       return false;
> +	   }

I'm confused by this. For Chromium, we don't look at subplatforms? Shouldn't we
return true here?

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:866
> +		   if (fallbackPlatform in g_expectationsByPlatform)

Isn't it an error if fallbackPlatform isn't in g_expectationsByPlatform? I
think it makes sense to fail gracefully, but maybe we should console.error?
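Something like this (just the shape of it; where exactly the error fires
depends on the surrounding fallback loop):

    if (!(fallbackPlatform in g_expectationsByPlatform))
        console.error('No TestExpectations found for fallback platform: ' + fallbackPlatform);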

> Tools/TestResultServer/static-dashboards/flakiness_dashboard.js:871
> +	   if (platformName in g_expectationsByPlatform)

ditto

