[webkit-reviews] review denied: [Bug 184419] Write a script that detects chart changes by using v3 API. : [Attachment 337503] Patch
bugzilla-daemon at webkit.org
Fri Apr 13 00:49:54 PDT 2018
Ryosuke Niwa <rniwa at webkit.org> has denied dewei_zhu at apple.com's request for
review:
Bug 184419: Write a script that detects chart changes by using v3 API.
https://bugs.webkit.org/show_bug.cgi?id=184419
Attachment 337503: Patch
https://bugs.webkit.org/attachment.cgi?id=337503&action=review
--- Comment #2 from Ryosuke Niwa <rniwa at webkit.org> ---
Comment on attachment 337503
--> https://bugs.webkit.org/attachment.cgi?id=337503
Patch
View in context: https://bugs.webkit.org/attachment.cgi?id=337503&action=review
> Websites/perf.webkit.org/public/v3/models/measurement-set.js:293
> + const asyncTaskIsAvailable = typeof module == 'undefined' && typeof window == 'undefined';
This should be a helper function / static variable on AsyncTask itself.
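A minimal sketch of what that could look like, assuming a static method named `isAvailable` (the name is an assumption, not the actual AsyncTask API):

```javascript
// Hypothetical sketch: expose the environment check as a static helper on
// AsyncTask itself, so call sites don't repeat the typeof checks.
class AsyncTask {
    // The check is true only when the script runs neither as a CommonJS
    // module nor inside a browser window.
    static isAvailable() {
        return typeof module == 'undefined' && typeof window == 'undefined';
    }
}

// Call sites then read:
// if (AsyncTask.isAvailable()) { /* schedule work on an AsyncTask */ }
```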
> Websites/perf.webkit.org/tools/run-analysis.js:2
> +'use strict';
No need to use strict mode.
> Websites/perf.webkit.org/tools/run-analysis.js:26
> + const configs = JSON.parse(fs.readFileSync(options['--change-detection-config-json'], 'utf-8'));
This isn't really a list/set of configurations, so it should be called
changeDetectionConfig instead.
> Websites/perf.webkit.org/tools/run-analysis.js:40
> + constructor(configs, remoteAPI, logger)
Ditto about calling the first argument simply config or changeDetectionConfig.
> Websites/perf.webkit.org/tools/run-analysis.js:42
> + this._configs = configs;
We should be parsing & validating the parsed config here.
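A sketch of such validation, so a malformed JSON file fails fast with a clear message. The patch reads `confirmTaskRepeatCount` from the config later; any other field names would be illustrative assumptions:

```javascript
// Hypothetical validation helper for the parsed change detection config.
// Only `confirmTaskRepeatCount` is known from the patch; treat the rest
// of the shape as an assumption.
function validateChangeDetectionConfig(config) {
    if (!config || typeof config != 'object')
        throw new Error('The change detection config must be a JSON object');
    if (typeof config.confirmTaskRepeatCount != 'number')
        throw new Error('"confirmTaskRepeatCount" must be a number');
    return config;
}
```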
> Websites/perf.webkit.org/tools/run-analysis.js:53
> + try {
> + manifest = await Manifest.fetch();
> + } catch (reason) {
try-catch should be happening in the loop.
In general, we should almost never use try-catch in our codebase.
They hide errors & make debugging much harder.
r- because of this.
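The suggested structure, sketched with illustrative names: catch once per iteration of the outer loop rather than deep inside the helpers, so one failing configuration is logged without aborting the whole run:

```javascript
// Sketch of the requested structure (function names are illustrative):
// helpers let errors propagate; the outer loop catches per iteration.
async function analyzeAll(configurations, analyzeOne, logger) {
    for (const configuration of configurations) {
        try {
            await analyzeOne(configuration);
        } catch (error) {
            // One failure is logged; the remaining configurations still run.
            logger.log(`Failed analyzing "${configuration}" due to ${error}`);
        }
    }
}
```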
> Websites/perf.webkit.org/tools/run-analysis.js:62
> + } catch(error) {
> + this._logger.log(`Failed analyzing "${metric.fullName()}" on "${platform.name()}" due to ${error}`);
> + }
Ditto. We shouldn't be try-catching it here.
Let it propagate all the way up to loop instead.
> Websites/perf.webkit.org/tools/run-analysis.js:81
> + configurations.push([platform, metric]);
We should assert that we found valid platform & metric.
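A minimal sketch of that assertion (helper name and call-site shape are hypothetical), failing loudly instead of silently pushing `undefined` into the list:

```javascript
// Hypothetical helper: throw when a configured platform or metric name
// does not resolve to an object from the manifest.
function assertFound(value, kind, name) {
    if (!value)
        throw new Error(`Could not find ${kind} "${name}" in the manifest`);
    return value;
}

// The call site would then look something like:
// configurations.push([assertFound(platform, 'platform', platformName),
//                      assertFound(metric, 'metric', metricName)]);
```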
> Websites/perf.webkit.org/tools/run-analysis.js:103
> + if (!segmentedValues || !segmentedValues.length)
> + return;
We should probably log something here.
> Websites/perf.webkit.org/tools/run-analysis.js:106
> + const ranges = Statistics.findRangesForChangeDetectionsWithWelchsTTest(rawValues, segmentedValues, 0.99).map((range) => {
> + return {
Just use the (range) => ({...}) trick.
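For reference, the trick: parenthesizing the object literal lets the arrow function return it directly, without a block body and an explicit return statement:

```javascript
// Wrapping the object literal in parentheses avoids the
// `(range) => { return {...}; }` block-body form.
const describeRange = (range) => ({
    startPoint: range[0],
    endPoint: range[range.length - 1],
});
```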
> Websites/perf.webkit.org/tools/run-analysis.js:117
> + const allAnalysisTasks = await AnalysisTask.fetchByPlatformAndMetric(platform.id(), metric.id());
> + const analysisTasks = allAnalysisTasks.filter((task) => task.startTime() && task.endTime());
I'm pretty sure all analysis tasks fetched by
AnalysisTask.fetchByPlatformAndMetric are guaranteed to have startTime &
endTime.
> Websites/perf.webkit.org/tools/run-analysis.js:125
> + for (const range of ranges) {
This is O(N^2)... it probably doesn't matter though.
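For reference, the nested-loop shape under review amounts to this sketch (names are illustrative); two ranges overlap exactly when each starts before the other ends:

```javascript
// Closed-interval overlap: a and b overlap when each starts
// no later than the other ends.
function overlaps(a, b) {
    return a.start <= b.end && b.start <= a.end;
}

// O(N*M) pairing of ranges with the tasks they overlap.
function findOverlapping(ranges, tasks) {
    return ranges.map((range) => tasks.filter((task) => overlaps(range, task)));
}
```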
> Websites/perf.webkit.org/tools/run-analysis.js:130
> + this._logger.info(`Found overlapped range: ${range.description} with task id: ${task.id()}`);
This is going to be really spammy because we'll keep finding the same range
over & over once we've created an analysis task.
> Websites/perf.webkit.org/tools/run-analysis.js:137
> + const filteredRanges = ranges.filter((range) => range.endPoint.time >= startTime && !range.overlappingAnalysisTasks.length)
What's the point of range.endPoint.time >= startTime?
> Websites/perf.webkit.org/tools/run-analysis.js:141
> + let summary, range = null;
> + for (range of filteredRanges) {
We're finding the last range??
Why don't we find the range with the biggest change instead?
I'm pretty sure that's what I had in v2.
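A sketch of picking the range with the largest change instead of the last one in iteration order; `valueChange` is an assumed property holding the magnitude of the detected shift:

```javascript
// Hypothetical selection: keep the range whose absolute change is
// largest, rather than whichever range happens to be iterated last.
function rangeWithBiggestChange(ranges) {
    let best = null;
    for (const range of ranges) {
        if (!best || Math.abs(range.valueChange) > Math.abs(best.valueChange))
            best = range;
    }
    return best;
}
```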
> Websites/perf.webkit.org/tools/run-analysis.js:158
> + const content = await AnalysisTask.create(summary, range.startPoint.id, range.endPoint.id);
> + const analysisTask = await AnalysisTask.fetchById(content['taskId']);
> + const startCommitSet = range.startPoint.commitSet();
> + const endCommitSet = range.endPoint.commitSet();
> + await TestGroup.createAndRefetchTestGroups(analysisTask, 'Confirm', this._configs.confirmTaskRepeatCount, [startCommitSet, endCommitSet]);
This will create a dangling analysis task if the creation of an analysis task
succeeds but the creation of a test group fails.
Please use /privileged-api/create-test-group to create both of them at once.