Mozilla, Data Visualization

Mission Control

Oct 6th, 2017

Time for an overdue post on the mission control project that I’ve been working on for the past few quarters, since I transitioned to the data platform team.

One of the gaps in our data story when it comes to Firefox is being able to see how a new release is doing in the immediate hours after release. Tools like crashstats and the telemetry evolution dashboard are great, but it can take many hours (if not days) before you can reliably see that there is an issue in a metric that we care about (number of crashes, say). This is just too long: such delays mean we unnecessarily hold back a release even when it is safe, out of paranoia that there might be some lingering problem we’re still waiting to see reported. And if, despite our abundant caution, a problem did slip through, it would take us some time to recognize it and roll out a fix.

Enter mission control. By hooking up a high-performance Spark streaming job directly to our ingestion pipeline, we can now detect within moments whether Firefox is performing unacceptably in the field according to a particular measure.

To make the volume of data manageable, we create a grouped data set containing the raw counts of the various measures (e.g. main crashes, content crashes, slow script dialog counts) for each unique combination of dimensions (e.g. platform, channel, release).
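
To give a flavour of what that grouped data set looks like, here’s a minimal sketch in plain Python; the field names and records are hypothetical, and the real implementation is a Spark streaming job rather than a loop like this:

# Sketch only: a raw count per measure for each unique combination of dimensions.
from collections import Counter

DIMENSIONS = ("platform", "channel", "version")
MEASURES = ("main_crashes", "content_crashes", "slow_script_dialogs")

# made-up records standing in for the ingestion stream
incoming = [
    {"platform": "windows", "channel": "release", "version": "56.0",
     "main_crashes": 1, "content_crashes": 3, "slow_script_dialogs": 0},
    {"platform": "windows", "channel": "release", "version": "55.0.3",
     "main_crashes": 0, "content_crashes": 1, "slow_script_dialogs": 2},
]

counts = Counter()
for record in incoming:
    key = tuple(record[dim] for dim in DIMENSIONS)
    for measure in MEASURES:
        counts[(key, measure)] += record.get(measure, 0)
# counts now maps (dimension combination, measure) -> raw count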

Of course, all this data is not so useful without a tool to visualize it, which is what I’ve been spending the majority of my time on. The idea is to be able to go from a top-level description of what’s going on with a particular channel (release, for example) all the way down to a detailed view of how a measure has been performing over a time interval:

This particular screenshot shows the volume of content crashes (sampled every 5 minutes) over the last 48 hours on Windows release. You’ll note that the later version (56.0) appears to be much crashier than the earlier one (55.0.3), which would seem to be a problem, except that the populations are not directly comparable (the profile of a user still on an older version of Firefox is rather different from that of one who has already upgraded). This is one of the still-unsolved problems of this project: finding a reliable, automatable baseline for what an “acceptable result” for any particular measure might be.

Even so, the tool is already useful for exploring a bunch of data quickly, and it has been progressing rapidly over the last few weeks. And like almost everything Mozilla does, both the source and the dashboard are open to the public. I’m planning on flagging some easier bugs for newer contributors to work on in the next couple of weeks, but in the meantime, if you’re interested in this project and want to get involved, feel free to look us up on irc.mozilla.org #missioncontrol (I’m there as ‘wlach’).

Mozilla

Functional is the future

Aug 28th, 2017

Just spent well over an hour tracking down a silly bug in my code. For the mission control project, I wrote this very simple API method that returns a cached data structure to our front end:

def measure(request):
    channel_name = request.GET.get('channel')
    platform_name = request.GET.get('platform')
    measure_name = request.GET.get('measure')
    interval = request.GET.get('interval')
    if not all([channel_name, platform_name, measure_name]):
        return HttpResponseBadRequest("All of channel, platform, measure required")
    data = cache.get(get_measure_cache_key(platform_name, channel_name, measure_name))
    if not data:
        return HttpResponseNotFound("Data not available for this measure combination")
    if interval:
        try:
            min_time = datetime.datetime.now() - datetime.timedelta(seconds=int(interval))
        except ValueError:
            return HttpResponseBadRequest("Interval must be specified in seconds (as an integer)")

        # Return any build data in the interval
        empty_buildids = set()
        for (build_id, build_data) in data.items():
            build_data['data'] = [d for d in build_data['data'] if d[0] > min_time]
            if not build_data['data']:
                empty_buildids.add(build_id)

        # don't bother returning empty indexed data
        for empty_buildid in empty_buildids:
            del data[empty_buildid]

    return JsonResponse(data={'measure_data': data})

As you can see, it takes three required parameters (channel, platform, and measure) and one optional one (interval), picks out the required data structure, filters it a bit, and returns it. This is almost what we wanted for the frontend; unfortunately, the time zone information isn’t quite right, since the date strings that are returned don’t tell the frontend that they’re in UTC. They need a ‘Z’ appended to them for that.

After a bit of digging, I found out that Django’s JSON serializer will only add the ‘Z’ if tzinfo is set on the datetime object. So I figured out a simple pattern for adding that (using the dateutil library, which we are fortunately already using):

from dateutil.tz import tzutc
datetime.datetime.fromtimestamp(mydatestamp.timestamp(), tz=tzutc())

I tested this quickly on the python console and it seemed to work great. But when I added the code to my function, the unit tests mysteriously failed. Can you see why?

for (build_id, build_data) in data.items():
    # add utc timezone info to each date, so django will serialize a
    # 'Z' to the end of the string (and so javascript's date constructor
    # will know it's utc)
    build_data['data'] = [
        [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:] for
        d in build_data['data'] if d[0] > min_time
    ]

Trick question: there’s actually nothing wrong with this code. But if you look at the block in context (see the top of the post), you’ll see that it’s only executed if an interval is specified, which isn’t always the case. The first case that my unit tests exercised didn’t specify an interval, so fail they did. It wasn’t immediately obvious to me why this was happening, so I went on a wild goose chase trying to figure out how the Django context might have been responsible for the unexpected output, before realizing my basic logic error.

This was fairly easily corrected (my updated code applies the datetime mapping unconditionally to the set of optionally-filtered results), but it perfectly illustrates my issue with idiomatic Python: while the language itself has constructs like map and reduce that support the functional programming model, it strongly steers you towards writing things in an imperative style that makes costly and annoying mistakes like this much easier to make. Yes, list and dictionary comprehensions are nice and compact, but they start to break down in the more complex cases.
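
For reference, here’s a sketch of the corrected flow (an approximation of the fix described above, not the exact production code): filter optionally, then always apply the timezone mapping.

# Sketch of the corrected logic inside the view: the optional filtering and the
# (now unconditional) timezone mapping are separate steps.
for (build_id, build_data) in data.items():
    datums = build_data['data']
    if interval:
        datums = [d for d in datums if d[0] > min_time]
    build_data['data'] = [
        [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:]
        for d in datums
    ]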

As an experiment, I wrote up what this function might look like in a pure functional style with immutable data structures:

import copy

def transform_and_filter_data(build_data):
    new_build_data = copy.copy(build_data)
    new_build_data['data'] = [
        [datetime.datetime.fromtimestamp(d[0].timestamp(), tz=tzutc())] + d[1:] for
        d in build_data['data'] if d[0] > min_time
    ]
    return new_build_data
transformed_build_data = {k: v for k, v in {k: transform_and_filter_data(v) for k, v in data.items()}.items() if len(v['data']) > 0}

A work of art it isn’t — and definitely not “pythonic”. Compare this to a similar piece of code written in Javascript (ES6) with lodash (using a hypothetical tzified function):

let transformedBuildData = _.filter(_.mapValues(data, (buildData) => ({
    ...buildData,
    data: buildData.data
      .filter(datum => datum[0] > minTimestamp)
      .map(datum => [tzified(datum[0])].concat(datum.slice(1)))
  })),
  (data, buildId) => data.data.length > 0);

A little bit easier to understand, but more importantly (to me anyway) it comes across as idiomatic and natural in a way that the Python version just doesn’t. I’ve been happily programming Python for the last 10 years, but it’s increasingly feeling like time to move on to greener pastures.

Mozilla, mozregression

mozregression’s new mascot

Jul 31st, 2017

Spent a few hours this morning on a few housekeeping issues with mozregression. The web site was badly in need of an update (it was full of references to obsolete stuff like B2G and codefirefox.com) and the usual pile of fixes motivated a new release of the actual software. But most importantly, mozregression now has a proper application icon / logo, thanks to Victoria Wang!

One of the nice parts about working at Mozilla is the flexibility it offers to just hack on stuff that’s important, whether or not it’s part of your formal job description. Maintaining mozregression is pretty far outside my current set of responsibilities (or even interests), but I keep it going because it’s a key tool used by developers here and no one else seems willing to take it over. Fortunately, tools like appveyor and pypi keep the time suckage to a mostly-reasonable level.

Mozilla

Taking over an npm package: sanity prevails

Jul 13th, 2017

Sometimes problems are easier to solve than expected.

For the last few months I’ve been working on the front end of a new project called Mission Control, which aims to chart lots of interesting live information in something approximating realtime. Since this is a greenfield project, I thought it would make sense to use the currently in-vogue framework at Mozilla (react) along with our standard visualization library, metricsgraphics.

Metricsgraphics is great, but its jquery-esque API is somewhat at odds with the react way. The obvious solution to this problem is to wrap its functionality in a react component, and a quick google search determined that several people have tried to do exactly that, the most popular attempt being one called (obviously) react-metrics-graphics. Unfortunately, it hadn’t been updated in quite some time and some pull requests (including ones implementing features I needed for my project) weren’t being responded to.

I expected this to be pretty difficult to resolve: I had had no interaction with the author (Carter Feldman) before, but based on my past experiences in free software I was expecting stonewalling, leaving me no choice but to fork the package and give it a new name, a rather unsatisfying end result.

But, hey, let’s keep an open mind on this. What does google say about unmaintained npm packages? Oh what’s this? They actually have a policy?

tl;dr: You email the maintainer (politely) and CC support@npmjs.org about your interest in helping maintain the software. If you’re unable to come up with a resolution on your own, they will intervene.

So I tried that. It turns out that Carter was really happy to hear that Mozilla was interested in taking over maintenance of this project, and not only gave me permission to start publishing newer versions to npm, but even transferred his repository over to Mozilla (so we could preserve issue and PR history). The project’s new location is here:

https://github.com/mozilla/react-metrics-graphics

In hindsight, this is obviously the most reasonable outcome and I’m not sure why I was expecting anything else. Is the node community just friendlier than other areas I’ve worked in? Have community standards improved generally? In any case, thank you, Carter, for a great piece of software; hopefully it will thrive in its new home. :P

Counting

The vastness

Jul 8th, 2017

Had a good all hands with the rest of Mozilla in San Francisco (at least those able and willing to attend due to the current political situation in the U.S.). I stayed a few extra days to hang out with some of my friends who had moved to S.F. On Sunday we went to Muir Woods, where I took this picture:

It occurred to me as I took that photo that pretty much every sensory receptor in my optic nerve was registering the signal of some kind of life. Thousands of beings (trees, clover, moss, lichens), in turn made up of trillions upon trillions of tiny beings (cells, bacteria), all conscious and interacting with each other in ways that I can barely begin to understand.

Mozilla, Docker

Using Docker to run automated tests

Jun 2nd, 2017

A couple of months ago, I joined the Mozilla Data Platform team to work on our Telemetry and automated data collection services. This has been an interesting transition for me, and a natural jumping-off point from my work on Perfherder. Now, instead of manipulating mere 10s of gigabytes worth of fairly regular data, I’m working with 100s of terabytes of noisy data with a much larger number of dimensions. :P It’s been interesting so far.

One of the first things I decided to work on was improving our unit testing story around a few of our primary packages for data analysis/etl: python_moztelemetry (a library we use for running custom spark jobs against Telemetry data) and telemetry-batch-view (a set of scala jobs we run against the main telemetry data store to create a useful set of aggregations that are easily queried with tools like redash).

It turns out that these tools interact with several larger / more involved pieces than I’m used to dealing with (such as hbase and thrift). For continuous integration/automation, we already had a set of travis scripts to install and reproduce the environment needed to test these parts, but there was no straightforward way to do this locally. The third time I found myself creating an Ubuntu virtual machine to reproduce this environment locally (long story), I figured it was finally time to investigate automating that setup procedure and making it easier for new developers to get into these projects.

I hadn’t used it much before, but Docker seemed like a fairly obvious choice. Small, simple, and Linuxy? Sign me up.

I’m pretty happy with how things turned out, but there were a few caveats. Docker is more of a general purpose tool for building environments for running things, whether that be an apache webserver or a jabber messaging doohickey (whereas e.g. something like travis is basically a domain-specific language for creating and running automated tests). There were a few tricks I needed to employ to make the whole testing process smooth in both cases, which I’ll document here for posterity:

  1. You can ADD a set of files / directories to a docker environment inside your Dockerfile, but if you want your set of tests to pick up any changes made since the environment was created, you really should mount your testing directory inside the container using the -v option.
  2. If you need to download/install a piece of software when building the docker container, use the RUN directive instead of ADD. This will speed up rebuilding the container while you’re iterating on it (because you can take advantage of the Docker layers cache).
  3. You almost certainly want to create a script (example) to streamline all the steps of running the tests: this will make running the tests easier for anyone wanting to contribute to your project and reduce the amount of documentation you will have to write (a sketch of such a script follows this list).
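
As an illustration of points 1 and 3, here’s a minimal sketch of what such a wrapper script might look like; the image name and test command are hypothetical, and a plain shell script would work just as well:

#!/usr/bin/env python
# Hypothetical wrapper script: run the test suite inside a pre-built Docker
# image, mounting the local checkout so the tests pick up uncommitted changes.
import os
import subprocess
import sys

IMAGE = "myproject-test-env"  # hypothetical image built from the project Dockerfile
SRC_DIR = os.path.dirname(os.path.abspath(__file__))

cmd = [
    "docker", "run", "--rm",
    "-v", "{}:/src".format(SRC_DIR),      # point (1): mount the source tree
    IMAGE,
    "python", "-m", "pytest", "/src/tests",  # hypothetical test command
]
sys.exit(subprocess.call(cmd))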

The relevant files and documentation are in the repositories linked above.

Mozilla, Treeherder, Taskcluster

Easier reproduction of intermittent test failures in automation

Apr 5th, 2017

As part of the Stockwell project, I’ve been hacking on ways to make it easier for developers to diagnose failures of our tests in automation. It’s often very difficult to locally reproduce an intermittent failure we see in Treeherder, since the environment is so different, but historically it has been a big hassle to get access to the machines we use in automation, for various reasons.

One option that rolled out last year was the so-called one-click loaner, which enabled developers to sign out a virtual machine instance identical to the ones used to run unit tests (at least if the tests are running on Taskcluster, which is increasingly often the case), then execute their particular test case with whatever extra debugging options they would find useful. This is a big step forward, but it’s still quite a bit of hassle, since it requires a bunch of manual work on the part of the developer to interact with the instance.

What if we could just re-run the particular test an arbitrary number of times with whatever options we wanted, simply by clicking on a few buttons on Treeherder? I’ve been exploring this for the first few months of 2017 and I’ve come up with a prototype which I think is ready for people to start playing with.

The user interface to this is pretty straightforward. Just find a job you want to retrigger in Treeherder:

Then select the ’…’ option in the panel below and press “Custom Action…”:

You should get a small piece of JSON to edit, which corresponds to the configuration for the retriggered job:

The main field to edit is “path”. You should set this to the name of the test you want to try retriggering, for example dom/animation/test/css-transitions/test_animation-ready.html. You can also set custom Firefox preferences and environment variables, to turn on different types of debugging.

Unfortunately as usual with a new feature at Mozilla, there are a bunch of limitations and caveats:

  • This depends on functionality that’s only in Taskcluster, so buildbot jobs are not supported.
  • No support for Android yet. In combination with the above limitation, this implies that this functionality only works on Linux (at least until other platforms are moved to Taskcluster, which hopefully isn’t that far off).
  • Browser chrome tests fail in mysterious ways if run repeatedly (bug 1347654).
  • Only reftest and mochitest are currently supported. XPCShell support is blocked by the lack of support in its harness for running a job repeatedly (bug 1347696). Web Platform Tests need the requisite support in mozharness for just setting up the tests without running them — the same issue that prevents us from debugging such tests with a one-click loaner (bug 1348833).

Aside from fixing the above limitations, the following features would also be really nifty to have:

  • Ability to trigger a custom job as part of a try push (i.e. not needing to retrigger off an existing job)
  • Run these jobs under rr, and provide a way to log in and interactively debug when the problem is actually reproduced.

I am actually in the process of moving to another team @ Mozilla (more on that in another post), so I probably won’t have a ton of time to work on the above — but I’d be happy to help anyone who’s interested in developing this idea further.

A special shout out to the Taskcluster team for helping me with the development of this feature: in particular the action task implementation from Jonas Finnemann Jensen that made it possible to develop this feature in the first place.

Mozilla, Treeherder

Cancel all the things

Feb 7th, 2017

I just added a feature to Treeherder which lets you cancel a set of jobs (say, from a botched try push) much more easily. I’m hopeful that this will help keep our resource usage on try more under control.

It uses the “pinboard” feature of Treeherder which very few people are familiar with, so I made a very short video tutorial on how to make use of this feature and put it on the Joy of Automation channel:

Happy cancelling!

Mozilla, Treeherder

Training an autoclassifier

Nov 28th, 2016

Here at Mozilla, we’ve accepted that a certain amount of intermittent failure in our automated testing of Firefox is to be expected. That is, for every push, a subset of the tests that we run will fail for reasons that have nothing to do with the quality (or lack thereof) of the push itself.

On the main integration branches that developers commit code to, we have dedicated staff and volunteers called sheriffs who attempt to distinguish these expected intermittent failures from genuine problems through a manual classification process using Treeherder. On any given push, you can usually find some failed jobs that have stars beside them; this is the work of the sheriffs, indicating that a job’s failure is “nothing to worry about”:

This generally works pretty well, though unfortunately it doesn’t help developers who need to test their changes on Try, where pushes have the same sorts of failures but no sheriffs to watch them or interpret the results. For this reason (and a few others which I won’t go into detail on here), there’s been much interest in having Treeherder autoclassify known failures.

We have a partially implemented version that attempts to do this based on structured (failure line) information, but we’ve had some difficulty creating a reasonable user interface to train it. Sheriffs are used to being able to quickly tag many jobs with the same bug. Having to go through each job’s failure lines and manually annotate each of them is much more time consuming, at least with the approaches that have been tried so far.

It’s quite possible that this is a solvable problem, but I thought it might be an interesting exercise to see how far we could get training an autoclassifier with only the existing per-job classifications as training data. With some recent work I’ve done on refactoring Treeherder’s database, getting a complete set of per-job failure line information is only a small SQL query away:

select bjm.id, bjm.bug_id, tle.line from bug_job_map as bjm
  left join text_log_step as tls on tls.job_id = bjm.job_id
  left join text_log_error as tle on tle.step_id = tls.id
  where bjm.created > '2016-10-31' and bjm.created < '2016-11-24' and bjm.user_id is not NULL and bjm.bug_id is not NULL
  order by bjm.id, tle.step_id, tle.id;

Just to give some explanation of this query, the “bug_job_map” provides a list of bugs that have been applied to jobs. The “text_log_step” and “text_log_error” tables contain the actual errors that Treeherder has extracted from the textual logs (to explain the failure). From this raw list of mappings and errors, we can construct a data structure incorporating the job, the assigned bug and the textual errors inside it. For example:

{
  "bug_number": 1202623,
  "lines": [
    "browser_private_clicktoplay.js Test timed out -",
    "browser_private_clicktoplay.js Found a tab after previous test timed out: http:/<number><number>:<number>/browser/browser/base/content/test/plugins/plugin_test.html -",
    "browser_private_clicktoplay.js Found a browser window after previous test timed out -",
    "browser_private_clicktoplay.js A promise chain failed to handle a rejection:  - at chrome://mochikit/content/browser-test.js:<number> - TypeError: this.SimpleTest.isExpectingUncaughtException is not a function",
    "browser_privatebrowsing_newtab_from_popup.js Test timed out -",
    "browser_privatebrowsing_newtab_from_popup.js Found a browser window after previous test timed out -",
    "browser_privatebrowsing_newtab_from_popup.js Found a browser window after previous test timed out -",
    "browser_privatebrowsing_newtab_from_popup.js Found a browser window after previous test timed out -"
  ]
}
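
Constructing that structure from the rows the query returns is a small grouping exercise. Here’s a rough sketch of how one might do it (the example rows are made up, and the real code likely differs in its details):

# Rough sketch: group (bug_job_map id, bug id, error line) tuples, as returned by
# the query above (already ordered by bug_job_map id), into per-job records.
from collections import OrderedDict

rows = [  # made-up stand-ins for the query results
    (1, 1202623, "browser_private_clicktoplay.js Test timed out -"),
    (1, 1202623, "browser_privatebrowsing_newtab_from_popup.js Test timed out -"),
]

jobs = OrderedDict()
for (bjm_id, bug_id, line) in rows:
    record = jobs.setdefault(bjm_id, {"bug_number": bug_id, "lines": []})
    if line is not None:  # the left joins can yield NULL error lines
        record["lines"].append(line)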

Some quick google searching revealed that scikit-learn is a popular tool for experimenting with text classifications. They even had a tutorial on classifying newsgroup posts which seemed tantalizingly close to what we needed to do here. In that example, they wanted to predict which newsgroup a post belonged to based on its content. In our case, we want to predict which existing bug a job failure should belong to based on its error lines.

There are obviously some differences in our domain: test failures are much more regular and structured. There are lots of numbers in them which are mostly irrelevant to the classification (e.g. the “expected 12 pixels different, got 10!” type errors in reftests). Ordering of failures might matter. Still, some of the techniques used on corpora of normal text documents for training a classifier probably map nicely onto what we’re trying to do here: it seems plausible, for example, that weighting words which occur very frequently less strongly than rarer ones would be helpful, and that’s exactly what the TF-IDF transformer used in their tutorial does.

In any case, I built a small script to download a subset of the data (from November 1st to November 23rd), used it as training data for a classifier, then tested it against another subset of test failures between November 24th and 28th.

import os
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.linear_model import SGDClassifier


training_set = load_files('training')
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(training_set.data)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = SGDClassifier(loss='hinge', penalty='l2',
                    alpha=1e-3, n_iter=5, random_state=42).fit(X_train_tfidf, training_set.target)

num_correct = 0
num_missed = 0

for (subdir, _, fnames) in os.walk('testing/'):
    if fnames:
        bugnum = os.path.basename(subdir)
        print bugnum, fnames
        for fname in fnames:
            doc = open(os.path.join(subdir, fname)).read()
            if not len(doc):
                print "--> (skipping, empty)"
                continue  # skip empty documents rather than classifying them
            X_new_counts = count_vect.transform([doc])
            X_new_tfidf = tfidf_transformer.transform(X_new_counts)
            predicted_bugnum = training_set.target_names[clf.predict(X_new_tfidf)[0]]
            if bugnum == predicted_bugnum:
                num_correct += 1
                print "--> correct"
            else:
                num_missed += 1
                print "--> missed (%s)" % predicted_bugnum
print "Correct: %s Missed: %s Ratio: %s" % (num_correct, num_missed, num_correct / float(num_correct + num_missed))

With absolutely no tweaking whatsoever, I got an accuracy rate of 75% on the test data. That is, the algorithm chose the correct classification given the failure text 1312 times out of 1959. Not bad for a first attempt!

After getting that working, I did some initial testing to see if I could get better results by reusing some of the error ETL summary code in Treeherder we use for bug suggestions, but the results were pretty much the same.

So what’s next? This seems like a wide open area to me, but some initial areas that seem worth exploring, if we wanted to take this idea further:

  1. Investigate cases where the autoclassification failed or had a near miss. Is there a pattern here? Is there something simple we could do, either by tweaking the input data or using a better vectorizer/tokenizer?
  2. Have a confidence threshold for using the autoclassifier’s data. It seems likely to me that many of the cases above where we got the wrong result were ones where the classifier itself wasn’t that confident (relative to others). We could either present that confidence in the user interface or skip autoclassification in those cases altogether (and leave it up to a human being to decide whether the failure is an intermittent). See the sketch after this list for an illustration.
  3. Use the structured log data inside the database as input to a classifier. Structured log data is much more regular and dense than the free text that we’re using. Even if it isn’t explicitly classified, we may well get better results by using it as our input data.
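
To illustrate the second idea, here’s a rough sketch (building on the training script above) that only accepts the classifier’s suggestion when its decision score clears a threshold; the threshold value here is entirely made up and would need tuning against real data:

import numpy as np

CONFIDENCE_THRESHOLD = 1.0  # hypothetical value; would need tuning

def suggest_bug(doc):
    # vectorize the failure text with the transformers fit during training
    X_new = tfidf_transformer.transform(count_vect.transform([doc]))
    scores = clf.decision_function(X_new)[0]  # per-class scores from the linear model
    best = np.argmax(scores)
    if scores[best] < CONFIDENCE_THRESHOLD:
        return None  # not confident enough: leave this one to a human
    return training_set.target_names[best]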

If you’d like to experiment with the data and/or code, I’ve put it up on a github repository.

Mozilla, Treeherder, Performance

Slow Treeherder, Fast Treeherder

Oct 31st, 2016

Just wanted to talk about some performance improvements we’ve made to Treeherder recently:

  • Bug 1311511: Changed the repository endpoint so we don’t do 40 redundant database queries (this was generally innocuous, but could delay loading by 400ms if the database was under heavy load).
  • Bug 1310016: Persisted database connections across requests (this can save ~40–50ms per request, of which there can be 5–10 when loading a Treeherder page). See the sketch after this list for what this typically looks like in a Django application.
  • Bug 1308782: Don’t download job type and group information from the server to get a “sorting order” for the job lists. This was never necessary, but it’s gotten exponentially more painful as people have added job types to Treeherder (job type information now amounts to around a megabyte of JSON). This saves 5–10 seconds on a typical page load.
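
Regarding the persistent-connections item: Treeherder is a Django application, and the usual way to keep database connections alive across requests in Django is the CONN_MAX_AGE database setting. Whether that’s exactly how bug 1310016 was implemented is an assumption on my part; here’s a sketch with made-up values:

# settings.py (sketch): keep database connections open across requests instead
# of opening a new one per request. The engine, name and timeout shown here are
# illustrative, not Treeherder's actual configuration.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "treeherder",
        "CONN_MAX_AGE": 300,  # seconds; the default of 0 closes the connection after each request
    }
}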

There’s more to come, but with these changes Treeherder should be faster for everyone to load. It should be particularly noticeable on try pushes, where the last item was by far the largest bottleneck. Here’s a youtube video of the changes:

The original is on the left. The newer, faster Treeherder is on the right. Pay particular attention to how much faster the job information populates.

Moral of the story? Optimization can be helpful, but it’s better if you can avoid doing the work altogether.