First, a warning: today’s exercise is likely to generate tasks for you that will take longer than 20 minutes to fix. This doesn’t mean something bad has happened, or that you’ve failed. Just file a ticket to remind you to complete the work later.
Today’s exercise: run your test suite with your internet connection disabled.
Turns out, it’s fairly easy to accidentally rely on an external service during test runs (I’ve done it many times). This doesn’t just make your test suite slow; it makes it brittle.
Chances are, you’ll see a few cryptic failures when you try this.
In general, I recommend reaching for a tool like WebMock to fix issues like these.
As a bonus, WebMock has a setting to disable external web requests during test runs: if you attempt to make a connection to the outside world, you get an exception. This will prevent you from writing internet-hitting tests in the future.
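As a sketch of that setting (assuming RSpec; the file location and the allow_localhost choice are assumptions, not prescriptions):

```ruby
# In spec_helper.rb (assuming RSpec; WebMock also ships adapters
# for other test frameworks).
require 'webmock/rspec'

# Any test that tries to reach the outside world now raises
# WebMock::NetConnectNotAllowedError. allow_localhost keeps things
# like Capybara's locally-running app server working.
WebMock.disable_net_connect!(allow_localhost: true)
```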
Nice one! I was pretty confident that our project would not have any failures, but it turns out we use an email validator that does host checking by making a DNS lookup (very useful for catching typos in emails, e.g. something@gmial.com). Fortunately it supports mocking for tests, which was pretty easy to set up.
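For readers wondering what that kind of validator looks like: here’s a hypothetical sketch (the class and method names are illustrative, not the commenter’s actual library). Taking the resolver as a dependency is what makes it mockable in tests:

```ruby
require 'resolv'

# Hypothetical host-checking email validator. A domain that can
# receive mail should have at least one MX record.
class EmailHostValidator
  # Accept the resolver as a dependency so tests can swap in a stub
  # instead of making a real DNS lookup.
  def initialize(resolver: Resolv::DNS.new)
    @resolver = resolver
  end

  def valid_host?(email)
    domain = email.split('@').last
    return false if domain.nil? || domain.empty?
    @resolver.getresources(domain, Resolv::DNS::Resource::IN::MX).any?
  end
end

# In tests, a fake resolver means no network is needed:
class FakeResolver
  def getresources(domain, _type)
    domain == 'gmail.com' ? [:mx_record] : []
  end
end

validator = EmailHostValidator.new(resolver: FakeResolver.new)
validator.valid_host?('something@gmail.com') # => true
validator.valid_host?('something@gmial.com') # => false
```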
Added a ticket for Trufflepiggy - Quick Search to also show a message when the cloud-hub can’t be reached. So far the extension only showed a warning when it wasn’t properly loaded within a website; when the cloud-hub is unreachable, nothing happens at all. The user is left wondering and might just try again. Definitely not the best UX.
So I plan to publish this with the next store update. Besides that, once loaded, the cloud search overlay doesn’t load any external dependencies.
The Trufflepiggy - Context Search extension only uses local files once the setup has been completed. No tracking requests or anything else: 100% private direct search. So the setup via our cloud-hub is the only critical part, and when it’s offline the browser will let you know about it.
For my offline/slow-connection tests, the Chrome DevTools network functions are sufficient right now.
I realise that “wifi” has become synonymous with “the network” these days, but some of us are on old-skool desktop machines. Still, it’s possible to turn off the network from the command line, which could be integrated into the test script. For my Ubuntu desktop, it’s
sudo ip link list
to list the network interfaces, then
sudo ip link set down <interface-id>
to turn the network off (and sudo ip link set up <interface-id> to turn it back on again).
For the exercise, my tests failed when the integration tests couldn’t write to S3. I could, I suppose, run a local test S3 service - I know that AWS make that possible. But I start to feel a bit uncomfortable when the test setup gets really complicated, and I begin to wonder what I’m actually gaining from the test at that point.
I’m a bit shocked, but our iOS app only had 1 failing test that resulted from turning off my WiFi. This exercise did point out that we were improperly stubbing out a service we use to detect whether or not the user has internet access before performing an action.
This was a quick one for me: 6 different repos, and all 6 test suites were green with wifi off. We’ve made it a point to ensure that HTTP calls are appropriately mocked out in tests.
Good lesson! We unknowingly had 3 tests that relied on external HTTP calls. I was able to quickly mock these out with nock, the WebMock equivalent for Node.
All green now with wifi off
And for the Node version of the bonus points, nock.disableNetConnect() will prevent external HTTP requests that are not “nocked”.
25/1000 failing tests… not too bad, and they’re all tests that hit the same domain (a project that is maintained specifically for this codebase, so it kind of counts as internal: if it’s down or broken, we need to deal with it just like broken code in the main codebase).
But I created an issue to wrap VCR around it, just to make it possible to run tests without wifi.
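For anyone setting up VCR for the first time, a minimal configuration looks something like this (the cassette directory and record mode here are assumptions, not this poster’s actual setup):

```ruby
require 'vcr'

VCR.configure do |config|
  # Where recorded HTTP interactions ("cassettes") are stored.
  config.cassette_library_dir = 'spec/cassettes'
  # Intercept HTTP at the WebMock layer.
  config.hook_into :webmock
  # Record each interaction the first time, replay it on every later
  # run, so the suite passes with no network at all.
  config.default_cassette_options = { record: :once }
end

# Then wrap the external call:
VCR.use_cassette('external_domain') do
  # ... code that hits the external domain ...
end
```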
I got behind on the challenges during the week, so I am playing catch up this weekend.
This one was intimidating, but running the tests with no Internet produced an odd failure right at the start, which gave me a more obvious place to begin than I expected.
At some point, we seem to have run into an issue with Faye that required us to stop using localhost as the hostname and instead use an actual IP address. To do this, we were getting our local IP by opening a UDP socket to a Google server at application startup. This workaround caused an exception to be raised at startup when I tried running my tests with no Internet.
I added some exception handling to this setup code so the tests could run offline, and was fortunate to find they all passed. Like many others mentioned, VCR is to thank here.
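For context, the UDP-socket trick plus the kind of exception handling described can be sketched like this (the destination address and the loopback fallback are assumptions, not the poster’s exact code):

```ruby
require 'socket'

# Find the machine's outbound IP by "connecting" a UDP socket to a
# public address. connect() on a UDP socket sends no packets; it only
# asks the OS to pick a route, so we can read our own address back.
def local_ip
  UDPSocket.open do |socket|
    socket.connect('8.8.8.8', 1) # Google public DNS; port is arbitrary
    socket.addr.last
  end
rescue SocketError, SystemCallError
  # With no network there is no route and connect raises
  # (e.g. Errno::ENETUNREACH). Fall back so tests can still boot.
  '127.0.0.1'
end
```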
I was wondering what everyone’s best practice is for VCR. On The Bike Shed I recall them recommending not checking cassettes into version control and clearing them out every few days. If that were the case, the success of this challenge would depend on whether my cassettes had been recorded recently. It seems like the better practice, though. Maybe I’m too hung up on “checking the box” of not needing internet for passing tests and should focus on balancing these things.