A slow and unreliable test suite is a test suite developers won't want to work on.
In Part One of this series, I dove deeper into how you might organize classes and methods for dealing with HTTP calls. In this article, we're going to dig into how to write tests that use data returned from external HTTP APIs. We'll also test the interaction with those APIs, verifying both the success and failure paths.
## Wrap Your API Calls
It's important to wrap the APIs that you use.
The first reason is that it provides a layer of abstraction/encapsulation between the underlying implementation of how to communicate with that API and the code that's consuming it. It gives you a place to handle potential errors and normalize requests/responses. These are important things, but the consumer doesn't need to know how a JSON response is transformed into the object that it receives back. It's only concerned with how to make a request and what is going to be returned.
The second reason you need to wrap your APIs is that it can make testing a lot easier. It provides an entry point or gateway that can be mocked, thereby completely avoiding the need to make actual HTTP calls during testing. This will not only make your tests a lot faster, but it will also make them more consistent and reliable.
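To make this concrete, here is a minimal sketch of a wrapped API. The `WeatherApi` class, its endpoint, and the fake client are all hypothetical, invented for illustration; the point is that because the HTTP client is injected, a test can hand the wrapper a fake and never touch the network:

```ruby
require 'json'

# Hypothetical wrapper around a weather API. The HTTP client is
# injected, so tests can swap in a fake and avoid real requests.
class WeatherApi
  Error = Class.new(StandardError)

  def initialize(http_client)
    @http_client = http_client
  end

  # Callers get a plain value back; they never see raw JSON or HTTP details.
  def current_temperature(city)
    response = @http_client.get("/weather?city=#{city}")
    raise Error, "request failed (#{response.status})" unless response.status == 200

    JSON.parse(response.body).fetch('temperature')
  end
end

# A hand-rolled fake standing in for the real client during tests.
FakeResponse = Struct.new(:status, :body)

class FakeHttpClient
  def get(_path)
    FakeResponse.new(200, '{"temperature": 21}')
  end
end

puts WeatherApi.new(FakeHttpClient.new).current_temperature('Toronto') # => 21
```

The test exercises both the happy path and the error handling without ever opening a socket, which is exactly the seam the wrapper gives us.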
In the following sections, we'll look at three different techniques to avoid making HTTP calls in our tests, along with some of the pros and cons of each technique.
## Avoid Code Altogether with Mocks
One way to avoid making an HTTP call is to completely avoid that code being run. Code that isn't run also happens to be the fastest code! We'll be using RSpec, but the same thing can be done in other testing frameworks. The way to do this generally involves stubbing and doubles; the overall term for this is mocking.
Mocking involves overriding our Ruby code to say: When this method is called, don't run the real code. Instead, return this pre-canned result.
In the example below, we are overriding (or stubbing) the `upcase` method of a `String` object to return a pre-canned response and avoid calling the actual method. When `name` receives the `upcase` method, it will return `'up_LEIGH'` instead of performing the actual `upcase` method.
```ruby
it 'mocks upcase' do
  name = 'Leigh'
  allow(name).to receive(:upcase).and_return('up_LEIGH')
  expect(name.upcase).to eq('up_LEIGH')
end
```
In combination with this, we can create entirely fake versions of our objects with something called `doubles`.
```ruby
it 'doubles a string' do
  name = double('name', upcase: 'up_LEIGH')
  expect(name.upcase).to eq('up_LEIGH')
end
```
Let's look at a real example of how mocking can be used to avoid making an HTTP call while testing that a method works as expected.
Here is part of a class called `Products`, which provides an interface to the LCBO API that we began using in Part One of this series. The method we're interested in testing is the `fetch` method, which takes a `product_id` and fetches the product details for it.
```ruby
def fetch(product_id)
  product_response = make_product_request(HTTP, product_id)

  if product_response.success?
    product_response.product
  else
    fail LcboError, product_response.error_message
  end
end

private

def make_product_request(connection, product_id)
  ProductRequest.new(connection, token, product_id).response
end
```
If you look at the `fetch` method, there are two flows that we'll need to test: success and failure.
It's hard to test the negative path; we would have to get an external API that we don't own to fail for us every time we run the tests. So what we can do is avoid making the `make_product_request` method call altogether and deal with fake responses.
```ruby
describe '#fetch' do
  context 'success' do
    it 'returns the product to be fetched' do
      # Setup: Instantiate, mock, prepare doubles
      response = double('response')
      allow(response).to receive(:success?).and_return(true)
      allow(response).to receive(:product).and_return('product')

      products = Lcbo::Products.new(ENV.fetch('LCBO_TOKEN'))
      allow(products).to receive(:make_product_request).and_return(response)

      # Perform
      product = products.fetch(12345)

      # Verify
      expect(product).to eq('product')
      expect(products).to have_received(:make_product_request)
    end
  end

  context 'failure' do
    it 'raises an LcboError exception' do
      response = double('response')
      allow(response).to receive(:success?).and_return(false)
      allow(response).to receive(:error_message).and_return('error')

      products = Lcbo::Products.new(ENV.fetch('LCBO_TOKEN'))
      allow(products).to receive(:make_product_request).and_return(response)

      expect { products.fetch(12345) }.to raise_error(Lcbo::LcboError, 'error')
    end
  end
end
```
## Using Webmock
The next way to test our HTTP call is by using a library called webmock. It basically "digs in" to the underlying HTTP libraries and stops real requests from leaking out. Unless you explicitly tell it to allow real requests, it will actually raise an exception if your code makes an HTTP request that hasn't been stubbed.
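As a sketch of what that setup might look like (assuming the `webmock` gem is in your Gemfile), you could add something like this to `spec_helper.rb`:

```ruby
# spec_helper.rb -- illustrative setup; assumes the webmock gem is installed
require 'webmock/rspec' # hooks WebMock into RSpec

# Raise WebMock::NetConnectNotAllowedError on any unstubbed request,
# while still letting requests to localhost through (useful when a
# browser driver needs to talk to a local test server).
WebMock.disable_net_connect!(allow_localhost: true)
```

With this in place, any request your code makes that you haven't stubbed fails loudly instead of silently reaching out to the network.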
Webmock allows us to provide fake HTTP responses, only filling in the details (status, headers, body) required to perform the task at hand.
```ruby
describe '#product (webmock)' do
  let(:product_json) { File.read('spec/fixtures/product.json') }

  it 'is found using its id and a Product object is returned' do
    VCR.turned_off do
      stub_request(:get, "https://lcboapi.com/products/438457").
        to_return(status: 200, body: product_json, headers: {})

      product = Lcbo::Client.new.product(438457)
      expect(product).to be_a(Lcbo::Product)
      expect(product.id).to eq(438457)
    end
  end
end
```
If you ignore (for now) the line about `VCR.turned_off`, you'll see that we are telling webmock to stub the `GET` request made to `https://lcboapi.com/products/438457`. In its place, we tell it to return a response with a `200` status, containing a body read from a fixture file and containing no headers.
This allows us to increase the "realism" of our code; it allows us to call all of the methods except for the final step, the actual HTTP request. The downside to this approach is that you are required to hand-craft your HTTP responses to accurately reflect the real ones.
## Using VCR
A third approach is to use a library called VCR. Like webmock in the example above, this library prevents real HTTP calls from leaking out during testing. However, it goes about it in a different way.
VCR makes an actual HTTP request once, recording the request and response to a yaml file. On subsequent runs, it "plays back" the recorded response rather than making a real request. This lets you worry less about what the underlying HTTP looks like and focus on exactly what you're testing.
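For a sense of what gets recorded, a cassette might look roughly like this. The field layout follows VCR's standard cassette format, but the response body and version number shown here are illustrative:

```yaml
# spec/cassettes/product.yml -- abridged, illustrative contents
---
http_interactions:
- request:
    method: get
    uri: https://lcboapi.com/products/438457
    body:
      string: ''
  response:
    status:
      code: 200
      message: OK
    body:
      string: '{"result":{"id":438457,"name":"Example Product"}}'
  recorded_at: Tue, 01 Aug 2017 12:00:00 GMT
recorded_with: VCR 3.0.3
```

Because the cassette is plain yaml checked into the repo, you can open it up and inspect exactly what was sent and received.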
After installing the VCR gem, you can add a config block to your `spec_helper.rb` file:
```ruby
VCR.configure do |c|
  c.cassette_library_dir = "spec/cassettes"
  c.hook_into :webmock
  c.configure_rspec_metadata!
end
```
Then you can go about testing your code, with the first request being recorded and subsequent ones being played back.
```ruby
describe '#product (vcr)', vcr: { record: :new_episodes } do
  it 'is found using its id and a Product object is returned' do
    product = Lcbo::Client.new.product(438457)
    expect(product).to be_a(Lcbo::Product)
    expect(product.id).to eq(438457)
  end
end
```
You might have noticed the `{ record: :new_episodes }` option passed to the `describe` method. This tells VCR to use existing recordings for all requests; however, if it sees a request that it hasn't encountered before, it is free to make that request and record its result.
I typically use this setting while writing the test, but then I change the option to `{ record: :none }` once I'm ready to commit it. With this setting, VCR will raise an exception if it sees a new request that it hasn't encountered before. If the code is making an HTTP request that I don't know about, I want to hear about it!
I also like to look at the recorded request and response to see exactly what we're sending and what the API is responding with.
## Caveats
Nothing is free, and each of the above approaches sets us up for potential errors. Because we are using fake or recorded responses, if the underlying API changes (or we messed up our mocking), we may be testing against an out-of-date picture of the API. We'll only hear about it when the code hits the real API in production and produces an exception for our users.
Because of this, if possible, it might be a good thing to have a series of tests that hit the real API. A more full integration/feature style of test will let us know that our interaction with the API is accurate and works as expected.
## Testing the Real Thing
Testing the real thing is slow. It's something that you may only want to do every so often, not multiple times a day by multiple developers on the team plus in the CI environment.
We can do this in RSpec by providing a tag that's generally ignored but can be run if asked to explicitly. In this case, I've created a tag called `type: :feature`.
```ruby
describe '#product (feature)', type: :feature do
  it 'is found using its id and a Product object is returned' do
    product = Lcbo::Client.new.product(438457)
    expect(product).to be_a(Lcbo::Product)
    expect(product.id).to eq(438457)
  end
end
```
The actual test is no different from the others. However, in the `spec_helper.rb` file, I've configured RSpec to allow real HTTP requests through and to avoid using VCR/Webmock for these types of tests.
```ruby
config.around(:each) do |example|
  if example.metadata[:type] == :feature
    WebMock.allow_net_connect!
    VCR.turned_off { example.run }
    WebMock.disable_net_connect!
  else
    example.run
  end
end
```
In the `.rspec` file, there is a line that looks like `--tag ~type:feature`. This acts as a default, which says to skip any test tagged with `type: :feature`.
When we run `bundle exec rspec`, we'll see the following line at the top of the output: `Run options: exclude {:type=>"feature"}`. But if we run it with the command `bundle exec rspec --tag type:feature`, we'll see `Run options: include {:type=>"feature"}`, and it will run only the tests tagged with `type: :feature`.
## Conclusion
In this article, we looked at a number of ways to test code that contains HTTP requests without actually having to make them. This not only sped up our tests considerably but allowed us to trigger certain error responses that would be difficult to see under normal circumstances.
Lastly, we looked at an approach to have a smaller number of "feature" tests, which actually do perform the real HTTP request but with a technique to avoid running this group of tests by default.