Sayed Alesawy

Testing Ruby on Rails Applications Using RSpec
An essential part of any software engineer's daily development flow is writing tests. We write tests to:

  • Try different things while developing.
  • Artificially simulate corner cases that are really hard to produce organically.
  • Gain confidence in whatever software we have developed.
  • Detect regressions (code changes that introduce bugs into other parts of the application, even ones that seem unrelated).

There is also the TDD (Test Driven Development) approach, where developers write tests that define how a certain module or feature should behave under different conditions, and then write as little code as possible to make those tests pass.

As we can easily see, in either approach, writing tests is an essential skill for any software engineer.

I have been writing production Ruby on Rails applications for about two years now, using RSpec as my go-to framework for writing tests. In this article, we will take a look at a sample Ruby on Rails application and learn how to test the different scenarios that we may run into while building and testing APIs.

The Code Under Test

The sample application we will be testing throughout this article is a miniature note taking API. It has the following resources:

  • Notebook
  • Note

A notebook can have many notes. The API implements all CRUD operations for both resources. Some of those CRUD operations, like the creation of notes, are performed in background workers using Sidekiq. And in some cases, our API makes external calls to other APIs to fetch needed data.

The purpose of this is to artificially create complexity that introduces a variety of unique scenarios to test, so that we can learn multiple techniques.

You can find the sample application here.

What Are We Going to Learn?

In the context of testing the sample API, we will learn how to:

  1. Test an endpoint.
  2. Use fixtures to validate a large endpoint expected response.
  3. Validate persistence in a database.
  4. Use test hooks to prepare test prerequisites.
  5. Test code that issues external requests.
  6. Test asynchronous background jobs.
  7. Evaluate a test suite by measuring suite coverage.

The above list doesn't cover 100% of the scenarios you may run into while testing an API, but I would say it comes pretty close. So let's dive in!

1. How to Test an Endpoint?

At the heart of every API there is a collection of endpoints that the API serves. An endpoint is a communication channel through which an API receives HTTP requests and exposes its resources to an API consumer. Testing an endpoint would require the ability to send fake requests to this specific endpoint with certain parameters and also be able to consume the response sent back by the endpoint.

RSpec provides the ability to send fake requests to an endpoint that are routed inside the application exactly the same way a legitimate request would be. You can use different HTTP verbs (GET, POST, PUT, DELETE, etc.) and control the body payload and headers. This type of test is known as a request spec and can be implemented as follows:

RSpec.describe NotebooksController, type: :request do
  context '#index' do
    context 'When there are no notebooks' do
      let!(:expected_response) { [] }

      it 'should return an empty array' do
        get '/notebooks'

        expect(response).to have_http_status(:ok)
        expect(response.body).to eq expected_response.to_json
      end
    end
  end
end

We can see that the code above declares that this test covers the NotebooksController and is of type :request. This helps RSpec route the get '/notebooks' request to the correct controller (note that we are not using the full endpoint URL). The test result validation takes place in the expect calls: we validate that the response has a status of :ok (200) and that the response body matches the expected body, which is just an empty array here (because we don't have any data to return so far). Note that we need to define RSpec.describe NotebooksController, type: :request only once per test group.
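As a side note, if you keep your request specs under spec/requests/, rspec-rails can infer the spec type from the file location instead of requiring a type: :request tag on each group. A sketch of that optional rails_helper.rb setting, assuming the conventional directory layout:

```ruby
# rails_helper.rb -- let rspec-rails infer the spec type from the file's
# location, so anything under spec/requests/ is treated as a request spec
# without an explicit `type: :request` tag.
RSpec.configure do |config|
  config.infer_spec_type_from_file_location!
end
```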

2. How to Validate a Large Response?

As we can see from the previous example, we validate the expected response body by explicitly defining it in our test code. That was fine for the previous test since it didn't return any data, but what if we are testing a case that returns a large body of data? It would be a bad idea to define that body in our test code.

Instead of writing the expected response inside the test scripts, which takes up unnecessary space and renders them unreadable, we can use files to store any large values (response bodies, request bodies, or anything else related to test data). The convention is to place those files in a folder called fixtures, then have the test code read and use them. We can do so as follows:

context '#index' do
  before do
    Notebook.create(title: "Notebook1", description: "title1")
    Notebook.create(title: "Notebook2", description: "title2")
    Notebook.create(title: "Notebook3", description: "title3")
    Notebook.create(title: "Notebook4", description: "title4")
  end

  let!(:expected_response) do
    JSON.parse(File.read('fixture_path.json'))
  end

  it 'should return an array of all records in the DB' do
    get '/notebooks'

    expect(response).to have_http_status(:ok)
    expect(response.body).to eq expected_response.to_json
  end
end

Where fixture_path.json contains the expected body as follows:

[
  {
    "id": 1,
    "title": "Notebook1",
    "description": "title1"
  },
  {
    "id": 2,
    "title": "Notebook2",
    "description": "title2"
  },
  {
    "id": 3,
    "title": "Notebook3",
    "description": "title3"
  },
  {
    "id": 4,
    "title": "Notebook4",
    "description": "title4"
  }
]

As we can see from the above test, this time we have data and the expected response is too large to define within the test code, so we populate the value of expected_response by reading and parsing the fixture file. It's pretty much the same as the previous test, the only difference being how expected_response is populated.
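To keep fixture loading in one place, a small helper can resolve the file name against the fixtures folder and parse it. This is a hypothetical helper, not part of the sample app; the name load_fixture and the default fixtures path are assumptions:

```ruby
require 'json'

# Hypothetical helper: resolves a fixture name against a fixtures directory
# and parses the file as JSON, so tests reference fixtures by name only.
def load_fixture(name, dir: File.join(__dir__ || '.', 'fixtures'))
  JSON.parse(File.read(File.join(dir, name)))
end
```

With such a helper in place, the let! above would read let!(:expected_response) { load_fixture('notebooks_index.json') }.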

3. How to Validate Persistence in Databases?

In the previous tests, we were testing GET endpoints, whose job is usually returning data stored in a database. But what if we are to test other types of endpoints, such as POST or DELETE? Those types of endpoints usually insert or delete records stored in a database. So how do we validate that records have been inserted into or deleted from a database?

RSpec enables us to expect a change, and also to expect exceptions. So, for example, if we are testing a DELETE endpoint, we can test it as follows:

context '#delete' do
  before do
    Notebook.create(title: "Notebook1", description: "title1")
    Notebook.create(title: "Notebook2", description: "title2")
  end

  let!(:expected_response) { { message: 'success' } }

  it 'should return 200' do
    delete '/notebooks/1'

    expect(response).to have_http_status(:ok)
    expect(response.body).to eq expected_response.to_json
  end

  it 'should decrease the size of the notebook relation by 1' do
    expect { delete '/notebooks/1' }.to change { Notebook.count }.by(-1)
  end

  it 'should delete the requested record' do
    delete '/notebooks/1'

    expect { Notebook.find(1) }.to raise_error(ActiveRecord::RecordNotFound)
  end
end

As we can see in the previous test, we validate the delete endpoint by:

  • Validating that it returns 200 and the expected message, which indicates that the endpoint exists and works as expected in terms of API contracts, but doesn't guarantee that it actually deletes anything; it could be a no-op.
  • Validating that the number of notebook records decreased by 1, which guarantees that a record was deleted. But which record? Maybe it's deleting a random record.
  • Validating that selecting the notebook with id = 1 raises ActiveRecord::RecordNotFound, which guarantees that it deleted the correct record.

4. How to Use Test Hooks to Prepare Test Prerequisites?

While writing large test suites, it’s often the case that there are common steps that need to run at a specific time: before/after the entire test suite, or before/after each individual test.

Test hooks have a lot of use cases; one of the most important is data creation for a given test. We have already used test hooks in previous tests, but let's take a closer look at them here:

context '#delete' do
  before do
    Notebook.create(title: "Notebook1", description: "title1")
    Notebook.create(title: "Notebook2", description: "title2")
  end

  # rest of the test goes here ...

In the previous snippet, we define a before hook that runs before each test in the #delete context and creates 2 Notebook records to be used later inside the tests.

Another common use case for test hooks is preparing the test environment, such as cleaning the database before running each test, so that a test runs isolated from any previous tests that may have altered the database state. There is a commonly used gem for that called database_cleaner, which is usually invoked via test hooks as follows:

config.around(:each) do |example|
  DatabaseCleaner.cleaning do
    example.run
  end
end

This wraps every test in a cleaning block, so any data the test creates is cleaned up and each test starts from a fresh database state.
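For completeness, the around hook is usually paired with a cleaning strategy configured once for the whole suite. A typical spec_helper.rb setup might look like this (a sketch; the right strategy depends on your setup):

```ruby
# spec_helper.rb -- a typical database_cleaner configuration (sketch).
# :transaction is the fastest strategy; :truncation is needed when the test
# and the app run in different threads (e.g. feature specs with a JS driver).
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.strategy = :transaction
    DatabaseCleaner.clean_with(:truncation) # start the suite from a blank database
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end
```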

There is a very important thing that we need to understand when it comes to test hooks which is their scope. For example consider the following test structure:

context 'context 1' do
  before do
    # hook 1
  end

  context 'sub context 1.1' do
    # tests 1.1
  end

  context 'sub context 1.2' do
    # tests 1.2
  end

  context 'sub context 1.3' do
    before do
      # hook 2
    end

    # test 1.3
  end
end

In the previous structure, the test hook defined at hook 1 will run before all tests defined in all the nested contexts: sub context 1.1, sub context 1.2 and sub context 1.3. Meanwhile, the test hook defined at hook 2 will only run before the tests defined in sub context 1.3, and it's not visible to any tests outside that context.

5. How to Test External API Calls?

In many cases, an API might need to communicate with another API, which requires an internet connection and that the called API is up and running. In other cases, the calls are internal calls to other services within the same cluster, which requires those services to be up and running during the tests as well. But what happens if the tests run in an offline environment? Or if the called API is down? Or if we want to run each service's test suite on its own? None of the above should affect the test outcome.

To achieve this isolation, external API calls should be stubbed. Consider the following example: we have an endpoint that creates resources based on a set of parameters, one of which is an IP address that we use to populate the country attribute for that resource. For this task, we communicate with an external geo-location API that, given an IP address, returns the country from which this IP address originates.

We can use the webmock gem, which enables an application to disable all external communication and enforce that any external request be stubbed. This can be done by adding WebMock.disable_net_connect! to your spec_helper.rb. Then we can write the test as follows:

before do
  stub_request(:get, "http://ip-api.com/json/#{user_ip}").
    to_return(status: 200, body: "{\"country\": \"dummy_country\"}", headers: {})
end

As we can see, we stub any request to http://ip-api.com/json/#{user_ip}, like http://ip-api.com/json/156.204.128.187, and return a pre-determined response. Also note that we used a test hook to implement this stub.
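The disable_net_connect! call itself is a one-liner in spec_helper.rb; if parts of your suite talk to local services (a JS driver, for instance), webmock can be told to still allow localhost. A minimal sketch:

```ruby
# spec_helper.rb -- block all real HTTP calls during tests.
require 'webmock/rspec'

# Any unstubbed external request now raises an error showing the exact URL
# that was hit; allow_localhost keeps local services reachable if needed.
WebMock.disable_net_connect!(allow_localhost: true)
```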

6. How to Test Asynchronous Background Jobs?

In all previous tests, endpoints did their work in the foreground; in other words, they performed all computations in the main application thread, causing other activities to block while waiting. Oftentimes those computations take a long time or are of sheer volume, and applications need to stay responsive, i.e. receive and serve other requests while previous computations are still executing. For that use case, background jobs are enqueued for deferred execution. This type of configuration requires special methods of validating that a job was enqueued and executed.

For the sake of this example, I am using Sidekiq to run background jobs, together with its testing gem, rspec-sidekiq, to write the tests as follows:

context '#create' do
  it 'should enqueue a job in the noota::notes-creator queue' do
    expect { post url, params: params }.to change(
      Sidekiq::Queues['noota::notes-creator'], :size
    ).by(1)
  end

  it 'Should enqueue the job with the correct args' do
    post url, params: params

    expected_job_args = {
      title: 'Note1', 
      body: 'Notebook1 does all the work', 
      notebook_id: '1', 
      country: 'dummy_country'
    }

    expect(
      JSON.parse(
        Sidekiq::Queues['noota::notes-creator'].first.with_indifferent_access['args'].first
      ).symbolize_keys
    ).to eq(expected_job_args)
  end
end

The above tests validate that the create notes endpoint works correctly by:

  1. Validating that a job has been enqueued in the correct queue by expecting its size to change by 1.
  2. Validating that the enqueued job has the correct expected parameters.

The above tests don't really validate that a record is inserted, but that is not the scope of a request spec. We can do so in the worker specs as follows:

context 'Executes the job correctly' do
  let!(:args) do
    {
      title: 'Note1',
      body: 'Notebook1 does all the work',
      notebook_id: '1',
      country: 'dummy_country'
    }
  end

  before do
    NotesCreator.perform_async(args)
  end

  it 'should increase the size of the notes table by 1' do
    expect { NotesCreator.drain }.to change { Note.count }.by(1)
  end
end

In the above test, we enqueue a job and use the drain method (provided by Sidekiq's testing harness) to execute all enqueued jobs in the queue, and validate that doing so increases the number of note records by 1.
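drain is available because Sidekiq's testing harness defaults to a fake mode, where perform_async pushes jobs onto an in-memory queue instead of Redis. The relevant spec_helper.rb wiring is roughly:

```ruby
# spec_helper.rb -- Sidekiq test modes.
require 'sidekiq/testing'

# In fake mode, perform_async only records the job in an in-memory queue;
# Worker.drain then executes everything queued. Sidekiq::Testing.inline!
# would instead run each job synchronously the moment it is enqueued.
Sidekiq::Testing.fake!
```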

7. How to Measure Test Suite Coverage?

It’s important to be able to evaluate how comprehensive a test suite is. Having a good idea about the quality of a test suite helps set the level of confidence in it and pinpoint possible areas of improvement.

There are a lot of ways to evaluate the quality of a test suite. One of the most commonly used metrics in the industry is code coverage. RSpec works with a gem called simplecov that generates detailed coverage reports, highlighting the total coverage, the number of covered lines, which lines are not covered, how many hits each line gets, and lots of other insightful statistics that help developers improve a test suite.
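Wiring simplecov up is mostly a matter of starting it before any application code loads; a minimal spec_helper.rb sketch:

```ruby
# spec_helper.rb -- SimpleCov must start before the application code is
# required (typically the very first lines of the file).
require 'simplecov'
SimpleCov.start 'rails' do
  add_filter '/spec/' # don't count the test code itself toward coverage
end
```

Running the suite then produces an HTML report under the coverage/ directory.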

How to Navigate the Included Sample Application?

The sample application follows the conventional Ruby on Rails file structure. Any gems used can be checked out here.

Other Things to Learn?

There are a lot of other scenarios that you may run into while building and testing APIs such as:

  • Stubbing other modules during unit testing.
  • Creating mocks using doubles.
  • Expecting that certain methods have been or have not been called.
  • Disabling/enabling certain callbacks before certain tests.
  • Complex data creation using factories.
  • Shared examples.
  • ...

Resources

There are some pretty good resources all over the internet but I would highly recommend checking:
