GraphQL::Dataloader provides efficient, batched access to external services, backed by Ruby's Fiber concurrency primitive. It has a per-query result cache, and AsyncDataloader supports truly parallel execution out of the box.
GraphQL::Dataloader is inspired by @bessey's proof-of-concept and shopify/graphql-batch.
GraphQL::Dataloader facilitates a two-stage approach to fetching data from external sources (like databases or APIs): first, during GraphQL execution, data requirements are gathered but not fetched; then, GraphQL::Dataloader initiates actual fetches to external services. That cycle is repeated during execution: data requirements are gathered until no further GraphQL fields can be executed, then GraphQL::Dataloader triggers external calls based on those requirements and GraphQL execution resumes.
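That gather-then-fetch cycle can be sketched in plain Ruby with core Fibers (no gems; all names below are illustrative, not the library's API): each "field" pauses when it needs data, and once every field is paused, a single batched fetch runs before execution resumes.

```ruby
pending_keys = []
results = {}
fetch_calls = 0

fields = [1, 2, 3].map do |id|
  Fiber.new do
    pending_keys << id     # stage 1: record the data requirement...
    Fiber.yield            # ...and pause instead of fetching right away
    results[id] * 10       # resumed once the batch has been loaded
  end
end

fields.each(&:resume)      # run every field until it pauses

fetch_calls += 1           # stage 2: one external call for all gathered keys
pending_keys.each { |k| results[k] = k }  # stand-in for a real batched query

loaded = fields.map(&:resume)
# => fetch_calls == 1, loaded == [10, 20, 30]
```

Three fields requested data, but only one "external call" was made.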
GraphQL::Dataloader uses Ruby's Fiber, a lightweight concurrency primitive which supports application-level scheduling within a Thread. By using Fibers, GraphQL::Dataloader can pause GraphQL execution when data is requested, then resume execution after the data is fetched.
At a high level, GraphQL::Dataloader's usage of Fiber looks like this: GraphQL execution runs inside Fibers, and when execution pauses waiting for data, GraphQL::Dataloader takes the first paused Fiber and resumes it, causing the corresponding GraphQL::Dataloader::Source to execute its #fetch(...) call. That Fiber then continues execution as far as it can. Whenever GraphQL::Dataloader creates a new Fiber, it copies each key-value pair from Thread.current[...] and reassigns them inside the new Fiber.
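The Thread.current[...] copying matters because Ruby's Thread.current[] storage is fiber-local, not thread-local, so a fresh Fiber starts without the parent's values. A minimal demonstration (the :request_id key is illustrative):

```ruby
Thread.current[:request_id] = "abc123"

# a fresh Fiber does NOT see the parent's fiber-local values:
without_copy = Fiber.new { Thread.current[:request_id] }.resume

# copying each pair across (as GraphQL::Dataloader does) restores them:
parent_locals = Thread.current.keys.to_h { |k| [k, Thread.current[k]] }
with_copy = Fiber.new do
  parent_locals.each { |k, v| Thread.current[k] = v }
  Thread.current[:request_id]
end.resume
# => without_copy == nil, with_copy == "abc123"
```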
AsyncDataloader, built on top of the async gem, supports parallel I/O operations (like network and database communication) via Ruby's non-blocking Fiber.schedule API.
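A schema opts in to AsyncDataloader with use, in place of the plain dataloader (a sketch, assuming the async gem is already in your bundle):

```ruby
class MySchema < GraphQL::Schema
  # requires the `async` gem; replaces `use GraphQL::Dataloader`
  use GraphQL::Dataloader::AsyncDataloader
end
```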
To install GraphQL::Dataloader, add it to your schema with use ..., for example:
class MySchema < GraphQL::Schema
  # ...
  use GraphQL::Dataloader
end
Then, inside your schema, you can request batch-loaded objects by their lookup key with dataloader.with(...).load(...):
field :user, Types::User do
  argument :handle, String
end

def user(handle:)
  dataloader.with(Sources::UserByHandle).load(handle)
end
Or, load several objects by passing an array of lookup keys to .load_all(...):
field :is_following, Boolean, null: false do
  argument :follower_handle, String
  argument :followed_handle, String
end

def is_following(follower_handle:, followed_handle:)
  follower, followed = dataloader
    .with(Sources::UserByHandle)
    .load_all([follower_handle, followed_handle])

  followed && follower && follower.follows?(followed)
end
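The per-query result cache mentioned above means each source fetches a given key at most once per query. A plain-Ruby stand-in shows the idea (TinySource and its methods are illustrative, not the library's API):

```ruby
# Batches unseen keys into one fetch and caches the results,
# so repeated lookups never trigger another external call.
class TinySource
  def initialize(&fetcher)
    @fetcher = fetcher
    @cache = {}
  end

  def load_all(keys)
    missing = keys.uniq - @cache.keys
    unless missing.empty?
      @fetcher.call(missing).each_with_index { |v, i| @cache[missing[i]] = v }
    end
    keys.map { |k| @cache[k] }
  end
end

calls = 0
source = TinySource.new { |keys| calls += 1; keys.map(&:upcase) }
first = source.load_all(["matz", "dhh"])
again = source.load_all(["matz"])   # served from the cache, no extra fetch
# => calls == 1, first == ["MATZ", "DHH"], again == ["MATZ"]
```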
To prepare requests from several sources, use .request(...), then call .load after all requests are registered:
class AddToList < GraphQL::Schema::Mutation
  argument :handle, String
  argument :list, String, as: :list_name

  field :list, Types::UserList

  def resolve(handle:, list_name:)
    # first, register the requests:
    user_request = dataloader.with(Sources::UserByHandle).request(handle)
    list_request = dataloader.with(Sources::ListByName, context[:viewer]).request(list_name)

    # then, use `.load` to wait for the external call and return the object:
    user = user_request.load
    list = list_request.load

    # Now, all objects are ready.
    list.add_user!(user)
    { list: list }
  end
end
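The request-then-load pattern can be simulated in plain Ruby (TinyBatch and its methods are illustrative, not the library's API): request only records a key, and the first load runs one batched fetch for everything recorded so far.

```ruby
class TinyBatch
  Request = Struct.new(:batch, :key) do
    def load
      batch.load(key)
    end
  end

  def initialize(&fetcher)
    @fetcher, @pending, @cache = fetcher, [], {}
  end

  # record a key; nothing is fetched yet
  def request(key)
    @pending << key
    Request.new(self, key)
  end

  # flush every pending key in one fetch, then answer from the cache
  def load(key)
    unless @pending.empty?
      keys = @pending.uniq
      @fetcher.call(keys).each_with_index { |v, i| @cache[keys[i]] = v }
      @pending.clear
    end
    @cache[key]
  end
end

calls = 0
batch = TinyBatch.new { |keys| calls += 1; keys.map { |k| k.to_s.upcase } }
r1 = batch.request(:user)
r2 = batch.request(:list)
a = r1.load   # triggers one fetch for BOTH pending keys
b = r2.load   # already loaded
# => calls == 1, a == "USER", b == "LIST"
```

Registering both requests before the first .load is what lets them share a batch.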
loads: and object_from_id

dataloader is also available as context.dataloader, so you can use it to implement MySchema.object_from_id. For example:
class MySchema < GraphQL::Schema
  def self.object_from_id(id, ctx)
    model_class, database_id = IdDecoder.decode(id)
    ctx.dataloader.with(Sources::RecordById, model_class).load(database_id)
  end
end
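IdDecoder above is app-specific, not part of the library. As a purely hypothetical example, a decoder for "ClassName/123"-style global ids might look like:

```ruby
# Hypothetical stand-in for the app-specific IdDecoder used above:
# splits a "ClassName/123"-style id into a class and a database id.
module IdDecoder
  def self.decode(id)
    type_name, database_id = id.split("/", 2)
    [Object.const_get(type_name), database_id]
  end
end

klass, db_id = IdDecoder.decode("String/42")
# => klass == String, db_id == "42"
```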
Then, any arguments with loads: will use that method to fetch objects. For example:
class FollowUser < GraphQL::Schema::Mutation
  argument :follow_id, ID, loads: Types::User

  field :followed, Types::User

  def resolve(follow:)
    # `follow` was fetched using the Schema's `object_from_id` hook
    context[:viewer].follow!(follow)
    { followed: follow }
  end
end
To implement batch-loading data sources, see the Sources guide.
You can run I/O operations in parallel with GraphQL::Dataloader. There are two approaches: