GraphQL-Ruby 1.9.0 includes a new runtime module, `GraphQL::Execution::Interpreter`, which you may use for your schema. It has been the default runtime since 1.12.0. Read on to learn more!
The new runtime was added to address a few specific concerns:

- Validation and analysis required rewriting the incoming AST into a second tree structure (`GraphQL::InternalRepresentation::Rewrite`), which could be very slow in some cases. In many cases, the overhead of that step provided no value.
- Execution created a `ctx` object for every field, even very simple fields that didn't need any special tracking.
In GraphQL-Ruby 1.12, the interpreter is installed by default. In older versions, you can opt in to the interpreter in your schema class:
```ruby
class MySchema < GraphQL::Schema
  # These are default in 1.12+:
  use GraphQL::Execution::Interpreter
  use GraphQL::Analysis::AST
end
```
Some Relay configurations must be updated too. For example:
```diff
- field :node, field: GraphQL::Relay::Node.field
+ include GraphQL::Types::Relay::HasNodeField
```
(Alternatively, consider implementing `Query.node` in your own app, using `NodeField` as inspiration.)
The new runtime works with class-based schemas only. Several features are no longer supported:
- Proc-dependent field features: all of these depend on the memory- and time-hungry per-field `ctx` object. To improve performance, only method-based resolves are supported. If you need something from `ctx`, you can get it with the `extras: [...]` configuration option. To wrap resolve behaviors, try Field Extensions, Tracing, or `GraphQL::Schema::Resolver`.
- Query analyzers and `irep_node`-based features: these depend on the now-removed `Rewrite` step, which wasted a lot of time on often-unneeded preparation. Most of the attributes you might need from an `irep_node` are available with `extras: [...]`. Query analyzers can be refactored into static checks (custom validation rules) or dynamic checks made at runtime. The built-in analyzers have been refactored to run as validators.
- `rescue_from`: this was built on middleware, which is not supported anymore. For a replacement, see Error Handling.
The interpreter uses class-based schema definitions only, and never converts them to legacy GraphQL definition objects. Any customizations made to legacy definition objects should be re-implemented on custom base classes.
If you customized your base field's resolution method, it needs an update. The interpreter calls a different method, `#resolve(obj, args, ctx)`. There are two differences with the new method:

- `args` is a plain ol' Ruby Hash with symbol keys, instead of a `GraphQL::Query::Arguments` instance.
- `ctx` is a `GraphQL::Query::Context`, instead of a `GraphQL::Query::Context::FieldResolutionContext`.

But besides that, it's largely the same.
Maybe this section should have been called incompatibility 🤔.
GraphQL-Ruby has “analyzers” that run before execution and may reject a query. With the interpreter, you can use AST Analyzers to get better performance.
To make the migration, convert your previous analyzers to extend `GraphQL::Analysis::AST::Analyzer` as described in the guide, then add `use GraphQL::Analysis::AST` to your schema.
When you use both `Interpreter` and `Analysis::AST`, GraphQL-Ruby will skip the slow process of building the `irep_node` tree.
All analyzers must be migrated at once; running some legacy analyzers and some AST analyzers is not supported.
In GraphQL-Ruby 1.9, you can migrate to `Interpreter` before migrating to `Analysis::AST`. In that case, the `irep_node` tree will still be constructed and used for analysis, even though it will not be used for execution.
In GraphQL-Ruby 1.10+, `Interpreter` requires `Analysis::AST` and will not work without it. (As of 1.12, these are the default runtime modules.)
Instead of a tree of `irep_nodes`, the interpreter consumes the AST directly. This removes a complicated concept from GraphQL-Ruby (`irep_nodes`) and simplifies the query lifecycle. The main difference relates to how fragment spreads are resolved. In the previous runtime, the possible combinations of fields for a given object were calculated ahead of time, then some of those combinations were used during runtime, but many of them may never have been used. In the new runtime, no precalculation is made; instead, each object is checked against each fragment at runtime.
Instead of creating a `GraphQL::Query::Context::FieldResolutionContext` for every field in the response, the interpreter uses long-lived, mutable objects for execution bookkeeping. This is more complicated to manage, since the changes to those objects can be hard to predict, but it's worth it for the performance gain. When needed, those bookkeeping objects can be "forked", so that two parts of an operation can be resolved independently.
Instead of calling `.to_graphql` internally to convert class-based definitions to `.define`-based definitions, the interpreter operates on class-based definitions directly. This simplifies the workflow for creating custom configurations and using them at runtime.