⚠ Experimental ⚠
This feature may get big changes in future releases. Check the changelog or subscribe to the newsletter for updates.
- current_path ❌
- @defer and @stream ❌
- as:
- loads:
- prepare:
- validates: ❌
- extras:, including lookahead
- raw_value 🟡
- rescue_from 🟡
This guide includes tips for migrating your schema configuration and production traffic to the new engine.
Execution::Next is designed to run alongside the previous engine so that the same schema can run queries both ways. This supports an incremental migration and live toggling in production.
First, update your schema to include the necessary field configurations. If you implement new class methods in your Object type classes, you can also migrate instance methods to call “up” to those class methods, preserving a single source of truth:
```ruby
field :unpublished_posts, [Types::Post], resolve_each: true

# Support batching:
def self.unpublished_posts(object, context)
  object.posts.where(published: false).order("created_at DESC")
end

# Support legacy in a DRY way by calling the class method:
def unpublished_posts
  self.class.unpublished_posts(object, context)
end
```
Test your new configurations in CI by running a new build which calls execute_next instead of execute, for example:
```ruby
# test_helpers.rb
def run_graphql(...)
  if ENV["GRAPHQL_EXECUTION_NEXT"]
    MyAppSchema.execute_next(...)
  else
    MyAppSchema.execute(...)
  end
end
```
Adopting a feature flag system (described below) can also make this easier.
When all tests pass on .execute_next, you’re ready to try it out in production.
Migrating field implementations can be automated in many cases. A script to analyze and perform these migrations is in the works: Pull Request. This script will also be able to clean up unused instance methods when the migration is complete.
Broadly, migration problems fall into two categories: differences in behavior and differences in performance. Using the new engine may return different results, raise new errors, or change performance characteristics compared to the current engine. To mitigate these possibilities, use dynamic release tools in production, such as feature flags and experiments.
You should use a feature flagging system so that you can shift traffic between old and new runtime engines without redeploying. A good feature-flagging system supports percentage-based flags, so that you can send 1% of traffic to new code while the other 99% uses existing code. After it runs without issues, you can increase the percentage. Or, if you discover issues in production (errors or performance), you can turn it back to 0% while you troubleshoot the problem.
For example:
```ruby
# app/controllers/graphql_controller.rb
exec_method = use_graphql_next? ? :execute_next : :execute
result = MyAppSchema.public_send(exec_method, query_string, context: { ... }, variables: { ... })
render json: result
```
Flipper is a great gem for feature flags. You could also roll your own or pick a third-party service.
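If you roll your own, the core of a percentage-based flag can be tiny. Here is a minimal sketch, assuming a deterministic bucket derived from the user id; the helper name and the CRC-based bucketing are illustrative, not part of GraphQL-Ruby (a real helper would read the percentage from your flag store rather than take it as an argument):

```ruby
require "zlib"

# Hypothetical helper: deterministically buckets each user into 0..99 so the
# same user keeps the same engine for as long as the rollout percentage holds.
def use_graphql_next?(user_id, rollout_percentage)
  bucket = Zlib.crc32("graphql-next:#{user_id}") % 100
  bucket < rollout_percentage
end

use_graphql_next?(42, 0)    # => false at 0% rollout
use_graphql_next?(42, 100)  # => true at 100% rollout
```

Because the bucketing is deterministic, raising the percentage only moves new users onto the new engine; users already on it stay there.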
Before using .execute_next to produce results for production traffic, you might want to run an experiment as described below.
While the two runtime engines should return identical responses, it’s possible that .execute_next will return a different result than .execute due to gem bugs or schema misconfigurations. You can check for this using an “experiment” system in your application which runs both execution engines and compares the result (for queries only!).
You’ll want to use feature flagging to run the experiment on a subset of traffic, since it comes with performance overhead.
Here’s some example code for a setup like this:
```ruby
# app/controllers/graphql_controller.rb
result = MySchema.execute(...)
# Use a dynamic flag, e.g. Flipper. This should always be true in development and test.
if use_graphql_next_experiment?
  if !query_string.include?("mutation") && !query_string.include?("subscription") # easy way of checking for queries, could possibly have false negatives
    batched_result = MySchema.execute_next(...)
    if batched_result.to_h != result.to_h
      # Log this mismatch somehow here, avoiding potential PII/passwords:
      BugTracker.report <<~TXT
        A GraphQL query returned a non-identical response. Sanitized query string:
        #{result.query.sanitized_query_string}
        User: #{current_user.id}
        # Other context info here...
      TXT
    end
  end
end
```
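The String#include? check above can misfire when "mutation" appears inside a field name, argument, or string literal. A slightly more precise check (still a heuristic, not a real parse; parsing with GraphQL.parse would be exact but costs more per request) only matches an operation keyword at the start of the document:

```ruby
# Heuristic: treat a GraphQL document as a query unless it *starts* with a
# mutation/subscription operation keyword. Named queries ("query Foo { ... }")
# and shorthand documents ("{ ... }") both pass; leading comments could still
# fool this check, so it remains a heuristic.
def query_only?(query_string)
  query_string !~ /\A\s*(mutation|subscription)\b/
end

query_only?("{ viewer { name } }")                 # => true
query_only?("mutation { like(id: 1) { ok } }")     # => false
query_only?("{ posts(tag: \"mutation\") { id } }") # => true, unlike include?("mutation")
```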
See Scientist for a full-blown production experimentation system.
A fully-managed rollout would include two flags:
- use_graphql_next_experiment?: when true, build an .execute_next response and compare it to the .execute response, but always return the .execute response.
- use_graphql_next?: when true, use .execute_next and don’t call .execute at all.

This gives you full control over how production traffic is executed without needing to redeploy. You can always turn them down to 0% to get the current behavior.
Here’s some example code:
```ruby
if use_graphql_next? # again, use a dynamic feature flag
  result = MySchema.execute_next(...)
else
  result = MySchema.execute(...)
  if use_graphql_next_experiment?
    # Continue running the comparison experiment
  end
end

render json: result.to_h
```
Performance improvements in batching execution come at the cost of removing support for many “nice-to-have” features in GraphQL-Ruby by default. Those features are addressed here.
The default, implicit field resolution behavior has changed. Previously, when a field didn’t have a specified method or hash key, GraphQL-Ruby would try a combination of object.public_send(...) and object[...] to resolve it. In Execution::Next, GraphQL-Ruby tries object.public_send(field_sym) unless another configuration is provided. This removes a lot of overhead from field execution.
Consider a field like this:
```ruby
field :title, String
```
Previously, GraphQL-Ruby would check type_object.respond_to?(:title), object.respond_to?(:title), object.is_a?(Hash), object.key?(:title), and object.key?("title").
Now, GraphQL-Ruby simply calls object.title and allows the NoMethodError to bubble up if one is raised.
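As a simplified plain-Ruby illustration of the difference (the real lookup chain is described above; hash_key: and method: are existing field options):

```ruby
post = { title: "Hello" }

# Old behavior (simplified): try a method, then fall back to hash access.
old_value = post.respond_to?(:title) ? post.public_send(:title) : post[:title]
# old_value is "Hello"

# New behavior: only the method call is attempted.
new_value = begin
  post.public_send(:title)
rescue NoMethodError
  # Hash-backed objects now need an explicit hash_key: or method: config,
  # or a reader method on the object itself.
  :no_method_error
end
```

In practice, fields backed by Hashes (or OpenStruct-like objects without readers) are the ones to audit before migrating.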
Analysis
Support is identical; analysis runs before execution using the exact same code.
TODO: accessing loaded arguments inside analyzers may turn out to be slightly different; it still calls legacy code.
Authorization
Full compatibility: def authorized? (or def self.authorized?) and def self.scope_items will be called as needed during execution.
Visibility
Visibility works exactly as before; both runtime modules call the same methods to get type information from the schema.
Dataloader
Dataloader runs with the new execution engine, but when migrating from instance methods to batch-level class methods, you may need to use Schema::Member::HasDataloader#dataload_all instead of .dataload.
Tracing
Fully supported, but some legacy tracing hooks are not called. Implement the new hooks instead (the existing runtime already calls these new hooks). The hooks that are not called:
- execute_field, execute_field_lazy: use begin_execute_field and end_execute_field instead. (These may be called multiple times when Dataloader pauses or a GraphQL-Batch promise is returned.)
- execute_query, execute_query_lazy: use execute_multiplex for a top-level hook instead. (Single queries are always executed in a multiplex of size 1.)
- resolve_type, authorized: use {begin,end}_resolve_type and {begin,end}_authorized instead. (These may be called multiple times for Dataloader, etc.)

Lazy resolution
Lazy resolution runs in the new execution engine (GraphQL-Batch is supported). When migrating to class methods, you may need to update your library method calls to work on a set of inputs rather than a single input.
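The shape change for library calls looks like this in plain Ruby (SlugGenerator is a hypothetical helper; the point is only going from one-input-per-call to one-call-per-batch):

```ruby
module SlugGenerator
  # Per-object helper, as an instance-method resolver might call it:
  def self.slug(title)
    title.downcase.tr(" ", "-")
  end

  # Batch variant, as a batch-level class-method resolver would call it,
  # taking the whole set of inputs at once:
  def self.slugs(titles)
    titles.map { |t| slug(t) }
  end
end

SlugGenerator.slugs(["Hello World", "New Post"]) # => ["hello-world", "new-post"]
```

A real batch method would often replace the per-item loop with a single query or API call, which is where the performance win comes from.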
current_path ❌
This is not supported because the new runtime doesn’t actually produce current_path.
It is theoretically possible to support this but it will be a ton of work. If you use this for core runtime functions, please share your use case in a GitHub issue and we can investigate future options.
@defer and @stream ❌
This depends on current_path so isn’t possible yet.
Actually this probably works but I haven’t tested it.
Not supported yet. This will need some new kind of integration.
as:
as: is applied: arguments are passed into Ruby methods by their as: names instead of their GraphQL names.
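For example, an argument configured as argument :post_id, ID, as: :id arrives under the id: keyword. A plain-Ruby sketch of that renaming (the Resolvers module is illustrative):

```ruby
module Resolvers
  # With `argument :post_id, ID, as: :id` on the field,
  # the value arrives as `id:`, not `post_id:`:
  def self.post(object, context, id:)
    { id: id }
  end
end

Resolvers.post(nil, nil, id: "1") # => { id: "1" }
```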
loads:
loads: is handled as previously, except that custom def load_... methods are not called.
prepare:
These methods/procs are called.
validates: ❌
Built-in validators are supported. Custom validators will always receive nil as the object. (object is no longer available; this API will probably change before this is fully released.)
Field extensions are called, but the new runtime uses new methods:

- def resolve_batching(objects:, arguments:, context:, &block) receives objects: instead of object: and should yield them to the given block to continue execution.
- def after_resolve_batching(objects:, arguments:, context:, values:, memo:) receives objects:, values:, ... instead of object:, value:, ... and should return an Array of results (instead of a single result value).

Because of their close integration with the runtime, ConnectionExtension and ScopeExtension don’t actually use after_resolve_batching. Instead, support is hard-coded inside the runtime. This might be a smell that field extensions aren’t worth supporting.
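A sketch of those hook shapes, using a bare class so they can be shown in isolation (a real extension would subclass GraphQL::Schema::FieldExtension, and the semantics here are assumed from the descriptions above):

```ruby
class UpcaseExtension
  # Receives the whole batch and yields it onward to continue execution:
  def resolve_batching(objects:, arguments:, context:, &block)
    yield(objects, arguments)
  end

  # Receives the batch's resolved values and returns one result per object:
  def after_resolve_batching(objects:, arguments:, context:, values:, memo:)
    values.map { |v| v.to_s.upcase }
  end
end

ext = UpcaseExtension.new
ext.after_resolve_batching(objects: [:a, :b], arguments: {}, context: nil,
                           values: ["hi", "yo"], memo: nil)
# => ["HI", "YO"]
```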
Resolver classes are called.
extras:, including lookahead
:ast_node and :lookahead are already implemented. Others are possible – please raise an issue if you need one. extras: [:current_path] is not possible.
raw_value 🟡
Supported, but requires a manual opt-in at the schema level. Support for this will probably improve in a future version.
```ruby
class MyAppSchema < GraphQL::Schema
  uses_raw_value(true) # TODO This configuration will be improved in a future GraphQL-Ruby version
  use GraphQL::Execution::Next
end
```
rescue_from 🟡
Support is mostly in place here, but it is not thoroughly tested.
Connection arguments are automatically handled and connection wrapper objects are automatically applied to arrays and relations.
This works, but if you use custom authorization or any lazy values, see the compatibility notes above.
To use the new engine to run a multiplex, use MyAppSchema.multiplex_next(...) with the same arguments.