Migrating to Execution::Next

This guide includes tips for migrating your schema configuration and production traffic to the new engine.

Migration Philosophy

Execution::Next is designed to run alongside the previous engine so that the same schema can run queries both ways. This supports an incremental migration and live toggling in production.

First, update your schema to include the necessary field configurations. If you implement new class methods in your Object type classes, you can also migrate instance methods to call “up” to those class methods, preserving a single source of truth:

field :unpublished_posts, [Types::Post], resolve_each: true

# Support batching:
def self.unpublished_posts(object, context)
  object.posts.where(published: false).order("created_at DESC")
end

# Support legacy in a DRY way by calling the class method:
def unpublished_posts
  self.class.unpublished_posts(object, context)
end

Test your new configurations in CI by running a new build which calls .execute_next instead of .execute, for example:

# test_helpers.rb
def run_graphql(...)
  if ENV["GRAPHQL_EXECUTION_NEXT"]
    MyAppSchema.execute_next(...)
  else
    MyAppSchema.execute(...)
  end
end

Adopting a feature flag system (described below) can also make this easier.

When all tests pass on .execute_next, you’re ready to try it out in production.

Migration and Clean-Up Script

graphql_migrate_execution is a command-line development tool that can automate many common GraphQL-Ruby field resolver patterns.

Check out its docs and try it out: https://rmosolgo.github.io/graphql_migrate_execution/

Production Considerations

There are two categories of problems when migrating: differences in behavior and differences in performance. Using the new engine may return a different response for the same query, raise new errors, or perform differently than the old engine.

To mitigate these possibilities, use dynamic release tools in production like feature flags and experiments.

Feature Flags

You should use a feature flagging system so that you can shift traffic between old and new runtime engines without redeploying. A good feature-flagging system supports percentage-based flags, so that you can send 1% of traffic to new code while the other 99% uses existing code. After it runs without issues, you can increase the percentage. Or, if you discover issues in production (errors or performance), you can turn it back to 0% while you troubleshoot the problem.

For example:

# app/controllers/graphql_controller.rb
exec_method = use_graphql_next? ? :execute_next : :execute
result = MyAppSchema.public_send(exec_method, query_string, context: { ... }, variables: { ... })
render json: result

Flipper is a great gem for feature flags. You could also roll your own or pick a third-party service.
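The "roll your own" option can be quite small. Here's a sketch of a percentage-based flag keyed on a stable actor id (PercentageFlag is a hypothetical helper written for this example, not part of any gem; Flipper provides the same idea with persistence and a UI):

```ruby
require "zlib"

# A minimal hand-rolled percentage flag (hypothetical helper).
# Each actor is hashed into a stable bucket from 0-99, so the same user
# consistently gets the same engine as you raise the percentage.
class PercentageFlag
  attr_accessor :percentage

  def initialize(name, percentage: 0)
    @name = name
    @percentage = percentage
  end

  def enabled_for?(actor_id)
    # CRC32 gives a deterministic bucket without storing per-user state:
    bucket = Zlib.crc32("#{@name}:#{actor_id}") % 100
    bucket < @percentage
  end
end

flag = PercentageFlag.new(:graphql_next, percentage: 1)
flag.enabled_for?(1234) # true for roughly 1% of actor ids
flag.percentage = 0     # turn it off instantly, without redeploying
```

Because the bucket is derived from the actor id, raising the percentage only ever adds users to the new engine; it never flip-flops users who were already migrated.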

Before using .execute_next to produce results for production traffic, you might want to run an experiment as described below.

Experiments

While the two runtime engines should return identical responses, it’s possible that .execute_next will return a different result than .execute due to gem bugs or schema misconfigurations. You can check for this using an “experiment” system in your application which runs both execution engines and compares the result (for queries only!).

You’ll want to use feature flagging to run the experiment on a subset of traffic, since it comes with performance overhead.

Here’s some example code for a setup like this:

# app/controllers/graphql_controller.rb
result = MySchema.execute(...)

# Use a dynamic flag, eg Flipper. This should always be true in development and test.
if use_graphql_next_experiment?
  if !query_string.include?("mutation") && !query_string.include?("subscription") # easy way of checking for queries, could possibly have false negatives
    batched_result = MySchema.execute_next(...)
    if batched_result.to_h != result.to_h
      # Log this mismatch somehow here, avoiding potential PII/passwords:
      BugTracker.report <<~TXT
        A GraphQL query returned a non-identical response. Sanitized query string:

        #{result.query.sanitized_query_string}

        User: #{current_user.id}
        # Other context info here...
      TXT
    end
  end
end

See Scientist for a full-blown production experimentation system.

Combining feature flags and experiments

A fully-managed rollout would include two flags: one to run the comparison experiment on a percentage of traffic (use_graphql_next_experiment? below), and one to serve production responses from .execute_next (use_graphql_next? below).

This gives you full control over how production traffic is executed without needing to redeploy. You can always turn them down to 0% to get the current behavior.

Here’s some example code:

if use_graphql_next? # again, use a dynamic feature flag
  result = MySchema.execute_next(...)
else
  result = MySchema.execute(...)
  if use_graphql_next_experiment?
    # Continue running the comparison experiment
  end
end

render json: result.to_h

Compatibility Notes

Execution::Next’s new structure means that some GraphQL-Ruby features behave differently (or aren’t supported at all, at least not yet). They are discussed one-by-one below.

Implicit Field Resolution

The default, implicit field resolution behavior has changed. Previously, when a field didn’t have a specified method or hash key, GraphQL-Ruby would try a combination of object.public_send(...) and object[...] to resolve it. In Execution::Next, GraphQL-Ruby tries object.public_send(field_sym) unless another configuration is provided. This removes a lot of overhead from field execution.

Consider a field like this:

field :title, String

Previously, GraphQL-Ruby would check type_object.respond_to?(:title), object.respond_to?(:title), object.is_a?(Hash), object.key?(:title), and object.key?("title").

Now, GraphQL-Ruby simply calls object.title and allows the NoMethodError to bubble up if one is raised.
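The difference can be sketched in plain Ruby (stand-in functions for illustration, not GraphQL-Ruby's actual internals):

```ruby
# Stand-in functions contrasting the old multi-step lookup
# with the new single-call behavior.
def legacy_resolve(object, field_name)
  if object.respond_to?(field_name)
    object.public_send(field_name)
  elsif object.is_a?(Hash)
    # Try the symbol key, then fall back to the string key:
    object.fetch(field_name.to_sym) { object[field_name.to_s] }
  end
end

def next_resolve(object, field_name)
  # One call; a missing method raises NoMethodError, which bubbles up:
  object.public_send(field_name)
end

legacy_resolve({ title: "Hello" }, :title)            # => "Hello"
next_resolve(Struct.new(:title).new("Hello"), :title) # => "Hello"
# next_resolve({ title: "Hello" }, :title) would raise NoMethodError
```

A practical consequence: Hash-backed objects no longer resolve implicitly, so they need an explicit configuration (such as a resolver method that reads the key).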

Interface Resolver Methods

Resolver methods are now class methods instead of instance methods. In order to make this work in interface modules, they must be defined in a resolver_methods do ... end block, for example:

module Node
  include BaseInterface

  field :id, ID, resolve_each: true

  resolver_methods do
    # This will define `def self.id` on Object types that implement this interface
    def id(object, context)
      GlobalId.new(object).to_s
    end
  end

  # Backwards compat instance method:
  def id
    self.class.id(object, context)
  end
end

Methods defined in resolver_methods { ... } will be copied into Object type classes as class methods, so they’ll be available for resolve_{each|static|batch} fields.
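The copying behavior can be pictured with plain Ruby (a sketch of the idea using Module#extend, not the gem's actual implementation):

```ruby
module Node
  # Stand-in for the resolver_methods block described above:
  module ResolverMethods
    def id(object, context)
      "gid://#{object.class.name}/#{object.id}"
    end
  end

  def self.included(object_type_class)
    # "Copied into Object type classes as class methods":
    object_type_class.extend(ResolverMethods)
  end
end

class PostType
  include Node
end

Record = Struct.new(:id)
PostType.id(Record.new(1), nil) # => "gid://Record/1"
```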

Query Analyzers, including complexity 🟡

Support is identical; this runs before execution using the exact same code.

TODO: accessing loaded arguments inside analyzers may turn out to be slightly different; it still calls legacy code.

Authorization, Scoping

def (self.)authorized? and def self.scope_items will be called as needed during execution.

One incompatibility:

Visibility, including Changesets

Visibility works exactly as before; both runtime modules call the same methods to get type information from the schema.

Dataloader

Dataloader runs with new execution, but when migrating from instance methods to batch-level class methods, you may need to use Schema::Member::HasDataloader#dataload_all instead of .dataload.
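The motivation for the batch-level call can be sketched in plain Ruby (illustrative names, not GraphQL-Ruby or Dataloader APIs): a per-object dataload issues one fetch per object, while a dataload_all-style call fetches the whole batch at once.

```ruby
# Illustrative stand-ins for a backing store and a query log:
DB = { 1 => "Alice", 2 => "Bob" }
QUERIES = []

# Instance-method style: one lookup per object
def fetch_one(id)
  QUERIES << [id]
  DB[id]
end

# Batch class-method style: one lookup for all objects
def fetch_many(ids)
  QUERIES << ids
  DB.values_at(*ids)
end

[1, 2].map { |id| fetch_one(id) } # => ["Alice", "Bob"] -- two queries
QUERIES.clear
fetch_many([1, 2])                # => ["Alice", "Bob"] -- one query
```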

Tracing

Fully supported, but some legacy hooks are not called. Implement the new hooks instead (existing runtime already calls these new hooks). Not called are:

Additionally, the object parameter to those methods will receive an Array of objects instead.

Lazy resolution (GraphQL-Batch)

Lazy resolution runs in the new execution (GraphQL-Batch is supported). When migrating to class methods, you may need to update your library method calls to work on a set of inputs rather than a single input.

current_path ❌

This is not supported because the new runtime doesn’t actually produce current_path.

It is theoretically possible to support this but it will be a ton of work. If you use this for core runtime functions, please share your use case in a GitHub issue and we can investigate future options.

Scoped context ❌

This is currently implemented with current_path. Another implementation is probably possible but not implemented yet. Please open an issue to discuss.

@defer 🟡

@defer is supported with an implementation difference that probably doesn’t affect your application: previously, @defer worked by pausing and resuming the same GraphQL::Query instance. However, with Execution::Next, @defer takes a different approach. Instead, when a GraphQL::Query encounters @defer, it notes the location in the document and stops executing that branch. Later, when you request the deferred result, that branch of the query is resumed using a new instance of GraphQL::Query::Partial.

This might matter if you’re modifying context at runtime because those new instances also have fresh Query::Context instances. The original query context will get copied into the @defer branches using Query::Context.new(**original_query.context.to_h), so any custom values will be available. But if you assign new keys after the context is copied, those keys won’t appear when running later @defered branches.

To handle this, you can refactor how you accumulate data during execution. Instead of ||=‘ing into context[...] during execution, assign a new accumulator object before starting the query, then call methods on that object to make any necessary state changes. That new object will be copied into @defer partials, and since the object is shared between the different branches, any necessary state changes will still be “seen” everywhere.
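Here's a plain-Ruby sketch of that refactor (Metrics is a hypothetical accumulator, and a Hash stands in for Query::Context):

```ruby
# A shared accumulator object, assigned BEFORE execution starts:
class Metrics
  attr_reader :counts

  def initialize
    @counts = Hash.new(0)
  end

  def bump(key)
    @counts[key] += 1
  end
end

original_context = { metrics: Metrics.new }

# A @defer partial copies the context's key-value pairs. New keys added
# later won't propagate, but the Metrics object is the SAME object in
# both copies, so state changes are visible everywhere:
partial_context = original_context.dup

original_context[:metrics].bump(:fields_resolved)
partial_context[:metrics].bump(:fields_resolved)

original_context[:metrics].counts[:fields_resolved] # => 2
```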

If this gives you trouble, please feel free to email me or open an issue on GitHub to discuss a migration strategy.

GraphQL-Batch support

When using Execution::Next, no custom code is required to support graphql-batch – support is built-in.

@stream

@stream is supported.

See the note above about how @defer no longer resumes the original, top-level query. The same thing applies to @stream.

GraphQL::Pro::Stream now lazily streams Enumerators. If you were using the (undocumented) GraphQL::Pro::FutureStream, you can switch to GraphQL::Pro::Stream after migrating to Execution::Next. (Once all your traffic uses the new execution module, you’ll get the same runtime behavior from GraphQL::Pro::Stream.)

ObjectCache

Supported completely.

Custom Directives ❌

There is some implementation in the code right now but it’s not stable. Please open an issue to discuss.

Query-level directives are not implemented yet, but will be. Please open an issue if you have a use case for this.

as:

as: is supported: arguments are passed into Ruby methods by their as: names instead of their GraphQL names.
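For illustration, the renaming looks like this in plain Ruby (stand-in code, not the gem's argument machinery):

```ruby
# A GraphQL argument named "type" configured with as: :event_type
# arrives at the Ruby method under the as: name:
def resolve_events(object, context, event_type:)
  "filtering by #{event_type}"
end

graphql_args = { "type" => "signup" }
ruby_kwargs  = { event_type: graphql_args["type"] } # the as: :event_type mapping
resolve_events(nil, nil, **ruby_kwargs) # => "filtering by signup"
```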

loads: 🟡

loads: is handled as before, except that custom def load_... methods are not called.

prepare: 🟡

Procs are called as before.

Methods that depend on a runtime object (such as a type instance or Mutation class) are not called, because arguments are prepared before objects are ready.

validates: 🟡

Built-in validators are supported. Custom validators will always receive nil as the object. (object is no longer available; this API will probably change before this is fully released.)

Field Extensions 🟡

Field extension methods are called with new arguments: for batch-resolved fields, they receive objects: and values: (Arrays) instead of object: and value:.

You can support both types of calls in your methods by changing the signature to object: nil, objects: nil (and value: nil, values: nil), then checking which argument was passed.
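Here's a sketch of such a dual-compatible method (a plain class for self-containment; in a real schema it would inherit from GraphQL::Schema::FieldExtension, and your transformation logic would replace the upcasing):

```ruby
# Accepts both the legacy single-object call and the batch-style call
# by defaulting every keyword to nil and checking which one arrived:
class CompatExtension
  def after_resolve(object: nil, objects: nil, value: nil, values: nil, **rest)
    if objects # batch-style call (Execution::Next)
      values.map { |v| v.to_s.upcase }
    else       # legacy single-object call
      value.to_s.upcase
    end
  end
end

ext = CompatExtension.new
ext.after_resolve(object: :post, value: "hi")              # => "HI"
ext.after_resolve(objects: [:a, :b], values: ["hi", "yo"]) # => ["HI", "YO"]
```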

Resolver classes (including Mutations and Subscriptions) 🟡

Resolver classes are called, but with slightly different semantics:

Field extras:, including lookahead

:ast_node and :lookahead are already implemented. Others are possible – please raise an issue if you need one. extras: [:current_path] is not possible.

raw_value 🟡

Supported, but the raw_value call must be made on context, for example:

field :values, SomeObjectType, resolve_static: true

def self.values(context)
  context.raw_value(...)
end

Errors and rescue_from 🟡

Raising GraphQL::ExecutionError and adding rescue_from handlers are supported.

Returning an array of GraphQL::ExecutionError instances is not supported anymore.

extras: [:execution_errors] and context.add_error are not supported anymore.

Connection fields

Connection arguments are automatically handled and connection wrapper objects are automatically applied to arrays and relations.

Custom Introspection

This works, but if you use custom authorization or any lazy values, see the compatibility notes about those features above.

If you’re reimplementing default values, you’ll need to add the corresponding resolve_static: true or resolve_each: true configurations. See the built-in type definitions under GraphQL::Introspection to get these configurations.

Multiplex

To use the new engine to run a multiplex, use MyAppSchema.multiplex_next(...) with the same arguments.

GraphQL::Current 🟡

current_field and current_operation_name don’t work; dataloader_source works.

This will be fixed soon but may require opt-in to avoid needless overhead.

fallback_value: ❌

fallback_value: is not supported in Execution::Next. It’s not implemented because of the overhead it adds to resolution. You’ll have to implement it by hand in resolvers.

graphql_migrate_execution creates a resolver that always returns the fallback_value. This might be right in some cases, but you’ll probably have to implement your own method, like:

field :name, String, fallback_value: "Anonymous", resolve_each: :resolve_name

def self.resolve_name(object, context)
  if object.respond_to?(:name)
    object.name
  elsif (is_h = object.is_a?(Hash)) && object.key?(:name)
    object[:name]
  elsif is_h && object.key?("name")
    object["name"]
  else
    "Anonymous"
  end
end

GraphQL::Backtrace

Doesn’t support Execution::Next, but it’s probably not necessary. Execution::Next includes the field name in error messages and doesn’t generate crazy-long stack traces because of its design.