Fine-Tuning Apollo Client Caching for Your Data Graph
Ben Newman
Apollo Client Architect
GraphQL Summit
30 October 2019
How to find me:
What is the job of a GraphQL client?
We call ourselves the data graph company rather than the GraphQL company for a reason
Your organization should have a single canonical data graph, sprawling and possibly federated, but logically unified by an evolving schema
The job of a client application is to faithfully replicate a subset of this data graph, with all the same relationships and constraints that matter to you
Everything we’re doing here would still make sense if the community fell out of love with the GraphQL query language
To be clear, I do not think this is a very likely scenario
More likely: a healthy combination of cooperating query languages that mutually benefit from the same client-side data graph
Apollo Client 3.0 is all about giving you the tools to maintain a coherent, efficiently reactive client-side data graph
Borrowing concepts already introduced by Apollo Federation, such as entity identity defined by composite keys
Powerful new tools like garbage collection and a declarative API for managing cached field values
Talk itinerary
- The job of a GraphQL client
- What's new in Apollo Client 3.0
  - Single @apollo/client package
  - Cache eviction and garbage collection
  - Unified declarative configuration API
  - In-depth pagination example
  - Underlying technologies (time permitting)
- Getting started
  - How to try the beta
  - Where to find the documentation
What's new in Apollo Client 3.0

One @apollo/client package for everything
The following packages have been consolidated into one:
apollo-client
apollo-utilities
apollo-cache
apollo-cache-inmemory
apollo-link
apollo-link-http
@apollo/react-hooks
graphql-tag
One @apollo/client package for everything
import React from "react";
import { render } from "react-dom";
import {
ApolloClient,
InMemoryCache,
HttpLink,
gql,
useQuery,
ApolloProvider,
} from "@apollo/client";
const client = new ApolloClient({
link: new HttpLink(...),
cache: new InMemoryCache(...),
});
const QUERY = gql`query { ... }`;
function App() {
const { loading, data } = useQuery(QUERY);
return ...;
}
render(
<ApolloProvider client={client}>
<App />
</ApolloProvider>,
document.getElementById("root"),
);
That's it!
One @apollo/client package for everything
import {
ApolloClient,
InMemoryCache,
HttpLink,
gql,
useQuery,
ApolloProvider,
} from "@apollo/client";
- Greatly simplifies package installation and versioning
- No guesswork about which packages export what
- Enables better dead code elimination and tree shaking
- Allows coordination between gql (parsing) and HttpLink (printing)
Cache eviction & garbage collection
- Easily the most important features previously missing from Apollo Client
- Tracing garbage collection, as opposed to reference counting
- Important IDs can be explicitly retained to protect them from collection
- Inspiration taken from Hermes
Garbage collection
const query = gql`
query {
favoriteBook {
title
author {
name
}
}
}
`;
cache.writeQuery({
query,
data: {
favoriteBook: {
__typename: 'Book',
isbn: '9781451673319',
title: 'Fahrenheit 451',
author: {
__typename: 'Author',
name: 'Ray Bradbury',
}
},
},
});
cache.writeQuery({
query,
data: {
favoriteBook: {
__typename: 'Book',
isbn: '0312429215',
title: '2666',
author: {
__typename: 'Author',
name: 'Roberto Bolaño',
},
},
},
});
What does the cache look like now?
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:9781451673319": {
__typename: "Book",
title: "Fahrenheit 451",
author: {
__ref: 'Author:Ray Bradbury',
},
},
"Author:Ray Bradbury": {
__typename: "Author",
name: "Ray Bradbury",
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:9781451673319": {
__typename: "Book",
title: "Fahrenheit 451",
author: {
__ref: 'Author:Ray Bradbury',
},
},
"Author:Ray Bradbury": {
__typename: "Author",
name: "Ray Bradbury",
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});

The same query can no longer read Fahrenheit 451 or Ray Bradbury
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:9781451673319": {
__typename: "Book",
title: "Fahrenheit 451",
author: {
__ref: 'Author:Ray Bradbury',
},
},
"Author:Ray Bradbury": {
__typename: "Author",
name: "Ray Bradbury",
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});

In fact, the old data are now unreachable unless we know their IDs
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:9781451673319": {
__typename: "Book",
title: "Fahrenheit 451",
author: {
__ref: 'Author:Ray Bradbury',
},
},
"Author:Ray Bradbury": {
__typename: "Author",
name: "Ray Bradbury",
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});

What happens when we call cache.gc()?
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
What happens when we call cache.gc()?
Garbage collection
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
Smaller!
- Calling cache.retain(id) with an ID like "Author:Ray Bradbury" will protect that object from garbage collection
- Calling cache.release(id) undoes the retention
- Top-level IDs like ROOT_QUERY are automatically retained, so you don't usually need to think about retention
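For instance, a minimal sketch of retain and release in action, reusing the illustrative ID from the Bradbury example above:

// Sketch: pin an entity so cache.gc() cannot collect it, even once it
// becomes unreachable from ROOT_QUERY.
const id = "Author:Ray Bradbury";

cache.retain(id);   // protected from garbage collection
cache.gc();         // "Author:Ray Bradbury" survives

cache.release(id);  // undo the retention
cache.gc();         // now it can be collected if nothing references it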
Cache eviction
- Calling cache.evict(id) immediately removes any object with that ID, regardless of retention
- Eviction works in tandem with garbage collection: any data that was reachable only from an evicted entity can be collected automatically by calling cache.gc()
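Putting the two together, a quick sketch that reuses the IDs from the Book example:

// Sketch: evict an entity, then collect anything that became unreachable.
cache.evict("Book:0312429215");   // removed immediately, retained or not
const removed = cache.gc();       // e.g. ["Author:Roberto Bolaño"], now unreachable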
Declarative cache configuration API
All things considered, it’s better to tell a computer what you want to achieve, in just one place, rather than specifying how to achieve it in lots of different places
Declarative cache configuration API
The insight of declarative programming: declare your intentions as simply as you can, and then trust the computer to find the best way of satisfying them
Fragment matching and possibleTypes

- Query fragments can have type conditions:
query {
all_characters {
... on Character {
name
}
... on Jedi {
side
}
... on Droid {
model
}
}
}
Fragment matching and possibleTypes
- Query fragments can have type conditions
- Interfaces can be extended by subtypes
- Union types have member types
- Also possible: sub-subtypes, unions of interface types, unions of unions
- Apollo Client knows nothing about these relationships unless told about them
Fragment matching and possibleTypes
query {
__schema {
types {
kind
name
possibleTypes {
name
}
}
}
}
Fragment matching and possibleTypes
Previously, you would dump the output of this schema introspection query into a JSON file:
Fragment matching and possibleTypes
Then wrap the data with an IntrospectionFragmentMatcher:
import {
InMemoryCache,
IntrospectionFragmentMatcher,
} from 'apollo-cache-inmemory';
import introspectionQueryResultData from './fragmentTypes.json';
const cache = new InMemoryCache({
fragmentMatcher: new IntrospectionFragmentMatcher({
introspectionQueryResultData,
}),
});
{
"data": {
"__schema": {
"types": [
{
"kind": "OBJECT",
"name": "AddToFilmPlanetsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleFilmPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeoplePlanetPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AssetPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AssetSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateAsset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreatePerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreatePlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "FilmPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "FilmSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmcharactersPerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmplanetsPlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmspeciesSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmstarshipsStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmvehiclesVehicle",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "InvokeFunctionInput",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "InvokeFunctionPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Mutation",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PersonPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PersonSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonhomeworldPlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonspeciesSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonstarshipsStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonvehiclesVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PlanetPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PlanetSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetresidentsPerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmPlanetsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleFilmPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeoplePlanetPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "SpeciesPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "SpeciesSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciespeoplePerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "StarshipPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "StarshipSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshippilotsPerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Subscription",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateAsset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdatePerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdatePlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "VehiclePreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "VehicleSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehiclefilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehiclepilotsPerson",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "_ModelMutationType",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Asset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "AssetOrderBy",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "DateTime",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Film",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "FilmOrderBy",
"possibleTypes": null
},
{
"kind": "INTERFACE",
"name": "Node",
"possibleTypes": [
{
"name": "Asset"
},
{
"name": "Film"
},
{
"name": "Person"
},
{
"name": "Planet"
},
{
"name": "Species"
},
{
"name": "Starship"
},
{
"name": "Vehicle"
}
]
},
{
"kind": "ENUM",
"name": "PERSON_EYE_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_GENDER",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_HAIR_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_SKIN_COLOR",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Person",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PersonOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Planet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PlanetOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Query",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_EYE_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_HAIR_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_SKIN_COLOR",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Species",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SpeciesOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Starship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "StarshipOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Vehicle",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "VehicleOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "_QueryMeta",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Directive",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "__DirectiveLocation",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__EnumValue",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Field",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__InputValue",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Schema",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Type",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "__TypeKind",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Boolean",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Float",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "ID",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Int",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "String",
"possibleTypes": null
}
]
}
}
}
Fragment matching and possibleTypes
So much useless information!
const cache = new InMemoryCache({
possibleTypes: {
Character: ["Jedi", "Droid"],
Test: ["PassingTest", "FailingTest", "SkippedTest"],
Snake: ["Viper", "Python", "Asp"],
Python: ["BallPython", "ReticulatedPython"],
},
});
Fragment matching and possibleTypes
Apollo Client 3.0 allows you to provide just the necessary information:
Easy to generate manually or programmatically
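For the "programmatically" part, a build-time script along these lines (a sketch, assuming the fragmentTypes.json layout shown a few slides back) can boil a full introspection result down to exactly this map:

import { InMemoryCache } from "@apollo/client";
import introspectionResult from "./fragmentTypes.json";

// Keep only the types that actually have possible subtypes or members.
const possibleTypes = {};
introspectionResult.data.__schema.types.forEach((type) => {
  if (type.possibleTypes) {
    // e.g. { Node: ["Asset", "Film", "Person", ...] }
    possibleTypes[type.name] = type.possibleTypes.map((sub) => sub.name);
  }
});

const cache = new InMemoryCache({ possibleTypes });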
Type policies

As promised in the Apollo Client 2.6 blog post:

"The standard apollo-cache-inmemory cache implementation promises to normalize your data, to enable efficient cache updates and repeated cache reads. However, not all data benefits from normalization, and the logic for deciding whether separate objects should be normalized together varies across use cases. Future versions of the Apollo Client cache will unify several related features—dataIdFromObject, the @connection directive, and cacheRedirects—so Apollo developers can implement specialized normalization logic, or even disable normalization for certain queries."
Type policies
- An object passed to the InMemoryCache constructor via the typePolicies option
- Each key corresponds to a __typename, and each value is a TypePolicy object that provides configuration for that type
- Configuration is optional, as most types can get by with default behavior
- Because configuration coincides with cache creation, the rules are enforced for the entire lifetime of the cache
Entity identity
import {
InMemoryCache,
defaultDataIdFromObject,
} from 'apollo-cache-inmemory';
const cache = new InMemoryCache({
dataIdFromObject(object) {
switch (object.__typename) {
case 'Product': return `Product:${object.upc}`;
case 'Person': return `Person:${object.name}:${object.email}`;
case 'Book': return `Book:${object.title}:${object.author.name}`;
default: return defaultDataIdFromObject(object);
}
},
});
The Apollo Client 2.x way:
By default, the ID will include the __typename and the value of the id or _id field
Entity identity
type Product @key(fields: "upc") {
upc: String!
}
type Person @key(fields: "name email") {
name: String!
email: String!
}
type Book @key(fields: "title author { name }") {
title: String!
author: Author!
}
The Apollo Federation way:
Important note: this is schema syntax, not directly applicable to the client
Entity identity
The Apollo Client 3.0 way:
Behind the scenes, a typical Book ID might look like
'Book:{"title":"Fahrenheit 451","author":{"name":"Ray Bradbury"}}'
import { InMemoryCache } from '@apollo/client';
const cache = new InMemoryCache({
typePolicies: {
Product: {
keyFields: ["upc"],
},
Person: {
keyFields: ["name", "email"],
},
Book: {
keyFields: ["title", "author", ["name"]],
},
},
});
Entity identity
What problems does this new API solve?
- Field names are reliably included along with their values
- The order of fields is fixed by the keyFields array order, rather than by object property creation order
- No time is wasted executing logic for unrelated types
- Not an opaque function like dataIdFromObject, so Apollo Client can warn about missing keyFields
- Never confused by query field aliasing
- Configuration can be code-generated from your schema
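One nice consequence, sketched below using the cache.identify helper from the 3.0 API: you can look an entity up by the same key fields, instead of assembling the 'Book:{"title":...}' string by hand.

// Sketch: resolve a Book's cache ID from its key fields, then read it back.
const id = cache.identify({
  __typename: "Book",
  title: "Fahrenheit 451",
  author: { name: "Ray Bradbury" },
});

const book = cache.readFragment({
  id,
  fragment: gql`
    fragment BookTitle on Book {
      title
    }
  `,
});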
Not all data needs normalization!
const cache = new InMemoryCache({
typePolicies: {
},
});
In the keynote this morning, Matt used the example of search results, which may not benefit from normalization
Not all data needs normalization!
const cache = new InMemoryCache({
typePolicies: {
SearchQueryWithResults: {
// If we want the search results to be normalized and saved,
// we might use the query string to identify them in the cache.
keyFields: ["query"],
},
},
});
In the keynote this morning, Matt used the example of search results, which may not benefit from normalization
Not all data needs normalization!
const cache = new InMemoryCache({
typePolicies: {
SearchQueryWithResults: {
// If we want the search results to be normalized and saved,
// we might use the query string to identify them in the cache.
keyFields: ["query"],
},
SearchResult: {
// However, the individual search result objects might not
// have meaningful independent identities, even though they
// have a __typename. We can store them directly within the
// SearchQueryWithResults object by disabling normalization:
keyFields: false,
},
},
});
Disabling normalization improves colocation of data for objects that do not have stable identities
Field policies
- Fields are the properties of entity objects, like the title of a Book object
- However, in GraphQL, query fields can receive arguments, so a field's value cannot always be uniquely identified by its name alone
- In other words, the cache may need to store multiple distinct values for a single field, depending on the arguments
Field identity
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) {
...FeedEntry
}
}
By default, the cache stores separate values for each unique combination of arguments, since it has no knowledge of what the arguments mean, or which ones might be important
This default behavior is problematic for arguments like offset and limit, since those arguments should not alter the underlying data, but merely filter it
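Concretely, the default behavior leaves the cache looking roughly like this (the exact storage key format is an internal detail, shown here only for illustration):

{
  ROOT_QUERY: {
    'feed({"limit":10,"offset":0,"type":"PUBLIC"})': [/* entries 0-9 */],
    'feed({"limit":10,"offset":10,"type":"PUBLIC"})': [/* entries 10-19 */],
    // ...a separate value for every offset/limit combination,
    // even though it is really all one list
  }
}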
Field identity
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) @connection(
key: "feed",
filter: ["type"]
) {
...FeedEntry
}
}
Since Apollo Client 1.6, the recommended solution has been to include a @connection directive in any query that requests the feed field, to specify which arguments are important
Field identity
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) @connection(
key: "feed",
filter: ["type"]
) {
...FeedEntry
}
}
While this solves the field identity problem, it’s repetitive!
Nothing stops different feed queries from using different @connection directives, or neglecting to include a @connection directive, which can cause duplication and inconsistencies within the cache
Field identity
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
Apollo Client 3.0 allows specifying key arguments in one place, when you create the InMemoryCache:
Once you provide this information to the cache, you never have to repeat it anywhere else, as it will be uniformly applied to every query that asks for the feed field within a Query object
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
This configuration means feed data will be stored according to its type, ignoring any other arguments like offset and limit
That’s nice and all, but how does this actually promote consistency or efficiency of the cache?
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
When you exclude arguments from the field's identity, you can still use those arguments to implement a custom read function
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
},
},
},
},
});
When you exclude arguments from the field's identity, you can still use those arguments to implement a custom read function
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
},
},
},
},
});
A custom merge function controls how new data should be combined with existing data
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
merge(existingData, incomingData, { args }) {
return mergeFeedData(existingData, incomingData, args);
},
},
},
},
},
});
A custom merge function controls how new data should be combined with existing data
Reading and merging field values
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
merge(existingData, incomingData, { args }) {
return mergeFeedData(existingData, incomingData, args);
},
},
},
},
},
});
This gives you complete control over the internal representation of the field's value, as long as read and merge cooperate with each other
Pagination revisited
- Pagination is the pattern of requesting large lists of data in multiple smaller "pages"
- Currently achieved through a combination of @connection, fetchMore, and updateQuery
- Apollo Client 3.0 has a new solution for this important but tricky pattern
- And you've already seen all the necessary ingredients!
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
This code needs to be repeated anywhere books are consumed in the application
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
Worse, it has to update the whole query result, when all we want to paginate are the books
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
If we wanted the update to do something more complicated than array concatenation, this code would get much uglier
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
With Apollo Client 3.0, you can delete this code and implement a custom merge function instead
Pagination revisited
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
})
}
/>
)}
</Query>
With Apollo Client 3.0, you can delete this code and implement a custom merge function instead
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
}
}
}
}
})
What goes in the Query.books field policy?
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
}
}
}
}
})
First: let the cache know you want only one copy of the books field within the Query object
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
},
}
}
}
}
})
Now define what happens whenever the cache receives additional books
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
}
}
}
}
})
Now define what happens whenever the cache receives additional books
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing.slice(args.offset, args.offset + args.limit);
},
}
}
}
}
})
Provide a complementary read function to specify what happens when we ask for an arbitrary range of books we already have
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing.slice(args.offset, args.offset + args.limit);
},
}
}
}
}
})
This reimplements the updateQuery function we started with... but it has the same bugs
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Reading needs to work when there are no existing books, by returning undefined
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
We can also do a better job handling arbitrary args.offset and args.limit values
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Yes, this code is a bit more complicated, but that's the cost of correctness, and you only have to get it right once
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Once per InMemoryCache is an improvement, but what about once… ever, period?
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Nothing about this code is specific to books!
Pagination revisited
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
}
}
}
})
Reusability wins the day!
Pagination revisited
export function offsetLimitPaginatedField<T>() {
return {
keyArgs: false,
merge(existing: T[] | undefined, incoming: T[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: T[] | undefined, { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
};
}
Pagination policies can be tricky to get right, but the same approach works no matter how fancy you get: cursors, deduplication, sorting, merging, et cetera…
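For example, a cursor-based variant might look something like the following (purely a sketch, with a made-up page shape of { cursor, edges } that would need to match your schema):

export function cursorPaginatedField<T>() {
  return {
    keyArgs: false,
    merge(existing: { cursor: string; edges: T[] } | undefined,
          incoming: { cursor: string; edges: T[] }) {
      return {
        // Remember the most recent cursor so fetchMore can pass it along
        cursor: incoming.cursor,
        edges: [...(existing ? existing.edges : []), ...incoming.edges],
      };
    },
    // No read function needed if the UI simply renders every edge
    // received so far.
  };
}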
Pagination revisited
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
}
}
}
})
One more thing…
Pagination revisited
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book: {
},
}
}
}
})
What if we also have a Query field for reading individual books by their ISBN numbers?
Pagination revisited
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book: {
read(existing, { args, toReference }) {
return existing || toReference({
__typename: 'Book',
isbn: args.isbn,
});
},
},
}
}
}
})
A read function can intercept this field and return an existing reference from the cache
Similar to the old cacheRedirects API, which has been removed in Apollo Client 3.0
Pagination revisited
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book(existing, { args, toReference }) {
return existing || toReference({
__typename: 'Book',
isbn: args.isbn,
});
},
}
}
}
})
This pattern is common enough to have a convenient shorthand syntax
Pagination revisited
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book(existing, { args, toReference }) {
return existing || toReference({
__typename: 'Book',
isbn: args.isbn,
});
},
}
}
}
})
Field policy functions are good for a lot more than just pagination!
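For example (a hypothetical Person type, not taken from the schema above), read functions can supply default values or compute purely client-side derived fields, using the readField helper from the field policy API:

const cache = new InMemoryCache({
  typePolicies: {
    Person: {
      fields: {
        // Provide a default when the field has never been written
        nickname: {
          read(existing) {
            return existing === undefined ? "(none)" : existing;
          },
        },
        // Derive a client-side value from other cached fields
        fullName: {
          read(_, { readField }) {
            return `${readField("firstName")} ${readField("lastName")}`;
          },
        },
      },
    },
  },
});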
What about local state?
- If you use an @client field without providing a local state resolver function, the client reads the field from the cache
- Now that you can define arbitrary field read functions, you may not even need to define a separate resolver function!
  - Could we delete the local state implementation??? 🤑
- In order to deliver asynchronous results, synchronous read functions need a way to invalidate their own results (TODO)
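A minimal sketch of that idea, assuming a client-only cartItems field (the field name is made up) and a query that marks it with @client:

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        // Client-only field: no local resolver needed, just a read function
        cartItems: {
          read(existing) {
            return existing || [];
          },
        },
      },
    },
  },
});

const CART_QUERY = gql`
  query Cart {
    cartItems @client
  }
`;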
Underlying technologies
- Normalized entity objects stored in the EntityCache are treated as immutable data and updated non-destructively
- Possible thanks to the DeepMerger abstraction, which preserves object identity whenever merged data causes no changes to existing data
  - Within a single write operation, no modified object is shallow-copied more than once
- Entity references can be lazily computed for garbage collection
- Trivial to take immutable snapshots of the cache
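One observable consequence, sketched below: as long as nothing relevant has been written in between, repeating the same read returns the identical result object, so change detection can be a simple === comparison.

const query = gql`query { favoriteBook { title } }`;

const first = cache.readQuery({ query });
const second = cache.readQuery({ query });

// With result caching enabled (the default), unchanged data yields
// referentially identical results.
console.log(first === second); // true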
Getting started
How to try the beta
- npm install @apollo/client@beta
- Start using @apollo/client exports rather than the Apollo Client 2.0 packages
- When you're ready: npm remove apollo-client apollo-utilities apollo-cache apollo-cache-inmemory apollo-link apollo-link-http react-apollo @apollo/react graphql-tag
- Follow the Release 3.0 pull request
Documentation
- Latest docs can be found on the release-3.0 branch:
  - git checkout -t origin/release-3.0
  - git log --stat docs/source
  - Cache configuration
  - Fragment matching
- A work in progress, as always
- A little less incomplete than ever before, thanks to some crucial new hires on the documentation team: Stephen Barlow (@barlow_vo)
Deprecation cheat sheet
| Existing feature | Status | Replacement |
|---|---|---|
| fragmentMatcher | removed | possibleTypes |
| dataIdFromObject | deprecated | keyFields |
| @connection | deprecated | keyArgs |
| cacheRedirects | removed | field read function |
| updateQuery | avoidable | field merge function |
| resetStore | avoidable | GC, eviction |
| local state | avoidable* | field read function |
| separate packages | consolidated | @apollo/client |
The replacements are superior because they are declarative, non-repetitive, and consistently applied
Let’s do this
Fine-Tuning Apollo Client Caching for Your Data Graph
By Ben Newman