Apollo Client Architect
GraphQL Summit
30 October 2019
Your organization should have a single canonical data graph, sprawling and possibly federated, but logically unified by an evolving schema
The job of a client application is to faithfully replicate a subset of this data graph, with all the same relationships and constraints that matter to you
To be clear, I do not think this is a very likely scenario
More likely: a healthy combination of cooperating query languages that mutually benefit from the same client-side data graph
Borrowing concepts already introduced by Apollo Federation, such as composite key entity identity
Powerful new tools like garbage collection and a declarative API for managing cached field values
One @apollo/client package for everything
apollo-client
apollo-utilities
apollo-cache
apollo-cache-inmemory
apollo-link
apollo-link-http
@apollo/react-hooks
graphql-tag
One @apollo/client package for everything
import React from "react";
import { render } from "react-dom";
import {
ApolloClient,
InMemoryCache,
HttpLink,
gql,
useQuery,
ApolloProvider,
} from "@apollo/client";
const client = new ApolloClient({
link: new HttpLink(...),
cache: new InMemoryCache(...),
});
const QUERY = gql`query { ... }`;
function App() {
const { loading, data } = useQuery(QUERY);
return ...;
}
render(
<ApolloProvider client={client}>
<App />
</ApolloProvider>,
document.getElementById("root"),
);
One @apollo/client package for everything
import {
ApolloClient,
InMemoryCache,
HttpLink,
gql,
useQuery,
ApolloProvider,
} from "@apollo/client";
Greatly simplifies package installation and versioning
No guesswork about which packages export what
Enables better dead code elimination and tree shaking
Allows coordination between gql (parsing) and HttpLink (printing)
const query = gql`
query {
favoriteBook {
title
author {
name
}
}
}
`;
cache.writeQuery({
query,
data: {
favoriteBook: {
__typename: 'Book',
isbn: '9781451673319',
title: 'Fahrenheit 451',
author: {
__typename: 'Author',
name: 'Ray Bradbury',
}
},
},
});
cache.writeQuery({
query,
data: {
favoriteBook: {
__typename: 'Book',
isbn: '0312429215',
title: '2666',
author: {
__typename: 'Author',
name: 'Roberto Bolaño',
},
},
},
});
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:9781451673319": {
__typename: "Book",
title: "Fahrenheit 451",
author: {
__ref: 'Author:Ray Bradbury',
},
},
"Author:Ray Bradbury": {
__typename: "Author",
name: "Ray Bradbury",
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño"
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
After the second writeQuery, the Fahrenheit 451 and Ray Bradbury objects are still in the cache, but nothing reachable from ROOT_QUERY refers to them anymore. What happens when we call cache.gc()?
cache.gc(); // removes any objects no longer reachable from ROOT_QUERY
expect(cache.extract()).toEqual({
ROOT_QUERY: {
__typename: "Query",
favoriteBook: { __ref: "Book:0312429215" },
},
"Book:0312429215": {
__typename: "Book",
title: "2666",
author: {
__ref: "Author:Roberto Bolaño",
},
},
"Author:Roberto Bolaño": {
__typename: "Author",
name: "Roberto Bolaño",
},
});
cache.retain(id) with an ID like "Author:Ray Bradbury" will protect that object from garbage collection
cache.release(id) undoes the retainment
Root IDs like ROOT_QUERY are automatically retained, so you don't usually need to think about retainment
cache.evict(id) immediately removes any object with that ID, regardless of retainment
cache.gc() then collects anything that is neither reachable nor retained
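A quick sketch of how these APIs fit together (the ID and the ordering are illustrative):
const id = "Author:Ray Bradbury";

cache.retain(id);  // protect this object from garbage collection
cache.gc();        // the retained object survives, even if unreachable
cache.release(id); // undo the retainment

cache.evict(id);   // remove the object immediately, retained or not
cache.gc();        // collect whatever else is unreachable and unretained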
possibleTypes
query {
all_characters {
... on Character {
name
}
... on Jedi {
side
}
... on Droid {
model
}
}
}
possibleTypes
query {
__schema {
types {
kind
name
possibleTypes {
name
}
}
}
}
possibleTypes
Previously, you passed the introspection result to an IntrospectionFragmentMatcher:
import {
InMemoryCache,
IntrospectionFragmentMatcher,
} from 'apollo-cache-inmemory';
import introspectionQueryResultData from './fragmentTypes.json';
const cache = new InMemoryCache({
fragmentMatcher: new IntrospectionFragmentMatcher({
introspectionQueryResultData,
}),
});
{
"data": {
"__schema": {
"types": [
{
"kind": "OBJECT",
"name": "AddToFilmPlanetsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToFilmVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleFilmPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeoplePlanetPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AddToPeopleVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AssetPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "AssetSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateAsset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreatePerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreatePlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "CreateVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "FilmPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "FilmSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmcharactersPerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmplanetsPlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmspeciesSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmstarshipsStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmvehiclesVehicle",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "InvokeFunctionInput",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "InvokeFunctionPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Mutation",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PersonPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PersonSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonhomeworldPlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonspeciesSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonstarshipsStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonvehiclesVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PlanetPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "PlanetSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetresidentsPerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmPlanetsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromFilmVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleFilmPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeoplePlanetPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleSpeciesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleStarshipsPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "RemoveFromPeopleVehiclesPayload",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "SpeciesPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "SpeciesSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciespeoplePerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "StarshipPreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "StarshipSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipfilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshippilotsPerson",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Subscription",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateAsset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdatePerson",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdatePlanet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateSpecies",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateStarship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "UpdateVehicle",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "VehiclePreviousValues",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleSubscriptionFilter",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleSubscriptionFilterNode",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "VehicleSubscriptionPayload",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehiclefilmsFilm",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehiclepilotsPerson",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "_ModelMutationType",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Asset",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "AssetFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "AssetOrderBy",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "DateTime",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Film",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "FilmFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "FilmOrderBy",
"possibleTypes": null
},
{
"kind": "INTERFACE",
"name": "Node",
"possibleTypes": [
{
"name": "Asset"
},
{
"name": "Film"
},
{
"name": "Person"
},
{
"name": "Planet"
},
{
"name": "Species"
},
{
"name": "Starship"
},
{
"name": "Vehicle"
}
]
},
{
"kind": "ENUM",
"name": "PERSON_EYE_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_GENDER",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_HAIR_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PERSON_SKIN_COLOR",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Person",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PersonFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PersonOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Planet",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "PlanetFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "PlanetOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Query",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_EYE_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_HAIR_COLOR",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SPECIES_SKIN_COLOR",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Species",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "SpeciesFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "SpeciesOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Starship",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "StarshipFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "StarshipOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "Vehicle",
"possibleTypes": null
},
{
"kind": "INPUT_OBJECT",
"name": "VehicleFilter",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "VehicleOrderBy",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "_QueryMeta",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Directive",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "__DirectiveLocation",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__EnumValue",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Field",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__InputValue",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Schema",
"possibleTypes": null
},
{
"kind": "OBJECT",
"name": "__Type",
"possibleTypes": null
},
{
"kind": "ENUM",
"name": "__TypeKind",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Boolean",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Float",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "ID",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "Int",
"possibleTypes": null
},
{
"kind": "SCALAR",
"name": "String",
"possibleTypes": null
}
]
}
}
}
possibleTypes
const cache = new InMemoryCache({
possibleTypes: {
Character: ["Jedi", "Droid"],
Test: ["PassingTest", "FailingTest", "SkippedTest"],
Snake: ["Viper", "Python", "Asp"],
Python: ["BallPython", "ReticulatedPython"],
},
});
possibleTypes
The standard apollo-cache-inmemory cache implementation promises to normalize your data, to enable efficient cache updates and repeated cache reads. However, not all data benefits from normalization, and the logic for deciding whether separate objects should be normalized together varies across use cases. Future versions of the Apollo Client cache will unify several related features (dataIdFromObject, the @connection directive, and cacheRedirects) so Apollo developers can implement specialized normalization logic, or even disable normalization for certain queries.
Type policies are passed to the InMemoryCache constructor via the typePolicies option
Each key is a __typename, and each value is a TypePolicy object that provides configuration for that type
For comparison, Apollo Client 2.x customized object identity with a dataIdFromObject function:
import {
InMemoryCache,
defaultDataIdFromObject,
} from 'apollo-cache-inmemory';
const cache = new InMemoryCache({
dataIdFromObject(object) {
switch (object.__typename) {
case 'Product': return `Product:${object.upc}`;
case 'Person': return `Person:${object.name}:${object.email}`;
case 'Book': return `Book:${object.title}:${object.author.name}`;
default: return defaultDataIdFromObject(object);
}
},
});
By default, the ID will include the __typename and the value of the id or _id field
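For example (illustrative values), the default ID is just the __typename joined to the id:
defaultDataIdFromObject({ __typename: "Book", id: "abc123" });
// => "Book:abc123"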
type Product @key(fields: "upc") {
upc: String!
}
type Person @key(fields: "name email") {
name: String!
email: String!
}
type Book @key(fields: "title author { name }") {
title: String!
author: Author!
}
Important note: this is schema syntax, not directly applicable to the client
Behind the scenes, a typical Book ID might look like
'Book:{"title":"Fahrenheit 451","author":{"name":"Ray Bradbury"}}'
import { InMemoryCache } from '@apollo/client';
const cache = new InMemoryCache({
typePolicies: {
Product: {
keyFields: ["upc"],
},
Person: {
keyFields: ["name", "email"],
},
Book: {
keyFields: ["title", "author", ["name"]],
},
},
});
IDs are serialized according to keyFields array order, rather than by object property creation order
Unlike dataIdFromObject, the declarative keyFields API lets Apollo Client warn about missing keyFields
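For example (assuming the cache.identify helper from the 3.0 API, with made-up values), property order on the result object does not matter:
cache.identify({
  __typename: "Person",
  email: "ray@example.com", // listed before name here…
  name: "Ray Bradbury",
});
// => 'Person:{"name":"Ray Bradbury","email":"ray@example.com"}'
// …the ID is still serialized in keyFields order: name, then email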
const cache = new InMemoryCache({
typePolicies: {
},
});
In the keynote this morning, Matt used the example of search results, which may not benefit from normalization
const cache = new InMemoryCache({
typePolicies: {
SearchQueryWithResults: {
// If we want the search results to be normalized and saved,
// we might use the query string to identify them in the cache.
keyFields: ["query"],
},
},
});
In the keynote this morning, Matt used the example of search results, which may not benefit from normalization
const cache = new InMemoryCache({
typePolicies: {
SearchQueryWithResults: {
// If we want the search results to be normalized and saved,
// we might use the query string to identify them in the cache.
keyFields: ["query"],
},
SearchResult: {
// However, the individual search result objects might not
// have meaningful independent identities, even though they
// have a __typename. We can store them directly within the
// SearchQueryWithResults object by disabling normalization:
keyFields: false,
},
},
});
Disabling normalization improves colocation of data for objects that do not have stable identities
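As a rough, hypothetical sketch (the search field, its arguments, and the data are all made up), the extracted cache would then embed the un-normalized results directly inside their parent object:
expect(cache.extract()).toEqual({
  ROOT_QUERY: {
    __typename: "Query",
    'search({"query":"bradbury"})': {
      __ref: 'SearchQueryWithResults:{"query":"bradbury"}',
    },
  },
  'SearchQueryWithResults:{"query":"bradbury"}': {
    __typename: "SearchQueryWithResults",
    query: "bradbury",
    results: [
      // stored inline, with no cache IDs of their own
      { __typename: "SearchResult", title: "Fahrenheit 451" },
      { __typename: "SearchResult", title: "The Martian Chronicles" },
    ],
  },
});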
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) {
...FeedEntry
}
}
By default, the cache stores separate values for each unique combination of arguments, since it has no knowledge of what the arguments mean, or which ones might be important
This default behavior is problematic for arguments like offset and limit, since those arguments should not alter the underlying data, but merely filter it
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) @connection(
key: "feed",
filter: ["type"]
) {
...FeedEntry
}
}
Since Apollo Client 1.6, the recommended solution has been to include a @connection directive in any query that requests the feed field, to specify which arguments are important
query Feed($type: FeedType!, $offset: Int, $limit: Int) {
feed(type: $type, offset: $offset, limit: $limit) @connection(
key: "feed",
filter: ["type"]
) {
...FeedEntry
}
}
While this solves the field identity problem, it’s repetitive!
Nothing stops different feed queries from using different @connection directives, or neglecting to include a @connection directive, which can cause duplication and inconsistencies within the cache
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
Apollo Client 3.0 allows specifying key arguments in one place, when you create the InMemoryCache:
Once you provide this information to the cache, you never have to repeat it anywhere else, as it will be uniformly applied to every query that asks for the feed field within a Query object
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
This configuration means feed data will be stored according to its type, ignoring any other arguments like offset and limit
That’s nice and all, but how does this actually promote consistency or efficiency of the cache?
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
},
},
},
},
});
When you exclude arguments from the field’s identity, you can still use those arguments to implement a custom read function
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
},
},
},
},
});
When you exclude arguments from the field’s identity, you can still use those arguments to implement a custom read function
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
},
},
},
},
});
A custom merge function controls how new data should be combined with existing data
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
merge(existingData, incomingData, { args }) {
return mergeFeedData(existingData, incomingData, args);
},
},
},
},
},
});
A custom merge function controls how new data should be combined with existing data
const cache = new InMemoryCache({
typePolicies: {
Query: {
fields: {
feed: {
keyArgs: ["type"],
read(feedData, { args }) {
return feedData.slice(args.offset, args.offset + args.limit);
},
merge(existingData, incomingData, { args }) {
return mergeFeedData(existingData, incomingData, args);
},
},
},
},
},
});
This gives you complete control over the internal representation of the field’s value, as long as read and merge cooperate with each other
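For example, here is a sketch of a policy whose internal representation differs from what consumers see: merge stores entries in an id-keyed map to deduplicate them, and read converts that map back into an array (this assumes the readField helper passed to field policy functions, and made-up field names):
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: ["type"],
          merge(existing = {}, incoming, { readField }) {
            const merged = { ...existing };
            incoming.forEach(entry => {
              // key each entry by its id, so refetches never duplicate it
              merged[readField("id", entry)] = entry;
            });
            return merged;
          },
          read(existing = {}) {
            // consumers of the field still see an ordinary array
            return Object.values(existing);
          },
        },
      },
    },
  },
});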
@connection, fetchMore, and updateQuery
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
This code needs to be repeated anywhere books are consumed in the application
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
Worse, it has to update the whole query result, when all we want to paginate are the books
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
If we wanted the update to do something more complicated than array concatenation, this code would get much uglier
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
updateQuery(prev, { fetchMoreResult }) {
if (!fetchMoreResult) return prev;
return Object.assign({}, prev, {
books: [...prev.books, ...fetchMoreResult.books]
});
}
})
}
/>
)}
</Query>
With Apollo Client 3.0, you can delete this code and implement a custom merge function instead
<Query query={BOOKS_QUERY} variables={{
offset: 0,
limit: 10,
}} fetchPolicy="cache-and-network">
{({ data, fetchMore }) => (
<Library
entries={data.books || []}
onLoadMore={() =>
fetchMore({
variables: {
offset: data.books.length
},
})
}
/>
)}
</Query>
With Apollo Client 3.0, you can delete this code and implement a custom merge function instead
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
}
}
}
}
})
What goes in the Query.books field policy?
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
}
}
}
}
})
First: let the cache know you want only one copy of the books field within the Query object
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
},
}
}
}
}
})
Now define what happens whenever the cache receives additional books
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
}
}
}
}
})
Now define what happens whenever the cache receives additional books
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing.slice(args.offset, args.offset + args.limit);
},
}
}
}
}
})
Provide a complementary read function to specify what happens when we ask for an arbitrary range of books we already have
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing.slice(args.offset, args.offset + args.limit);
},
}
}
}
}
})
This reimplements the updateQuery function we started with... but it has the same bugs
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
return [...(existing || []), ...incoming];
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Reading needs to work when there are no existing books, by returning undefined
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
We can also do a better job handling arbitrary args.offset and args.limit values
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Yes, this code is a bit more complicated, but that's the cost of correctness, and you only have to get it right once
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Once per InMemoryCache is an improvement, but what about once… ever, period?
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: {
keyArgs: false, // Take full control over this field
merge(existing: Book[], incoming: Book[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: Book[], { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
}
}
}
}
})
Nothing about this code is specific to books!
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
}
}
}
})
Reusability wins the day!
export function offsetLimitPaginatedField<T>() {
return {
keyArgs: false,
merge(existing: T[] | undefined, incoming: T[], { args }) {
const merged = existing ? existing.slice(0) : [];
for (let i = args.offset; i < args.offset + args.limit; ++i) {
merged[i] = incoming[i - args.offset];
}
return merged;
},
read(existing: T[] | undefined, { args }) {
return existing && existing.slice(
args.offset,
args.offset + args.limit,
);
},
};
}
Pagination policies can be tricky to get right, but the same approach works no matter how fancy you get: cursors, deduplication, sorting, merging, et cetera…
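For example, a cursor-based variant might look roughly like this (a sketch, assuming the list is refetched from the start whenever no after cursor is given):
export function cursorPaginatedField<T>() {
  return {
    keyArgs: false,
    merge(existing: T[] | undefined, incoming: T[], { args }) {
      // no cursor means a fresh first page; otherwise append to what we have
      return args && args.after
        ? [...(existing || []), ...incoming]
        : incoming;
    },
    read(existing: T[] | undefined) {
      // expose everything fetched so far
      return existing;
    },
  };
}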
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
}
}
}
})
One more thing…
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book: {
},
}
}
}
})
What if we also have a Query field for reading individual books by their ISBN numbers?
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book: {
read(existing, { args, toReference }) {
return existing || toReference({
__typename: 'Book',
isbn: args.isbn,
});
},
},
}
}
}
})
A read function can intercept this field and return an existing reference from the cache
Similar to the old cacheRedirects API, which has been removed in Apollo Client 3.0
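For comparison, the 2.x version of this redirect looked roughly like the following, and it only worked when dataIdFromObject produced matching IDs:
const cache = new InMemoryCache({
  cacheRedirects: {
    Query: {
      book: (_, args, { getCacheKey }) =>
        getCacheKey({ __typename: 'Book', isbn: args.isbn }),
    },
  },
});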
import { offsetLimitPaginatedField } from "./helpers/pagination";
new InMemoryCache({
typePolicies: {
Query: {
fields: {
books: offsetLimitPaginatedField<Book>(),
book(existing, { args, toReference }) {
return existing || toReference({
__typename: 'Book',
isbn: args.isbn,
});
},
}
}
}
})
This pattern is common enough to have a convenient shorthand syntax
If you request an @client field without providing a local state resolver function, the client reads the field from the cache
With read functions, you may not even need to define a separate resolver function!
read functions need a way to invalidate their own results (TODO)
Objects in the EntityCache are treated as immutable data and updated non-destructively
Merging uses a DeepMerger abstraction, which preserves object identity whenever merged data causes no changes to existing data
npm install @apollo/client@beta
Update your imports to use the @apollo/client exports rather than the Apollo Client 2.0 packages
npm remove apollo-client apollo-utilities apollo-cache apollo-cache-inmemory apollo-link apollo-link-http react-apollo @apollo/react graphql-tag
You can follow progress on the release-3.0 branch:
git checkout -t origin/release-3.0
git log --stat docs/source
Stephen Barlow (@barlow_vo)
| Existing feature | Status | Replacement |
| --- | --- | --- |
| fragmentMatcher | removed | possibleTypes |
| dataIdFromObject | deprecated | keyFields |
| @connection | deprecated | keyArgs |
| cacheRedirects | removed | field read function |
| updateQuery | avoidable | field merge function |
| resetStore | avoidable | GC, eviction |
| local state | avoidable* | field read function |
| separate packages | consolidated | @apollo/client |