# What's new in v0.17
This is a distillation of what's new in Orbit v0.17, intended as a reference for developers who need to upgrade their apps and libraries from v0.16.
If you're brand new to Orbit yourself, you may wish to skip this section in order to explore Orbit's latest features in a broader context.
## New Site + API Reference
v0.17 is Orbit's first release that comes with API docs for all its packages. These docs are generated by TypeDoc from Orbit's typings and code annotations. Although a bit sparse for now, this reference should only improve with time and help from the community. Contributions will be most appreciated!
## Improved, strict typings throughout
The TypeScript in all of Orbit's packages has been improved to the extent that it is now all compiled with the strict flag. This has allowed us to refactor more confidently, improve our documentation, and provide a better developer experience all around.
## Extraction of `@orbit/records` from `@orbit/data`
As part of the push to improve typings, it became clear that `@orbit/data` contains a number of interfaces and classes that could prove useful for any type of data, not just records. Thus, record-specific types and classes were extracted into a new package: `@orbit/records`.
Please review the exports from `@orbit/records` for a complete listing of classes, interfaces, and other types that have been moved to this new package.
Be aware that several exports have been renamed to be explicit about being record-specific. For instance, `Schema` is now `RecordSchema`, so you'll want to make this refactor:
```diff
- import { Schema } from '@orbit/data';
+ import { RecordSchema } from '@orbit/records';
```
Apologies for this breaking change and the refactoring it requires. We're trying to settle the scope of each package prior to v1.0.
**Breaking change**

Please review all your direct imports from `@orbit/data` and replace them as needed with imports from `@orbit/records`.
## Singular vs. multi-expression queries
In v0.16, each `Query` could only have a single `expression`:
```typescript
// v0.16
export interface Query {
  id: string;
  expression: QueryExpression;
  options?: any;
}
```
Now, `Query` is typed as follows, with `expressions` that can be singular or an array of query expressions:
```typescript
// v0.17
export interface Query<QE extends QueryExpression> {
  id: string;
  expressions: QE | QE[];
  options?: RequestOptions;
}
```
This allows sources, such as `JSONAPISource`, to optionally perform these expressions in parallel, which it now does by default.
Now that queries can contain multiple expressions just like transforms can contain multiple operations, there needs to be a clear and consistent way to build them. And likewise, the expectation needs to be clear about the form in which results should be returned.
Here's a single expression passed to a query builder, which can be expected to return a single result:
```typescript
const earth = await source.query((q) =>
  q.findRecord({ type: 'planet', id: 'earth' })
);
```
That same expression could be passed in an array, which will cause results to be returned in an array:
```typescript
const [earth] = await source.query((q) => [
  q.findRecord({ type: 'planet', id: 'earth' })
]);
```
And of course, that array could be expanded to include more than one expression:
```typescript
const [earth, jupiter, saturn] = await source.query((q) => [
  q.findRecord({ type: 'planet', id: 'earth' }),
  q.findRecord({ type: 'planet', id: 'jupiter' }),
  q.findRecord({ type: 'planet', id: 'saturn' })
]);
```
As mentioned above, this query may be handled with 3 parallel requests, but will only resolve when all have completed successfully.
**Breaking change**

Although most developers typically do not interact with queries directly, if you do, it's important to note the change from `expression` to `expressions`.
## Singular vs. multi-operation transforms
All the patterns mentioned above for queries also apply to transforms.
A single operation provided to a transform builder will return a single result:
```typescript
const earth = await source.update((t) =>
  t.addRecord({ type: 'planet', id: 'earth' })
);
```
The same operation passed in an array will cause results to be returned in an array:
```typescript
const [earth] = await source.update((t) => [
  t.addRecord({ type: 'planet', id: 'earth' })
]);
```
And as before, multi-operation transforms will produce an array of results:
```typescript
const [earth, jupiter, saturn] = await source.update((t) => [
  t.addRecord({ type: 'planet', id: 'earth' }),
  t.addRecord({ type: 'planet', id: 'jupiter' }),
  t.addRecord({ type: 'planet', id: 'saturn' })
]);
```
The `Transform` interface has changed subtly such that `operations` can now be singular or an array, to parallel `Query#expressions`:
```typescript
// v0.17
export interface Transform<O extends Operation> {
  id: string;
  operations: O | O[];
  options?: RequestOptions;
}
```
**Breaking changes**

The change that allows a `Transform`'s `operations` to be singular is breaking. You may wish to use a utility function such as `toArray` to interact with `operations` uniformly as an array.

Also note that, in v0.16, calling `update` with a single operation in an array would return a singular result. It will now return that same result as the single member of an array.
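For example, here's a minimal sketch of handling `operations` uniformly, assuming the `toArray` helper exported by `@orbit/utils` (which wraps a single value in an array and passes arrays through):

```typescript
import { toArray } from '@orbit/utils';

// Hypothetical listener: handle `operations` the same way whether the
// transform was built from a single operation or an array of operations.
source.on('transform', (transform) => {
  for (const operation of toArray(transform.operations)) {
    console.log(operation.op);
  }
});
```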
## Full vs. data-only responses
All requests (queries and updates) can now be made with a `{ fullResponse: true }` option to receive responses as a `FullResponse`. Full responses include the following members:
- `data` - the primary data that would be returned without the `fullResponse` option.
- `details` - response details particular to the source. For a `MemorySource`, this will include applied and inverse operations. For a `JSONAPISource`, this will include `Response` objects and documents.
- `transforms` - the transforms applied as a result of this request. They are always emitted with a `transform` event, which hooks into Orbit's sync flow.
- `sources` - a map of source-specific response details from downstream sources that were engaged in fulfilling this request.
It's now up to you just how much of this information you want at the call site. The following requests will be handled the same internally:
```typescript
// Just the data
const planets = await source.query((q) => q.findRecords('planet'));

// All the details
const { data, details, transforms, sources } = await source.query(
  (q) => q.findRecords('planet'),
  { fullResponse: true }
);
```
## Improved response typings
Speaking of responses, it's now possible to type them using TypeScript generics instead of relying on type coercion (i.e. `response as Type`).
Standard data requests can type the response data:
```typescript
// query<RequestData>(queryOrExpressions, options, id?): Promise<RequestData>
const planets = await source.query<Planet[]>((q) => q.findRecords('planet'));
```
Full data requests can type the response data, details, and operation:
```typescript
// query<RequestData, RequestDetails, RequestOperation>(queryOrExpressions, options, id?): Promise<FullResponse<RequestData, RequestDetails, RequestOperation>>
const { data, details, transforms, sources } = await source.query<
  Planet[],
  JSONAPIResponse[],
  RecordOperation
>((q) => q.findRecords('planet'), { fullResponse: true });
```
## Deprecation of `Pullable` and `Pushable` interfaces
Now that responses can include full processing details, everything that was unique to the `pull` and `push` methods on sources is redundant. The `Pullable` and `Pushable` interfaces have been deprecated in favor of the more capable `Queryable` and `Updatable` interfaces for making requests.
One common use case for `pull` / `push` was restoring from backup:
```typescript
const transform = await backup.pull((q) => q.findRecords());
await memory.push(transform);
```
This can be achieved as follows with `query` / `sync` (or `update`):
```typescript
const allRecords = await backup.query((q) => q.findRecords());
await memory.sync((t) => allRecords.map((r) => t.addRecord(r)));
```
And if you do want access to the transforms that result from a request, specify that you want a full response:
```typescript
const { transforms } = await source.update(
  (t) => [
    t.addRecord({ type: 'planet', attributes: { name: 'Earth' } }),
    t.addRecord({ type: 'planet', attributes: { name: 'Jupiter' } })
  ],
  { fullResponse: true }
);
```
## Transform buffers for faster cache processing
Record-cache-based sources that interact with browser storage have had performance issues when dealing with large datasets, especially when paired with read/write-heavy processors that ensure relationship tracking and correctness. A new paradigm has been developed, the `RecordTransformBuffer`, which acts as a memory buffer for these operations.
For now, using this buffer is opt-in, via the `{ useBuffer: true }` option:
```typescript
await indexeddbSource.update(
  (t) => [
    t.addRecord({ type: 'planet', attributes: { name: 'Earth' } }),
    t.addRecord({ type: 'planet', attributes: { name: 'Jupiter' } })
  ],
  { useBuffer: true }
);
```
Performance improvements are quite promising, and stability seems solid.
**Caution**

The only edge cases we've found to be concerned about are related to cascading deletes, which are triggered when record relationships are defined with `dependent: delete`. In those cases, the cascade may not be as complete in the buffer as in the actual cache, so we recommend avoiding transform buffers for now.
## New serializers
Concepts of serialization have, up until now, been very specific to usage by the `JSONAPISource`, and particularly the `JSONAPISerializer` class. This class has been deprecated and replaced with a series of composable serializers, all built upon a simple and flexible `Serializer` interface. This interface, as well as serializers for primitives (booleans, dates, date-times, etc.), has been published in a new package, `@orbit/serializers`.
New serializers particular to JSON:API have also been added to `@orbit/jsonapi`, including:

- `JSONAPIDocumentSerializer`
- `JSONAPIResourceSerializer`
- `JSONAPIResourceIdentitySerializer`
- `JSONAPIResourceFieldSerializer`
- `JSONAPIOperationSerializer`
- `JSONAPIOperationsDocumentSerializer`
These new serializers remove some of the default behaviors present in v0.16 - resource fields and types in documents are no longer dasherized and pluralized, but are left "as is" in camelized form. This lines up with the new recommendations for the JSON:API spec and creates much less work by default.
Each of these classes can be overridden to provide custom serialization behavior. You could then provide those custom classes when creating your source:
```typescript
const source = new JSONAPISource({
  schema,
  serializerClassFor: buildSerializerClassFor({
    [JSONAPISerializers.Resource]: MyCustomResourceSerializer,
    [JSONAPISerializers.ResourceType]: MyCustomResourceTypeSerializer
  })
});
```
Alternatively, you can use the standard serializers but provide custom settings for those serializers. For example, here are settings that match the previous default serialization options:
```typescript
const source = new JSONAPISource({
  schema,
  serializerSettingsFor: buildSerializerSettingsFor({
    sharedSettings: {
      // Optional: Custom `pluralize` / `singularize` inflectors that know about
      // your app's unique data.
      inflectors: {
        pluralize: buildInflector(
          { person: 'people' }, // custom mappings
          (input) => `${input}s` // naive pluralizer, specified as a fallback
        ),
        singularize: buildInflector(
          { people: 'person' }, // custom mappings
          (arg) => arg.substring(0, arg.length - 1) // naive singularizer, specified as a fallback
        )
      }
    },
    // Serialization settings according to the type of serializer
    settingsByType: {
      [JSONAPISerializers.ResourceField]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceFieldParam]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceFieldPath]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceType]: {
        serializationOptions: { inflectors: ['pluralize', 'dasherize'] }
      },
      [JSONAPISerializers.ResourceTypePath]: {
        serializationOptions: { inflectors: ['pluralize', 'dasherize'] }
      }
    }
  })
});
```
## New validators
A common source of problems for Orbit developers has been using data that is malformed or doesn't align with a schema's expectations. This can cause confusing errors during processing by a cache or downstream source.
To address this problem, we're introducing "validators", which are shipped in a new package, `@orbit/validators`, that includes validators for primitive types. Record-specific validators have also been included in `@orbit/records`.
By default, each source will build its own set of validators and use them automatically. You can instead share a common set of validators via the `validatorFor` settings. And you can opt out of using validators entirely by configuring your sources with `{ autoValidate: false }`.
## Record normalizers
When building queries and transforms, some scenarios have been more tedious than necessary: identifying records by a key instead of `id`, for instance, or using a model class from a lib like ember-orbit to reference a record instead of its JSON identity.
A new abstraction has been added to make query and transform builders more flexible: record normalizers. Record normalizers implement the `RecordNormalizer` interface and convert record identities and/or data into a normalized form.
The new base normalizer now allows `{ type, key, value }` to be used anywhere that `{ type, id }` identities can be used, which significantly reduces the annoyance of working with remote keys.
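For example, here's a sketch of looking up a record by key in a query builder. The `remoteId` key is hypothetical and would need to be defined in your schema:

```typescript
// Look up a record by a remote key rather than its local id.
// Assumes the schema defines a `remoteId` key for the `planet` model
// and that a key map is in place to resolve it.
const planet = await source.query((q) =>
  q.findRecord({ type: 'planet', key: 'remoteId', value: 'abc123' })
);
```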
## Synchronous change tracking in memory forks
Previously, memory source forks behaved precisely like other memory sources: every trackable update was applied at the source level (and was thus async). Now, the default (but overrideable) behavior is to track changes at the cache level in forks. Thus, synchronous changes can be made to a forked cache and then merged back into the base source.
This improves the DX for the most common use case for forks: editing form data in isolation before merging coalesced changes back to the base. For example:
```typescript
// (sync) fork a base memory source
let fork = source.fork();

// (sync) add jupiter synchronously to the forked source's cache
fork.cache.update((t) =>
  t.addRecord({
    type: 'planet',
    id: 'jupiter',
    attributes: { name: 'Jupiter' }
  })
);

// (async) merge changes from the fork back to its base
await source.merge(fork);

// (async) jupiter should now be in the base source (as well as its cache)
let jupiter = await source.query((q) =>
  q.findRecord({ type: 'planet', id: 'jupiter' })
);
```
If you want to continue to track changes only at the source level and have `merge` work only with those changes, pass the following configuration setting when you fork a source:
```typescript
let fork = source.fork({
  cacheSettings: { trackUpdateOperations: false }
});
```
This will prevent update tracking at the cache level and will signal to `merge` that only transforms applied at the source level should be merged.
## New memory cache capabilities
In addition to the above improvements to memory sources, v0.17 also adds the following methods to `MemoryCache`:
- `fork` - creates a new cache based on this one.
- `merge` - merges changes from a forked cache back into this cache.
- `rebase` - resets this cache's state to that of its `base` and then replays any update operations.
Memory cache forking / merging / rebasing is a lighter-weight way of "branching" changes that can ultimately be merged back into a source. Cache-level forking can be paired with source-level forking for a lot of flexibility and power.
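Here's a sketch of how `fork` and `merge` fit together at the cache level, based on the descriptions above:

```typescript
// Fork the cache and make a synchronous, isolated change.
const forkedCache = source.cache.fork();

forkedCache.update((t) =>
  t.addRecord({
    type: 'planet',
    id: 'saturn',
    attributes: { name: 'Saturn' }
  })
);

// Merge the forked cache's changes back into the base cache.
source.cache.merge(forkedCache);
```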
## Debug mode
A new `debug` setting has been added to the `Orbit` global that toggles between a more verbose, developer-friendly "debug" mode of Orbit and a leaner, more performant production mode.
Debug mode is enabled by default. Some standard features of debug mode include deprecation warnings and extra debug-friendly verifications and messaging.
To disable debug mode:
```typescript
import { Orbit } from '@orbit/core';

// disable debug mode
Orbit.debug = false;
```
**Info**

For several releases in the v0.17 beta cycle, debug mode was used to control whether validators would be created by default. This is no longer the case: validators will now always be used within sources and caches unless disabled using the `autoValidate: false` setting described above. This provides more fine-grained control over validation settings throughout your app and its sources.
## Increased reliance on The Platform™
Orbit's codebase continues to evolve with the web, adopting new ES language and web platform features as they are released. Custom utilities have been gradually deprecated and phased out of the codebase (e.g. `isArray` -> `Array.isArray`), new language features such as nullish coalescing and optional chaining have been adopted, and platform features such as `crypto.randomUUID` are now used (with a fallback implementation if unavailable).
## Contributors
Many thanks to the contributors who made v0.17 possible:
- Paweł Bator (@jembezmamy)
- Philipp Brumm (@brumm)
- Christian (@makepanic)
- Miguel Camba (@cibernox)
- Paul Chavard (@tchak)
- Michiel de Vos (@Michiel87)
- Dan Gebhardt (@dgeb)
- Brad Jones (@bradjones1)
- Andreas Minnich (@enspandi)
- Clemens Mueller (@pangratz)