
ORM and QuerySets

Once you have defined your models, Tango gives you a database access API that lets application code create, retrieve, update, and delete stored records. This topic explains how that API works through Model.objects and QuerySet.

The examples use a blog application with models such as PostModel and UserModel. Blog posts make the common query patterns concrete: published posts, posts by one author, and the newest posts first.

Model.objects

Every Tango model exposes a manager at Model.objects.

The manager is the main entry point for model-backed work. If application code wants to create a post, retrieve one post, or begin a query for many posts, it usually starts from PostModel.objects.

The same manager is also where application code begins read queries:

```ts
const queryset = PostModel.objects.all();
```

all() matches Django naming and returns the same lazy queryset as query().

PostModel describes what a stored blog post is. PostModel.objects is the API you use when you want to work with stored blog posts in the database. The manager lives on the model class because it represents table-level work for the post table, while one row instance represents one stored post.

Tango wires this up for you when the ORM runtime is loaded. Each model gains an objects property, and the first time application code reads PostModel.objects, Tango creates a ModelManager for that model against the active Tango runtime. Later reads reuse that manager while the same runtime is active, so application code gets one consistent manager entry point without instantiating managers by hand.
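
Under that caching behavior, repeated property reads are expected to observe one manager instance while the same runtime stays active:

```ts
const first = PostModel.objects;
const second = PostModel.objects;

// Reads against the same active runtime reuse one ModelManager instance,
// so both names refer to the same manager object.
console.log(first === second); // true while the same runtime is active
```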

Creating records

Creating one record is usually as direct as calling create(...) on the manager:

```ts
const post = await PostModel.objects.create({
    title: 'Hello, Tango',
    slug: 'hello-tango',
    content: '...',
    published: true,
});
```

When application code needs to insert several rows together, the manager also provides bulkCreate(...):

```ts
await PostModel.objects.bulkCreate([
    {
        title: 'Hello, Tango',
        slug: 'hello-tango',
        content: '...',
        published: true,
    },
    {
        title: 'Second post',
        slug: 'second-post',
        content: '...',
        published: false,
    },
]);
```

These are the normal write paths through the ORM. If a model owns write-time behavior such as defaults or lifecycle hooks, writing through Model.objects keeps that behavior in one consistent place.

QuerySet

A QuerySet represents a database query before it is executed.

Sometimes that query means "all posts." Sometimes it means "all published posts from this author, ordered by creation date." Sometimes it means "one post with this identifier, if it exists." In each case, the query can keep being refined before Tango asks the database to return rows.

A queryset begins with query() or all():

```ts
const allPosts = PostModel.objects.all();
```

At that point, no filtering, ordering, or limiting has been applied. The queryset represents the base table query for posts.

Refining queries

Most queryset work comes down to refining that base query.

  • filter(...) narrows the result set to rows that match the given conditions.
  • exclude(...) removes rows that match the given conditions.
  • orderBy(...) controls result ordering.
  • limit(...) and offset(...) define which slice of the result set should be returned.

For example:

```ts
const recentPosts = PostModel.objects.query().filter({ published: true }).orderBy('-createdAt').limit(20);
```

You can read that queryset from top to bottom as one sentence about the data the application wants: start from posts, keep the published ones, order them by newest first, and return the first twenty.

Each refinement returns a new queryset

Refining a queryset does not mutate the previous queryset. Each refinement returns a new QuerySet.

```ts
const allPosts = PostModel.objects.all();
const publishedPosts = allPosts.filter({ published: true });
const newestPublishedPosts = publishedPosts.orderBy('-createdAt');
```

allPosts still means "all posts." publishedPosts means "published posts." newestPublishedPosts means "published posts ordered by newest first."

Queryset code becomes easier to reuse this way. One part of the application can keep a base query while another part builds on it without changing the original.

QuerySets are lazy

Internally, a queryset can be constructed, filtered, ordered, and passed around without hitting the database. No SQL runs until something evaluates the queryset.

You evaluate a queryset when you:

  • Call fetch(), fetchOne(), first(), last(), get(), count(), or exists().
  • Run for await (const … of queryset). That performs one fetch() for the current queryset state and yields each value from the returned result.
```ts
const queryset = PostModel.objects.all().filter({ published: true }).orderBy('-createdAt').limit(10);

const posts = await queryset.fetch();
```

The filter(...), orderBy(...), and limit(...) calls only refine the query. fetch() is one explicit terminal where Tango sends SQL; for await...of, first(), last(), and get() are others.
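
The iteration form evaluates the same way; as a sketch of the behavior described above:

```ts
const publishedPosts = PostModel.objects.query().filter({ published: true });

// Iterating evaluates the queryset: one fetch() runs for its current state,
// then each returned row is yielded in turn.
for await (const post of publishedPosts) {
    console.log(post.title);
}
```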

After the first row-returning evaluation, the same queryset instance reuses its cached materialized result on later fetch() or async-iteration calls. Build a refined queryset when you want a different SQL query and a separate cache.
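
That caching rule can be sketched directly:

```ts
const queryset = PostModel.objects.all().filter({ published: true });

const firstRead = await queryset.fetch();  // runs SQL and caches the result
const secondRead = await queryset.fetch(); // served from the cached result

// Refinement returns a new queryset with its own cache, so this runs fresh SQL.
const newestFirst = await queryset.orderBy('-createdAt').fetch();
```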

Retrieving records

Different retrieval methods communicate different expectations about the result.

If you want a flexible query that may return many rows, start with query() or all() and finish with fetch():

```ts
const publishedPosts = await PostModel.objects.all().filter({ published: true }).fetch();
```

If you expect at most one row from a refined queryset, use fetchOne() or first():

```ts
const latestPost = await PostModel.objects.all().filter({ published: true }).orderBy('-createdAt').first();
```

For Django-style strictness when exactly one row must exist, use get(...) on the queryset; it raises NotFoundError or MultipleObjectsReturned instead of returning null. For the last row under the current ordering, use last().
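
As a sketch, assuming get(...) accepts the same condition object shape as filter(...):

```ts
// Exactly one row must match, or Tango raises NotFoundError /
// MultipleObjectsReturned instead of returning null.
const post = await PostModel.objects.query().get({ slug: 'hello-tango' });

// The final row under the current ordering.
const oldestPublished = await PostModel.objects
    .query()
    .filter({ published: true })
    .orderBy('-createdAt')
    .last();
```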

If you already know the identifier of the row you want, findById(...) or getOrThrow(...) often expresses that intent more directly:

```ts
const post = await PostModel.objects.findById(42);
const requiredPost = await PostModel.objects.getOrThrow(42);
```

These choices let the code describe what kind of answer it expects from the database, in addition to which table it wants to query.

For lookup-or-create flows, the manager exposes getOrCreate(...) and updateOrCreate(...), which mirror common Django patterns while keeping Tango's explicit execution model for reads. Those helpers still expect the lookup to identify at most one existing record; if it matches more than one, Tango raises MultipleObjectsReturned instead of choosing one arbitrarily.
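
As a hedged sketch, assuming Django-like signatures and a [record, created] return shape (TagModel and its fields here are illustrative, not part of the blog example above):

```ts
// Look up by slug; create with the supplied defaults when no row matches.
const [tag, created] = await TagModel.objects.getOrCreate(
    { slug: 'typescript' },
    { defaults: { label: 'TypeScript' } },
);

// Same lookup, but an existing matching row is updated with the new values.
const [featured] = await TagModel.objects.updateOrCreate(
    { slug: 'typescript' },
    { defaults: { label: 'TypeScript & Tango' } },
);
```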

Using Q for more complex conditions

Simple object filters are often enough. When the query needs explicit boolean composition, use Q.

Q lets you express combinations such as "title contains this term or content contains this term," or "this condition must hold, but that other condition must not."

```ts
import { Q } from '@danceroutine/tango-orm';

const searchResults = await PostModel.objects
    .query()
    .filter(Q.or({ title__icontains: 'tango' }, { content__icontains: 'tango' }))
    .fetch();
```

You reach for Q when one plain filter object no longer captures the query clearly.

Shaping query results

Sometimes application code wants the full model row. Sometimes it only needs a few columns, or it wants to transform the returned rows into another shape.

select(...) narrows the selected columns:

```ts
const postHeaders = await PostModel.objects.query().select(['id', 'title', 'slug']).orderBy('-createdAt').fetch();
```

At execution time, that changes the SQL projection, so the database returns only the selected columns. In other words, postHeaders contains rows with id, title, and slug, not complete post records with every model field still present.

The fetched TypeScript row type narrows with that projection as well when the selected keys are known precisely at the call site. Inline literals, readonly tuples, and as const arrays preserve that narrowing automatically.

```ts
const postHeaders = await PostModel.objects
    .query()
    .select(['id', 'title', 'slug'] as const)
    .orderBy('-createdAt')
    .fetch();
```

In that example, each object in postHeaders is typed as { id, title, slug }.

Widened arrays still work for SQL projection, but they fall back to the full row type because TypeScript can no longer prove which exact keys are present:

```ts
const columns: ReadonlyArray<'id' | 'title' | 'slug'> = ['id', 'title', 'slug'];

const projected = await PostModel.objects.query().select(columns).fetch();
```

select([]) resets back to the full row, and a later select(...) call replaces the earlier projection rather than composing with it.
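
Because refinements return new querysets, both behaviors can be shown against one shared base query:

```ts
const base = PostModel.objects.query().select(['id', 'title'] as const);

// A later select(...) replaces the earlier projection: only slug is selected.
const slugsOnly = await base.select(['slug'] as const).fetch();

// select([]) resets the projection back to the full row.
const fullRows = await base.select([]).fetch();
```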

fetch(...) can also accept a shaping function or parser when the calling code wants to project the returned rows into another form:

```ts
const titles = await PostModel.objects
    .query()
    .filter({ published: true })
    .fetch((row) => row.title);
```

This keeps the query definition and the final application-facing shape close together when a caller wants something narrower or more specialized than the selected columns alone.

Relations declared in the model layer also influence queryset behavior.

Use selectRelated(...) when the path stays single-valued from hop to hop. In the blog example, each PostModel has one author, and each author may have one profile, so the queryset can attach both through one joined traversal:

```ts
const posts = await PostModel.objects.query().filter({ published: true }).selectRelated('author__profile').fetch();

const [firstPost] = posts;
firstPost?.author?.profile?.displayName;
```

selectRelated(...) is for single-valued relations such as belongsTo, hasOne, and reverse one-to-one paths. A missing related row is returned as null at the point where the path stops matching.

In contrast, use prefetchRelated(...) when the path includes a collection edge such as hasMany or a join-table-backed many-to-many relation. Prefetch paths may continue beyond that collection edge, so one prefetch branch can still hydrate deeper related objects:

```ts
const users = await UserModel.objects.query().prefetchRelated('posts__author', 'posts__comments').fetch();

const [firstUser] = users;
firstUser?.posts[0]?.author?.email;
firstUser?.posts[0]?.comments[0]?.body;
```

In that form, a user with no posts still receives posts: [], while a post with no comments receives comments: [].

Persisted records returned by the manager carry a related-manager accessor for each many-to-many relation declared on the source model. The accessor is named after the published forward relation name, so a field such as tagIds: t.manyToMany(..., { name: 'tags' }) exposes post.tags.add(...), post.tags.remove(...), post.tags.set(...), and post.tags.all().

add(...), remove(...), and set(...) all accept one or more targets. Duplicate links are ignored, so repeated add(...) calls stay idempotent, while set(...) replaces the membership with exactly the supplied targets. The accessor is attached non-enumerably, so serialization such as JSON.stringify(post) continues to include only the persisted columns.

```ts
const post = await PostModel.objects.getOrThrow(postId);

await post.tags.add(tag, featuredTag);
await post.tags.set(featuredTag);
const linked = await post.tags.all().fetch();
```

post.tags stays a related manager on the model instance. prefetchRelated('tags') only warms that manager's cache, so application code still reads through post.tags.all() rather than expecting post.tags itself to become an array.

When prefetchRelated('tags') ran in the same fetch, post.tags.all() reads from the prefetched cache. A successful add(...), remove(...), or set(...) invalidates that cache so the next read returns fresh data. If an API response or page helper needs an array-shaped value, materialize it explicitly with await post.tags.all().fetch().
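
Putting those rules together (newTag here is an assumed, already-persisted tag record):

```ts
const [post] = await PostModel.objects
    .query()
    .prefetchRelated('tags')
    .limit(1)
    .fetch();

if (post) {
    const cached = await post.tags.all().fetch(); // reads the prefetched cache
    await post.tags.add(newTag);                  // a successful write invalidates it
    const fresh = await post.tags.all().fetch();  // the next read returns fresh data
}
```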

Forward many-to-many prefetch path typing comes from the generated relation registry. Without it, the older explicit target-model generic describes only reverse hasMany paths, even though the runtime can still execute the many-to-many prefetch.

Hydrated relation properties stay attached even when the selected model fields change:

```ts
const postCards = await PostModel.objects
    .query()
    .selectRelated('author')
    .select(['id', 'title'] as const)
    .fetch();

const [firstCard] = postCards;
firstCard?.author?.email;
```

The selected PostModel fields in that example are id and title, while the hydrated author model remains available.

Generated relation typing and fallback generics

Tango supports a generated ambient relation registry for nested path typing. In the common case, that means application code can write reverse and multi-hop hydration paths without explicit target-model generics:

```ts
const users = await UserModel.objects.query().prefetchRelated('posts__author').fetch();
```

Keep the generated registry current through the normal migration workflow or by running tango codegen relations directly when relation metadata changes without a schema diff.

Explicit target-model generics still remain available as a fallback when generated typing is intentionally absent or temporarily stale:

```ts
const users = await UserModel.objects.query().prefetchRelated<typeof PostModel>('posts').fetch();
```

Cyclic paths and scalar methods

Finite cyclic paths are valid at runtime. If a model graph supports a path such as manager__manager, Tango validates that concrete path segment-by-segment and executes it like any other nested traversal.

Generated path typing intentionally stops at a bounded cyclic expansion horizon. The bound exists because recursive path unions can grow quickly in TypeScript and degrade editor responsiveness long before runtime traversal becomes invalid. Common recursive paths stay strongly typed, while deeper cyclic paths fall back to weaker typing instead of becoming runtime-invalid.
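
As an illustrative sketch, assuming an EmployeeModel whose manager relation points back at EmployeeModel (the model and its fields are not part of the blog example):

```ts
// A finite cyclic path: each employee, their manager, and that manager's
// manager. Tango validates each concrete segment, then executes the path
// like any other nested traversal.
const employees = await EmployeeModel.objects
    .query()
    .selectRelated('manager__manager')
    .fetch();

employees[0]?.manager?.manager?.email;
```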

count() and exists() also stay scalar. They ignore eager-loading directives entirely, which means you can derive one immutable queryset shape and ask scalar questions about it before later calling fetch() on the same snapshot.
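
For example, one immutable queryset snapshot can answer scalar questions before a later fetch():

```ts
const published = PostModel.objects
    .query()
    .filter({ published: true })
    .prefetchRelated('tags');

// count() and exists() ignore the eager-loading directive and stay scalar.
const total = await published.count();
const hasAny = await published.exists();

// A later fetch() on the same snapshot still hydrates the tags relation.
const rows = await published.fetch();
```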

Updating and deleting records

The manager also owns the common update and delete path.

```ts
await PostModel.objects.update(42, {
    title: 'Updated title',
});

await PostModel.objects.delete(42);
```

As with create(...), these methods matter because they are the ordinary application path through model-owned persistence behavior. If the model applies defaults or lifecycle hooks during writes, manager-based writes keep that behavior consistent.

Transactions for multi-step workflows

Ordinary manager writes already work well for one-step create, update, and delete operations. When one workflow needs several writes to succeed or fail together, transaction.atomic(...) provides that boundary.

```ts
import { transaction } from '@danceroutine/tango-orm';

await transaction.atomic(async (tx) => {
    const user = await UserModel.objects.create({
        email: 'author@example.com',
    });

    await ProfileModel.objects.create({
        userId: user.id,
    });

    tx.onCommit(() => {
        sendWelcomeEmail(user.email);
    });
});
```

Outside atomic(...), Tango uses the normal autocommit path. Inside atomic(...), the same Model.objects and QuerySet code uses the active transaction lease for the current async chain.

Nested transactions use savepoints

Nested atomic() blocks do not open independent transactions. They create savepoints inside the active outer transaction.

```ts
await transaction.atomic(async () => {
    await AuditLogModel.objects.create({ event: 'outer-start' });

    try {
        await transaction.atomic(async () => {
            await DraftModel.objects.create({ title: 'temporary draft' });
            throw new Error('discard this draft');
        });
    } catch {
        // The outer transaction is still active here.
    }

    await AuditLogModel.objects.create({ event: 'outer-finished' });
});
```

If the nested block throws, Tango rolls back only to that savepoint. If that error keeps propagating, the outer atomic(...) call fails as well.

When the outer transaction should continue without a local try/catch, use tx.savepoint(...). It opens a nested savepoint and returns a structured result instead of throwing.

```ts
await transaction.atomic(async (tx) => {
    const draft = await tx.savepoint(async (nested) => {
        const post = await PostModel.objects.create({
            title: 'Temporary draft',
            slug: 'temporary-draft',
        });

        nested.onCommit(() => {
            publishPostEvent(post.id);
        });

        throw new Error('discard this draft');
    });

    if (!draft.ok) {
        await AuditLogModel.objects.create({ event: 'draft-discarded' });
    }

    await AuditLogModel.objects.create({ event: 'outer-finished' });
});
```

In that form, the savepoint still rolls back the nested work, but the error comes back as { ok: false, error } instead of aborting the outer transaction. Pass { throwOnError: true } when the nested savepoint should rethrow instead.
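
As a sketch, assuming the options bag is accepted alongside the savepoint callback:

```ts
await transaction.atomic(async (tx) => {
    // With throwOnError, a failing savepoint still rolls back its nested work,
    // but the error rethrows and aborts the outer transaction as well.
    await tx.savepoint(async () => {
        await DraftModel.objects.create({ title: 'must succeed' });
    }, { throwOnError: true });
});
```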

Post-commit work uses tx.onCommit(...)

Some side effects should happen only after the database commit is durable. Cache invalidation, background job enqueueing, and domain events are common examples. Register that work through tx.onCommit(...).

```ts
await transaction.atomic(async (tx) => {
    const post = await PostModel.objects.create({
        title: 'Queued publish',
        slug: 'queued-publish',
    });

    tx.onCommit(() => {
        publishPostEvent(post.id);
    });
});
```

Nested atomic() blocks keep their own callback frame. If a nested savepoint rolls back, only that nested frame's callbacks are discarded. A successful nested block merges its callbacks into the parent in registration order.

Why Tango uses tx.onCommit(...) instead of a global helper

Django exposes transaction.on_commit(...) as a package-level helper because ambient transaction state is a natural fit in Python code. Tango keeps reads and writes ambient inside atomic(...), but it does not make post-commit registration ambient.

Tango intentionally makes that tradeoff so the two concerns stay separate. Reads and writes already have a natural ambient behavior: once the transaction boundary exists, ORM calls should just use it. Post-commit callbacks are different because they register new work that depends on the commit outcome. Keeping onCommit(...) on tx means helper code that only talks to the ORM needs no extra argument, while helper code that must schedule commit-aware side effects can accept that narrow contract explicitly.

Hooks participate in the active transaction

Model write hooks run on the same transactional client when they are triggered inside atomic(...). Hook args also receive an optional transaction handle so model-owned write behavior can register post-commit work without importing ORM internals into the schema package.

```ts
hooks: {
    afterCreate({ record, transaction }) {
        transaction?.onCommit(() => {
            auditUserCreation(record.id);
        });
    },
}
```

Outside atomic(...), hook args receive transaction: undefined.

Database notes

The ORM transaction contract stays the same across supported SQL backends. The runtime notes below are split by dialect because connection handling and operational limits still differ.

PostgreSQL

PostgreSQL leases one dedicated client for each outer atomic() block while ordinary autocommit work continues through the pool.

SQLite

SQLite supports transaction.atomic(...) only on file-backed databases in this milestone.

:memory: SQLite still works for ordinary autocommit queries and tests, but atomic(...) rejects because the transaction workflow needs a second handle to the same database file.

Released under the MIT License.