Personal knowledge mapping

Know where you are
in knowledge

Your notes have a shape. Most tools hide it.

Laminar maps your knowledge against the full landscape of human understanding. Capture without friction. Prune with intention. Then see the gaps — the concepts your knowledge reaches for but doesn't yet contain.

solid — you own this
shallow — captured, unpruned
ghost — your gap
skeleton — the map

The loop

Three modes.
One continuous cycle.

Most PKM tools blur reading, thinking, and reviewing into one undifferentiated activity. Laminar treats them as distinct cognitive modes — each with its own interface, its own purpose, its own moment.

"Laminar flow doesn't mix layers. Three clean streams. No turbulence."

01

Capture

Passive · ambient · zero friction

Happens in the background of your reading life. Highlight text on any screen, leave a voice note as you read, or import from Kindle and Readwise. You never leave the source to feed the system. Raw material flows in — unstructured, uncommitted.

Design rule: capture must cost nothing. If it interrupts the learning, it won't happen. The system accepts everything without judgment. Cleaning comes later.

02

Prune

Active · deliberate · effortful

A feed of suggested merges, connections, and orphan alerts. You make decisions — merge these two nodes, keep them separate, add a voice note explaining the distinction. You cannot prune what you don't understand. The pruning is the comprehension.

Design rule: pruning is the learning act itself. A graph you've pruned is a graph you own. This is where passive consumption becomes actual knowledge.

03

See gaps

Revelatory · earned · directional

After pruning, the graph is clean enough to trust. Now Laminar shows you the ghost nodes — concepts your knowledge points toward but doesn't contain. Located on the global skeleton, so you know exactly where they sit in the landscape of the field.

Design rule: gaps are earned, not given. They only become visible after pruning. A noisy graph has false gaps. A clean graph reveals true absence.

↩ gaps become the agenda for the next capture session

Capture

Never leave
the flow

The moment of learning and the moment of capture should be the same moment. Laminar captures from wherever you are — no tab switching, no copy-pasting, no breaking the reading to feed the system.

Screen highlight

Select any text on any screen. Browser extension captures with full source context — the URL, the date, the surrounding paragraph. One click, done.

The capture is the highlight. Nothing more.

Voice note

Tap once, speak your insight. Transcribed and linked to whatever you were reading when you said it. The thought and its context, together.

Your voice, at the moment of understanding.

Import

Readwise, Kindle highlights, Notion pages, Obsidian vaults, plain text, PDFs. Everything you've already annotated, brought into the graph.

Your existing annotations, finally connected.

Write

A minimal editor for synthesis. Write your own understanding of something. Laminar watches as you type and integrates your formulation into the graph alongside your source material.

Your own words as a node, not just a quote.
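Laminar's internals aren't public, but all four channels plausibly reduce to the same minimal record: content plus source context. A sketch, with every field name invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Capture:
    """One raw capture: a highlight, voice transcript, import, or written note."""
    content: str                       # the highlighted text or transcript
    source_url: Optional[str] = None   # where it was captured
    surrounding: Optional[str] = None  # the paragraph around a highlight
    kind: str = "highlight"            # highlight | voice | import | write
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A screen highlight keeps its full source context:
c = Capture(
    content="Multi-head attention allows the model to jointly attend...",
    source_url="https://arxiv.org/abs/1706.03762",
    surrounding="...the paragraph surrounding the selection...",
)
```

The point of the shape is the design rule above: every field except `content` is optional, so capture never blocks on missing metadata.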

arxiv.org/abs/1706.03762 · Attention Is All You Need

The dominant sequence transduction models are based on complex recurrent or convolutional neural networks. The Transformer architecture relies entirely on an attention mechanism to draw global dependencies between input and output, dispensing with recurrence and convolutions entirely.

✓ Captured to graph

Unlike recurrent models, which process tokens sequentially, the Transformer allows for significantly more parallelisation and requires significantly less time to train on large datasets.

✓ Captured to graph

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

Just captured → graph

2 new nodes: Attention mechanism, Parallelisation in transformers

Connected to: Transformer architecture (existing) · Self-attention (existing)

Ghost detected: Linear algebra foundations →

Prune

The graph you
prune is the graph
you own

A feed of suggested merges, connections, and orphaned nodes. For each one, you decide. Accept, reject, or speak — add a voice note that explains the nuance. That note becomes part of the edge itself.

The algorithm can suggest connections. It cannot decide which ones are meaningful to you. That decision is the comprehension. A graph you haven't pruned is just a collection of things you've read. A pruned graph is what you understand.
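One way to picture this, purely as a sketch (Laminar's data model isn't published, and all names here are assumptions): a suggested edge carries a state, and your decision plus any spoken note are written onto the edge itself.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Edge:
    a: str
    b: str
    state: str = "suggested"    # suggested | accepted | rejected
    note: Optional[str] = None  # transcribed voice note explaining the nuance

def prune(edge: Edge, accept: bool, note: Optional[str] = None) -> Edge:
    """Record a decision; the note becomes part of the edge itself."""
    edge.state = "accepted" if accept else "rejected"
    edge.note = note
    return edge

e = prune(
    Edge("Attention mechanism", "Self-attention"),
    accept=True,
    note="Self-attention is the specific mechanism; attention is broader. "
         "Keep separate but connect.",
)
```

The note lives on the edge rather than on either node, which is what makes the distinction retrievable later: the reasoning travels with the connection it justifies.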

"The pruning session generates new material. As you decide whether two nodes belong together, you often discover a third thing you need to capture."

Prune session · 9 remaining · est. 14 min

Merge suggestion

Attention mechanism

Vaswani et al. · 4 connections

Self-attention

Karpathy notes · 2 connections

Both appear to describe the same mechanism captured from different sources. Merging would unify 6 connections under one node.

listening...

voice note added

"Self-attention is the specific mechanism — attention is broader, keep separate but connect"

3 merged · 1 connected · 0 discarded · skip →
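The merge arithmetic in the suggestion above (4 + 2 connections unified under one node) is a set union over the two nodes' neighbours. A minimal sketch, with all graph contents invented for illustration:

```python
def merge_nodes(graph: dict[str, set[str]], keep: str, absorb: str) -> dict[str, set[str]]:
    """Merge `absorb` into `keep`, unifying both nodes' connections."""
    merged = dict(graph)
    merged[keep] = (graph.get(keep, set()) | graph.get(absorb, set())) - {keep, absorb}
    merged.pop(absorb, None)
    # Redirect every edge that pointed at the absorbed node.
    for node, nbrs in merged.items():
        if node != keep and absorb in nbrs:
            merged[node] = (nbrs - {absorb}) | {keep}
    return merged

g = {
    "Attention mechanism": {"Transformers", "Vaswani et al.", "Softmax", "Multi-head"},
    "Self-attention": {"Karpathy notes", "Scaled dot-product"},
}
g2 = merge_nodes(g, keep="Attention mechanism", absorb="Self-attention")
# 4 + 2 disjoint connections unify into 6 under the kept node
```

Note the edge case the union handles: if the two nodes share a neighbour, the merged count is smaller than the sum, which is presumably why a tool would say "would unify" rather than promise a number in advance.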

See gaps

The most valuable part
of your graph is
what's missing

After pruning, the graph is clean enough to trust. Now Laminar surfaces the ghost nodes — concepts your existing knowledge actively reaches toward but doesn't yet contain.

These aren't random suggestions. They're inferred from your own graph. Your ML notes reference Linear Algebra 14 times without you ever having captured it directly. That's a ghost. That's your next learning direction.
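Laminar doesn't document its gap-detection model, but the inference described here, counting references to concepts you never captured, can be sketched in a few lines. The threshold and all names are invented:

```python
from collections import Counter

def find_ghosts(
    references: list[str], captured: set[str], min_refs: int = 3
) -> list[tuple[str, int]]:
    """Concepts your notes point at repeatedly but that you never captured."""
    counts = Counter(references)
    ghosts = [(c, n) for c, n in counts.items()
              if c not in captured and n >= min_refs]
    return sorted(ghosts, key=lambda g: -g[1])

# 14 references to a concept with no capture of its own: a ghost.
refs = ["Linear Algebra"] * 14 + ["Backpropagation"] * 2 + ["Transformers"] * 9
mine = {"Transformers", "Backpropagation"}
print(find_ghosts(refs, mine))  # [('Linear Algebra', 14)]
```

The threshold is what separates a true gap from noise, which is also why pruning comes first: a noisy graph inflates reference counts and produces false ghosts.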

Solid — you own this

Multiple captures, pruned, well-connected. You understand this concept at depth.

Shallow — present, unpruned

Captured but not yet worked through. You've encountered this but not integrated it.

Seed — captured, unconnected

An orphan waiting for context. You captured it before the surrounding concepts existed.

Ghost — your gap

Not captured. Inferred from what surrounds it. This is what to learn next.

Skeleton — the map

Exists in the global knowledge taxonomy. Not a gap — just unexplored territory you may never need.

your knowledge graph · hover to explore

The skeleton

Your knowledge,
located

Without a skeleton, your graph is self-referential — it only finds gaps relative to what you already know. With it, your knowledge has absolute position. You can see not just the gaps between your nodes, but where you sit in the full landscape of a field.

The skeleton comes from Wikipedia's category graph — one of the most comprehensive, actively maintained, and neutral taxonomies of human knowledge ever assembled. Curated to three levels: ~12 root domains, ~120 major subfields, ~1,200 topics.

L1 · 12

Root domains

Mathematics, Sciences, Humanities, Engineering, Arts… Always visible. Global orientation.

L2 · 120

Major subfields

Machine Learning, Linear Algebra, Cognitive Science… Where most users locate themselves.

L3 · 1,200

Topics

Transformers, Backpropagation, Attention… Faint until your notes get close. Brighten on proximity.

L4 · ∞

Your graph

Everything you've captured, pruned, and connected. The brightest layer. Anchored to L3.
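The layered structure above could be modelled as a simple tree. A sketch only: the real skeleton is derived from Wikipedia's category graph, and the nodes below are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class SkeletonNode:
    name: str
    level: int  # 1 = root domain, 2 = subfield, 3 = topic
    children: list["SkeletonNode"] = field(default_factory=list)

# L1 root domain -> L2 subfield -> L3 topics (placeholder content)
math = SkeletonNode("Mathematics", 1, [
    SkeletonNode("Linear Algebra", 2, [
        SkeletonNode("Eigenvalues", 3),
        SkeletonNode("Matrix decompositions", 3),
    ]),
])

def count_at(node: SkeletonNode, level: int) -> int:
    """How many skeleton nodes sit at a given level."""
    own = 1 if node.level == level else 0
    return own + sum(count_at(c, level) for c in node.children)
```

L4, your own graph, hangs off the L3 leaves rather than extending the tree: skeleton nodes are fixed territory, while your nodes are anchored to them.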

Knowledge map — your notes on skeleton L1–L3 · notes highlighted

Computer Science · 47 notes
  Machine Learning — solid
  Linear Algebra — ghost ← gap
  Distributed Systems — shallow
  Programming Languages — skeleton
  Probability Theory — ghost ← gap

Cognitive Science · 11 notes
  Learning & Memory — shallow
  Attention — shallow ← bridge
  Neuroscience — ghost ← gap

Mathematics · 6 notes
  Statistics — shallow
  Linear Algebra — ghost ← gap
  Calculus / Analysis — skeleton

Philosophy · no notes
  Epistemology — skeleton
  Philosophy of Mind — skeleton

You don't start with a blank canvas

When you open Laminar for the first time, the full skeleton is already there. You locate yourself immediately — "I work here and here." That declaration seeds your graph before you've captured a single note.

Why Laminar

Most tools give you your knowledge.
Laminar gives you your knowledge
located within all knowledge.

You can see not just what you know, but where you are in the larger landscape — and therefore which direction to walk next. The difference between a note collection and a map is the difference between memory and navigation.

Laminar flow: smooth, ordered, non-turbulent. The opposite of how most people's notes work. Three clean streams — capture, prune, see gaps — that never mix, never create chaos, and compound over time into something that actually resembles how you think.

Capture

ambient · passive

Prune

deliberate · active

See gaps

revelatory · earned

Laminar

knowledge in flow

Map your
knowledge

Laminar is in private beta. Early access is open to learners who want to help shape the gap detection model and skeleton taxonomy.

Your early graphs will inform how Laminar identifies gaps across different domains and learning styles.