Use Cases
Real Problems NORDON Solves
Every developer who uses AI coding assistants hits the same walls. Here are the everyday scenarios where NORDON makes the biggest difference.
Multi-Session Debugging
You're debugging a tricky issue. You spend an hour with your AI assistant, try several approaches, and narrow it down to a race condition. But your session times out. When you start a new session, the AI has no idea what you already tried. You re-explain the error, re-share the stack trace, and watch it suggest the same fixes you already ruled out.
NORDON remembers every failed attempt. When your next session starts, your AI already knows: the error message, what was tried, what didn't work, and what the leading hypothesis was. It picks up exactly where you left off.
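A minimal sketch of how a memory store like this could work. The `MemoryStore` class, its method names, and the record shape are illustrative assumptions, not NORDON's actual API:

```typescript
// Hypothetical sketch: the shape a failure memory might take and how the
// highest-importance memories could be selected at session start.
// MemoryStore and its methods are illustrative, not NORDON's real API.
type MemoryType = "failure" | "decision" | "procedure" | "context";

interface Memory {
  type: MemoryType;
  importance: number; // 0..1, used to rank what gets injected first
  title: string;
  body: string;
}

class MemoryStore {
  private memories: Memory[] = [];

  record(memory: Memory): void {
    this.memories.push(memory);
  }

  // At session start, surface the most important memories first.
  sessionContext(limit = 5): Memory[] {
    return [...this.memories]
      .sort((a, b) => b.importance - a.importance)
      .slice(0, limit);
  }
}

const store = new MemoryStore();
store.record({
  type: "failure",
  importance: 0.95,
  title: "Race condition in OrderService.checkout()",
  body: "Mutex and optimistic locking both failed; suspect stale cache read.",
});
```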
What your AI sees at session start
<nordon-context>
<memory type="failure" importance="0.95">
## Race condition in OrderService.checkout()
Intermittent 500 error when two requests hit checkout simultaneously.
Tried: mutex lock (deadlocked), optimistic locking (still races on read).
Root cause narrowed to: stale cache read between validateInventory()
and deductInventory(). Next step: try read-through cache invalidation.
</memory>
</nordon-context>
See it in action
// What your AI sees at session start:
// "Previous session found a race condition in OrderService.checkout().
// Mutex and optimistic locking both failed. Leading hypothesis:
// stale cache between validateInventory() and deductInventory().
// Suggested next step: read-through cache invalidation."
async function checkout(orderId: string) {
  // AI immediately suggests the right fix:
  const inventory = await cache.readThrough(
    `inventory:${orderId}`,
    () => db.inventory.findUnique({ where: { orderId } })
  );
  // No more stale reads between validate and deduct
}
Onboarding to a New Codebase
A new developer joins your team. They spend days asking questions: Where's the deployment config? Why is this service split into two? What's the naming convention for API routes? Every answer lives in someone's head or a Slack thread from six months ago.
Your team's accumulated knowledge -- architecture decisions, deployment procedures, naming conventions, gotchas -- is already stored as NORDON memories. The new dev's AI assistant automatically gets all of it from day one.
What your AI sees at session start
<nordon-context>
<memory type="procedure" importance="0.85">
## Deployment to Production
1. Run tests: npm run test:ci
2. Build: npm run build:prod
3. Deploy via: kubectl apply -f k8s/prod/
4. IMPORTANT: Always run db:migrate BEFORE deploying the app container
5. Verify: curl https://api.example.com/health
Note: Never deploy on Fridays. See incident #247.
</memory>
</nordon-context>
See it in action
# New developer asks their AI: "How do I deploy?"
# AI responds with the exact procedure from team memory:
$ npm run test:ci # Step 1: Run tests
$ npm run build:prod # Step 2: Build
$ kubectl apply -f k8s/prod/migrations.yaml # Step 3: Migrate FIRST
$ kubectl apply -f k8s/prod/app.yaml # Step 4: Then deploy
$ curl https://api.example.com/health # Step 5: Verify
# WARNING: Never deploy on Fridays (incident #247)
Maintaining Consistency Across PRs
Developer A decides to use REST for the new payments API. Two weeks later, Developer B starts building the notifications API using GraphQL. Nobody remembers (or communicates) the original decision. Now you have two API styles and a mess to clean up.
NORDON surfaces past architecture decisions automatically. When Developer B starts a new API, their AI assistant already knows the team chose REST and why. Conflicts get caught before code is written.
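One way a pre-code conflict check could work, sketched here with illustrative field names and matching logic (the `appliesTo` scope and `tags` fields are assumptions for the example):

```typescript
// Hypothetical sketch: checking a stored decision against new work before
// code is written. The Decision shape and matching rules are illustrative.
interface Decision {
  title: string;
  appliesTo: string; // path prefix the decision governs
  tags: string[];
}

const decisions: Decision[] = [
  {
    title: "REST over GraphQL for all public APIs",
    appliesTo: "/api/v2/",
    tags: ["graphql", "rest", "api"],
  },
];

// Surface any decision whose scope covers the endpoint being built and whose
// tags overlap the approach the developer is about to take.
function relevantDecisions(path: string, approach: string[]): Decision[] {
  return decisions.filter(
    (d) =>
      path.startsWith(d.appliesTo) &&
      approach.some((tag) => d.tags.includes(tag))
  );
}
```

When Developer B starts a GraphQL notifications API under `/api/v2/`, the stored REST decision matches and gets surfaced before any code is written.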
What your AI sees at session start
<nordon-context>
<memory type="decision" importance="0.9">
## REST over GraphQL for all public APIs
Decision date: 2025-11-15
Chose REST for API layer. Benchmarks showed 3x better performance
on our read-heavy workload with proper caching. GraphQL overhead
not justified for our use case. Team agreed unanimously.
Applies to: all new API endpoints in /api/v2/
</memory>
</nordon-context>
See it in action
// Developer B starts a new API endpoint.
// AI automatically warns about the team decision:
// "Team decision (2025-11-15): REST over GraphQL for all
// public APIs. Benchmarks showed 3x better performance.
// Applies to all new endpoints in /api/v2/."
// routes/notifications.ts
import { Hono } from "hono";

export const notificationRoutes = new Hono()
  .get("/api/v2/notifications", listNotifications) // REST, consistent
  .get("/api/v2/notifications/:id", getNotification) // with team decision
  .post("/api/v2/notifications", createNotification);
Avoiding Repeated Mistakes
Every few weeks, someone on the team hits the same deployment issue: the migration runs after the app container starts, causing 500 errors for 30 seconds. Each time, someone debugs it from scratch, wastes an hour, and fixes it the same way.
NORDON stores failure memories. When similar patterns are detected -- like someone writing a deployment script or modifying the CI pipeline -- the relevant failure memory surfaces automatically with the fix.
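A minimal sketch of how pattern-triggered surfacing could work, assuming each failure memory carries file-path triggers (the `triggers` field and matching approach are illustrative assumptions):

```typescript
// Hypothetical sketch: resurfacing a failure memory when an edited file
// matches a pattern stored alongside it. The trigger field is illustrative.
interface FailureMemory {
  title: string;
  fix: string;
  triggers: RegExp[]; // file paths that should resurface this memory
}

const failures: FailureMemory[] = [
  {
    title: "Migration timing bug in CI/CD pipeline",
    fix: "Keep the run-migrations initContainer in the k8s deployment",
    triggers: [/deploy\.ya?ml$/, /\.github\/workflows\//],
  },
];

// Called whenever a file is opened or modified in a session.
function memoriesFor(editedFile: string): FailureMemory[] {
  return failures.filter((m) => m.triggers.some((t) => t.test(editedFile)));
}
```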
What your AI sees at session start
<nordon-context>
<memory type="failure" importance="0.92">
## Migration timing bug in CI/CD pipeline
Problem: App container starts before DB migration completes.
Symptom: 500 errors for 30-60s after deploy.
Fix: Add initContainer to k8s deployment that runs migrations
before app pod starts. See commit abc123.
WARNING: This will recur if anyone modifies deploy.yaml
without the initContainer dependency.
</memory>
</nordon-context>
See it in action
# Developer modifies deploy.yaml
# NORDON automatically surfaces the failure memory:
# deploy.yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # AI reminds: "CRITICAL: Keep this initContainer.
      # Removing it causes 500 errors for 30-60s after deploy.
      # See failure memory from incident on 2025-10-03."
      initContainers:
        - name: run-migrations
          command: ["npm", "run", "db:migrate"]
      containers:
        - name: app
          image: myapp:latest
Complex Feature Development
You're building a multi-file feature over several days. Day one, you set up the data model. Day two, the API layer. Day three, the frontend. But by day three, your AI doesn't remember the data model decisions from day one, or the API contract from day two.
NORDON's branch-aware memory keeps all feature context scoped to the branch. Every session on that branch gets the full picture: what was built, what's left, and what decisions shaped the architecture.
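A minimal sketch of what branch-aware scoping could look like, assuming memories carry an optional branch tag (the field names and the null-means-global convention are illustrative assumptions):

```typescript
// Hypothetical sketch: memories tagged with a branch are only injected into
// sessions on that branch; untagged memories are shared globally.
interface ScopedMemory {
  branch: string | null; // null = global, visible on every branch
  title: string;
}

const memories: ScopedMemory[] = [
  { branch: "feat/permissions", title: "Day 1-2 progress on RBAC feature" },
  { branch: null, title: "REST over GraphQL for all public APIs" },
  { branch: "feat/billing", title: "Stripe webhook retry findings" },
];

// A session on feat/permissions sees its own branch memories plus globals,
// but nothing from feat/billing.
function contextFor(branch: string): ScopedMemory[] {
  return memories.filter((m) => m.branch === null || m.branch === branch);
}
```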
What your AI sees at session start
<nordon-context>
<memory type="context" importance="0.88">
## Feature: User permissions system (branch: feat/permissions)
Day 1: Created Permission and Role models with RBAC schema.
Day 2: Built /api/v2/permissions endpoints. Uses middleware
pattern for route-level checks. Added caching layer.
Remaining: Frontend permission gates, admin UI for role mgmt.
Key constraint: Must be backward-compatible with existing
session tokens. See constraint memory #142.
</memory>
</nordon-context>
See it in action
// Day 3: Building frontend permission gates
// AI already knows the full context from day 1 and 2:
// "Permission model uses RBAC schema (day 1).
// API endpoints at /api/v2/permissions (day 2).
// Middleware pattern for route-level checks.
// Must be backward-compatible with session tokens."
import { usePermissions } from "@/hooks/usePermissions";

function AdminPanel() {
  const { hasPermission } = usePermissions();
  // AI suggests the right pattern, matching the RBAC schema
  if (!hasPermission("admin:manage_roles")) {
    return <AccessDenied />;
  }
  return <RoleManagement />;
}
Visualizing Project Knowledge
Your project has hundreds of implicit decisions, patterns, and constraints scattered across commit messages, Slack threads, and people's heads. Nobody has a complete picture. New features accidentally violate old decisions because nobody remembered they existed.
NORDON's Knowledge Graph visualizes every decision, pattern, failure, and constraint as connected nodes. You can see at a glance how architecture decisions connect to constraints, which failures led to new patterns, and where knowledge gaps exist in your codebase.
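A minimal sketch of the structure that could sit behind such a visualization: memories as typed nodes, relationships as edges, and neighbor lookup as the "cluster" the UI highlights on click. All IDs and labels are illustrative:

```typescript
// Hypothetical sketch of a knowledge graph over memories. Node kinds mirror
// the memory types; edge data and IDs are illustrative.
type NodeKind = "decision" | "constraint" | "pattern" | "failure";

interface GraphNode {
  id: string;
  kind: NodeKind;
  label: string;
}

const nodes: GraphNode[] = [
  { id: "d1", kind: "decision", label: "REST over GraphQL" },
  { id: "c42", kind: "constraint", label: "ALB connection limits" },
  { id: "p18", kind: "pattern", label: "API response format" },
  { id: "f7", kind: "failure", label: "GraphQL N+1" },
];

// Undirected edges between related memories.
const edges: [string, string][] = [
  ["d1", "c42"],
  ["d1", "p18"],
  ["d1", "f7"],
  ["f7", "c42"],
];

// Neighbors of a node: the cluster a UI would highlight when it is clicked.
function neighbors(id: string): string[] {
  return edges
    .filter(([a, b]) => a === id || b === id)
    .map(([a, b]) => (a === id ? b : a));
}
```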
What your AI sees at session start
<nordon-context>
<memory type="decision" importance="0.9">
## REST over GraphQL for all public APIs
Connected to: constraint #42 (ALB connection limits),
pattern #18 (API response format), failure #7 (GraphQL N+1).
Graph cluster: API Architecture (12 connected memories)
</memory>
</nordon-context>
See it in action
# Knowledge Graph reveals connected memories:
#
# [Decision: REST over GraphQL] ──── [Constraint: ALB limits]
# │ │
# ├──── [Pattern: API response format] │
# │ │
# └──── [Failure: GraphQL N+1] ────────┘
#
# Clicking any node shows full context.
# Filtering by "DECISION" highlights all architecture choices.
# Cluster view groups related memories by topic.
Code Review Context
You're reviewing a PR and see an unusual implementation choice -- why did they use polling instead of WebSockets? Why is there a 500ms delay hardcoded? Without context, you either approve blindly or leave a comment that wastes everyone's time.
NORDON's decision memories capture the 'why' behind implementation choices. Reviewers (and their AI assistants) can see the reasoning that led to each decision, making reviews faster and more informed.
What your AI sees at session start
<nordon-context>
<memory type="decision" importance="0.82">
## Polling over WebSockets for dashboard updates
WebSockets caused connection storms behind our load balancer
(ALB has 100-connection limit per target). Polling every 5s
with ETag caching gives us near-real-time updates with 90%
fewer connections. Acceptable tradeoff for our scale.
Related: constraint memory about ALB connection limits.
</memory>
</nordon-context>
See it in action
// Reviewer sees polling in the PR and asks their AI:
// "Why is this using polling instead of WebSockets?"
// AI responds with the decision context:
// "Team decision: Polling over WebSockets for dashboard.
// WebSockets caused connection storms behind ALB
// (100-connection limit per target). Polling every 5s
// with ETag caching = 90% fewer connections."
// (assumes setData from useState and etagRef from useRef(""); a ref avoids
// the stale-closure bug an etag state variable would cause in a [] effect)
useEffect(() => {
  const interval = setInterval(async () => {
    const res = await fetch("/api/dashboard", {
      headers: { "If-None-Match": etagRef.current }, // ETag caching
    });
    if (res.status !== 304) {
      setData(await res.json());
      etagRef.current = res.headers.get("ETag") ?? "";
    }
  }, 5000); // 5s polling, the documented tradeoff
  return () => clearInterval(interval);
}, []);
Get Started
Stop repeating yourself
Install NORDON and let your AI assistant remember what matters.