Server-Side API
NukeBase is a managed service that provides instant provisioning and deployment. Your project structure includes:
- server/database.js: The core database engine
- server/data/: Your database directory (a tree of data.json files joined by $split markers — see Storage & Backups)
- server/rules.js: Security rules configuration
- server/app.js: Your application configuration file
- public/: Frontend files (index.html, css, js, etc.)
- sys/deploy.js: Deploy program
- sys/config.json: Deployment configuration
- node_modules/: Dependencies (auto-generated)
- package.json: NPM package configuration
- package-lock.json: Dependency lock file
Setup and Initialization
Getting started with NukeBase is simple - provision your project through our managed service and start developing immediately.
Step 1: Create Your Project
Getting started is as simple as visiting a URL in your browser:
1. Visit: https://nukebase.com/createuser
2. Fill in your project details (username, project name)
3. Click "Provision & Download"
4. Your project zip will download automatically
Instant Deployment: Your project is automatically provisioned, deployed, and live at:
https://username-project.nukebase.com
No build steps, no server configuration - just download the zip and start coding!
Step 2: Local Development Setup
After provisioning, extract the downloaded zip file and set up your VS Code workspace:
# 1. Extract the downloaded project zip file
# Right-click the .zip file and select "Extract All"
# 2. Open VS Code
# File → Add Folder to Workspace → Select your extracted project folder
# 3. Open Terminal in VS Code
# Terminal → New Terminal → Select Folder As Directory
# 4. Install NukeBase CLI globally
npm install -g
# 5. Install NukeBase NPM Packages
npm install
# 6. Now you can use NukeBase commands:
nukebase push # Push local changes to live server
nukebase pull # Pull live server changes to local
NukeBase CLI Commands:
- nukebase push - Upload your local changes to the live server
- nukebase pull - Download the latest changes from the live server
Changes are synced in real-time, allowing you to develop locally and deploy instantly.
Push and pull instantly add or remove folders and files on the server or client. Paths listed under "exclude" in sys/config.json (e.g. "exclude": ["sys", "server/data.json"]) are skipped in both directions.
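A minimal sys/config.json sketch for the exclude behavior described above (any other deployment keys your generated file contains are omitted here; treat the shape as illustrative):

```json
{
  "exclude": ["sys", "server/data.json"]
}
```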
Step 3: Start Developing
Your project structure is ready to use:
- /public: Edit your frontend files (HTML, CSS, JavaScript)
- /server/app.js: Configure backend logic, domains, and database triggers
- /server/rules.js: Define security rules for data access
- /server/data/: Your real-time database directory (auto-synced)
Hot Reload: Changes to your /public files are instantly reflected on your live site. Backend changes in /server/app.js are automatically deployed.
Basic Server Configuration Structure
Your server/app.js file uses a module export pattern that provides access to all NukeBase APIs:
module.exports = ({
addDbTrigger,
addCallable,
addConnectionTrigger,
get,
set,
update,
remove,
query,
generateRequestId,
data,
addDomain,
startDB,
checkAuth
}) => {
const path = require("path");
const nukebase = addDomain({
authPath: ["users"],
host: "127.0.0.1", // optional - defaults to "127.0.0.1"
port: 3000 // optional - defaults to 3000
});
nukebase.app.serveStatic("/*", path.join(__dirname, "../public"),
(res, req) => { return true; }
);
startDB(nukebase);
}
Starting the Database
Start the NukeBase server with configuration options by calling startDB() once at the end of your configuration:
// Basic setup - pass the domain object to startDB
const nukebase = addDomain({
authPath: ["users"],
host: "127.0.0.1", // optional
port: 3000 // optional
});
startDB(nukebase);
addDomain Configuration Options:
- authPath: Array - path to user authentication data (e.g., ["users"])
- host: String (optional) - the IP address to bind to
  - Use "127.0.0.1" to accept connections only from the local machine (default)
  - Use a specific IP address like "126.23.45.1" to bind to that server address
  - Use "0.0.0.0" to accept connections from any IP
- port: Number (optional) - the port to listen on (default: 3000)
- sendGridKey: String (optional) - SendGrid API key. Required if you want to send magic-link sign-in emails (/magic-link and the passwordless /createuser mode). Without it, those flows will fail silently in sgMail.send.
- magicLinkRedirect: String (optional, default "/") - path the browser is redirected to after a successful magic-link click. Combined with process.env.DOMAIN to form the full URL.
Environment variables read by the server:
- DOMAIN — public origin used to build magic-link URLs and console output (e.g. https://your-app.nukebase.com).
- SOCKET — if set, the server listens on this Unix domain socket instead of host/port (useful behind nginx/Caddy).
- TRUST_PROXY — set to "true" or "1" to honor x-real-ip / x-forwarded-for when computing the client IP for rate limiting. Untrusted by default — without this, rotating those headers cannot bypass the login limiter.
- DB_ALLOW_MISSING_SPLITS — set to "1" to downgrade a missing/corrupt $split subdirectory from a fatal startup error to a logged warning. Default is fail-fast so a bad disk state can't silently produce data loss on the next flush.
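For example, a deployment behind nginx might be launched like this (the entry-point filename is assumed from the project layout above; adjust to your setup):

```shell
# Assumed invocation sketch - entry point and socket path are illustrative.
DOMAIN=https://your-app.nukebase.com \
SOCKET=/tmp/nukebase.sock \
TRUST_PROXY=true \
node server/database.js
```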
Serving Static Files
Serve files from a local directory using app.serveStatic. This is typically the very next thing you'll register after addDomain — it powers your frontend, assets, and any other files the browser needs. Registers both GET and HEAD handlers automatically:
// Serve everything in ../public at the root
nukebase.app.serveStatic("/*", path.join(__dirname, "../public"));
// Serve a private directory (no auth callback = open access)
nukebase.app.serveStatic("/assets/*", path.join(__dirname, "assets"));
serveStatic auth callback receives (res, req) — res first, then req. This matches the underlying uWebSockets convention and is the opposite of Express.
Signature
app.serveStatic(routePattern, rootDir, auth?)
- routePattern - URL pattern with trailing /* (e.g., "/*", "/admin/*"). The matched portion before /* is treated as the mount point and stripped before resolving against rootDir.
- rootDir - Absolute path to the directory to serve from. Use path.join(__dirname, "...").
- auth (optional) - Async callback returning a boolean. If provided, runs before any file is served. Return true to allow, false to respond with 401.
Auth Callback
The auth callback receives (res, req) — same convention as postWithBody:
// Only logged-in users can access /private/*
nukebase.app.serveStatic("/private/*", path.join(__dirname, "../private"),
async (res, req) => {
return Boolean(req.uid);
}
);
// Admins only
nukebase.app.serveStatic("/admin/*", path.join(__dirname, "../admin"),
async (res, req) => {
return req.claims?.role === "admin";
}
);
The req object passed to the auth callback contains the auth fields populated by checkAuth (uid, username, claims, cookies, urlParams, referer, userAgent, ip, url) plus host and method. Identity fields are present only when the user is authenticated. Returning a falsy value sends a 401 response automatically.
Behavior Details
- Index files: Requests ending in / automatically serve index.html from that directory.
- Path traversal protection: Any URL that resolves outside rootDir (e.g., via ../) returns 403 Forbidden.
- Missing files: Return 404 Not Found.
- Streaming: Files are streamed in 16KB chunks rather than buffered into memory — safe for large files.
- Content-Type: Set automatically from the file extension via the mime package; falls back to application/octet-stream for unknown types.
- Image caching: Files with image/* content types receive Cache-Control: public, max-age=31536000 (1 year). Other file types are served without cache headers.
Data Operations
The same five methods — get, set, update, remove, query — work identically on both client and server. The only difference is how results are returned:
Sync vs Async:
- Client: Returns a Promise. Use await or .then().
- Server: Returns the result directly. No await needed (the in-memory database is accessed without a network round-trip).
// Client (async)
const user = await get(["users", "john"]);
console.log(user.data);
// Server (sync)
const user = get(["users", "john"]);
console.log(user.data);
Examples in this section use the client (async) form. To use any of these on the server, drop await / .then() — the method calls themselves are unchanged.
Path argument shape: path must be an array for all data operations (get, set, update, remove, query) and their subscription variants — even for a single segment, write ["users"], not "users". (Internally a string path is only meaningful as the name of a registered callable when invoking callableFunction; it is not a one-segment shorthand for data ops, and would be iterated character-by-character.)
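A quick illustration of the character-by-character hazard, in plain Node with no NukeBase dependency:

```javascript
// A string spreads/iterates into one element per character, so "users"
// would become five bogus path segments instead of one.
const asString = [..."users"];
console.log(asString); // [ 'u', 's', 'e', 'r', 's' ]

// The correct shape: a one-element array.
const asArray = ["users"];
console.log(asArray.length); // 1
```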
Hard limits (enforced server-side — operations that exceed any of these are rejected):
- Path depth: ≤ 64 segments
- Path segment length: ≤ 256 characters per string segment
- Numeric segments: non-negative integers ≤ 100,000 (no NaN, Infinity, negatives, or non-integers)
- Forbidden segments: "", ".", "..", anything containing /, \, or null bytes, or any prototype-pollution key (__proto__, constructor, prototype, etc.)
- Update merge depth: ≤ 64 levels (deeper merges are rejected)
- Query string length: ≤ 512 characters
- Subscriptions per WebSocket session: ≤ 256
- requestId length: ≤ 128 characters (echoed back in replies)
These limits prevent individual clients from holding open too many subscriptions or amplifying server CPU/memory with adversarial payloads. Stay well under them in normal use.
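The path limits can be mirrored client-side so bad writes fail fast before hitting the server. A hypothetical pre-check helper (not part of the NukeBase API; it encodes only the limits listed above):

```javascript
// Hypothetical client-side path validator mirroring the documented limits.
const FORBIDDEN = new Set(["", ".", "..", "__proto__", "constructor", "prototype"]);

function isValidPath(path) {
  if (!Array.isArray(path) || path.length > 64) return false; // depth limit
  return path.every(seg => {
    if (typeof seg === "number") {
      // numeric segments: non-negative integers up to 100,000
      return Number.isInteger(seg) && seg >= 0 && seg <= 100000;
    }
    if (typeof seg !== "string") return false;
    if (seg.length > 256 || FORBIDDEN.has(seg)) return false;
    // no slashes, backslashes, or null bytes
    return !/[\/\\\u0000]/.test(seg);
  });
}

console.log(isValidPath(["users", "john"])); // true
console.log(isValidPath(["users", "../etc"])); // false (contains "/")
console.log(isValidPath(["scores", -1])); // false (negative index)
```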
Setting Data
The set() function creates or replaces data at a specific path:
Auto-creation: The set() function automatically creates any missing parent containers in the path. The type of each created container is chosen by the next segment:
- String segment → object ({})
- Integer segment → array ([])
Numeric path segments must be actual integers, not numeric strings — 0 and "0" behave differently. The path validator only accepts non-negative integers as numeric segments; numeric strings are treated as object keys.
// Integer segment → array container is auto-created
set(["messages", 0], "hi");
// Result: { messages: ["hi"] }
// String segment → object container is auto-created
set(["messages", "0"], "hi");
// Result: { messages: { "0": "hi" } }
// Mixed: a nested integer segment creates an array inside an object
set(["users", "matt", "scores", 0], 100);
// Result: { users: { matt: { scores: [100] } } }
// Set a complete object
set(["users", "john"], { name: "John Doe", age: 32 }).then(response => {
console.log("User created successfully");
});
// Set a single value
set(["users", "john", "email"], "john@example.com").then(response => {
console.log(response);
});
// Auto-creates parent objects - even if 'users' doesn't exist
set(["users", "alice", "profile", "preferences", "theme"], "dark").then(response => {
// Creates: { users: { alice: { profile: { preferences: { theme: "dark" } } } } }
console.log("Theme set with auto-created parent objects");
});
Getting Data
Retrieve data with the get() function:
// Get a single user
get(["users", "john"]).then(response => {
console.log(response.data); // User data
});
// Get entire collection
get(["users"]).then(response => {
const users = response.data;
// Process users...
});
Updating Data
Update existing data without replacing unspecified fields:
Auto-creation: Same behavior as set() — missing parent containers are created automatically, with the type chosen by the next segment (string → object, integer → array). See the array-vs-object example under Setting Data above.
// Update specific fields
update(["users", "john"], {
lastLogin: Date.now(),
loginCount: 42
}).then(response => {
console.log(response);
});
// Update a single property
update(["users", "john", "status"], "online").then(response => {
console.log(response);
});
// Auto-creates missing parent objects
update(["settings", "app", "notifications", "email"], true).then(response => {
// If 'settings' doesn't exist, creates the entire path
console.log("Setting created with auto-generated parents");
});
Removing Data
Delete data at a specific path:
// Remove a user
remove(["users", "john"]).then(response => {
console.log("User deleted");
});
// Remove a specific field
remove(["users", "john", "temporaryToken"]).then(response => {
console.log(response);
});
Querying Data
Query allows you to search through collections and find items that match specific conditions. The query string uses JavaScript expressions where child represents each item being evaluated:
How queries work: NukeBase iterates through each child at the specified path and evaluates your condition. Items where the condition returns true are included in the results.
// Basic equality check
query({
path: ["users"],
query: "child.age == 32"
}).then(response => {
console.log(response.data); // All users who are exactly 32
});
// Using comparison operators
query({
path: ["products"],
query: "child.price < 50"
}).then(response => {
console.log(response.data); // All products under $50
});
// Compound conditions with AND (&&)
query({
path: ["products"],
query: "child.price < 100 && child.category == 'electronics'"
}).then(response => {
console.log(response.data); // Affordable electronics
});
// Compound conditions with OR (||)
query({
path: ["users"],
query: "child.role == 'admin' || child.role == 'moderator'"
}).then(response => {
console.log(response.data); // All admins and moderators
});
// Text search with includes()
query({
path: ["posts"],
query: "child.title.includes('JavaScript')"
}).then(response => {
console.log(response.data); // Posts with "JavaScript" in the title
});
// Checking nested properties with childPath
query({
path: ["users"],
childPath: ["profile", "location"],
query: "child == 'New York'" // child refers to the location value
}).then(response => {
// Returns: { matt123: { profile: { location: "New York" } } }
// `child` in the query is the value at childPath; the response wraps
// that value back in the childPath structure to mirror the DB shape.
console.log(response.data);
});
// Combining multiple conditions
query({
path: ["orders"],
query: "child.status == 'pending' && child.total > 100 && child.items.length > 2"
}).then(response => {
console.log(response.data); // Large pending orders with multiple items
});
// Checking if a property exists
query({
path: ["users"],
query: "child.premiumAccount == true"
}).then(response => {
console.log(response.data); // All premium users
});
// Using NOT operator
query({
path: ["tasks"],
query: "child.completed != true"
}).then(response => {
console.log(response.data); // All incomplete tasks
});
// Date comparisons (assuming timestamps)
query({
path: ["events"],
query: "child.date > " + Date.now()
}).then(response => {
console.log(response.data); // Future events
});
Query Syntax Reference
Queries are evaluated by a restricted safe expression engine — not full JavaScript. The query string is parsed by a hand-written evaluator that supports only the operators and methods listed below. Anything outside this list (arithmetic, regex, typeof, Array.isArray, .startsWith, .toLowerCase, ternaries, function calls other than .includes(), array indexing with []) will not parse correctly and will fail or return an empty result.
This is different from Security Rules, which compile via new Function and have access to the full JavaScript language. Don't copy a complex rule expression into a query and expect it to work.
Supported in queries:
- Boolean: ||, &&, !
- Comparison: ===, !==, ==, !=, >, <, >=, <=
- Method: .includes(arg) on strings or arrays (one literal or path argument)
- Property access: dotted paths only (child.foo.bar); .length works as a plain property read on arrays/strings
- Literals: numbers, single- or double-quoted strings, true, false, null, undefined
- Parentheses for grouping
Queries support these operators and methods:
| Operator/Method | Description | Example |
|---|---|---|
| == | Equal to | child.status == 'active' |
| != | Not equal to | child.deleted != true |
| <, >, <=, >= | Comparison | child.age >= 18 |
| && | Logical AND | child.active && child.verified |
| \|\| | Logical OR | child.role == 'admin' \|\| child.role == 'mod' |
| .includes() | String contains | child.email.includes('@gmail.com') |
| .length | Array/string length | child.tags.length > 3 |
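When you need an operation the query engine doesn't support (such as a case-insensitive match via .toLowerCase()), fetch the collection with get() and filter in ordinary JavaScript. A hypothetical helper, meant to be called on the response.data object from a get(["posts"]) call:

```javascript
// Hypothetical workaround helper: case-insensitive title search,
// which the restricted query engine cannot express.
function filterByTitle(collection, needle) {
  const n = needle.toLowerCase();
  return Object.fromEntries(
    Object.entries(collection ?? {}).filter(
      ([, post]) =>
        typeof post.title === "string" &&
        post.title.toLowerCase().includes(n)
    )
  );
}

// Sample data shaped like response.data from get(["posts"])
const posts = {
  a: { title: "Learning JavaScript" },
  b: { title: "Cooking 101" }
};
console.log(Object.keys(filterByTitle(posts, "javascript"))); // [ 'a' ]
```

The trade-off: this pulls the whole collection over the wire (subject to read rules), so prefer the built-in query syntax whenever it can express your condition.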
Important: The child variable represents each item at the path you're querying. For example, when querying "users", child represents each individual user object.
Using childPath to Query Nested Data
The childPath parameter allows you to query and return only specific nested portions of your data. This is especially useful for separating public and private data, improving performance, or working with complex data structures.
How childPath works:
- Navigation: childPath navigates to a nested position in your data
- Query context: The child variable in your query refers to the data at that nested position
- Response structure: Results include the full path with childPath, so you know which parent item matched
// Data structure:
// {
// users: {
// matt123: {
// public: { name: "Matt", age: 25, city: "NYC" },
// private: { ssn: "123-45-6789", salary: 80000 }
// },
// john456: {
// public: { name: "John", age: 30, city: "LA" },
// private: { ssn: "987-65-4321", salary: 90000 }
// }
// }
// }
// Query WITHOUT childPath - queries full user objects
query({
path: ["users"],
query: "child.public.age > 21"
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { public: {...}, private: {...} },
// john456: { public: {...}, private: {...} }
// }
// You get FULL user objects including private data
});
// Query WITH childPath - queries only public portion.
// `child` inside the query refers to the data AT childPath ("public").
// The response value MIRRORS the database shape: each match is wrapped
// in the same childPath structure, so you can read it the same way you'd
// read the original tree.
query({
path: ["users"],
childPath: ["public"],
query: "child.age > 21" // child still refers to the "public" object
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { public: { name: "Matt", age: 25, city: "NYC" } },
// john456: { public: { name: "John", age: 30, city: "LA" } }
// }
// The "public" wrapper is preserved; "private" is not present because
// the query never walked into it.
});
// Multiple childPath levels — wrapper is nested to match
query({
path: ["users"],
childPath: ["public", "address"],
query: "child.city == 'NYC'" // child refers to the "address" object
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { public: { address: { city: "NYC", state: "NY" } } }
// }
// Each childPath segment shows up in the response, in order.
});
childPath Use Cases
// Use Case 1: Security - Exclude private data
// If users.matt123.private is blocked by read rules, childPath ensures
// you only query the accessible portion
query({
path: ["users"],
childPath: ["public"],
query: "child.verified == true"
}).then(response => {
// Only returns public data, won't fail if private is restricted
displayPublicProfiles(response.data);
});
// Use Case 2: Performance - Return only needed data
// When clients only need profile info, not full user objects
query({
path: ["users"],
childPath: ["profile"],
query: "child.country == 'USA'"
}).then(response => {
// Smaller response payload, faster transmission
renderUserProfiles(response.data);
});
// Use Case 3: Complex filtering on nested arrays
// Query specific nested collections
query({
path: ["orders"],
childPath: ["items"],
query: "child.quantity > 5"
}).then(response => {
// Returns: {
// order123: { items: { itemA: {quantity: 10, ...}, ... } }
// }
// The "items" wrapper is preserved so the result mirrors the DB shape.
console.log("Orders with high-quantity items:", response.data);
});
// Use Case 4: Separating data concerns
// Different parts of your app query different data sections
query({
path: ["products"],
childPath: ["inventory"],
query: "child.stock < 10"
}).then(response => {
// Warehouse dashboard only needs inventory data
showLowStockAlert(response.data);
});
When to use childPath:
- You want to exclude certain fields from results (public vs private data)
- You need to improve query performance by returning less data
- Your read rules block certain paths, and childPath ensures you only query accessible data
- You're querying nested collections or arrays within parent objects
Important: When using childPath, remember that child in your query refers to the data AT the childPath position, not the root object. Adjust your query conditions accordingly.
Numeric childPath segments preserve array shape. Segments inside childPath can be non-negative integers, and they walk into arrays in your data. The response wrapper is built to match: numeric segments produce arrays, string segments produce objects.
// Data structure:
// users.matt123 = { scores: [42, 87, 99], name: "Matt" }
// users.john456 = { scores: [10, 20, 30], name: "John" }
// childPath ending at index 0 of "scores"
query({
path: ["users"],
childPath: ["scores", 0],
query: "child > 25" // child refers to the integer at scores[0]
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { scores: [42] }
// }
// The wrapper preserves the array shape from the DB.
});
// childPath ending at index 1 of "scores"
query({
path: ["users"],
childPath: ["scores", 1],
query: "child > 25"
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { scores: [null, 87] }
// }
// Index 1 is preserved. The leading slot is null because the query only
// matched scores[1] — sparse-array slots become explicit nulls when JSON
// is sent over the WebSocket. The matched value is still at index 1.
});
JSON null padding for non-zero indices: When a numeric childPath segment is greater than 0, the slots before the matched index are filled with null in transit. This keeps the index correct on the client (so response.data.matt123.scores[1] works) at the cost of leading nulls. Iterate with care, or filter null entries if your code can't tolerate them.
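The padding falls out of JSON itself, which has no representation for sparse arrays; plain Node shows the same behavior:

```javascript
// JSON has no sparse arrays: empty slots serialize as null.
const sparse = [];
sparse[1] = 87; // only index 1 is set

console.log(JSON.stringify(sparse)); // "[null,87]"

// After a round trip (as over the WebSocket), the index is preserved
// and the leading slot is an explicit null.
const roundTripped = JSON.parse(JSON.stringify(sparse));
console.log(roundTripped[1]); // 87
console.log(roundTripped.filter(v => v !== null)); // [ 87 ]
```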
Server-Side: Direct Access via data
On the server only, the data export gives you direct read access to the raw in-memory database object. This skips the overhead of get() for fast lookups inside triggers, callables, and middleware:
module.exports = ({ data, get, set, ... }) => {
// Direct read — access the raw database object
const userName = data.users?.john?.name; // "John"
const allUsers = data.users; // { john: {...}, alice: {...} }
// Compared to using get():
const user = get(["users", "john"]);
console.log(user.data.name); // "John"
};
Read-only. Always use set(), update(), and remove() to modify data — these run subscriptions, triggers, security rules, and persistence. Writing directly to data bypasses all of these.
Security Rules
NukeBase uses a JSON-based security rules system to control access to your database. Rules are defined in server/rules.js and are evaluated for every database operation.
Available Variables in Rules:
- admin - The standard auth context for the caller (admin.uid, admin.claims, etc.) — see Auth Context for the full shape
- root - The database object at the top level
- data - The current/old value at the path being accessed
- newData - The new value being written (for write/validate rules)
- $variables - Wildcard captures like $userId, $postId
Rule Types
Three types of rules control different aspects of data access:
- read - Controls who can read data at a path (triggered by get() operations)
- write - Controls who can create, update, or delete data (triggered by set(), update(), and remove() operations)
- validate - Ensures data meets specific requirements (triggered by set() and update() operations)
How Rules Are Checked:
Read and write rules grant access — they do not revoke it. When you read or write at a path like users.john.email, NukeBase walks from the root toward that path and evaluates each level that has a matching rule:
1. Check users — if its rule returns true, ALLOWED (stop here)
2. Check users.john — if its rule returns true, ALLOWED (stop here)
3. Check users.john.email — if its rule returns true, ALLOWED
Any single level returning true grants access. The operation is denied only if no level along the path grants. A "read": "false" at a parent does NOT prevent a child rule from granting access at a deeper path — it just means that level didn't grant on its own.
Validate rules behave differently. They cascade through every level along the write path AND into the new value, and ALL applicable rules must pass. Any single failure denies the write.
Rule Matching at Same Level:
- Read/Write rules: If you have both exact (pets) and wildcard ($other) rules at the same level, BOTH must pass for access to pets.
- Validate rules: Only the most specific rule matches. Exact match (pets) takes priority over wildcard ($other).
// These two rules are at the SAME LEVEL (both are direct children of the parent)
module.exports = {
"pets": {
"read": "true", // Rule 1: Anyone can read pets
"write": "admin.claims.role == 'petOwner'", // Rule 2: Must be pet owner
"validate": "newData.type == 'cat' || newData.type == 'dog'" // Only cats/dogs
},
"$other": { // ← This is at the SAME LEVEL as "pets" above
"read": "admin.claims.role == 'admin'", // Rule 3: Must be admin
"write": "false", // Rule 4: No writes allowed
"validate": "newData != null" // Not empty
}
}
// When accessing "pets":
// Read: BOTH "true" AND "admin.claims.role == 'admin'" must pass → Fails for non-admins!
// Write: BOTH "admin.claims.role == 'petOwner'" AND "false" must pass → Always fails!
// Validate: ONLY the "pets" rule applies (most specific)
Basic Example
module.exports = {
"users": {
"$userId": {
// Don't grant a blanket read at $userId — that grant cascades down
// and would override the deeper email rule. Grant read on the
// public-facing fields instead.
"write": "admin.uid == $userId", // Only the user can edit their profile
"name": { "read": "true" }, // Public
"bio": { "read": "true" }, // Public
"email": { "read": "admin.uid == $userId" } // Private — only the user
}
}
};
Path Patterns
Rules support different path patterns to match your data structure:
| Pattern | Description | Example |
|---|---|---|
| users.john | Exact path matching | Matches only users.john |
| users.$userId | Wildcard matching | Matches users.alice, users.bob, etc. The $userId variable captures the actual key |
| posts.$postId | Wildcard for collections | Matches any child: posts.abc, posts.xyz, etc. |
| messages.$msgId | Works with arrays too | Arrays are objects with numeric keys. Matches messages.0, messages.1, messages.2 |
Arrays and Path Matching:
JavaScript arrays like ["red", "blue", "green"] are stored as objects with numeric keys:
{ "0": "red", "1": "blue", "2": "green" }
This means:
- colors.0 - Exact match for first element
- colors.$index - Wildcard matches all elements (0, 1, 2, etc.)
- colors - Matches the array itself
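Plain JavaScript shows why numeric-key matching works without any special casing:

```javascript
// Arrays expose their indices as numeric string keys, which is what
// wildcard rules like colors.$index iterate over.
const colors = ["red", "blue", "green"];

console.log(Object.keys(colors)); // [ '0', '1', '2' ]
console.log(colors["1"]); // "blue" - numeric-string access works too
```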
Operations and Their Rules
Different database operations trigger different combinations of rules:
| Operation | Rules Triggered | Description |
|---|---|---|
| get() | read | Only read rules are checked when retrieving data |
| set() | write + validate | Both write permission and data validation are required |
| update() | write + validate | Same as set() - must have permission and valid data |
| remove() | write | Only write rules are checked (newData is null) |
| query() | read | Read rules filter which items are returned |
Rule Evaluation by Path Depth
The set of rules that actually applies to a given operation is determined dynamically by the depth of the path you're targeting. Two writes against the same rules file can hit completely different rules depending on how deep the operation lands. Designing security correctly means knowing exactly which rules will be evaluated for each call.
Read / Write — walked from root toward the target path
NukeBase iterates each level along the path and evaluates any matching rule. If any one level returns true, access is granted and evaluation stops. Deeper rules are not consulted past a grant. The operation is denied only if no level along the path grants.
Validate — cascades through every level and into the new value
Validate runs at every prefix of the write path AND at every leaf inside the new value. All applicable validate rules must pass; a single failure denies the write.
Given the rules below, here's what gets evaluated for writes at different depths:
module.exports = {
"store": {
"write": "admin.claims.role == 'admin'",
"products": {
"write": "admin.claims.role == 'manager'",
"$productId": {
"write": "admin.uid == data.ownerId",
"validate": "newData.name && newData.price > 0"
}
}
}
};
| Operation | Rules evaluated | Outcome |
|---|---|---|
| set(["store"], {...}) | Write: store.write only. Validate: any validate rule reachable from the new value (e.g. store.products.$productId.validate for each product in the payload) | Allowed only if admin. Note: only the top-level write rule is checked — the deeper write rules are not consulted, because the operation targets ["store"]. |
| set(["store","products"], {...}) | Write: store.write, then store.products.write. Validate: store.products.$productId.validate for each product leaf in the payload | Allowed if the caller is admin OR a manager (any one returning true grants). Validate must also pass for every product written. |
| set(["store","products","abc"], {name:"X", price:5}) | Write: store.write, store.products.write, store.products.$productId.write. Validate: store.products.$productId.validate | Allowed if admin OR manager OR the caller owns "abc". Validate runs against the new value. |
| get(["store","products","abc"]) | Read: store.read, store.products.read, store.products.$productId.read (none are defined here, so the call is denied) | Denied — no level along the path grants read. |
Common pitfall — a blanket grant at a parent cascades. Writing "users.$userId.read": "true" means any deeper read rule like "users.$userId.email.read": "admin.uid == $userId" is effectively bypassed: the parent's true grants access first, and the email rule never runs. To restrict deeper data, don't grant blanket access at the parent — split the data into subnodes (e.g. public / private) and grant read only on the part you want exposed.
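A sketch of the public/private split in server/rules.js (the public and private node names are illustrative, not required by NukeBase):

```javascript
// Sketch: no blanket read grant at $userId. Readers get exactly the
// subtree they are entitled to, and the owner keeps write access.
const rules = {
  "users": {
    "$userId": {
      "write": "admin.uid == $userId",               // owner edits their record
      "public": { "read": "true" },                   // anyone can read
      "private": { "read": "admin.uid == $userId" }   // owner-only read
    }
  }
};

module.exports = rules;
```

Because read grants cascade downward, the absence of a read rule at the $userId level is what keeps the private branch private.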
Mental model: read/write rules answer the question "is there any reason to allow this?" — one yes is enough. Validate rules answer "does the new data satisfy every constraint?" — one no is enough.
Rule Types in Detail
Read Rules
Control who can read data at a specific path:
// Simple read rule
// Don't put "read": "true" at the $postId level — it would cascade and
// override the draft restriction. Grant read on the published fields only.
"posts": {
"$postId": {
"title": { "read": "true" },
"body": { "read": "true" },
"draft": { "read": "admin.uid == data.authorId" } // Only author can read drafts
}
}
// Using variables in paths
"users": {
"$userId": {
"name": { "read": "true" }, // Public
"email": { "read": "admin.uid == $userId" } // Only the user can read their own email
}
}
Write Rules
Control who can create, update, or delete data:
// Basic write rule
"posts": {
"$postId": {
"write": "admin.uid == data.authorId", // Only author can edit
"createdAt": {
"write": "!data" // Can only set createdAt when creating (no previous data)
}
}
}
// How write rules cascade along the target path
// (any rule along the path that returns true is sufficient)
"store": {
"write": "false", // Blocks writes that TARGET ["store"] directly
"products": {
"write": "admin.claims.role == 'manager'", // Applies when writing AT ["store","products"] or deeper
"$productId": {
"write": "admin.uid == data.ownerId" // Applies when writing AT ["store","products",$productId]
}
}
}
// What actually happens:
// set(["store"], ...) → only store.write applies → DENIED
// set(["store","products"], ...) → store.write OR store.products.write
// → ALLOWED if user is a manager
// set(["store","products","abc"], ...) → store.write OR store.products.write
// OR store.products.$productId.write
// → ALLOWED if manager OR uid == data.ownerId
Validate Rules
Ensure data integrity and format requirements:
// Simple field validation
"users": {
"$userId": {
"age": {
"validate": "newData >= 13 && newData <= 120"
},
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
}
}
// Validating objects with required fields
"posts": {
"$postId": {
"validate": "newData.title && newData.content && newData.title.length <= 200"
}
}
// Using data and newData to compare old and new values
"users": {
"$userId": {
"credits": {
// Ensure credits can only increase, not decrease
"validate": "newData >= data"
}
}
}
// Complex validation with multiple conditions
"products": {
"$productId": {
"validate": "newData.name && newData.price > 0 && newData.stock >= 0"
}
}
Array Validation
Arrays are validated using the same rule system, but understanding how paths are generated is essential for proper validation.
How Array Validation Works:
When you set/update an array, NukeBase generates validation paths for:
- The array itself - Path to the array as a whole
- Each array element - Individual paths like
["tags", "0"],["tags", "1"]
Arrays are treated as objects with numeric keys: ["red", "blue"] becomes {"0": "red", "1": "blue"}
// Example: update(["users", "john", "tags"], ["red", "blue", "green"])
// This generates paths:
// 1. ["users", "john", "tags"] ← Entire array
// 2. ["users", "john", "tags", "0"] ← Element 0: "red"
// 3. ["users", "john", "tags", "1"] ← Element 1: "blue"
// 4. ["users", "john", "tags", "2"] ← Element 2: "green"
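The path generation above can be sketched as a small helper (a simplified model of the documented behavior, not the engine's actual code):

```javascript
// Enumerate the validation paths generated for an array write
// (simplified model based on the documented behavior).
function validationPaths(basePath, value) {
  const paths = [basePath]; // 1. the array as a whole
  if (Array.isArray(value)) {
    // 2. one path per element, with the index as a string key
    value.forEach((_, i) => paths.push([...basePath, String(i)]));
  }
  return paths;
}

console.log(validationPaths(["users", "john", "tags"], ["red", "blue", "green"]));
// 4 paths: the whole array, then indices "0", "1", "2"
```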
// METHOD 1: Validate the ENTIRE array
"users": {
"$userId": {
"tags": {
// newData = entire array ["red", "blue", "green"]
"validate": "Array.isArray(newData) && newData.length <= 5"
}
}
}
// METHOD 2: Validate EACH element using wildcard
"users": {
"$userId": {
"tags": {
"$index": { // $index matches "0", "1", "2", etc.
// newData = individual element ("red", "blue", or "green")
"validate": "typeof newData === 'string' && newData.length < 20"
}
}
}
}
// METHOD 3: COMBINE both approaches
"users": {
"$userId": {
"tags": {
// Validate array properties
"validate": "Array.isArray(newData) && newData.length <= 5",
"$index": {
// Validate each element
"validate": "typeof newData === 'string' && newData.length < 20"
}
}
}
}
// Array validation with a per-element format check (hex color codes)
"users": {
"$userId": {
"favoriteColors": {
"$index": {
// Each color must be a valid hex code
"validate": "typeof newData === 'string' && /^#[0-9A-F]{6}$/i.test(newData)"
}
}
}
}
Important: Both the array-level rule AND element-level rules must pass. If you have rules at both levels, all of them are checked.
Available Variables
Rules have access to several context variables:
| Variable | Description | Available In |
|---|---|---|
| `data` | Current value at the path (before changes) | All rule types |
| `newData` | Value after the write operation | write, validate |
| `root` | Current database root | All rule types |
| `admin` | Auth context (see Auth Context) | All rule types |
| `$variables` | Values from wildcard path segments | All rule types |
Best Practices
- Start with restrictive rules, then add exceptions as needed
- Use validate rules to ensure data integrity
- Test rules thoroughly before deploying to production
- Keep rules simple and readable
- Only one validate rule per path - combine conditions with `&&` or `||`
- Read/write rules grant access — any single rule along the path that returns `true` is sufficient. Don't put a blanket `"read": "true"` at a parent if you intend to restrict child paths; the parent grant cascades and the deeper rule never gets the chance to deny.
- Validate rules only match the most specific rule at a given path
Common Mistakes to Avoid
Mistake 1: Multiple validate rules on same path
// WRONG - Only the last validate rule will be used!
"email": {
"validate": "newData.includes('@')",
"validate": "newData.includes('.')" // This overwrites the first rule!
}
// CORRECT - Combine with &&
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
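The underlying reason is not NukeBase-specific: in both JSON parsing and JavaScript object literals, a duplicated key keeps only its final value.

```javascript
// Duplicate keys: JSON.parse keeps only the LAST occurrence, so the
// first validate rule is silently discarded.
const rules = JSON.parse(
  '{"validate": "newData.includes(\'@\')", "validate": "newData.includes(\'.\')"}'
);
console.log(rules.validate); // newData.includes('.')
```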
The admin Auth Context
Several server-side APIs receive an admin object describing the current caller. The shape is the same everywhere — only the calling context differs.
Where you'll see it
- Security Rules — referenced as `admin` inside rule expressions (e.g., `"admin.uid == $userId"`)
- Database Triggers — not directly received; triggers see the data change, not the caller
- Callable Functions — second argument: `function(data, admin, sessionId)`
- Connection Triggers — first argument: `function(admin, sessionId)`
- postWithBody handlers — exposed as `req.admin`
- Raw `post` handlers — returned by `checkAuth(req, res)`
Object Shape
The admin object always contains request metadata. Identity fields (uid, username, token, claims) are present only when the caller has a valid session cookie.
| Property | When Authenticated | When Not Authenticated |
|---|---|---|
| `uid` | User's unique ID | `undefined` |
| `username` | User's username | `undefined` |
| `token` | Session token from cookie | `undefined` |
| `claims` | Custom claims object (e.g., `{ role: "admin" }`) | `undefined` |
| `urlParams` | Parsed query string parameters | Parsed query string parameters |
| `cookies` | Parsed cookies object | Parsed cookies object |
| `referer` | Referer header (or `""`) | Referer header (or `""`) |
| `userAgent` | User-Agent header (or `""`) | User-Agent header (or `""`) |
| `ip` | Client IP address | Client IP address |
| `url` | Request URL path | Request URL path |
Common Patterns
// In a callable
addCallable("getProfile", (data, admin, sessionId) => {
if (!admin.uid) return { status: "Failed", message: "Login required" };
return get(["users", admin.uid]).data;
});
// In a connection trigger
addConnectionTrigger("open", (admin, sessionId) => {
console.log("Connected:", admin.uid || "anonymous", "from", admin.ip);
});
// In a postWithBody handler (req.admin)
nukebase.app.postWithBody("/api/me", (res, req) => {
if (!req.admin.uid) return res.send(JSON.stringify({ status: "Failed" }), "401 Unauthorized");
res.send(JSON.stringify({ uid: req.admin.uid, claims: req.admin.claims }));
});
// In a raw post handler (manual checkAuth)
nukebase.app.post("/api/me-raw", (res, req) => {
const admin = checkAuth(req, res);
res.end(JSON.stringify({ uid: admin.uid }));
});
// In a security rule (rules.js)
module.exports = {
"users": {
"$userId": {
"write": "admin.uid == $userId",
"private": { "read": "admin.uid == $userId" }
},
"adminPanel": {
"read": "admin.claims.role == 'admin'"
}
}
};
Anonymous callers still get an admin object. Identity fields will be undefined, but request metadata (ip, userAgent, cookies, etc.) is always populated. Always check admin.uid before assuming the caller is logged in.
Database Triggers
Run server code in response to database changes using addDbTrigger:
// Create a trigger for when a request is updated
addDbTrigger("update", ["requests", "$requestId"], function(context) {
// The context object contains all relevant information about the change
const afterNotes = context.dataAfter?.notes;
// Nothing to do if the record was removed or has no notes field
if (typeof afterNotes !== "string") return;
// Replace "pizza" with pizza emoji
const newNotes = afterNotes.replaceAll("pizza", "🍕");
// Avoid infinite loop by checking if we already replaced
if (newNotes === afterNotes) {
return;
}
// Update the data with our modified version
update(context.path, { notes: newNotes });
});
Key components of database triggers:
- `addDbTrigger(eventType, pathArray, callbackFunction)`
- Path arrays use wildcards like `$userId` to match any value at that position
Event Types
- `"set"` - Triggered when data is created or completely replaced
- `"update"` - Triggered when data is partially updated
- `"remove"` - Triggered when data is deleted
- `"value"` - Triggered for all changes (set, update, remove)
Path Patterns
Use an array path with wildcards to match specific data paths:
- `["users", "$userId"]` - Matches any user path like `["users", "john"]` or `["users", "alice"]`
- `["posts", "$postId", "comments", "$commentId"]` - Matches any comment on any post
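A sketch of how wildcard matching and path truncation fit together (a simplified model assuming a pattern matches writes at its own depth or deeper; this is not the engine's implementation):

```javascript
// Match a trigger pattern against a write path; on success, return the
// write path truncated to the pattern's depth (what context.path carries).
function matchTrigger(pattern, writePath) {
  if (writePath.length < pattern.length) return null;
  const matches = pattern.every(
    (seg, i) => seg.startsWith("$") || seg === writePath[i]
  );
  // Wildcard segments are filled in with the actual keys from the write
  return matches ? writePath.slice(0, pattern.length) : null;
}

console.log(matchTrigger(["users", "$userId"], ["users", "john", "email"]));
// [ 'users', 'john' ]
console.log(matchTrigger(["users", "$userId"], ["posts", "abc"]));
// null
```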
Context Object
Your callback function receives a context object containing:
- `context.path` — The path matched by the trigger pattern, truncated to the trigger pattern's depth. If you register a trigger on `["users", "$userId"]` and a write happens at `["users", "john", "email"]`, `context.path` is `["users", "john"]` — not the deeper write path. Wildcard segments are filled in with the actual key from the write.
- `context.dataAfter` — The post-write state at the trigger's path (i.e. at `context.path`, not the originating write path). May be `undefined` for remove operations or when the path no longer exists.
- `context.dataBefore` — A synthesized partial snapshot, not the full prior subtree. It is built from the leaves of the new value (so it captures prior values at the locations the write actually touched), then any unchanged sibling fields are filled in from the post-write state. Do not use `!context.dataBefore` to detect a freshly-created resource — for a brand-new `set`, `dataBefore` is populated with the new values rather than being `null`.
Detecting "is this new?": Because context.dataBefore mirrors the leaves of the new value and falls back to the post-write state for unchanged keys, it is rarely null in practice — even for first-time creation. If you need a one-time-init pattern, register the trigger on the "set" action and check for the absence of a marker field on context.dataAfter (a field your trigger itself sets), rather than testing dataBefore.
Deletions are not visible in dataBefore: Because dataBefore is keyed by the leaves of the new value, fields that existed before the write but are absent from the new value will not appear in dataBefore at all. For example, replacing {item:"y", price:5} with set(..., {item:"x"}) yields dataBefore = {item:"y"} — the removed price field is not surfaced.
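A flat, single-level sketch of how such a dataBefore snapshot could be assembled (an assumption-level model of the documented behavior, not the engine's code):

```javascript
// Synthesize dataBefore: prior values at the leaves the write touched,
// then unchanged siblings filled in from the post-write state.
// (Simplified to one level of depth; the real tree is recursive.)
function synthesizeDataBefore(prior, newValue, postWrite) {
  const before = {};
  for (const key of Object.keys(newValue)) {
    if (prior && key in prior) before[key] = prior[key]; // prior value at touched leaf
  }
  for (const key of Object.keys(postWrite)) {
    if (!(key in before)) before[key] = postWrite[key];  // unchanged sibling fallback
  }
  return before;
}

// set(..., { item: "x" }) over { item: "y", price: 5 }
// dataBefore comes out as { item: "y" }: the removed price field never appears
console.log(synthesizeDataBefore({ item: "y", price: 5 }, { item: "x" }, { item: "x" }));

// Brand-new set: dataBefore mirrors the new values, it is not null
console.log(synthesizeDataBefore(undefined, { x: 1 }, { x: 1 }));
```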
Important: When modifying data within a trigger that affects the same path you're watching, always implement safeguards to prevent infinite loops, as shown in the example.
Complete Example: Order Processing
// React to orders being set (created OR fully replaced).
// Because context.dataBefore is unreliable for "is this new?" (see warnings
// above), use a marker field on dataAfter to ensure one-time initialization.
// Bonus: registering on "set" (not "update") prevents this from refiring
// when the trigger's own update() call below runs.
addDbTrigger("set", ["orders", "$orderId"], function(context) {
if (context.dataAfter && !context.dataAfter.processingStart) {
const orderId = context.path[1]; // wildcard segment, filled in
update(context.path, {
status: "processing",
processingStart: Date.now()
});
}
});
Callable Functions
Define server functions that clients can invoke remotely using addCallable. Clients call them via callableFunction(name, data):
addCallable("getUsersCount", async function (data, admin, sessionId) {
// Get all users
const res = get(["users"]);
// Count them (res.data may be undefined when the path is empty)
const count = Object.keys(res.data || {}).length;
// Return the count to the client
return count;
});
Callback Arguments
Your callable receives (data, admin, sessionId):
- `data` - Payload sent by the client (second argument to `callableFunction()`)
- `admin` - The standard auth context object — see Auth Context for the full shape
- `sessionId` - The caller's WebSocket session ID
Return Value
Callables may return synchronously or as a Promise (use async). The return value is delivered to the client as response.data.
Connection Triggers
Run server code when a client connects or disconnects using addConnectionTrigger:
// When a client connects
addConnectionTrigger("open", function (admin, sessionId) {
// Skip anonymous visitors (admin.uid is undefined for them)
if (!admin.uid) return;
// Record session start time
update(["sessions", admin.uid, sessionId], {
start: Date.now()
});
});
// When a client disconnects
addConnectionTrigger("close", function (admin, sessionId) {
// Skip anonymous visitors (admin.uid is undefined for them)
if (!admin.uid) return;
// Record session end time
update(["sessions", admin.uid, sessionId], {
end: Date.now()
});
});
Action Types
- `"open"` - Fires when a client establishes a WebSocket connection
- `"close"` - Fires when a client disconnects (browser close, network drop, or explicit close)
Callback Arguments
Your callback receives (admin, sessionId):
- `admin` - The standard auth context object — see Auth Context for the full shape
- `sessionId` - Unique ID for this WebSocket session
Note: Connection triggers fire for every WebSocket session, including unauthenticated visitors. Check admin.uid if you only care about logged-in users.
Custom POST Endpoints
Define POST routes by attaching handlers to nukebase.app. Two flavors are available:
- `app.postWithBody(path, handler)` - POST endpoint with automatic body parsing and authentication.
- `app.post(path, handler)` - Lightweight POST handler with no automatic parsing.
Handlers receive (res, req) — res first, then req. This matches the underlying uWebSockets convention and is the opposite of Express.
For serving static files (HTML, CSS, JS, images), see Serving Static Files.
postWithBody (POST with Body Parsing)
Use postWithBody to create POST endpoints with automatic body parsing and authentication. It automatically calls checkAuth(req, res) and populates req.admin with the authenticated user's information.
// postWithBody automatically parses the body AND runs checkAuth()
nukebase.app.postWithBody('/api/contact', (res, req) => {
// req.admin is automatically populated by checkAuth()
if (!req.admin.uid) {
return res.send(JSON.stringify({ status: "Failed", message: "Not authenticated" }));
}
const { name, email, message } = req.body;
// Save to database with the authenticated user's ID
set(["contactForms", generateRequestId()], {
name,
email,
message,
userId: req.admin.uid,
timestamp: Date.now()
});
res.send(JSON.stringify({ status: "Success" }));
});
// req object includes:
// req.admin - Auth object from checkAuth() (always present)
// req.body - Parsed request body
// req.host - Request host header
// req.method - HTTP method
// req.getHeader(name) - Get any request header
//
// req.admin includes (always present):
// req.admin.cookies - Parsed cookies object
// req.admin.urlParams - Parsed query string parameters
// req.admin.referer, req.admin.userAgent, req.admin.ip, req.admin.url
//
// req.admin includes (only when authenticated):
// req.admin.uid, req.admin.username, req.admin.token, req.admin.claims
Supported content types for postWithBody:
- `application/json` - Parsed as JSON object
- `application/x-www-form-urlencoded` - Parsed as key-value pairs
- `multipart/form-data` - Parsed with file upload support (file fields become arrays of `{ filename, type, data: Buffer }`)
- `text/plain` - Available as `req.body.text`
res.send(body, status) writes the status (default "200 OK") and ends the response in one call. Only available in postWithBody handlers — raw post uses res.writeStatus() + res.end() directly.
Raw post (Lightweight)
Use raw post for lightweight endpoints that don't need body parsing. You must manually call checkAuth(req, res) to get authentication information:
// Raw post — no automatic parsing, no req.admin
nukebase.app.post('/api/status', (res, req) => {
// Must manually call checkAuth(req, res) to get auth info
const auth = checkAuth(req, res);
if (!auth.uid) {
res.writeStatus("401 Unauthorized");
return res.end(JSON.stringify({ status: "Failed", message: "Not authenticated" }));
}
res.end(JSON.stringify({
status: "Success",
user: auth.username,
role: auth.claims.role
}));
});
// Raw post only has access to raw uWebSockets methods:
// req.getHeader(name) - Get a request header
// req.getUrl() - Get the URL path
// req.getQuery() - Get the raw query string
// req.getMethod() - Get the HTTP method
Auth Context
Both req.admin (in postWithBody) and the return value of checkAuth(req, res) (in raw post) are the standard auth context object — see Auth Context for the full property list and shape.
When to use which?
- Use `postWithBody` when you need to read the request body (JSON, form data, file uploads). Authentication is handled automatically via `req.admin`.
- Use raw `post` when you don't need body parsing (simple status checks, redirects, lightweight responses). Call `checkAuth(req, res)` manually to get auth info.
Connecting to External Databases
If you need to connect to another NukeBase database from your server (for example, a shared service or microservice architecture), you can use the sys/serversdk.js module.
When to use serversdk.js:
- Connecting to a separate NukeBase instance
- Building microservices that communicate with each other
- Aggregating data from multiple database servers
- Server-to-server real-time synchronization
module.exports = ({ get, set, update, addCallable, startDB, addDomain, ... }) => {
// Import the server SDK for external connections
const createServerClient = require('../sys/serversdk.js');
// Connect to an external NukeBase database
// Note: External connections ARE async (like client-side)
createServerClient('wss://other-project.nukebase.com').then(externalDb => {
console.log('Connected to external database');
// Use the external database with async operations
addCallable("getExternalData", async function(data, admin, sessionId) {
// Local database (sync)
const localUser = get(["users", admin.uid]);
// External database (async - requires await)
const externalData = await externalDb.get(["sharedData", data.itemId]);
return {
local: localUser.data,
external: externalData.data
};
});
// Subscribe to changes on external database
externalDb.getSub({
event: "value@",
path: ["notifications"]
}, (event) => {
// When external data changes, update local database
set(["cache", "externalNotifications"], event.data);
});
}).catch(err => {
console.error('Failed to connect to external database:', err);
});
// Set up local domain
const nukebase = addDomain({
authPath: ["users"]
});
startDB(nukebase);
};
Important differences:
- Local operations (via destructured `get`, `set`, etc.) are synchronous
- External operations (via `serversdk.js`) are asynchronous and require `await`
This is because external connections go over the network via WebSocket, just like client connections.
Storage & Backups
Your database lives in server/data/ as a tree of data.json files. The server keeps the entire tree in memory and writes dirty subtrees back to disk on a 5-second debounce (or sooner under load). You don't normally need to touch these files — but it helps to know how they're laid out.
The split tree
To keep individual JSON files from growing unbounded, NukeBase splits any object that would exceed roughly 400 MB serialized into a subdirectory. The parent file references the child via a $split marker:
server/data/
├── data.json # { "users": "$split", "posts": [...] }
└── users/
├── data.json # { "alice": {...}, "bob": "$split" }
└── bob/
└── data.json # { ...bob's subtree... }
Splits happen automatically when an object grows past the threshold, and the inverse — coalescing back into the parent file — happens automatically on startup if a child has shrunk under the threshold. Migration from a legacy single server/data.json file is also automatic on first run.
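How the layout above is reassembled into a single tree at load time can be sketched with an in-memory stand-in for the directory (a simplified model; the real server reads and writes actual files, and only splits at file boundaries):

```javascript
// In-memory stand-in for the server/data/ directory shown above.
const files = {
  "data.json": { users: "$split", posts: ["hello"] },
  "users/data.json": { alice: { age: 30 }, bob: "$split" },
  "users/bob/data.json": { theme: "dark" },
};

// Recursively replace "$split" markers with the subdirectory's contents.
function loadTree(dir = "") {
  const node = files[dir ? `${dir}/data.json` : "data.json"];
  if (Array.isArray(node) || typeof node !== "object") return node;
  const out = {};
  for (const [key, value] of Object.entries(node)) {
    out[key] = value === "$split" ? loadTree(dir ? `${dir}/${key}` : key) : value;
  }
  return out;
}

console.log(loadTree().users.bob.theme); // prints dark
```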
Don't hand-edit while the server is running. The in-memory tree is the source of truth; on the next flush, your edits will be overwritten. Stop the server first, edit the relevant data.json, then restart.
Atomic writes & crash safety
Each data.json is written via a tmp + fsync + rename sequence, so a crash mid-write leaves either the previous file or the new file on disk — never a partially-written one. Any leftover .tmp files from a crashed write are swept on startup before the data is loaded. Pending in-memory writes are flushed on SIGTERM/SIGINT for graceful shutdown.
Daily backup
Every 24 hours the server flushes pending writes and copies server/data/ to server/backup/<YYYY-MM-DD>/. Same-day re-runs overwrite the existing snapshot. No configuration is required.
Bad-disk recovery: If a $split subdirectory is missing or corrupt at startup, the server fails fast by default — it won't load partial data and silently re-flush, which would commit data loss. Set DB_ALLOW_MISSING_SPLITS=1 to downgrade this to a logged warning and treat the missing subtree as empty (useful for recovering from a partial restore, but use with care).
Complete Server Example
Here's a minimal but complete server setup:
module.exports = ({
addDbTrigger,
addCallable,
addConnectionTrigger,
get,
set,
update,
remove,
query,
generateRequestId,
data,
addDomain,
startDB,
checkAuth
}) => {
// Set up a domain
const nukebase = addDomain({
authPath: ["users"], // Path where user authentication data is stored
host: "127.0.0.1", // optional
port: 3000 // optional
});
// Configure middleware for serving static files
const path = require('path');
nukebase.app.serveStatic("/*", path.join(__dirname, "../public"),
(req, res) => { return true; }
);
// Add a database trigger for important changes
addDbTrigger("value", ["orders", "$orderId"], function(context) {
// Only trigger if data has actually changed
if (JSON.stringify(context.dataAfter) !== JSON.stringify(context.dataBefore)) {
set(["logs", generateRequestId()], {
path: context.path,
timestamp: Date.now(),
oldValue: context.dataBefore,
newValue: context.dataAfter,
change: "Important data changed"
});
}
});
// Add a callable for client calculations
addCallable("addNumbers", function(data, admin, sessionId) {
// Extract numbers from the request
const { num1, num2 } = data;
// Perform the calculation on the server
const sum = num1 + num2;
// Return the result to the client
return sum;
});
// Track user connections
addConnectionTrigger("open", function(admin, sessionId) {
// Skip anonymous visitors (admin.uid is undefined for them)
if (!admin.uid) return;
// Record when user connects
update(["sessions", admin.uid, sessionId], {
start: Date.now()
});
// Update user status
update(["users", admin.uid], {
online: true,
lastSeen: Date.now()
});
});
// Handle user disconnections
addConnectionTrigger("close", function(admin, sessionId) {
// Skip anonymous visitors (admin.uid is undefined for them)
if (!admin.uid) return;
// Record when user disconnects
update(["sessions", admin.uid, sessionId], {
end: Date.now()
});
// Update user status
update(["users", admin.uid], {
online: false,
lastSeen: Date.now()
});
});
startDB(nukebase);
console.log("🚀 NukeBase server running on http://127.0.0.1:3000");
};
Note: This example demonstrates best practices including:
- Domain setup with authPath, host, and port configuration
- Static file serving with serveStatic
- Real-time database triggers
- Custom WebSocket functions
- Connection tracking
- Server initialization with startDB(nukebase)
Client-Side API
NukeBase's client library provides a real-time connection to your database through WebSockets. The client handles connection management, request tracking, and event dispatching automatically.
Looking for get/set/update/remove/query? Those work the same on client and server and are documented once in Data Operations. The client returns Promises (use await); otherwise the API is identical.
Connection Setup
The client automatically establishes a secure WebSocket connection:
<script type="module">
import createClient from './sdkmod.js';
// ============================================
// PATTERN 1: Full client object
// ============================================
const db = await createClient();
// Use methods with db. prefix
await db.set(['users', 'john'], { name: 'John', age: 30 });
const user = await db.get(['users', 'john']);
// ============================================
// PATTERN 2: Destructured methods (recommended)
// All examples below use this pattern
// ============================================
const { set, get, update, remove, query, getSub, querySub,
getSubChanged, querySubChanged, callableFunction,
login, logout, createUser, changePassword, magicLink } = await createClient();
console.log("Connected and ready to use NukeBase");
// Use methods directly without prefix
await set(['users', 'alice'], { name: 'Alice', age: 28 });
const userData = await get(['users', 'alice']);
// ============================================
// PATTERN 3: Attach to window (global access)
// Useful for multi-file apps or console debugging
// ============================================
const client = await createClient();
// Attach full client object
window.db = client;
// Optional: expose individual helpers directly
Object.assign(window, client);
// Now use from anywhere: window.db.get(...) or just get(...)
</script>
Important: The example above shows all three patterns for demonstration. In practice, choose ONE pattern for your application. Each pattern creates its own WebSocket connection, so using multiple would create multiple connections.
Key Features:
- Promise-based initialization: Wait for connection before using the client
- Automatic Reconnection: Reconnects every 5 seconds after disconnection
- Subscription Restoration: Automatically restores all active subscriptions after reconnect
- Tab Focus Recovery: Reconnects when browser tab regains focus
- Encapsulated State: Multiple client instances can coexist independently
Connection State Indicators
The SDK provides console messages to track connection state:
- ✅ Connected to [url] - WebSocket connection established
- ❌ Disconnected from [url] - Connection lost
- 🔁 Reconnecting... - Attempting to reconnect
- 🔄 Restoring subscriptions... - Resubscribing after reconnect
Real-time Subscriptions
Looking for get/set/update/remove/query? Those work the same on client and server — see Data Operations. The subscriptions below build on those primitives to deliver live updates.
Important: All subscription functions (getSub, getSubChanged,
querySub, and querySubChanged) immediately send the current data when the
subscription is created. This ensures your UI can display the current state right away, before any changes
occur.
Basic Subscriptions
Get real-time updates when data changes. All subscription functions immediately send the current data when the subscription is created, then continue to send updates whenever the data changes:
// Subscribe to changes on a path
const unsubscribe = getSub({
event: "value@",
path: ["users", "john"]
}, event => {
// This fires immediately with current data, then on every change
console.log("User data:", event.data);
});
// When finished listening
unsubscribe();
Query Subscriptions
Subscribe to data matching specific conditions:
// Subscribe to active users
const unsubscribe = querySub({
event: "value@",
path: ["users"],
query: "child.status == 'online'"
}, event => {
// Receives all currently online users immediately, then updates
const onlineUsers = event.data;
updateOnlineUsersList(onlineUsers);
});
Query Subscriptions with childPath
Just like regular queries, subscriptions can use childPath to subscribe only to specific nested portions of your data:
// Subscribe to public profiles only (excludes private data)
const unsubscribe = querySub({
event: "value@",
path: ["users"],
childPath: ["public"],
query: "child.verified == true"
}, event => {
// Receives the public subtree wrapped under its childPath key,
// mirroring the DB shape. "private" is never traversed or sent.
// Response: { matt123: { public: { verified: true, name: "Matt", ... } } }
displayVerifiedUsers(event.data);
});
// Subscribe to inventory changes for low stock items
const unsubscribe2 = querySub({
event: "value@",
path: ["products"],
childPath: ["inventory"],
query: "child.stock < 10"
}, event => {
// Only the inventory subtree is fetched and pushed; product details
// outside "inventory" never enter the payload.
// Response: { productA: { inventory: { stock: 5, ... } } }
showLowStockAlert(event.data);
});
// Use with querySubChanged for efficient updates
const unsubscribe3 = querySubChanged({
event: "value@",
path: ["users"],
childPath: ["profile"],
query: "child.country == 'USA'"
}, event => {
// Only fires when USA profiles change
// Only returns the profile portion that changed
console.log("Updated USA profiles:", event.data);
});
Benefits of childPath with subscriptions:
- Reduced bandwidth: Only transmit the data portions you need
- Security: Never receive data that might be blocked by read rules
- Performance: Smaller payloads mean faster real-time updates
- Clean data: Clients receive exactly the structure they expect
Changed-Only Subscriptions
Despite the name, these subscriptions ALSO receive the initial data immediately when created, then only fire again when data actually changes:
Important for getSubChanged and querySubChanged: What you receive depends on what path you're watching:
- If watching "users" and John updates his name, you get John's COMPLETE object (all fields)
- If watching "users.john" and a field changes, you get ONLY the changed field (e.g., just {name: "New Name"})
- If watching "users.john.name" and it changes, you get just the new name value
- The deeper your watch path, the more specific the change data
// getSubChanged - watching a collection
const unsubscribe = getSubChanged({
event: "value@",
path: ["users"]
}, event => {
// Initial: all users
// If John updates his email:
// event.data = { john: { name: "John", email: "new@email.com", age: 25 } }
// You get John's COMPLETE object
updateChangedUsers(event.data);
});
// getSubChanged - watching a specific user
const unsubscribe2 = getSubChanged({
event: "value@",
path: ["users", "john"]
}, event => {
// Initial: John's complete data
// If John's email changes:
// event.data = { email: "new@email.com" }
// You get ONLY the changed field
Object.assign(currentUser, event.data); // Merge changes
});
// getSubChanged - watching a specific field
const unsubscribe3 = getSubChanged({
event: "value@",
path: ["users", "john", "status"]
}, event => {
// Initial: "online"
// If status changes:
// event.data = "offline"
// You get just the new value
updateStatusIndicator(event.data);
});
// With query filtering - returns only the changed items
const unsubscribe4 = querySubChanged({
event: "value@",
path: ["users"],
query: "child.age > 21"
}, event => {
// If user John (age 25) updates only his name:
// event.data = { john: { name: "John Doe", age: 25, email: "john@example.com" } }
// You get John's COMPLETE object, not just the changed name field
console.log("Users that changed:", event.data);
});
// Example: monitoring low stock products
const unsubscribe5 = querySubChanged({
event: "value@",
path: ["products"],
query: "child.stock < 5"
}, event => {
// If product ABC updates its price, you get:
// { ABC: { name: "Widget", stock: 3, price: 29.99 } }
// The complete product object for ONLY the product that changed
Object.keys(event.data).forEach(productId => {
updateSingleProduct(productId, event.data[productId]);
});
});
Operation-Specific Subscriptions
Listen for specific types of operations by prefixing your path with an operation type:
Available operation types:
- `value@` - Fires on any change (set, update, or remove)
- `set@` - Fires only when data is created or completely replaced
- `update@` - Fires only when existing data is partially updated
- `remove@` - Fires only when data is deleted
Compatibility: Operation prefixes work with all subscription functions:
getSub, getSubChanged, querySub, and querySubChanged.
// Listen only for updates to user data
const unsubscribe = getSub({
event: "update@",
path: ["users", "john"]
}, event => {
console.log("User was updated:", event.data);
});
// Listen for new data being set
const unsubscribe2 = getSub({
event: "set@",
path: ["orders"]
}, event => {
console.log("New order created:", event.data);
});
// Listen for data removal
const unsubscribe3 = getSub({
event: "remove@",
path: ["users"]
}, event => {
console.log("A user was deleted:", event.path);
});
// Operation-specific with getSubChanged
const unsubscribe4 = getSubChanged({
event: "set@",
path: ["products"]
}, event => {
// Only fires when NEW products are created (not updates)
console.log("New products added:", event.data);
});
// Operation-specific with queries
const unsubscribe5 = querySub({
event: "update@",
path: ["users"],
query: "child.status == 'premium'"
}, event => {
// Only fires when premium users are UPDATED (not created or deleted)
console.log("Premium users updated:", event.data);
});
// Combining with querySubChanged
const unsubscribe6 = querySubChanged({
event: "remove@",
path: ["tasks"],
query: "child.completed == true"
}, event => {
// Only fires when completed tasks are DELETED
console.log("Completed tasks removed:", event.data);
});
// Default behavior without prefix (same as value@)
const unsubscribe7 = getSub({
path: ["users", "john"]
}, event => {
// Fires on ANY change: set, update, or remove
// event parameter defaults to "value@" if not specified
console.log("Something changed:", event.data);
});
Subscription Bubble-Up Behavior
Understanding how subscription changes propagate is crucial for designing efficient real-time applications. NukeBase subscriptions follow a "bubble-up" pattern:
Key Concept: Changes Bubble UP, Not DOWN
- Bubble UP ✅: Changes at child paths trigger parent subscriptions
- No Trickle DOWN ❌: Changes at parent paths do NOT trigger child subscriptions
// Set up subscriptions at different levels
getSub({
event: "value@",
path: ["calls"]
}, (event) => {
console.log("1. Calls level:", event.data);
});
getSub({
event: "value@",
path: ["calls", "123"]
}, (event) => {
console.log("2. Specific call:", event.data);
});
getSub({
event: "value@",
path: ["calls", "123", "answer"]
}, (event) => {
console.log("3. Answer level:", event.data);
});
// Scenario 1: Change at deep level (bubbles UP)
await set(["calls", "123", "answer"], { type: "answer", sdp: "..." });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (bubbled up)
// ✅ Fires: 3. Answer level (direct match)
// Scenario 2: Change at middle level (bubbles UP, not DOWN)
await update(["calls", "123"], { status: "active" });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (direct match)
// ❌ NOT fired: 3. Answer level (no trickle down)
// Scenario 3: Change at top level (no trickle DOWN)
await set(["calls"], { "456": { offer: {...} } });
// ✅ Fires: 1. Calls level (direct match)
// ❌ NOT fired: 2. Specific call (no trickle down)
// ❌ NOT fired: 3. Answer level (no trickle down)
Practical Implications:
- Parent subscriptions are "catch-all": Watching users will fire for ANY change in ANY user or their properties
- Child subscriptions are specific: Watching users.john.email only fires when that exact path or its children change
- Performance consideration: Higher-level subscriptions fire more frequently due to bubble-up
- Data replacement warning: If you set() at a parent level, child subscriptions may stop working as their paths no longer exist
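The data-replacement warning can be made concrete with plain objects (no SDK required), since set() replaces a node wholesale while update() merges into it:

```javascript
// Plain-object illustration of set() vs update() semantics at a parent path.
const db = { calls: { "123": { answer: { type: "answer" }, status: "ringing" } } };

// update(["calls", "123"], {...}) merges: existing children survive.
Object.assign(db.calls["123"], { status: "active" });
console.log("answer" in db.calls["123"]); // true, the child path still exists

// set(["calls"], {...}) replaces: the old children are gone.
db.calls = { "456": { offer: {} } };
console.log(db.calls["123"]); // undefined, so a subscription at
// ["calls", "123", "answer"] can never fire again
```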
Calling Callables
Invoke server-side callables from the client using callableFunction(name, data). Callables are defined on the server with addCallable:
// Call the server callable
const response = await callableFunction("addNumbers", { num1: 5, num2: 7 });
console.log(`The sum is: ${response.data}`); // Output: The sum is: 12
The first argument is the callable's name (the string you passed to addCallable on the server). The second is any payload — it arrives as the data argument inside the callable. The server's return value is delivered as response.data.
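For reference, the server half of this pair might look like the sketch below. The registration line is an assumption (this document only states that callables are defined with addCallable); the name/data/return contract matches the client call above:

```javascript
// Hypothetical server-side registration (sketch): the handler receives the
// client's payload as `data`, and its return value arrives as response.data.
// addCallable("addNumbers", addNumbers);

// The handler itself is plain JavaScript:
function addNumbers(data) {
  return data.num1 + data.num2;
}

console.log(addNumbers({ num1: 5, num2: 7 })); // 12
```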
Ultra-Low Latency Performance
Callables run over the existing WebSocket connection, providing the fastest possible way to communicate with your server. No HTTP handshake, no new connection — perfect for real-time games, live collaboration, and any application where milliseconds matter.
Callables are especially powerful when you need to:
- Aggregate data from multiple database paths
- Perform complex calculations server-side
- Validate game moves or business logic
- Return processed results without exposing raw data
Example Use Cases: Game state calculations, leaderboard generation, real-time analytics, complex permission checks, or any scenario where you need to fetch multiple database values, process them, and return a calculated result.
Authentication
NukeBase provides a built-in cookie-based authentication system. When you configure authPath: ["users"] in your domain setup, authentication endpoints are automatically available and cookies are handled seamlessly.
How it works:
- Configure authPath: ["users"] in your domain setup
- Use the built-in authentication endpoints from your client
- Server automatically sets HTTP cookies (uid, token)
- WebSocket connections automatically use these cookies
- User information populates the admin object for security rules
Authentication Endpoints
NukeBase automatically provides these authentication endpoints when authPath is configured:
Available Endpoints:
- POST /login - Login with username/password, or resume session via cookies (can also upgrade a demo account that has not yet set credentials by providing username/password)
- POST /createuser - Create a new account. Three modes:
- No credentials → demo account (anonymous, immediate login)
- Username (email) only → passwordless account; the server emails a magic-link sign-in
- Username + password → full account, immediate login
- POST /logout - Clear authentication cookies and revoke the current session token
- POST /changepassword - Change user password (requires an active password-backed session)
- POST /magic-link - Email a one-time sign-in link to an existing account (15-minute expiry). See Magic Link Authentication.
- GET /magiclink?token=... - Public landing URL the magic-link email points to. Validates the token, sets session cookies, and 302-redirects to magicLinkRedirect.
Cookies set on success: all auth endpoints that establish a session set two cookies — uid and token — with attributes HttpOnly; Secure; SameSite=Strict; Max-Age=86400; Path=/. That gives you a 24-hour session backed by a server-side hashed token. /logout sets matching Max-Age=0 cookies to clear them.
Because Secure is required, the cookies will not be set over plain http://. Use https:// in production and http://localhost in development (browsers exempt localhost from the Secure requirement).
Built-in security — you don't need to add these yourself:
- Rate limiting: 5 attempts per 60-second window per client IP on /login, /createuser, and /magic-link. A successful login clears the limiter for that IP.
- Argon2 password hashing: passwords are stored as argon2id hashes. Plaintext passwords from legacy data are auto-migrated on first successful login.
- SHA-256 hashed session tokens: the cookie holds the raw token, but only its SHA-256 hash is stored under auth.tokens. A leak of the database does not leak usable session tokens.
- Hourly token cleanup: a background job sweeps expired entries from auth.tokens and from the rate-limit map. No manual housekeeping is required.
- Username enumeration resistance: /magic-link returns the same response whether or not the address corresponds to an existing account.
Username and password format constraints:
- Username: matches /^[a-z0-9@.]{1,32}$/. Inputs are lowercased and trimmed before validation, so case-variant duplicates can't coexist.
- Password: 8–128 characters. Enforced on both /createuser and /changepassword.
The same regex powers the email-based magic-link flow, so any address you accept must also fit it (no +, no uppercase, no longer than 32 chars).
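You can sanity-check inputs against these constraints on the client before submitting. The regex below is copied from the rule above, and the normalize step mirrors the server's lowercase-and-trim behavior:

```javascript
// Mirror of the documented username rule: lowercase + trim, then validate.
const USERNAME_RE = /^[a-z0-9@.]{1,32}$/;
const normalizeUsername = (u) => String(u).trim().toLowerCase();

console.log(USERNAME_RE.test(normalizeUsername("  Matt@Example.com "))); // true
console.log(USERNAME_RE.test(normalizeUsername("user+tag@example.com"))); // false, "+" not allowed
console.log(USERNAME_RE.test(normalizeUsername("a".repeat(33))));         // false, over 32 chars
```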
Behind a reverse proxy? By default the rate limiter uses the socket-level peer address. If your server sits behind nginx, Caddy, or a load balancer, set the TRUST_PROXY environment variable to "true" or "1" — the server will then honor x-real-ip / x-forwarded-for when computing the client IP. Without TRUST_PROXY set, those headers are intentionally ignored so a client cannot rotate them per request to bypass the limiter.
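The proxy-aware IP resolution described above amounts to logic like this sketch (an assumption for illustration, not the actual server source):

```javascript
// Resolve the client IP for rate limiting. Forwarding headers are honored
// only when TRUST_PROXY is explicitly enabled, so clients can't spoof them.
function clientIp(req) {
  const trust = process.env.TRUST_PROXY === "true" || process.env.TRUST_PROXY === "1";
  if (trust) {
    const forwarded = req.headers["x-real-ip"] || req.headers["x-forwarded-for"];
    // x-forwarded-for may be a comma-separated chain; take the first hop.
    if (forwarded) return String(forwarded).split(",")[0].trim();
  }
  return req.socket.remoteAddress;
}

// Example with a mock request object:
const mockReq = {
  headers: { "x-forwarded-for": "203.0.113.9, 10.0.0.1" },
  socket: { remoteAddress: "10.0.0.1" }
};
console.log(clientIp(mockReq)); // "10.0.0.1" unless TRUST_PROXY is set
```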
Login
Use the /login endpoint to log in with a username and password. If valid auth cookies are already on the request, the session is resumed automatically; the password is never checked against the database in that case. The endpoint can also upgrade a demo account: if the cookie session belongs to an account that has no username and no password yet (i.e. a true demo account that has never been upgraded), passing (username, password) attaches credentials to that same UID. Once an account has a username or password, this upgrade path is no longer available; subsequent calls just resume the existing session.
// Import and destructure the methods you need
import createClient from './sdkmod.js';
const { login } = await createClient();
// Login with username and password
const result = await login("username", "password");
if (result && result.status === "Success") {
console.log('Authenticated as:', result.username);
}
// Resume session (if cookies are already set)
const result2 = await login();
if (result2 && result2.status === "Success") {
console.log('Session resumed:', result2.uid);
}
// Upgrade a demo account to a full account
// Preconditions: valid demo cookies AND the account currently has
// no username and no password set. After the first upgrade, calling
// login(...) again just resumes the existing session — it will not
// overwrite the username/password.
const result3 = await login("newUsername", "newPassword");
if (result3 && result3.status === "Success") {
console.log('Demo account upgraded:', result3.username);
}
Create Account
Create a new account using the /createuser endpoint. Three modes are supported:
- No arguments — creates a demo (anonymous) account and immediately logs the user in.
- Username only — creates a passwordless account and emails a magic-link sign-in. The response does not log the user in; they have to click the link in their email. See Magic Link Authentication.
- Username + password — creates a full account and immediately logs the user in.
All three modes fail with "Already signed in, logout first" if the caller already has a valid session cookie.
import createClient from './sdkmod.js';
const { createUser } = await createClient();
// Create a demo/anonymous account (no credentials) — logs in immediately
const result = await createUser();
if (result && result.status === "Success") {
console.log('Demo account created:', result.uid);
}
// Create a passwordless account (email only) — emails a magic-link sign-in.
// The user is NOT logged in by this call; they have to click the link.
const result2 = await createUser("matt@example.com");
if (result2 && result2.status === "Success") {
console.log(result2.message); // "Account created. Check your email for a sign-in link."
}
// Create a full account with username and password — logs in immediately
const result3 = await createUser("myUsername", "myPassword");
if (result3 && result3.status === "Success") {
console.log('Account created:', result3.username);
}
// Note: Fails if already signed in - logout first
Logout
Clear authentication cookies to log out the user:
import createClient from './sdkmod.js';
const { logout } = await createClient();
const result = await logout();
if (result && result.status === "Success") {
console.log('Logged out successfully');
}
Change Password
Allow authenticated users to change their password:
import createClient from './sdkmod.js';
const { changePassword } = await createClient();
const result = await changePassword("newPassword123");
if (result && result.status === "Success") {
console.log('Password changed successfully');
} else {
console.log('Failed to change password');
}
Magic Link Authentication
NukeBase has built-in passwordless sign-in via emailed one-time links. Two endpoints power the flow, and account creation can also bootstrap into it.
Setup requirement: magic-link emails are sent through SendGrid. Configure your domain with a sendGridKey:
const nukebase = addDomain({
authPath: ["users"],
sendGridKey: process.env.SENDGRID_API_KEY,
magicLinkRedirect: "/dashboard" // optional, defaults to "/"
});
Without sendGridKey, the endpoints still exist but the email send will fail. Without process.env.DOMAIN set, the link URL will be malformed.
POST /magic-link — request a sign-in link for an existing account
const r = await fetch("/magic-link", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ email: "matt@example.com" })
});
const { status, message } = await r.json();
// status === "Success" whether or not the address exists (anti-enumeration)
Body: { email: string }. The address must match the standard username regex (lowercased internally). The response is the same regardless of whether the account exists, so a bad actor cannot use this endpoint to enumerate users. The IP-based rate limiter applies (5 / 60s).
GET /magiclink?token=... — consume a sign-in link
This is the URL that the email points at; users don't call it from code. On a valid, unexpired token the server:
- Generates a new 32-byte session token and stores its SHA-256 hash under users.<uid>.auth.tokens with a 24-hour expiry,
- Sets uid and token cookies (HttpOnly; Secure; SameSite=Strict),
- Issues a 302 Found redirect to ${process.env.DOMAIN}${magicLinkRedirect}.
Tokens are single-use (deleted on consumption) and expire 15 minutes after issue. An invalid or expired token redirects to ?error=invalid_or_expired; a missing token redirects to ?error=missing_token.
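On the page you redirect to, you can surface those error query params to the user. A minimal sketch (the messages and the function name are placeholders; only the error codes come from this document):

```javascript
// Map the documented ?error=... codes to user-facing messages.
function magicLinkError(search) {
  const err = new URLSearchParams(search).get("error");
  if (err === "invalid_or_expired") return "That sign-in link is invalid or has expired. Request a new one.";
  if (err === "missing_token") return "That sign-in link is malformed. Request a new one.";
  return null; // no error: cookies were set and the user is signed in
}

// On the landing page you'd call: magicLinkError(window.location.search)
console.log(magicLinkError("?error=invalid_or_expired"));
console.log(magicLinkError("")); // null
```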
Passwordless account creation
Pass a username to /createuser with no password to create an account that exists only for magic-link sign-in. The server creates the account and immediately sends a sign-in link — the response does not log the user in; they have to click the link in their email.
// Email-only signup — user must click the link in their inbox to log in
const result = await createUser("matt@example.com");
// result.status === "Success", result.message tells the user to check email
// (no uid / token returned here — the session is established by /magiclink)
Using Authentication in Security Rules
Once authenticated, the admin object is available in your security rules:
// In your rules.js
module.exports = {
"users": {
"$userId": {
// Only the user themselves can edit
"write": "admin.uid == $userId",
// Don't grant read at $userId — it would cascade and expose "private".
// Split the data into public/private subnodes and grant read on each.
"public": { "read": "true" }, // Anyone can read public profile
"private": { "read": "admin.uid == $userId" } // Only the user
}
},
"adminPanel": {
// Only users with admin role can access
"read": "admin.claims.role == 'admin'",
"write": "admin.claims.role == 'admin'"
}
};
Security Notes:
- Use HTTPS in production — the auth cookies are Secure-flagged and will not be set over plain HTTP (browsers exempt localhost for development).
- Rate limiting on /login, /createuser, and /magic-link is built in (5 attempts per 60-second window per client IP). If you sit behind a reverse proxy, set TRUST_PROXY=true so the limiter sees the real client IP.
- Expired session tokens are swept automatically every hour — no cleanup script needed.
- Passwords are hashed with argon2id; session tokens are SHA-256 hashed before storage. Legacy plaintext passwords auto-upgrade on first successful login.
- generateRequestId(bytes) uses crypto.randomBytes and returns a hex string. Defaults to 8 bytes (16 hex chars); session tokens use 32 bytes (64 hex chars).
Custom Claims
Custom claims let you attach arbitrary data (roles, permissions, plan tiers, etc.) to a user's auth record. Claims are available in security rules and callable functions via admin.claims.
// Set all claims at once
set(["users", uid, "auth", "claims"], { role: "admin", plan: "pro" });
// Update or add a single claim
update(["users", uid, "auth", "claims"], { role: "editor" });
// Remove a single claim
remove(["users", uid, "auth", "claims", "role"]);
module.exports = {
"adminPanel": {
"read": "admin.claims.role == 'admin'",
"write": "admin.claims.role == 'admin'"
},
"premiumContent": {
"read": "admin.claims.plan == 'pro'"
}
};
Important: Claims are read when a WebSocket connection is established. If you change a user's claims while they are connected, the changes won't take effect until their next connection (page reload, reconnect, or new login). Users without any claims will have admin.claims default to an empty object {}.
Database Structure for Authentication
The authentication system expects user data to be structured like this:
"users": {
"ML96SDE5": { // Unique user UID (hex, generated by generateRequestId)
"auth": {
"username": "matt123", // Lowercased username/email (optional for demo accounts)
"password": "$argon2id$v=19$m=65536,...", // argon2id hash — never plaintext
"tokens": {
// Keys are SHA-256 hashes of the raw session token (the cookie value).
// Values are expiration timestamps in ms since epoch.
"8f3a...e21c": 1748357368415,
"b12d...07ff": 1748357670935
},
"claims": { // Optional custom claims (free-form object)
"role": "admin",
"plan": "pro"
}
}
}
}
What's actually stored vs. what the cookie holds:
- Password: stored as an argon2id hash. A legacy plaintext value will be auto-upgraded to a hash on the next successful login.
- Session tokens: the cookie holds the raw 32-byte hex token; the database stores only its SHA-256 hash as the key under auth.tokens. You cannot reconstruct a valid cookie from the database alone.
- Don't try to read tokens out of auth.tokens at runtime — they're hashes, not the values you'd put back into a cookie.
Token cleanup is automatic. A background sweep runs hourly and removes any auth.tokens entries whose expiry has passed, plus stale entries from the in-memory rate-limit map. You don't need to schedule your own cleanup.
Response Format
All NukeBase operations return a standardized response object:
{
// The operation performed
action: "get",
// Data from the operation
data: {
"user123": { name: "John", age: 32 },
"user456": { name: "Jane", age: 28 }
},
// For tracking the request
requestId: "RH8HZX9P",
// Success or Failed
status: "Success"
}
When an error occurs, the response includes:
{
status: "Failed",
message: "Error description here"
}
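Given this shape, a small helper can unwrap responses uniformly (unwrap is a hypothetical name for illustration, not part of the SDK):

```javascript
// Return .data on Success, throw the server's message on Failed.
// Works with the result of any NukeBase operation.
function unwrap(result) {
  if (result.status !== "Success") {
    throw new Error(result.message || `${result.action} failed`);
  }
  return result.data;
}

// e.g. const users = unwrap(await get(["users"]));
console.log(unwrap({ action: "get", data: { a: 1 }, requestId: "RH8HZX9P", status: "Success" }));
```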
Complete Client NukeBase SDK with createClient()
Here's a complete example using the new modular SDK:
<script type="module">
import createClient from './sdkmod.js';
// Destructure all the methods you need
const { set, get, update, query, callableFunction, getSub, querySub,
getSubChanged, querySubChanged } = await createClient();
console.log('✅ Connected to NukeBase');
// Set data
await set(["users", "matt"], {
name: "Matt",
color: "red",
count: 0
});
// Get data
const sessions = await get(["sessions"]);
console.log('Sessions:', sessions.data);
// Update data
await update(["users", "matt"], {
leadsSent: "Pending"
});
await update(["users", "matt", "count"], 5);
// Query data
const results = await query({
path: ["sessions"],
query: "child.count > 0"
});
console.log('Query results:', results.data);
// Call a server callable
const functionResult = await callableFunction("custom1", 23);
console.log('Function result:', functionResult);
// Subscribe to changes
const unsubscribe1 = getSub({
event: "value@",
path: ["sessions"]
}, data => {
console.log('Sessions updated:', data);
});
// Query subscription
const unsubscribe2 = querySub({
event: "value@",
path: ["sessions"],
query: "child.count == 4"
}, data => {
console.log('Matching sessions:', data);
});
// Changed-only subscription
const unsubscribe3 = getSubChanged({
event: "value@",
path: ["sessions"]
}, data => {
console.log('Changed sessions:', data);
});
// Query changed subscription
const unsubscribe4 = querySubChanged({
event: "value@",
path: ["sessions"],
query: "child.count != 4"
}, data => {
console.log('Changed query results:', data);
});
// Later, to unsubscribe:
// unsubscribe1();
// unsubscribe2();
// unsubscribe3();
// unsubscribe4();
</script>