Server-Side API
NukeBase is a managed service that provides instant provisioning and deployment. Your project structure includes:
- server/database.js: The core database engine
- server/data.json: Your database file
- server/rules.js: Security rules configuration
- server/app.js: Your application configuration file
- public/: Frontend files (index.html, css, js, etc.)
- sys/deploy.js: Deploy program
- sys/config.json: Deployment configuration
- node_modules/: Dependencies (auto-generated)
- package.json: NPM package configuration
- package-lock.json: Dependency lock file
Setup and Initialization
Getting started with NukeBase is simple - provision your project through our managed service and start developing immediately.
Step 1: Create Your Project
Getting started is as simple as visiting a URL in your browser:
1. Visit: https://nukebase.com/createuser
2. Fill in your project details (username, project name)
3. Click "Provision & Download"
4. Your project zip will download automatically
Instant Deployment: Your project is automatically provisioned, deployed, and live at:
https://username-project.nukebase.com
No build steps, no server configuration - just download the zip and start coding!
Step 2: Local Development Setup
After provisioning, extract the downloaded zip file and set up your VS Code workspace:
# 1. Extract the downloaded project zip file
# Right-click the .zip file and select "Extract All"
# 2. Open VS Code
# File → Add Folder to Workspace → Select your extracted project folder
# 3. Open Terminal in VS Code
# Terminal → New Terminal → Select Folder As Directory
# 4. Install NukeBase CLI globally
npm install -g # run in the project root: installs this project's NukeBase CLI globally
# 5. Install NukeBase NPM Packages
npm install
# 6. Now you can use NukeBase commands:
nukebase push # Push local changes to live server
nukebase pull # Pull live server changes to local
NukeBase CLI Commands:
- nukebase push - Upload your local changes to the live server
- nukebase pull - Download the latest changes from the live server
Changes are synced in real-time, allowing you to develop locally and deploy instantly.
Note: push and pull immediately add or remove files and folders so the server and client mirror each other. Files and folders listed under "exclude" in sys/config.json (e.g. "exclude": ["sys", "server/data.json"]) are left untouched.
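As a sketch, a sys/config.json with an exclude list might look like the following (a minimal illustration; your generated file may contain additional deployment settings):

```json
{
  "exclude": ["sys", "server/data.json"]
}
```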
Step 3: Start Developing
Your project structure is ready to use:
- /public: Edit your frontend files (HTML, CSS, JavaScript)
- /server/app.js: Configure backend logic, domains, and database triggers
- /server/rules.js: Define security rules for data access
- /server/data.json: Your real-time database (auto-synced)
Hot Reload: Changes to your /public files are instantly reflected on your live site. Backend changes in /server/app.js are automatically deployed.
Basic Server Configuration Structure
Your server/app.js file uses a module export pattern that provides access to all NukeBase APIs:
module.exports = ({
addFunction,
addWsFunction,
get,
set,
update,
remove,
query,
generateRequestId,
data,
addDomain,
startDB,
onConnection,
onClose,
checkAuth
}) => {
const path = require("path");
const nukebase = addDomain({
authPath: ["users"],
host: "127.0.0.1", // optional - defaults to "127.0.0.1"
port: 3000 // optional - defaults to 3000
});
nukebase.app.serveStatic("/*", path.join(__dirname, "../public"),
(req, res) => { return true; }
);
startDB(nukebase);
}
Security Rules
NukeBase uses a JSON-based security rules system to control access to your database. Rules are defined in
server/rules.js and are evaluated for every database operation.
Available Variables in Rules:
- admin - The authenticated user object with properties:
  - admin.uid - User's unique ID
  - admin.username - User's username
  - admin.claims - Custom claims object (roles, permissions, etc.)
- root - The database object at the top level
- data - The current/old value at the path being accessed
- newData - The new value being written (for write/validate rules)
- $variables - Wildcard captures like $userId, $postId
Rule Types
Three types of rules control different aspects of data access:
- read - Controls who can read data at a path (triggered by get() operations)
- write - Controls who can create, update, or delete data (triggered by set(), update(), and remove() operations)
- validate - Ensures data meets specific requirements (triggered by set() and update() operations)
How Rules Are Checked:
Parent rules override child rules. When checking a path like users.john.email,
NukeBase checks rules starting from the top:
1. Check users - If this denies access, STOP (don't check deeper rules)
2. Check users.john - If this denies access, STOP
3. Check users.john.email - Final check
If ANY parent rule denies access, the operation fails. All matching rules must pass.
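The top-down cascade can be sketched in plain JavaScript. This is only an illustration of the checking order described above, not the actual rules engine:

```javascript
// Illustration of the top-down cascade: evaluate each level from the
// root; the first denial stops the walk and fails the operation.
function checkCascade(ruleChain) {
  for (const rule of ruleChain) {
    if (!rule.allow) return { allowed: false, deniedAt: rule.path };
  }
  return { allowed: true };
}

// Checking users.john.email: the users.john rule denies, so the
// deeper users.john.email rule is never even reached.
const result = checkCascade([
  { path: "users", allow: true },
  { path: "users.john", allow: false },
  { path: "users.john.email", allow: true },
]);
// result: { allowed: false, deniedAt: "users.john" }
```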
Rule Matching at Same Level:
- Read/Write rules: If you have both exact (pets) and wildcard ($other) rules at the same level, BOTH must pass for access to pets.
- Validate rules: Only the most specific rule matches. Exact match (pets) takes priority over wildcard ($other).
// These two rules are at the SAME LEVEL (both are direct children of the parent)
module.exports = {
"pets": {
"read": "true", // Rule 1: Anyone can read pets
"write": "admin.claims.role == 'petOwner'", // Rule 2: Must be pet owner
"validate": "newData.type == 'cat' || newData.type == 'dog'" // Only cats/dogs
},
"$other": { // ← This is at the SAME LEVEL as "pets" above
"read": "admin.claims.role == 'admin'", // Rule 3: Must be admin
"write": "false", // Rule 4: No writes allowed
"validate": "newData != null" // Not empty
}
}
// When accessing "pets":
// Read: BOTH "true" AND "admin.claims.role == 'admin'" must pass → Fails for non-admins!
// Write: BOTH "admin.claims.role == 'petOwner'" AND "false" must pass → Always fails!
// Validate: ONLY the "pets" rule applies (most specific)
Basic Example
module.exports = {
"users": {
"$userId": {
"read": "true", // Anyone can read user profiles
"write": "admin.uid == $userId", // Only the user can edit their profile
"email": {
"read": "admin.uid == $userId" // Email is private
}
}
}
};
Path Patterns
Rules support different path patterns to match your data structure:
| Pattern | Description | Example |
|---|---|---|
| users.john | Exact path matching | Matches only users.john |
| users.$userId | Wildcard matching | Matches users.alice, users.bob, etc.; the $userId variable captures the actual key |
| posts.$postId | Wildcard for collections | Matches any child: posts.abc, posts.xyz, etc. |
| messages.$msgId | Works with arrays too | Arrays are objects with numeric keys; matches messages.0, messages.1, messages.2 |
Arrays and Path Matching:
JavaScript arrays like ["red", "blue", "green"] are stored as objects with numeric keys:
{ "0": "red", "1": "blue", "2": "green" }
This means:
- colors.0 - Exact match for the first element
- colors.$index - Wildcard matches all elements (0, 1, 2, etc.)
- colors - Matches the array itself
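The mapping can be reproduced in a few lines of JavaScript. This is a sketch of the storage convention described above, not NukeBase internals:

```javascript
// Sketch of the storage convention: an array becomes an object
// whose keys are the stringified indices.
function arrayToObject(arr) {
  return Object.fromEntries(arr.map((value, i) => [String(i), value]));
}

const stored = arrayToObject(["red", "blue", "green"]);
// stored: { "0": "red", "1": "blue", "2": "green" }
```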
Operations and Their Rules
Different database operations trigger different combinations of rules:
| Operation | Rules Triggered | Description |
|---|---|---|
| get() | read | Only read rules are checked when retrieving data |
| set() | write + validate | Both write permission and data validation are required |
| update() | write + validate | Same as set() - must have permission and valid data |
| remove() | write | Only write rules are checked (newData is null) |
| query() | read | Read rules filter which items are returned |
Rule Types in Detail
Read Rules
Control who can read data at a specific path:
// Simple read rule
"posts": {
"read": "true", // Anyone can read posts
"$postId": {
"draft": {
"read": "admin.uid == data.authorId" // Only author can read drafts
}
}
}
// Using variables in paths
"users": {
"$userId": {
"read": "true", // Anyone can read user profiles
"email": {
"read": "admin.uid == $userId" // Only the user can read their own email
}
}
}
Write Rules
Control who can create, update, or delete data:
// Basic write rule
"posts": {
"$postId": {
"write": "admin.uid == data.authorId", // Only author can edit
"createdAt": {
"write": "!data" // Can only set createdAt when creating (no previous data)
}
}
}
// Demonstrating rule override hierarchy
"store": {
"write": "false", // No one can write to store (overrides all child rules)
"products": {
"write": "admin.claims.role == 'manager'", // This is ignored due to parent rule
"$productId": {
"write": "admin.uid == data.ownerId" // This is also ignored
}
}
}
Validate Rules
Ensure data integrity and format requirements:
// Simple field validation
"users": {
"$userId": {
"age": {
"validate": "newData >= 13 && newData <= 120"
},
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
}
}
// Validating objects with required fields
"posts": {
"$postId": {
"validate": "newData.title && newData.content && newData.title.length <= 200"
}
}
// Using data and newData to compare old and new values
"users": {
"$userId": {
"credits": {
// Ensure credits can only increase, not decrease
"validate": "newData >= data"
}
}
}
// Complex validation with multiple conditions
"products": {
"$productId": {
"validate": "newData.name && newData.price > 0 && newData.stock >= 0"
}
}
Array Validation
Arrays are validated using the same rule system, but understanding how paths are generated is essential for proper validation.
How Array Validation Works:
When you set/update an array, NukeBase generates validation paths for:
- The array itself - Path to the array as a whole
- Each array element - Individual paths like
["tags", "0"],["tags", "1"]
Arrays are treated as objects with numeric keys: ["red", "blue"] becomes {"0": "red", "1": "blue"}
// Example: update(["users", "john", "tags"], ["red", "blue", "green"])
// This generates paths:
// 1. ["users", "john", "tags"] ← Entire array
// 2. ["users", "john", "tags", "0"] ← Element 0: "red"
// 3. ["users", "john", "tags", "1"] ← Element 1: "blue"
// 4. ["users", "john", "tags", "2"] ← Element 2: "green"
// METHOD 1: Validate the ENTIRE array
"users": {
"$userId": {
"tags": {
// newData = entire array ["red", "blue", "green"]
"validate": "Array.isArray(newData) && newData.length <= 5"
}
}
}
// METHOD 2: Validate EACH element using wildcard
"users": {
"$userId": {
"tags": {
"$index": { // $index matches "0", "1", "2", etc.
// newData = individual element ("red", "blue", or "green")
"validate": "typeof newData === 'string' && newData.length < 20"
}
}
}
}
// METHOD 3: COMBINE both approaches
"users": {
"$userId": {
"tags": {
// Validate array properties
"validate": "Array.isArray(newData) && newData.length <= 5",
"$index": {
// Validate each element
"validate": "typeof newData === 'string' && newData.length < 20"
}
}
}
}
// Complex array validation with element uniqueness check
"users": {
"$userId": {
"favoriteColors": {
"$index": {
// Each color must be a valid hex code
"validate": "typeof newData === 'string' && /^#[0-9A-F]{6}$/i.test(newData)"
}
}
}
}
Important: Both the array-level rule AND element-level rules must pass. If you have rules at both levels, all of them are checked.
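The path generation described above can be sketched as follows (an illustration of the behavior, not the actual engine):

```javascript
// One validation path for the array as a whole, plus one per element.
function validationPaths(basePath, arr) {
  return [basePath, ...arr.map((_, i) => [...basePath, String(i)])];
}

const paths = validationPaths(["users", "john", "tags"], ["red", "blue", "green"]);
// paths[0] is the whole-array path; paths[1..3] are the element paths.
```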
Available Variables
Rules have access to several context variables:
| Variable | Description | Available In |
|---|---|---|
| data | Current value at the path (before changes) | All rule types |
| newData | Value after the write operation | write, validate |
| root | Current database root | All rule types |
| admin | Authentication object with uid, username, and claims | All rule types |
| $variables | Values from wildcard path segments | All rule types |
Best Practices
- Start with restrictive rules, then add exceptions as needed
- Use validate rules to ensure data integrity
- Test rules thoroughly before deploying to production
- Keep rules simple and readable
- Only one validate rule per path - combine conditions with && or ||
- Remember that multiple read/write rules can match at the same depth, but higher rules override deeper ones
- Validate rules only match the most specific rule at a given path
Common Mistakes to Avoid
Mistake 1: Multiple validate rules on same path
// WRONG - Only the last validate rule will be used!
"email": {
"validate": "newData.includes('@')",
"validate": "newData.includes('.')" // This overwrites the first rule!
}
// CORRECT - Combine with &&
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
Database Triggers
Create event-driven functions that respond to database changes:
// Create a trigger for when a request is updated
addFunction("onUpdate", ["requests", "$requestId"], async function(context) {
// The context object contains all relevant information about the change
const afterNotes = context.dataAfter?.notes;
// Nothing to do if the data was removed or has no notes field
if (typeof afterNotes !== "string") {
return;
}
// Replace "pizza" with pizza emoji
const newNotes = afterNotes.replaceAll("pizza", "🍕");
// Avoid an infinite loop: only write if something actually changed
if (newNotes === afterNotes) {
return;
}
// Update the data with our modified version
update(context.path, { notes: newNotes });
});
Key components of database triggers:
- addFunction(eventType, pathArray, callbackFunction)
- Path arrays use wildcards like $userId to match any value at that position
Event Types
- "onSet" - Triggered when data is created or completely replaced
- "onUpdate" - Triggered when data is partially updated
- "onRemove" - Triggered when data is deleted
- "onValue" - Triggered for all changes (set, update, remove)
Path Patterns
Use an array path with wildcards to match specific data paths:
- ["users", "$userId"] - Matches any user path like ["users", "john"] or ["users", "alice"]
- ["posts", "$postId", "comments", "$commentId"] - Matches any comment on any post
Context Object
Your callback function receives a context object containing:
- context.path - The complete path array that was changed (e.g., ["orders", "abc123"])
- context.dataAfter - The data after the change (null for remove operations)
- context.dataBefore - The data before the change (null for new data)
Important: When modifying data within a trigger that affects the same path you're watching, always implement safeguards to prevent infinite loops, as shown in the example.
Complete Example: Order Processing
// React to new orders being created
addFunction("onSet", ["orders", "$orderId"], async function(context) {
// Only run if this is a new order (no previous data)
if (!context.dataBefore && context.dataAfter) {
// Extract orderId from the path array
const orderId = context.path[1];
// Update the order status
update(context.path, {
status: "processing",
processingStart: Date.now()
});
}
});
WebSocket Functions
Create custom server functions that clients can call through wsFunction:
addWsFunction("getUsersCount", async function (data, admin, sessionId) {
  // Get all users
  const res = get(["users"]);
  // Count how many users exist (guard against an empty database)
  const count = Object.keys(res.data || {}).length;
  // Return the count to the client
  return count;
});
WebSocket functions receive:
- data - the payload sent by the client
- admin - the authenticated user object (for protected operations)
- sessionId - the caller's session ID
Connection Events
Track client connections and disconnections:
// When a client connects
onConnection(function (admin, sessionId) {
// Record session start time
update(["sessions", admin.uid, sessionId], {
start: Date.now()
});
});
// When a client disconnects
onClose(function (admin, sessionId) {
// Record session end time
update(["sessions", admin.uid, sessionId], {
end: Date.now()
});
});
Starting the Database
Start the NukeBase server with configuration options by calling startDB() once at the end of your configuration:
// Basic setup - pass the domain object to startDB
const nukebase = addDomain({
authPath: ["users"],
host: "127.0.0.1", // optional
port: 3000 // optional
});
startDB(nukebase);
addDomain Configuration Options:
- authPath: Array - path to user authentication data (e.g., ["users"])
- host: String (optional) - the IP address to bind to
  - Use "127.0.0.1" to accept connections only from the local machine (default)
  - Use a specific IP address like "126.23.45.1" to bind to that server address
  - Use "0.0.0.0" to accept connections from any IP
- port: Number (optional) - the port to listen on (default: 3000)
- port: Number (optional) - the port to listen on (default: 3000)
HTTP POST Handlers
This section documents how to handle POST requests using app.post() and app.postWithBody() with the withBody middleware.
Overview
uWebSockets.js (uWS) handles HTTP differently than Express. The request body arrives in chunks via res.onData(), and the raw req object becomes invalid after the initial synchronous callback. The withBody middleware solves both problems by collecting the body, parsing it by content type, and passing a safe request object to your handler.
There are two ways to define POST routes:
| Method | Use when |
|---|---|
| app.post(pattern, handler) | You need raw control over body reading (e.g. streaming uploads) |
| app.postWithBody(pattern, handler) | You want the body parsed automatically (most routes) |
withBody Middleware
withBody wraps a handler to provide automatic body parsing and a normalized request object.
What it does
- Captures all headers and request metadata synchronously (before the raw req expires)
- Reads body chunks via res.onData()
- Parses the body based on Content-Type
- Builds a clean request object req with .body, .query, .cookies, .admin, .getHeader()
- Adds a res.aborted flag for abort tracking
- Adds a res.send() convenience method
- Wraps the handler in Promise.resolve() to catch async errors automatically
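The steps above can be sketched as a minimal wrapper. This is an illustration of the withBody pattern (collect, parse, dispatch), not NukeBase's actual implementation, and it only handles the JSON case:

```javascript
// Minimal withBody-style wrapper sketch: capture metadata synchronously,
// collect body chunks, parse JSON, then invoke the handler.
function withBodySketch(handler) {
  return (res, rawReq) => {
    // 1. Copy what we need before the raw request expires
    const req = {
      url: rawReq.getUrl(),
      contentType: rawReq.getHeader("content-type"),
    };
    const chunks = [];
    res.aborted = false;
    res.onAborted(() => { res.aborted = true; });
    // 2. Collect body chunks; copy each one because uWS reuses the buffer
    res.onData((chunk, isLast) => {
      chunks.push(Buffer.from(chunk.slice(0)));
      if (!isLast) return;
      // 3. Parse by content type and call the handler, catching async errors
      const raw = Buffer.concat(chunks);
      req.body = req.contentType.includes("application/json")
        ? JSON.parse(raw.toString() || "{}")
        : { data: raw };
      Promise.resolve(handler(res, req)).catch(() => {
        if (!res.aborted) {
          res.cork(() => {
            res.writeStatus("500 Internal Server Error");
            res.end();
          });
        }
      });
    });
  };
}
```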
Supported content types
| Content-Type | Parsed as |
|---|---|
| application/json | JSON.parse() → object |
| application/x-www-form-urlencoded | URLSearchParams → object |
| multipart/form-data | Fields → strings, files → [{filename, type, data}] arrays |
| text/plain | { text: "..." } |
| Any other (with data) | { data: Buffer } |
| Empty body | {} |
The request object
The handler receives (res, req) where req is a normalized object (not the raw uWS request):
{
host: "example.com", // Host header
url: "/api/orders", // URL path
query: { page: "1", limit: "10" }, // Parsed query string
cookies: { session: "abc123" }, // Parsed cookies
admin: { uid: "...", ... }, // Auth info from checkAuth()
body: { ... }, // Parsed body (see table above)
getHeader: (name) => "value", // Case-insensitive header lookup
}
The response object
res is the native uWS response with two additions:
| Property | Description |
|---|---|
| res.aborted | true if the client disconnected. Check before writing. |
| res.send(body, status?) | Convenience method: corks, sets status, and ends in one call. |
All native uWS methods remain available: res.writeStatus(), res.writeHeader(), res.write(), res.end(), res.cork(), res.onWritable().
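The behavior of a send-style helper can be sketched as follows. This is an assumed illustration based on the table above, not the actual implementation:

```javascript
// Sketch of a send-style helper: skip when the client is gone,
// otherwise cork, set the status, and end in one call.
function send(res, body, status = "200 OK") {
  if (res.aborted) return; // client disconnected: write nothing
  res.cork(() => {
    res.writeStatus(status);
    res.end(body);
  });
}
```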
Core Rules
Every uWS POST handler must follow these rules:
1. Always cork your writes
Every call to writeStatus, writeHeader, write, and end must be inside res.cork(). This batches them into a single syscall.
// ✅ Correct
res.cork(() => {
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ ok: true }));
});
// ❌ Wrong — writes outside cork
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ ok: true }));
2. Check res.aborted before writing after async work
If your handler does any await, the client may have disconnected by the time it resolves.
// ✅ Correct
app.postWithBody("/api/data", async (res, req) => {
const result = await db.query(req.body.id);
if (res.aborted) return; // Client gone, don't write
res.cork(() => {
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify(result));
});
});
// ❌ Wrong — no abort check after await
app.postWithBody("/api/data", async (res, req) => {
const result = await db.query(req.body.id);
res.cork(() => {
res.end(JSON.stringify(result)); // May crash if client disconnected
});
});
3. Never access the raw uWS req after the synchronous callback
The withBody middleware handles this for you — the req your handler receives is a plain JS object that's safe to use anytime. But if you use raw app.post(), you must copy everything synchronously.
Simple Routes (postWithBody)
For standard request/response routes, use postWithBody. The body is fully parsed before your handler runs.
Sync handler
app.postWithBody("/api/echo", (res, req) => {
res.cork(() => {
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ received: req.body }));
});
});
Async handler
app.postWithBody("/api/orders", async (res, req) => {
const order = await db.createOrder(req.body.customerId, req.body.items);
if (res.aborted) return;
res.cork(() => {
res.writeStatus("201 Created");
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify(order));
});
});
Using res.send() shortcut
res.send(body, status?) combines cork + writeStatus + end into one call. Automatically skips if res.aborted is true.
// These are equivalent:
res.cork(() => {
res.writeStatus("200 OK");
res.end(JSON.stringify({ ok: true }));
});
res.send(JSON.stringify({ ok: true })); // status defaults to "200 OK"
res.send(JSON.stringify({ ok: true }), "200 OK"); // explicit status, same result
res.send(JSON.stringify(order), "201 Created");
Note: res.send() does not set Content-Type. If you need headers, use res.writeHeader() inside cork() manually, or use the helper functions jsonOk and jsonError described below.
File upload (multipart)
Files are parsed into arrays of {filename, type, data} objects:
app.postWithBody("/api/upload", (res, req) => {
const files = req.body.avatar || []; // Field name from the form
const info = files.map((f) => ({
filename: f.filename, // Original filename
type: f.type, // MIME type (e.g. "image/png")
size: f.data.length, // Size in bytes
}));
// f.data is a Buffer containing the raw file bytes
res.cork(() => {
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ files: info }));
});
});
Error handling
Errors thrown in async handlers are caught automatically and return a 500 response:
// This automatically returns "500 Internal Server Error"
app.postWithBody("/api/risky", async (res, req) => {
throw new Error("Something broke");
});
// Parse errors (bad JSON, etc.) automatically return "400 Bad Request"
For explicit error responses, use helper functions:
function jsonError(res, status, message) {
res.cork(() => {
res.writeStatus(status);
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ error: message }));
});
}
function jsonOk(res, data) {
res.cork(() => {
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify(data));
});
}
// Usage
app.postWithBody("/api/items", (res, req) => {
if (!req.body.name) {
jsonError(res, "400 Bad Request", "Missing name");
return;
}
jsonOk(res, { created: req.body.name });
});
Streaming Routes (postWithBody)
For routes that send data over time using res.write(), you can still use postWithBody — the native res methods are untouched.
The key differences from simple routes:
- Set status and headers up front in a cork() call
- Use res.write() for each chunk (also inside cork())
- Call res.end() to finish the stream
- Handle backpressure with res.onWritable()
Streaming chunks over time
app.postWithBody("/api/stream", async (res, req) => {
const { chunks = 5, delay = 200 } = req.body;
// Send headers first
res.cork(() => {
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "text/plain");
});
// Send data chunks over time
for (let i = 0; i < chunks; i++) {
if (res.aborted) return;
await new Promise((resolve) => setTimeout(resolve, delay));
if (res.aborted) return;
res.cork(() => {
res.write(`chunk ${i + 1} of ${chunks}\n`);
});
}
// Finish the response
if (!res.aborted) {
res.cork(() => res.end("done\n"));
}
});
Streaming a file/process with backpressure
When piping from a stream (e.g. tar, zip, database cursor), handle backpressure to avoid overwhelming the socket:
app.postWithBody("/api/download", (res, req) => {
const { files } = req.body;
res.cork(() => {
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "application/gzip");
});
const stream = createSomeReadableStream(files);
stream.on("data", (chunk) => {
if (res.aborted) { stream.destroy(); return; }
res.cork(() => {
const ok = res.write(chunk); // Returns false if backpressure
if (!ok) {
stream.pause(); // Stop reading until socket drains
res.onWritable(() => {
stream.resume(); // Socket drained, resume reading
return true;
});
}
});
});
stream.on("end", () => {
if (!res.aborted) res.cork(() => res.end());
});
stream.on("error", (err) => {
if (!res.aborted) res.cork(() => res.end());
});
});
Raw POST (app.post)
Use raw app.post() when you need to handle the body yourself, typically for streaming uploads where you pipe incoming data directly to a destination.
With raw app.post(), you must:
- Copy headers and request data synchronously (raw req expires after the callback)
- Set res.onAborted() before reading any data
- Use res.onData() to receive body chunks
- Handle everything manually
const { PassThrough } = require("stream");
const tar = require("tar"); // npm "tar" package used for extraction
const targetDir = "/tmp/uploads"; // example destination (adjust as needed)
app.post("/api/upload", (res, req) => {
// Copy what you need synchronously: req dies after this callback
const project = req.getHeader("x-project");
const auth = checkAuth(req);
if (!auth.uid || !project) {
res.cork(() => {
res.writeStatus("400 Bad Request");
res.end("Missing auth");
});
return;
}
let aborted = false;
res.onAborted(() => { aborted = true; });
const passthrough = new PassThrough();
const extractor = tar.x({ cwd: targetDir });
passthrough.pipe(extractor);
extractor.on("finish", () => {
if (aborted) return;
res.cork(() => {
res.writeStatus("200 OK");
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ ok: true }));
});
});
extractor.on("error", (err) => {
if (!aborted) {
res.cork(() => {
res.writeStatus("500 Internal Server Error");
res.end(err.message);
});
}
});
res.onData((chunk, isLast) => {
if (aborted) return;
passthrough.write(Buffer.from(chunk.slice(0))); // Must copy — uWS reuses the buffer
if (isLast) passthrough.end();
});
});
When to use which
| Scenario | Method | Why |
|---|---|---|
| JSON API endpoint | postWithBody | Body parsed, async errors caught |
| Form submission | postWithBody | URL-encoded and multipart handled |
| File upload (save to disk) | postWithBody | Access req.body.fieldname as Buffer |
| Streaming upload (pipe to process) | app.post | Need raw onData to pipe chunks directly |
| Download/streaming response | postWithBody | Body parsing for parameters, then stream with res.write() |
| Simple CRUD | postWithBody + res.send() | Least boilerplate |
Quick Reference
Minimal postWithBody route
app.postWithBody("/api/example", (res, req) => {
res.cork(() => {
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify({ ok: true }));
});
});
Minimal async postWithBody route
app.postWithBody("/api/example", async (res, req) => {
const data = await someAsyncWork(req.body);
if (res.aborted) return;
res.cork(() => {
res.writeHeader("Content-Type", "application/json");
res.end(JSON.stringify(data));
});
});
Checklist
- All writes (writeStatus, writeHeader, write, end) inside cork()
- Check res.aborted after every await
- Use postWithBody unless you need raw streaming uploads
- Use jsonOk/jsonError helpers for consistent JSON responses
- Handle backpressure with res.onWritable() when streaming
- Never access raw uWS req after the synchronous callback
Server-Side Data Operations
The same data operations available on the client (get, set, update, remove, query) are also available server-side in your app.js file. However, there is one key difference:
Synchronous vs Asynchronous:
- Client-side: All operations are async and return Promises (use await or .then())
- Server-side: All operations are sync and return results directly (no await needed)
module.exports = ({ get, set, update, remove, query, addWsFunction, ... }) => {
// Server-side operations are SYNCHRONOUS - no await needed
// Get data directly
const user = get(["users", "john"]);
console.log(user.data); // { name: "John", age: 32 }
// Set data directly
set(["users", "alice"], { name: "Alice", age: 28 });
// Update data directly
update(["users", "john"], { lastLogin: Date.now() });
// Remove data directly
remove(["users", "oldUser"]);
// Query data directly
const activeUsers = query({
path: ["users"],
query: "child.status == 'active'"
});
console.log(activeUsers.data);
// Example: Using in a WebSocket function
addWsFunction("getUserStats", function(data, admin, sessionId) {
// All these operations complete immediately (sync)
const user = get(["users", admin.uid]);
const orders = query({
path: ["orders"],
query: `child.userId == '${admin.uid}'`
});
return {
username: user.data.name,
orderCount: Object.keys(orders.data || {}).length
};
});
// Example: Using in a database trigger
addFunction("onSet", ["orders", "$orderId"], function(context) {
// Sync operations in triggers
const user = get(["users", context.dataAfter.userId]);
update(["users", context.dataAfter.userId], {
lastOrderDate: Date.now()
});
});
};
Why synchronous? Server-side operations access the in-memory database directly, eliminating the need for network round-trips. This makes your server code simpler and faster.
Direct Database Access with data
The data export gives you direct read access to the raw in-memory database object. This can be useful for quickly reading values without the overhead of get():
module.exports = ({ data, get, set, ... }) => {
// Direct read — access the raw database object
const userName = data.users?.john?.name; // "John"
const allUsers = data.users; // { john: {...}, alice: {...} }
// Compared to using get():
const user = get(["users", "john"]);
console.log(user.data.name); // "John"
};
Important: The data object is a direct reference to the in-memory database. Use it for reading only. Always use set(), update(), and remove() to modify data — these functions handle subscriptions, triggers, security rules, and persistence. Writing directly to data will bypass all of these.
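To see why direct writes are dangerous, consider this toy store (hypothetical stand-ins, not NukeBase internals) where direct mutation silently skips the notification step that a proper write would perform:

```javascript
// Toy in-memory store: toySet() notifies subscribers, direct writes do not.
const store = { users: { john: { name: "John" } } };
const notifications = [];
function toySet(path, value) {
  // Walk to the parent node, write the leaf, then notify subscribers
  let node = store;
  for (const key of path.slice(0, -1)) node = node[key];
  node[path[path.length - 1]] = value;
  notifications.push(path.join("."));
}

store.users.john.name = "Johnny"; // direct write: no notification fires
toySet(["users", "john", "name"], "Johnny"); // goes through the pipeline
// notifications: ["users.john.name"]
```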
Connecting to External Databases
If you need to connect to another NukeBase database from your server (for example, a shared service or microservice architecture), you can use the sys/serversdk.js module.
When to use serversdk.js:
- Connecting to a separate NukeBase instance
- Building microservices that communicate with each other
- Aggregating data from multiple database servers
- Server-to-server real-time synchronization
module.exports = ({ get, set, update, addWsFunction, startDB, addDomain, ... }) => {
// Import the server SDK for external connections
const createServerClient = require('../sys/serversdk.js');
// Connect to an external NukeBase database
// Note: External connections ARE async (like client-side)
createServerClient('wss://other-project.nukebase.com').then(externalDb => {
console.log('Connected to external database');
// Use the external database with async operations
addWsFunction("getExternalData", async function(data, admin, sessionId) {
// Local database (sync)
const localUser = get(["users", admin.uid]);
// External database (async - requires await)
const externalData = await externalDb.get(["sharedData", data.itemId]);
return {
local: localUser.data,
external: externalData.data
};
});
// Subscribe to changes on external database
externalDb.getSub({
event: "value@",
path: ["notifications"]
}, (event) => {
// When external data changes, update local database
set(["cache", "externalNotifications"], event.data);
});
}).catch(err => {
console.error('Failed to connect to external database:', err);
});
// Set up local domain
const nukebase = addDomain({
authPath: ["users"]
});
startDB(nukebase);
};
Important differences:
- Local operations (via destructured get, set, etc.) are synchronous
- External operations (via serversdk.js) are asynchronous and require await
This is because external connections go over the network via WebSocket, just like client connections.
Complete Server Example
Here's a minimal but complete server setup:
module.exports = ({
addFunction,
addWsFunction,
get,
set,
update,
remove,
query,
generateRequestId,
data,
addDomain,
startDB,
onConnection,
onClose,
checkAuth
}) => {
// Set up a domain
const nukebase = addDomain({
authPath: ["users"], // Path where user authentication data is stored
host: "127.0.0.1", // optional
port: 3000 // optional
});
// Configure middleware for serving static files
const path = require('path');
nukebase.app.serveStatic("/*", path.join(__dirname, "../public"),
(req, res) => { return true; }
);
// Add a database trigger for important changes
addFunction("onValue", ["orders", "$orderId"], async function(context) {
// Only trigger if data has actually changed
if (JSON.stringify(context.dataAfter) !== JSON.stringify(context.dataBefore)) {
await set(["logs", generateRequestId()], {
path: context.path,
timestamp: Date.now(),
oldValue: context.dataBefore,
newValue: context.dataAfter,
change: "Important data changed"
});
}
});
// Add a WebSocket function for client calculations
addWsFunction("addNumbers", function(data, admin, sessionId) {
// Extract numbers from the request
const { num1, num2 } = data;
// Perform the calculation on the server
const sum = num1 + num2;
// Return the result to the client
return sum;
});
// Track user connections
onConnection(function(admin, sessionId) {
// Record when user connects
update(["sessions", admin.uid, sessionId], {
start: Date.now()
});
// Update user status
update(["users", admin.uid], {
online: true,
lastSeen: Date.now()
});
});
// Handle user disconnections
onClose(function(admin, sessionId) {
// Record when user disconnects
update(["sessions", admin.uid, sessionId], {
end: Date.now()
});
// Update user status
update(["users", admin.uid], {
online: false,
lastSeen: Date.now()
});
});
startDB(nukebase);
console.log("🚀 NukeBase server running on http://127.0.0.1:3000");
};
Note: This example demonstrates best practices including:
- Domain setup with authPath, host, and port configuration
- Static file serving with serveStatic
- Real-time database triggers
- Custom WebSocket functions
- Connection tracking
- Server initialization with startDB(nukebase)
Client-Side API
NukeBase's client library provides a real-time connection to your database through WebSockets. The client handles connection management, request tracking, and event dispatching automatically.
Connection Setup
The client automatically establishes a secure WebSocket connection:
<script type="module">
import createClient from './sdkmod.js';
// ============================================
// PATTERN 1: Full client object
// ============================================
const db = await createClient();
// Use methods with db. prefix
await db.set(['users', 'john'], { name: 'John', age: 30 });
const user = await db.get(['users', 'john']);
// ============================================
// PATTERN 2: Destructured methods (recommended)
// All examples below use this pattern
// ============================================
const { set, get, update, remove, query, getSub, querySub,
getSubChanged, querySubChanged, wsFunction, setFile,
login, logout, createUser, changePassword } = await createClient();
console.log("Connected and ready to use NukeBase");
// Use methods directly without prefix
await set(['users', 'alice'], { name: 'Alice', age: 28 });
const userData = await get(['users', 'alice']);
// ============================================
// PATTERN 3: Attach to window (global access)
// Useful for multi-file apps or console debugging
// ============================================
const client = await createClient();
// Attach full client object
window.db = client;
// Optional: expose individual helpers directly
Object.assign(window, client);
// Now use from anywhere: window.db.get(...) or just get(...)
</script>
Important: The example above shows all three patterns for demonstration. In practice, choose ONE pattern for your application. Each pattern creates its own WebSocket connection, so using multiple would create multiple connections.
Key Features:
- Promise-based initialization: Wait for connection before using the client
- Automatic Reconnection: Reconnects every 5 seconds after disconnection
- Subscription Restoration: Automatically restores all active subscriptions after reconnect
- Tab Focus Recovery: Reconnects when browser tab regains focus
- Encapsulated State: Multiple client instances can coexist independently
Connection State Indicators
The SDK provides console messages to track connection state:
- ✅ Connected to [url] - WebSocket connection established
- ❌ Disconnected from [url] - Connection lost
- 🔁 Reconnecting... - Attempting to reconnect
- 🔄 Restoring subscriptions... - Resubscribing after reconnect
Data Operations
Setting Data
The set() function creates or replaces data at a specific path:
Auto-creation: The set() function will automatically create any missing parent objects in the path. You don't need to create intermediate objects manually.
// Set a complete object
set(["users", "john"], { name: "John Doe", age: 32 }).then(response => {
console.log("User created successfully");
});
// Set a single value
set(["users", "john", "email"], "john@example.com").then(response => {
console.log(response);
});
// Auto-creates parent objects - even if 'users' doesn't exist
set(["users", "alice", "profile", "preferences", "theme"], "dark").then(response => {
// Creates: { users: { alice: { profile: { preferences: { theme: "dark" } } } } }
console.log("Theme set with auto-created parent objects");
});
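The auto-creation rule can be pictured as a small deep-assignment helper. The sketch below is a conceptual model of the behavior only; deepSet is a hypothetical name, not part of the SDK:

```javascript
// Conceptual sketch of set()'s parent auto-creation (not the real engine).
function deepSet(root, path, value) {
  let node = root;
  for (let i = 0; i < path.length - 1; i++) {
    const key = path[i];
    // Create any missing intermediate object along the path
    if (typeof node[key] !== 'object' || node[key] === null) {
      node[key] = {};
    }
    node = node[key];
  }
  // Replace whatever was at the final key
  node[path[path.length - 1]] = value;
  return root;
}

const db = {};
deepSet(db, ['users', 'alice', 'profile', 'preferences', 'theme'], 'dark');
console.log(JSON.stringify(db));
// → {"users":{"alice":{"profile":{"preferences":{"theme":"dark"}}}}}
```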
Getting Data
Retrieve data with the get() function:
// Get a single user
get(["users", "john"]).then(response => {
console.log(response.data); // User data
});
// Get entire collection
get(["users"]).then(response => {
const users = response.data;
// Process users...
});
Updating Data
Update existing data without replacing unspecified fields:
Auto-creation: Like set(), the update() function will automatically create any missing parent objects in the path if they don't exist.
// Update specific fields
update(["users", "john"], {
lastLogin: Date.now(),
loginCount: 42
}).then(response => {
console.log(response);
});
// Update a single property
update(["users", "john", "status"], "online").then(response => {
console.log(response);
});
// Auto-creates missing parent objects
update(["settings", "app", "notifications", "email"], true).then(response => {
// If 'settings' doesn't exist, creates the entire path
console.log("Setting created with auto-generated parents");
});
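The difference from set() is that update() merges into the existing object rather than replacing it. A conceptual sketch of that merge-at-path behavior (deepUpdate is illustrative, not a real SDK function):

```javascript
// Conceptual sketch of update()'s merge-at-path behavior (illustration only).
function deepUpdate(root, path, patch) {
  let node = root;
  for (const key of path) {
    if (typeof node[key] !== 'object' || node[key] === null) {
      node[key] = {}; // auto-create missing parents, like set()
    }
    node = node[key];
  }
  // Merge the patch instead of replacing the whole object
  Object.assign(node, patch);
  return root;
}

const db = { users: { john: { name: 'John', age: 32 } } };
deepUpdate(db, ['users', 'john'], { lastLogin: 1700000000000 });
console.log(db.users.john);
// name and age are preserved; lastLogin is added
```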
Removing Data
Delete data at a specific path:
// Remove a user
remove(["users", "john"]).then(response => {
console.log("User deleted");
});
// Remove a specific field
remove(["users", "john", "temporaryToken"]).then(response => {
console.log(response);
});
Querying Data
Query allows you to search through collections and find items that match specific conditions. The query
string uses JavaScript expressions where child represents each item being evaluated:
How queries work: NukeBase iterates through each child at the specified path and
evaluates your condition. Items where the condition returns true are included in the results.
// Basic equality check
query({
path: ["users"],
query: "child.age == 32"
}).then(response => {
console.log(response.data); // All users who are exactly 32
});
// Using comparison operators
query({
path: ["products"],
query: "child.price < 50"
}).then(response => {
console.log(response.data); // All products under $50
});
// Compound conditions with AND (&&)
query({
path: ["products"],
query: "child.price < 100 && child.category == 'electronics'"
}).then(response => {
console.log(response.data); // Affordable electronics
});
// Compound conditions with OR (||)
query({
path: ["users"],
query: "child.role == 'admin' || child.role == 'moderator'"
}).then(response => {
console.log(response.data); // All admins and moderators
});
// Text search with includes()
query({
path: ["posts"],
query: "child.title.includes('JavaScript')"
}).then(response => {
console.log(response.data); // Posts with "JavaScript" in the title
});
// Checking nested properties with childPath
query({
path: ["users"],
childPath: ["profile", "location"],
query: "child == 'New York'" // child refers to location value
}).then(response => {
// Returns: { matt123: "New York" }
// child refers to the value AT the childPath, and that's what's returned
console.log(response.data);
});
// Combining multiple conditions
query({
path: ["orders"],
query: "child.status == 'pending' && child.total > 100 && child.items.length > 2"
}).then(response => {
console.log(response.data); // Large pending orders with multiple items
});
// Checking if a property exists
query({
path: ["users"],
query: "child.premiumAccount == true"
}).then(response => {
console.log(response.data); // All premium users
});
// Using NOT operator
query({
path: ["tasks"],
query: "child.completed != true"
}).then(response => {
console.log(response.data); // All incomplete tasks
});
// Date comparisons (assuming timestamps)
query({
path: ["events"],
query: "child.date > " + Date.now()
}).then(response => {
console.log(response.data); // Future events
});
Query Syntax Reference
Queries support standard JavaScript operators and methods:
| Operator/Method | Description | Example |
|---|---|---|
| == | Equal to | child.status == 'active' |
| != | Not equal to | child.deleted != true |
| <, >, <=, >= | Comparison | child.age >= 18 |
| && | Logical AND | child.active && child.verified |
| \|\| | Logical OR | child.role == 'admin' \|\| child.role == 'mod' |
| .includes() | String contains | child.email.includes('@gmail.com') |
| .length | Array/string length | child.tags.length > 3 |
Important: The child variable represents each item at the path you're
querying. For example, when querying "users", child represents each individual user object.
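Conceptually, the engine applies your condition to each child and keeps the matches. The loop below is an illustrative model of that filtering; runQuery is a hypothetical helper, and it takes a plain predicate function where the real engine evaluates the query string server-side:

```javascript
// Conceptual model of query evaluation (the real engine compiles the
// query string; here a plain predicate stands in for it).
function runQuery(collection, predicate) {
  const results = {};
  for (const [key, child] of Object.entries(collection)) {
    if (predicate(child)) results[key] = child; // keep matching children
  }
  return results;
}

const users = {
  a1: { role: 'admin', age: 34 },
  b2: { role: 'member', age: 19 },
  c3: { role: 'moderator', age: 27 }
};

// Equivalent of query: "child.role == 'admin' || child.role == 'moderator'"
const staff = runQuery(users, child =>
  child.role === 'admin' || child.role === 'moderator');
console.log(Object.keys(staff)); // → [ 'a1', 'c3' ]
```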
Using childPath to Query Nested Data
The childPath parameter allows you to query and return only specific nested portions of your data. This is especially useful for separating public and private data, improving performance, or working with complex data structures.
How childPath works:
- Navigation: childPath navigates to a nested position within each child of your data
- Query context: The child variable in your query refers to the data at that nested position
- Response structure: Results are keyed by the parent item that matched, with the value taken from the childPath position, so you know which parent item matched
// Data structure:
// {
// users: {
// matt123: {
// public: { name: "Matt", age: 25, city: "NYC" },
// private: { ssn: "123-45-6789", salary: 80000 }
// },
// john456: {
// public: { name: "John", age: 30, city: "LA" },
// private: { ssn: "987-65-4321", salary: 90000 }
// }
// }
// }
// Query WITHOUT childPath - queries full user objects
query({
path: ["users"],
query: "child.public.age > 21"
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { public: {...}, private: {...} },
// john456: { public: {...}, private: {...} }
// }
// You get FULL user objects including private data
});
// Query WITH childPath - queries only public portion
query({
path: ["users"],
childPath: ["public"],
query: "child.age > 21" // child now refers to the "public" object
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { name: "Matt", age: 25, city: "NYC" },
// john456: { name: "John", age: 30, city: "LA" }
// }
// You get the data AT the childPath, not wrapped in the childPath structure
});
// Multiple childPath levels
query({
path: ["users"],
childPath: ["public", "address"],
query: "child.city == 'NYC'" // child refers to the "address" object
}).then(response => {
console.log(response.data);
// Returns: {
// matt123: { city: "NYC", state: "NY" }
// }
// Returns the value AT the childPath (the address object itself)
});
childPath Use Cases
// Use Case 1: Security - Exclude private data
// If users.matt123.private is blocked by read rules, childPath ensures
// you only query the accessible portion
query({
path: ["users"],
childPath: ["public"],
query: "child.verified == true"
}).then(response => {
// Only returns public data, won't fail if private is restricted
displayPublicProfiles(response.data);
});
// Use Case 2: Performance - Return only needed data
// When clients only need profile info, not full user objects
query({
path: ["users"],
childPath: ["profile"],
query: "child.country == 'USA'"
}).then(response => {
// Smaller response payload, faster transmission
renderUserProfiles(response.data);
});
// Use Case 3: Complex filtering on nested arrays
// Query specific nested collections
query({
path: ["orders"],
childPath: ["items"],
query: "child.quantity > 5"
}).then(response => {
// Returns: {
// order123: { itemA: {quantity: 10, ...}, ... }
// }
// Returns the value AT the childPath (the items object itself)
console.log("Orders with high-quantity items:", response.data);
});
// Use Case 4: Separating data concerns
// Different parts of your app query different data sections
query({
path: ["products"],
childPath: ["inventory"],
query: "child.stock < 10"
}).then(response => {
// Warehouse dashboard only needs inventory data
showLowStockAlert(response.data);
});
When to use childPath:
- You want to exclude certain fields from results (public vs private data)
- You need to improve query performance by returning less data
- Your read rules block certain paths, and childPath ensures you only query accessible data
- You're querying nested collections or arrays within parent objects
Important: When using childPath, remember that child in your query refers to the data AT the childPath position, not the root object. Adjust your query conditions accordingly.
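The childPath semantics can be modeled in a few lines of plain JavaScript. This is only a conceptual sketch under the behavior described above; queryWithChildPath is a hypothetical name, not an SDK function:

```javascript
// Conceptual sketch of childPath: navigate into each child, evaluate the
// predicate against the nested value, and return the value AT the childPath.
function queryWithChildPath(collection, childPath, predicate) {
  const results = {};
  for (const [key, child] of Object.entries(collection)) {
    // Walk down the childPath inside this child
    let nested = child;
    for (const seg of childPath) {
      nested = nested?.[seg];
    }
    if (nested !== undefined && predicate(nested)) {
      results[key] = nested; // the value at childPath, not the full child
    }
  }
  return results;
}

const users = {
  matt123: { public: { name: 'Matt', age: 25 }, private: { salary: 80000 } },
  john456: { public: { name: 'John', age: 20 }, private: { salary: 90000 } }
};

// Equivalent of: path ["users"], childPath ["public"], query "child.age > 21"
const adults = queryWithChildPath(users, ['public'], child => child.age > 21);
console.log(adults); // → { matt123: { name: 'Matt', age: 25 } }
```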
Real-time Subscriptions
Important: All subscription functions (getSub, getSubChanged,
querySub, and querySubChanged) immediately send the current data when the
subscription is created. This ensures your UI can display the current state right away, before any changes
occur.
Basic Subscriptions
Get real-time updates when data changes. All subscription functions immediately send the current data when the subscription is created, then continue to send updates whenever the data changes:
// Subscribe to changes on a path
const unsubscribe = getSub({
event: "value@",
path: ["users", "john"]
}, event => {
// This fires immediately with current data, then on every change
console.log("User data:", event.data);
});
// When finished listening
unsubscribe();
Query Subscriptions
Subscribe to data matching specific conditions:
// Subscribe to active users
const unsubscribe = querySub({
event: "value@",
path: ["users"],
query: "child.status == 'online'"
}, event => {
// Receives all currently online users immediately, then updates
const onlineUsers = event.data;
updateOnlineUsersList(onlineUsers);
});
Query Subscriptions with childPath
Just like regular queries, subscriptions can use childPath to subscribe only to specific nested portions of your data:
// Subscribe to public profiles only (excludes private data)
const unsubscribe = querySub({
event: "value@",
path: ["users"],
childPath: ["public"],
query: "child.verified == true"
}, event => {
// Receives only public data for verified users
// Response: { matt123: { verified: true, name: "Matt", ... } }
displayVerifiedUsers(event.data);
});
// Subscribe to inventory changes for low stock items
const unsubscribe2 = querySub({
event: "value@",
path: ["products"],
childPath: ["inventory"],
query: "child.stock < 10"
}, event => {
// Only receive inventory data, not full product details
// Response: { productA: { stock: 5, ... } }
showLowStockAlert(event.data);
});
// Use with querySubChanged for efficient updates
const unsubscribe3 = querySubChanged({
event: "value@",
path: ["users"],
childPath: ["profile"],
query: "child.country == 'USA'"
}, event => {
// Only fires when USA profiles change
// Only returns the profile portion that changed
console.log("Updated USA profiles:", event.data);
});
Benefits of childPath with subscriptions:
- Reduced bandwidth: Only transmit the data portions you need
- Security: Never receive data that might be blocked by read rules
- Performance: Smaller payloads mean faster real-time updates
- Clean data: Clients receive exactly the structure they expect
Changed-Only Subscriptions
Despite the name, these subscriptions ALSO receive the initial data immediately when created, then only fire again when data actually changes:
Important for getSubChanged and querySubChanged: What you receive depends on what path you're watching:
- If watching "users" and John updates his name, you get John's COMPLETE object (all fields)
- If watching "users.john" and a field changes, you get ONLY the changed field (e.g., just {name: "New Name"})
- If watching "users.john.name" and it changes, you get just the new name value
- The deeper your watch path, the more specific the change data
// getSubChanged - watching a collection
const unsubscribe = getSubChanged({
event: "value@",
path: ["users"]
}, event => {
// Initial: all users
// If John updates his email:
// event.data = { john: { name: "John", email: "new@email.com", age: 25 } }
// You get John's COMPLETE object
updateChangedUsers(event.data);
});
// getSubChanged - watching a specific user
const unsubscribe2 = getSubChanged({
event: "value@",
path: ["users", "john"]
}, event => {
// Initial: John's complete data
// If John's email changes:
// event.data = { email: "new@email.com" }
// You get ONLY the changed field
Object.assign(currentUser, event.data); // Merge changes
});
// getSubChanged - watching a specific field
const unsubscribe3 = getSubChanged({
event: "value@",
path: ["users", "john", "status"]
}, event => {
// Initial: "online"
// If status changes:
// event.data = "offline"
// You get just the new value
updateStatusIndicator(event.data);
});
// With query filtering - returns only the changed items
const unsubscribe4 = querySubChanged({
event: "value@",
path: ["users"],
query: "child.age > 21"
}, event => {
// If user John (age 25) updates only his name:
// event.data = { john: { name: "John Doe", age: 25, email: "john@example.com" } }
// You get John's COMPLETE object, not just the changed name field
console.log("Users that changed:", event.data);
});
// Example: monitoring low stock products
const unsubscribe5 = querySubChanged({
event: "value@",
path: ["products"],
query: "child.stock < 5"
}, event => {
// If product ABC updates its price, you get:
// { ABC: { name: "Widget", stock: 3, price: 29.99 } }
// The complete product object for ONLY the product that changed
Object.keys(event.data).forEach(productId => {
updateSingleProduct(productId, event.data[productId]);
});
});
Operation-Specific Subscriptions
Listen for specific types of operations by prefixing your path with an operation type:
Available operation types:
- value@ - Fires on any change (set, update, or remove)
- set@ - Fires only when data is created or completely replaced
- update@ - Fires only when existing data is partially updated
- remove@ - Fires only when data is deleted
Compatibility: Operation prefixes work with all subscription functions:
getSub, getSubChanged, querySub, and querySubChanged.
// Listen only for updates to user data
const unsubscribe = getSub({
event: "update@",
path: ["users", "john"]
}, event => {
console.log("User was updated:", event.data);
});
// Listen for new data being set
const unsubscribe2 = getSub({
event: "set@",
path: ["orders"]
}, event => {
console.log("New order created:", event.data);
});
// Listen for data removal
const unsubscribe3 = getSub({
event: "remove@",
path: ["users"]
}, event => {
console.log("A user was deleted:", event.path);
});
// Operation-specific with getSubChanged
const unsubscribe4 = getSubChanged({
event: "set@",
path: ["products"]
}, event => {
// Only fires when NEW products are created (not updates)
console.log("New products added:", event.data);
});
// Operation-specific with queries
const unsubscribe5 = querySub({
event: "update@",
path: ["users"],
query: "child.status == 'premium'"
}, event => {
// Only fires when premium users are UPDATED (not created or deleted)
console.log("Premium users updated:", event.data);
});
// Combining with querySubChanged
const unsubscribe6 = querySubChanged({
event: "remove@",
path: ["tasks"],
query: "child.completed == true"
}, event => {
// Only fires when completed tasks are DELETED
console.log("Completed tasks removed:", event.data);
});
// Default behavior without prefix (same as value@)
const unsubscribe7 = getSub({
path: ["users", "john"]
}, event => {
// Fires on ANY change: set, update, or remove
// event parameter defaults to "value@" if not specified
console.log("Something changed:", event.data);
});
Subscription Bubble-Up Behavior
Understanding how subscription changes propagate is crucial for designing efficient real-time applications. NukeBase subscriptions follow a "bubble-up" pattern:
Key Concept: Changes Bubble UP, Not DOWN
- Bubble UP ✅: Changes at child paths trigger parent subscriptions
- No Trickle DOWN ❌: Changes at parent paths do NOT trigger child subscriptions
// Set up subscriptions at different levels
getSub({
event: "value@",
path: ["calls"]
}, (event) => {
console.log("1. Calls level:", event.data);
});
getSub({
event: "value@",
path: ["calls", "123"]
}, (event) => {
console.log("2. Specific call:", event.data);
});
getSub({
event: "value@",
path: ["calls", "123", "answer"]
}, (event) => {
console.log("3. Answer level:", event.data);
});
// Scenario 1: Change at deep level (bubbles UP)
await set(["calls", "123", "answer"], { type: "answer", sdp: "..." });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (bubbled up)
// ✅ Fires: 3. Answer level (direct match)
// Scenario 2: Change at middle level (bubbles UP, not DOWN)
await update(["calls", "123"], { status: "active" });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (direct match)
// ❌ NOT fired: 3. Answer level (no trickle down)
// Scenario 3: Change at top level (no trickle DOWN)
await set(["calls"], { "456": { offer: {...} } });
// ✅ Fires: 1. Calls level (direct match)
// ❌ NOT fired: 2. Specific call (no trickle down)
// ❌ NOT fired: 3. Answer level (no trickle down)
Practical Implications:
- Parent subscriptions are "catch-all": Watching users will fire for ANY change in ANY user or their properties
- Child subscriptions are specific: Watching users.john.email only fires when that exact path or its children change
- Performance consideration: Higher-level subscriptions fire more frequently due to bubble-up
- Data replacement warning: If you set() at a parent level, child subscriptions may stop working as their paths no longer exist
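The bubble-up rule reduces to a prefix check on paths. A minimal model of the dispatch decision (subscriptionFires is an illustrative helper, not the actual engine code):

```javascript
// Conceptual sketch of bubble-up: a subscription fires when its path is a
// prefix of (or equal to) the changed path. A change at a parent path does
// NOT trickle down to subscriptions on deeper paths.
function subscriptionFires(subPath, changedPath) {
  return subPath.length <= changedPath.length &&
    subPath.every((seg, i) => seg === changedPath[i]);
}

// Scenario 1: change at a deep path bubbles up to every ancestor
const deepChange = ['calls', '123', 'answer'];
console.log(subscriptionFires(['calls'], deepChange));                  // true
console.log(subscriptionFires(['calls', '123'], deepChange));           // true
console.log(subscriptionFires(['calls', '123', 'answer'], deepChange)); // true

// Scenario 3: change at the top level does not trickle down
const parentChange = ['calls'];
console.log(subscriptionFires(['calls', '123'], parentChange));         // false
```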
Custom Server Functions
Execute custom logic on the server without exposing implementation details:
// Call the server function
wsFunction("addNumbers", {
num1: 5,
num2: 7
})
.then(response => {
// Display the result returned by the server
console.log(`The sum is: ${response.data}`); // Output: The sum is: 12
});
This straightforward example shows how WebSocket functions allow you to execute code on the server and return results directly to the client, with the return value accessible via the data property of the response.
Ultra-Low Latency Performance
WebSocket functions provide the fastest possible way to communicate with your server. This makes them perfect for real-time games, live collaboration, and any application where milliseconds matter.
WebSocket functions are especially powerful when you need to:
- Aggregate data from multiple database paths
- Perform complex calculations server-side
- Validate game moves or business logic
- Return processed results without exposing raw data
Example Use Cases: Game state calculations, leaderboard generation, real-time analytics, complex permission checks, or any scenario where you need to fetch multiple database values, process them, and return a calculated result.
Authentication
NukeBase provides a built-in cookie-based authentication system. When you configure authPath: ["users"] in your domain setup, authentication endpoints are automatically available and cookies are handled seamlessly.
How it works:
- Configure authPath: ["users"] in your domain setup
- Use the built-in authentication endpoints from your client
- Server automatically sets HTTP cookies (uid, token)
- WebSocket connections automatically use these cookies
- User information populates the admin object for security rules
Authentication Endpoints
NukeBase automatically provides these authentication endpoints when authPath is configured:
Available Endpoints:
- POST /login - Login with username/password, or resume session via cookies (can also upgrade a demo account by providing username/password)
- POST /createuser - Create a new account. Without credentials creates a demo account, with username/password creates a full account (fails if already signed in)
- POST /logout - Clear authentication cookies
- POST /changepassword - Change user password (requires authentication)
- POST /magic-link - Send a magic sign-in link to the user's email (passwordless authentication)
Login
Use the /login endpoint to login with a username and password. If valid cookies exist, the session is resumed automatically. You can also upgrade a demo account by providing a username and password while authenticated with a demo session:
// Import and destructure the methods you need
import createClient from './sdkmod.js';
const { login } = await createClient();
// Login with username and password
const result = await login("username", "password");
if (result && result.status === "Success") {
console.log('Authenticated as:', result.username);
}
// Resume session (if cookies are already set)
const result2 = await login();
if (result2 && result2.status === "Success") {
console.log('Session resumed:', result2.uid);
}
// Upgrade a demo account to a full account
// (must be logged in as a demo user with valid cookies)
const result3 = await login("newUsername", "newPassword");
if (result3 && result3.status === "Success") {
console.log('Demo account upgraded:', result3.username);
}
Create Account
Create a new account using the /createuser endpoint. Without credentials, a demo account is created. With a username and password, a full account is created. This will fail if the user is already signed in:
import createClient from './sdkmod.js';
const { createUser } = await createClient();
// Create a demo/anonymous account (no credentials)
const result = await createUser();
if (result && result.status === "Success") {
console.log('Demo account created:', result.uid);
}
// Create a full account with username and password
const result2 = await createUser("myUsername", "myPassword");
if (result2 && result2.status === "Success") {
console.log('Account created:', result2.username);
}
// Note: Fails if already signed in - logout first
Logout
Clear authentication cookies to log out the user:
import createClient from './sdkmod.js';
const { logout } = await createClient();
const result = await logout();
if (result && result.status === "Success") {
console.log('Logged out successfully');
}
Magic Link (Passwordless Sign-In)
Send a magic sign-in link to the user's email. The user clicks the link and is automatically authenticated with session cookies — no password required. The email must belong to an existing account (username must be set to the email address):
// Request a magic link
const result = await fetch('/magic-link', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ email: 'user@example.com' })
}).then(r => r.json());
if (result.status === "Success") {
console.log('Magic link sent! Check your email.');
}
// The user receives an email with a link like:
// https://yoursite.com/magiclink?token=abc123...
//
// When clicked, the server:
// 1. Validates the token (single-use, expires in 15 minutes)
// 2. Sets authentication cookies (uid, token)
// 3. Redirects to your site — user is now signed in
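Server-side, the validation described above can be sketched as a single-use token store with a 15-minute window. This is an assumed model of the documented behavior, not NukeBase's actual implementation:

```javascript
// Illustrative model of magic-link validation: single-use, 15-minute expiry.
const FIFTEEN_MINUTES = 15 * 60 * 1000;
const pendingTokens = new Map(); // token → { uid, createdAt }

function issueToken(token, uid, now) {
  pendingTokens.set(token, { uid, createdAt: now });
}

function redeemToken(token, now) {
  const entry = pendingTokens.get(token);
  if (!entry) return null;                                  // unknown or already used
  pendingTokens.delete(token);                              // single-use: consume it
  if (now - entry.createdAt > FIFTEEN_MINUTES) return null; // expired
  return entry.uid;                                         // valid: sign this user in
}

const t0 = Date.now();
issueToken('abc123', 'ML96SDE5', t0);
console.log(redeemToken('abc123', t0 + 60 * 1000)); // 'ML96SDE5'
console.log(redeemToken('abc123', t0 + 60 * 1000)); // null (already used)
```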
Change Password
Allow authenticated users to change their password:
import createClient from './sdkmod.js';
const { changePassword } = await createClient();
const result = await changePassword("newPassword123");
if (result && result.status === "Success") {
console.log('Password changed successfully');
} else {
console.log('Failed to change password');
}
Using Authentication in Security Rules
Once authenticated, the admin object is available in your security rules:
// In your rules.js
module.exports = {
"users": {
"$userId": {
// Anyone can read profiles
"read": "true",
// Only the user themselves can edit
"write": "admin.uid == $userId",
"private": {
// Private data only visible to the user
"read": "admin.uid == $userId"
}
}
},
"adminPanel": {
// Only users with admin role can access
"read": "admin.claims.role == 'admin'",
"write": "admin.claims.role == 'admin'"
}
};
Security Notes:
- Passwords are encrypted using Argon2 hashing — they are never stored in plaintext
- Tokens are hashed with SHA-256 before being stored, so raw tokens only exist in the user's cookie
- Login attempts are rate-limited (5 attempts per 60 seconds per IP)
- Plaintext passwords from older accounts are automatically migrated to Argon2 on next login
- Use HTTPS in production to protect cookies
- Regularly clean up expired tokens to prevent database bloat
Custom Claims
Custom claims let you attach arbitrary data (roles, permissions, plan tiers, etc.) to a user's auth record. Claims are available in security rules and wsFunction callbacks via admin.claims.
// Set all claims at once
set(["users", uid, "auth", "claims"], { role: "admin", plan: "pro" });
// Update or add a single claim
update(["users", uid, "auth", "claims"], { role: "editor" });
// Remove a single claim
remove(["users", uid, "auth", "claims", "role"]);
module.exports = {
"adminPanel": {
"read": "admin.claims.role == 'admin'",
"write": "admin.claims.role == 'admin'"
},
"premiumContent": {
"read": "admin.claims.plan == 'pro'"
}
};
Important: Claims are read when a WebSocket connection is established. If you change a user's claims while they are connected, the changes won't take effect until their next connection (page reload, reconnect, or new login). Users without any claims will have admin.claims default to an empty object {}.
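The default-to-empty-object behavior means claims checks are always safe to write. A sketch of how rule evaluation can treat missing claims (buildAdmin and canReadAdminPanel are illustrative names, not part of NukeBase):

```javascript
// Sketch: users with no claims get an empty object, so a rule like
// "admin.claims.role == 'admin'" never throws on property access.
function buildAdmin(uid, authRecord) {
  return {
    uid,
    claims: (authRecord && authRecord.claims) || {} // default to {}
  };
}

// Equivalent of the rule "admin.claims.role == 'admin'"
const canReadAdminPanel = admin => admin.claims.role === 'admin';

const adminUser = buildAdmin('u1', { claims: { role: 'admin', plan: 'pro' } });
const freshUser = buildAdmin('u2', { username: 'newbie' }); // no claims yet

console.log(canReadAdminPanel(adminUser)); // true
console.log(canReadAdminPanel(freshUser)); // false (claims defaulted to {})
```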
Database Structure for Authentication
The authentication system expects user data to be structured like this:
"users": {
"ML96SDE5": { // Unique user UID
"auth": {
"username": "matt123", // Unique username
"password": "$argon2id$v=19$m=65536,t=3,p=4$...", // Argon2 hashed password
"tokens": {
"a1b2c3d4e5f6...": 1748357368415, // SHA-256 hashed token → expiry timestamp
"f7e8d9c0b1a2...": 1748357670935
},
"claims": { // Optional custom claims
"role": "admin",
"plan": "pro"
}
}
}
}
Token Management: Tokens are generated with generateRequestId() and then
hashed with SHA-256 before storage. The database stores the hashed token as the key and
the expiration timestamp as the value. The raw token is only sent to the client via cookies — the server
re-hashes it on each request to validate the session. This means even if the database is compromised,
raw session tokens are not exposed.
Response Format
All NukeBase operations return a standardized response object:
{
// The operation performed
action: "get",
// Data from the operation
data: {
"user123": { name: "John", age: 32 },
"user456": { name: "Jane", age: 28 }
},
// For tracking the request
requestId: "RH8HZX9P",
// Success or Failed
status: "Success"
}
When an error occurs, the response includes:
{
status: "Failed",
message: "Error description here"
}
Complete Client Example with createClient()
Here's a complete example using the new modular SDK:
<script type="module">
import createClient from './sdkmod.js';
// Destructure all the methods you need
const { set, get, update, query, wsFunction, getSub, querySub,
getSubChanged, querySubChanged } = await createClient();
console.log('✅ Connected to NukeBase');
// Set data
await set(["users", "matt"], {
name: "Matt",
color: "red",
count: 0
});
// Get data
const sessions = await get(["sessions"]);
console.log('Sessions:', sessions.data);
// Update data
await update(["users", "matt"], {
leadsSent: "Pending"
});
await update(["users", "matt", "count"], 5);
// Query data
const results = await query({
path: ["sessions"],
query: "child.count > 0"
});
console.log('Query results:', results.data);
// Custom WebSocket function
const functionResult = await wsFunction("custom1", 23);
console.log('Function result:', functionResult);
// Subscribe to changes
const unsubscribe1 = getSub({
event: "value@",
path: ["sessions"]
}, event => {
console.log('Sessions updated:', event.data);
});
// Query subscription
const unsubscribe2 = querySub({
event: "value@",
path: ["sessions"],
query: "child.count == 4"
}, event => {
console.log('Matching sessions:', event.data);
});
// Changed-only subscription
const unsubscribe3 = getSubChanged({
event: "value@",
path: ["sessions"]
}, event => {
console.log('Changed sessions:', event.data);
});
// Query changed subscription
const unsubscribe4 = querySubChanged({
event: "value@",
path: ["sessions"],
query: "child.count != 4"
}, event => {
console.log('Changed query results:', event.data);
});
// Later, to unsubscribe:
// unsubscribe1();
// unsubscribe2();
// unsubscribe3();
// unsubscribe4();
</script>