This package was developed for https://creavit.studio
A powerful native macOS screen recording Node.js package with advanced window selection, multi-display support, and automatic overlay window exclusion. Built with ScreenCaptureKit for modern macOS with intelligent window filtering and Electron compatibility.
Advanced Recording Capabilities
- Full Screen Recording - Capture entire displays with ScreenCaptureKit
- Window-Specific Recording - Record individual application windows
- Area Selection - Record custom screen regions
- Multi-Display Support - Automatic display detection and selection
- Cursor Control - Toggle cursor visibility in recordings
- Cursor Tracking - Track mouse position, cursor types, and click events
- Camera Recording - Capture silent camera video alongside screen recordings
- Audio Capture - Record microphone/system audio into synchronized companion files
- Automatic Overlay Exclusion - Overlay windows automatically excluded from recordings
- Electron Compatible - Enhanced crash protection for Electron applications
- Multi-Window Recording - NEW! Record multiple windows/displays simultaneously
Granular Audio Controls
- Microphone Audio - Separate microphone control (default: off)
- System Audio - System audio capture (default: on)
- Audio Device Listing - Enumerate available audio devices
- Device Selection - Choose specific audio input devices
Smart Window Management
- Window Discovery - List all visible application windows
- Automatic Coordinate Conversion - Handle multi-display coordinate systems
- Display ID Detection - Automatically select the correct display for window recording
- Window Filtering - Smart filtering of recordable windows
- Preview Thumbnails - Generate window and display preview images
Customization Options
- Quality Control - Adjustable recording quality presets
- Frame Rate Control - Custom frame rate settings
- Flexible Output - Custom output paths and formats
- Permission Management - Built-in permission checking
Note: The screen recording container remains silent. Audio is saved separately as
temp_audio_<timestamp>.webm so you can remix microphone and system sound in post-processing.
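If you want a single file with sound, the companion audio can be muxed back into the silent recording in post-processing. A minimal sketch, assuming ffmpeg is installed on your PATH and using illustrative file names:
const { execFile } = require("child_process");
// Copy the video stream unchanged and re-encode the companion audio into the MOV container.
// File names below are placeholders for the paths reported by stopRecording().
execFile(
  "ffmpeg",
  [
    "-i", "./output.mov", // silent screen recording
    "-i", "./temp_audio_1720000000000.webm", // companion audio file
    "-c:v", "copy",
    "-c:a", "aac",
    "-map", "0:v:0",
    "-map", "1:a:0",
    "./output-with-audio.mov",
  ],
  (error) => {
    if (error) console.error("ffmpeg merge failed:", error.message);
  }
);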
This package leverages Apple's modern ScreenCaptureKit framework (macOS 12.3+) for superior recording capabilities:
- Native Overlay Exclusion: Overlay windows are automatically filtered out during recording
- Enhanced Performance: Direct system-level recording with optimized resource usage
- Crash Protection: Advanced safety layers for Electron applications
- Future-Proof: Built on Apple's latest screen capture technology
- Better Quality: Improved frame handling and video encoding
Note: For applications requiring overlay exclusion (like screen recording tools with floating UI), ScreenCaptureKit automatically handles window filtering without manual intervention.
npm install node-mac-recorder
- macOS 12.3+ (Monterey or later) - Required for ScreenCaptureKit
- Node.js 14+
- Xcode Command Line Tools
- Screen Recording Permission (automatically requested)
- CPU Architecture: Intel (x64) and Apple Silicon (ARM64) supported
# Install Xcode Command Line Tools
xcode-select --install
# The package will automatically build native modules during installation
Apple Silicon Support: The package automatically builds for the correct architecture (ARM64 on Apple Silicon, x64 on Intel) during installation. No additional configuration required.
const MacRecorder = require("node-mac-recorder");
const recorder = new MacRecorder();
// Simple full-screen recording
await recorder.startRecording("./output.mov");
await new Promise((resolve) => setTimeout(resolve, 5000)); // Record for 5 seconds
await recorder.stopRecording();
Record multiple windows or displays simultaneously using child processes:
const MacRecorder = require("node-mac-recorder/index-multiprocess");
// Create separate recorders for each window
const recorder1 = new MacRecorder();
const recorder2 = new MacRecorder();
// Get available windows
const windows = await recorder1.getWindows();
// Record first window (e.g., Finder)
await recorder1.startRecording("window1.mov", {
windowId: windows[0].id,
frameRate: 30
});
// Wait for ScreenCaptureKit initialization
await new Promise(r => setTimeout(r, 1000));
// Record second window (e.g., Chrome)
await recorder2.startRecording("window2.mov", {
windowId: windows[1].id,
frameRate: 30
});
// Both recordings are now running in parallel!
// Stop both after 10 seconds
await new Promise(r => setTimeout(r, 10000));
await recorder1.stopRecording();
await recorder2.stopRecording();
// Cleanup
recorder1.destroy();
recorder2.destroy();
Key Benefits:
- No native code changes required
- Each recorder runs in its own process
- True parallel recording
- Separate output files
- Independent control
See: MULTI_RECORDING.md for detailed documentation and examples.
const recorder = new MacRecorder();
Starts screen recording with the specified options.
await recorder.startRecording("./recording.mov", {
// Audio Controls
includeMicrophone: false, // Enable microphone (default: false)
includeSystemAudio: true, // Enable system audio (default: true)
audioDeviceId: "device-id", // Specific audio input device (default: system default)
systemAudioDeviceId: "system-device-id", // Specific system audio device (auto-detected by default)
// Display & Window Selection
displayId: 0, // Display index (null = main display)
windowId: 12345, // Specific window ID
captureArea: {
// Custom area selection
x: 100,
y: 100,
width: 800,
height: 600,
},
// Recording Options
quality: "high", // 'low', 'medium', 'high'
frameRate: 30, // FPS (15, 30, 60)
captureCursor: false, // Show cursor (default: false)
// Camera Capture
captureCamera: true, // Enable simultaneous camera recording (video-only WebM)
cameraDeviceId: "built-in-camera-id", // Use specific camera (optional)
});
Stops the current recording.
const result = await recorder.stopRecording();
console.log("Recording saved to:", result.outputPath);
console.log("Camera clip saved to:", result.cameraOutputPath);
console.log("Audio clip saved to:", result.audioOutputPath);
console.log("Shared session timestamp:", result.sessionTimestamp);The result object always contains cameraOutputPath and audioOutputPath. If either feature is disabled the corresponding value is null. The shared sessionTimestamp matches every automatically generated temp file name (temp_cursor_*, temp_camera_*, and temp_audio_*).
Returns a list of all recordable windows.
const windows = await recorder.getWindows();
console.log(windows);
// [
// {
// id: 12345,
// name: "My App Window",
// appName: "MyApp",
// x: 100, y: 200,
// width: 800, height: 600
// },
// ...
// ]
Returns information about all available displays.
const displays = await recorder.getDisplays();
console.log(displays);
// [
// {
// id: 69733504,
// name: "Display 1",
// resolution: "2048x1330",
// x: 0, y: 0
// },
// ...
// ]
Returns a list of available audio input devices.
const devices = await recorder.getAudioDevices();
console.log(devices);
// [
// {
// id: "BuiltInMicDeviceID",
// name: "MacBook Pro Microphone",
// manufacturer: "Apple Inc.",
// isDefault: true,
// transportType: 0
// },
// ...
// ]
Lists available camera devices with resolution metadata.
const cameras = await recorder.getCameraDevices();
console.log(cameras);
// [
// {
// id: "FaceTime HD Camera",
// name: "FaceTime HD Camera",
// position: "front",
// maxResolution: { width: 1920, height: 1080, maxFrameRate: 60 }
// },
// ...
// ]
- setCameraEnabled(enabled) - toggles simultaneous camera recording (video-only)
- setCameraDevice(deviceId) - selects the camera by unique macOS identifier
- isCameraEnabled() - returns the current camera toggle state
- getCameraCaptureStatus() - returns { isCapturing, outputFile, deviceId, sessionTimestamp }
A typical workflow looks like this:
const cameras = await recorder.getCameraDevices();
const selectedCamera = cameras.find((camera) => camera.position === "front") || cameras[0];
recorder.setCameraDevice(selectedCamera?.id);
recorder.setCameraEnabled(true);
await recorder.startRecording("./output.mov");
// ...
const status = recorder.getCameraCaptureStatus();
console.log("Camera stream:", status.outputFile); // temp_camera_<timestamp>.webm
await recorder.stopRecording();
The camera clip is saved alongside the screen recording as
temp_camera_<timestamp>.webm. The file contains video frames only (no audio). Use the same device IDs in Electron to power live previews (navigator.mediaDevices.getUserMedia({ video: { deviceId } })). On macOS versions prior to 15, Apple does not expose a WebM encoder; the module falls back to a QuickTime container while keeping the same filename, so transcode or rename if an actual WebM container is required.
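If a true WebM container is required on systems where the fallback applies, one option is to transcode the clip yourself. A rough sketch, assuming ffmpeg is installed and using an illustrative input name:
const { execFile } = require("child_process");
// Re-encode the QuickTime-fallback camera clip into a real WebM container (VP9, video-only).
execFile(
  "ffmpeg",
  ["-i", "./temp_camera_1720000000000.webm", "-c:v", "libvpx-vp9", "-an", "./camera.webm"],
  (error) => {
    if (error) console.error("Transcode failed:", error.message);
  }
);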
See CAMERA_CAPTURE.md for a deeper walkthrough and Electron integration tips.
- setAudioDevice(deviceId) - select the microphone input
- setSystemAudioEnabled(enabled) / isSystemAudioEnabled()
- setSystemAudioDevice(deviceId) - prefer loopback devices when you need system audio only
- getAudioCaptureStatus() - returns { isCapturing, outputFile, deviceIds, includeMicrophone, includeSystemAudio, sessionTimestamp }
See AUDIO_CAPTURE.md for a step-by-step guide on enumerating devices, enabling microphone/system capture, and consuming the audio companion files.
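A minimal usage sketch of these audio controls (the workflow and device choice here are illustrative):
const devices = await recorder.getAudioDevices();
// Use the default input device and enable system audio capture.
const mic = devices.find((device) => device.isDefault) || devices[0];
recorder.setAudioDevice(mic?.id);
recorder.setSystemAudioEnabled(true);
await recorder.startRecording("./output.mov", { includeMicrophone: true });
const audioStatus = recorder.getAudioCaptureStatus();
console.log("Audio stream:", audioStatus.outputFile); // temp_audio_<timestamp>.webm
// ...
await recorder.stopRecording();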
Checks macOS recording permissions.
const permissions = await recorder.checkPermissions();
console.log(permissions);
// {
// screenRecording: true,
// microphone: true,
// accessibility: true
// }
Returns current recording status and options.
const status = recorder.getStatus();
console.log(status);
// {
// isRecording: true,
// outputPath: "./recording.mov",
// cameraOutputPath: "./temp_camera_1720000000000.webm",
// audioOutputPath: "./temp_audio_1720000000000.webm",
// cameraCapturing: true,
// audioCapturing: true,
// sessionTimestamp: 1720000000000,
// options: { ... },
// recordingTime: 15
// }
Captures a thumbnail preview of a specific window.
const thumbnail = await recorder.getWindowThumbnail(12345, {
maxWidth: 400, // Maximum width (default: 300)
maxHeight: 300, // Maximum height (default: 200)
});
// Returns: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA..."
// Can be used directly in <img> tags or saved as a file
Captures a thumbnail preview of a specific display.
const thumbnail = await recorder.getDisplayThumbnail(0, {
maxWidth: 400, // Maximum width (default: 300)
maxHeight: 300, // Maximum height (default: 200)
});
// Returns: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA..."
// Perfect for display selection UI
Starts automatic cursor tracking and saves data to a JSON file in real time.
await recorder.startCursorCapture("./cursor-data.json");
// Cursor tracking started - automatically writing to file
Stops cursor tracking and closes the output file.
await recorder.stopCursorCapture();
// Tracking stopped, file closed
JSON Output Format:
[
{
"x": 851,
"y": 432,
"timestamp": 201,
"cursorType": "default",
"type": "move"
},
{
"x": 851,
"y": 432,
"timestamp": 220,
"cursorType": "pointer",
"type": "mousedown"
}
]
Cursor Types: default, pointer, text, grab, grabbing, ew-resize, ns-resize, crosshair
Event Types: move, mousedown, mouseup, rightmousedown, rightmouseup
const recorder = new MacRecorder();
// List available windows
const windows = await recorder.getWindows();
console.log("Available windows:");
windows.forEach((win, i) => {
console.log(`${i + 1}. ${win.appName} - ${win.name}`);
});
// Record a specific window
const targetWindow = windows.find((w) => w.appName === "Safari");
await recorder.startRecording("./safari-recording.mov", {
windowId: targetWindow.id,
includeSystemAudio: false,
includeMicrophone: true,
captureCursor: true,
});
await new Promise((resolve) => setTimeout(resolve, 10000)); // 10 seconds
await recorder.stopRecording();
const recorder = new MacRecorder();
// List available displays
const displays = await recorder.getDisplays();
console.log("Available displays:");
displays.forEach((display, i) => {
console.log(`${i}: ${display.resolution} at (${display.x}, ${display.y})`);
});
// Record from second display
await recorder.startRecording("./second-display.mov", {
displayId: 1, // Second display
quality: "high",
frameRate: 60,
});
await new Promise((resolve) => setTimeout(resolve, 5000));
await recorder.stopRecording();
const recorder = new MacRecorder();
// Record specific screen area
await recorder.startRecording("./area-recording.mov", {
captureArea: {
x: 200,
y: 100,
width: 1200,
height: 800,
},
quality: "medium",
captureCursor: false,
});
await new Promise((resolve) => setTimeout(resolve, 8000));
await recorder.stopRecording();
const recorder = new MacRecorder();
// List available audio devices to find system audio devices
const audioDevices = await recorder.getAudioDevices();
console.log("Available audio devices:");
audioDevices.forEach((device, i) => {
console.log(`${i + 1}. ${device.name} (ID: ${device.id})`);
});
// Find system audio device (like BlackHole, Soundflower, etc.)
const systemAudioDevice = audioDevices.find(device =>
device.name.toLowerCase().includes('blackhole') ||
device.name.toLowerCase().includes('soundflower') ||
device.name.toLowerCase().includes('loopback') ||
device.name.toLowerCase().includes('aggregate')
);
if (systemAudioDevice) {
console.log(`Using system audio device: ${systemAudioDevice.name}`);
// Record with specific system audio device
await recorder.startRecording("./system-audio-specific.mov", {
includeMicrophone: false,
includeSystemAudio: true,
systemAudioDeviceId: systemAudioDevice.id, // Specify exact device
captureArea: { x: 0, y: 0, width: 1, height: 1 }, // Minimal video
});
} else {
console.log("No system audio device found. Installing BlackHole or Soundflower recommended.");
// Record with default system audio capture (may not work without virtual audio device)
await recorder.startRecording("./system-audio-default.mov", {
includeMicrophone: false,
includeSystemAudio: true, // Auto-detect system audio device
captureArea: { x: 0, y: 0, width: 1, height: 1 },
});
}
// Record for 10 seconds
await new Promise(resolve => setTimeout(resolve, 10000));
await recorder.stopRecording();
System Audio Setup:
For reliable system audio capture, install a virtual audio device:
- BlackHole (Free): https://github.com/ExistentialAudio/BlackHole
- Soundflower (Free): https://github.com/mattingalls/Soundflower
- Loopback (Paid): https://rogueamoeba.com/loopback/
These create aggregate audio devices that the package can detect and use for system audio capture.
const recorder = new MacRecorder();
// Listen to recording events
recorder.on("started", (outputPath) => {
console.log("Recording started:", outputPath);
});
recorder.on("stopped", (result) => {
console.log("Recording stopped:", result);
});
recorder.on("timeUpdate", (seconds) => {
console.log(`Recording time: ${seconds}s`);
});
recorder.on("completed", (outputPath) => {
console.log("Recording completed:", outputPath);
});
await recorder.startRecording("./event-recording.mov");const recorder = new MacRecorder();
// Get windows with thumbnail previews
const windows = await recorder.getWindows();
console.log("Available windows with previews:");
for (const window of windows) {
console.log(`${window.appName} - ${window.name}`);
try {
// Generate thumbnail for each window
const thumbnail = await recorder.getWindowThumbnail(window.id, {
maxWidth: 200,
maxHeight: 150,
});
console.log(`Thumbnail: ${thumbnail.substring(0, 50)}...`);
// Use thumbnail in your UI:
// <img src="${thumbnail}" alt="Window Preview" />
} catch (error) {
console.log(`No preview available: ${error.message}`);
}
}
const recorder = new MacRecorder();
async function createDisplaySelector() {
const displays = await recorder.getDisplays();
const displayOptions = await Promise.all(
displays.map(async (display, index) => {
try {
const thumbnail = await recorder.getDisplayThumbnail(display.id);
return {
id: display.id,
name: `Display ${index + 1}`,
resolution: display.resolution,
thumbnail: thumbnail,
isPrimary: display.isPrimary,
};
} catch (error) {
return {
id: display.id,
name: `Display ${index + 1}`,
resolution: display.resolution,
thumbnail: null,
isPrimary: display.isPrimary,
};
}
})
);
return displayOptions;
}
const MacRecorder = require("node-mac-recorder");
async function trackUserInteraction() {
const recorder = new MacRecorder();
try {
// Start cursor tracking - automatically writes to file
await recorder.startCursorCapture("./user-interactions.json");
console.log("β
Cursor tracking started...");
// Track for 5 seconds
console.log("π± Move mouse and click for 5 seconds...");
await new Promise((resolve) => setTimeout(resolve, 5000));
// Stop tracking
await recorder.stopCursorCapture();
console.log("β
Cursor tracking completed!");
// Analyze the data
const fs = require("fs");
const data = JSON.parse(
fs.readFileSync("./user-interactions.json", "utf8")
);
console.log(`${data.length} events recorded`);
// Count clicks
const clicks = data.filter((d) => d.type === "mousedown").length;
if (clicks > 0) {
console.log(`${clicks} clicks detected`);
}
// Most used cursor type
const cursorTypes = {};
data.forEach((item) => {
cursorTypes[item.cursorType] = (cursorTypes[item.cursorType] || 0) + 1;
});
const mostUsed = Object.keys(cursorTypes).reduce((a, b) =>
cursorTypes[a] > cursorTypes[b] ? a : b
);
console.log(`Most used cursor: ${mostUsed}`);
} catch (error) {
console.error("β Error:", error.message);
}
}
trackUserInteraction();
const MacRecorder = require("node-mac-recorder");
async function recordWithCursorTracking() {
const recorder = new MacRecorder();
try {
// Start both screen recording and cursor tracking
await Promise.all([
recorder.startRecording("./screen-recording.mov", {
captureCursor: false, // Don't show cursor in video
includeSystemAudio: true,
quality: "high",
}),
recorder.startCursorCapture("./cursor-data.json"),
]);
console.log("β
Recording screen and tracking cursor...");
// Record for 10 seconds
await new Promise((resolve) => setTimeout(resolve, 10000));
// Stop both
await Promise.all([recorder.stopRecording(), recorder.stopCursorCapture()]);
console.log("β
Recording completed!");
console.log("π Files created:");
console.log(" - screen-recording.mov");
console.log(" - cursor-data.json");
} catch (error) {
console.error("β Error:", error.message);
}
}
recordWithCursorTracking();
// In main process
const { ipcMain } = require("electron");
const MacRecorder = require("node-mac-recorder");
const recorder = new MacRecorder();
ipcMain.handle("start-recording", async (event, options) => {
try {
await recorder.startRecording("./recording.mov", options);
return { success: true };
} catch (error) {
return { success: false, error: error.message };
}
});
ipcMain.handle("stop-recording", async () => {
const result = await recorder.stopRecording();
return result;
});
ipcMain.handle("get-windows", async () => {
return await recorder.getWindows();
});
const express = require("express");
const MacRecorder = require("node-mac-recorder");
const app = express();
const recorder = new MacRecorder();
app.post("/start-recording", async (req, res) => {
try {
const { windowId, duration } = req.body;
await recorder.startRecording("./api-recording.mov", { windowId });
setTimeout(async () => {
await recorder.stopRecording();
}, duration * 1000);
res.json({ status: "started" });
} catch (error) {
res.status(500).json({ error: error.message });
}
});
app.get("/windows", async (req, res) => {
const windows = await recorder.getWindows();
res.json(windows);
});
When recording windows, the package automatically:
- Detects Window Location - Determines which display contains the window
- Converts Coordinates - Translates global coordinates to display-relative coordinates
- Sets Display ID - Automatically selects the correct display for recording
- Handles Multi-Monitor - Works seamlessly across multiple displays
// Window at (-2000, 100) on second display
// Automatically converts to (440, 100) on display 1
await recorder.startRecording("./auto-display.mov", {
windowId: 12345, // Package handles display detection automatically
});
The getWindows() method automatically filters out:
- System windows (Dock, Menu Bar)
- Hidden windows
- Very small windows (< 50x50 pixels)
- Windows without names
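If you need to narrow the list further in your own code, the returned window objects carry enough metadata to filter on. A small sketch (the app name and size threshold are arbitrary examples):
const windows = await recorder.getWindows();
// Keep only reasonably sized, named Safari windows.
const candidates = windows.filter(
  (win) => win.appName === "Safari" && win.name && win.width >= 400 && win.height >= 300
);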
- Native Implementation - Uses AVFoundation for optimal performance
- Minimal Overhead - Low CPU usage during recording
- Memory Efficient - Proper memory management in native layer
- Quality Presets - Balanced quality/performance options
Run the included demo to test cursor tracking:
node cursor-test.js
This will:
- Start cursor tracking for 5 seconds
- Capture mouse movements and clicks
- Save data to cursor-data.json
- Report clicks detected
If recording fails, check macOS permissions:
# Open System Preferences > Security & Privacy > Screen Recording
# Ensure your app/terminal has permission
# Reinstall with verbose output
npm install node-mac-recorder --verbose
# Clear npm cache
npm cache clean --force
# Ensure Xcode tools are installed
xcode-select --install
- Empty/Black Video: Check screen recording permissions
- No Audio: Verify audio permissions and device availability
- Window Not Found: Ensure target window is visible and not minimized
- Coordinate Issues: Window may be on different display (handled automatically)
// Get module information
const info = recorder.getModuleInfo();
console.log("Module info:", info);
// Check recording status
const status = recorder.getStatus();
console.log("Recording status:", status);
// Verify permissions
const permissions = await recorder.checkPermissions();
console.log("Permissions:", permissions);- Recording Quality: Higher quality increases file size and CPU usage
- Frame Rate: 30fps recommended for most use cases, 60fps for smooth motion
- Audio: System audio capture adds minimal overhead
- Window Recording: Slightly more efficient than full-screen recording
- Multi-Display: No significant performance impact
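As a rough starting point that reflects these recommendations (the values are only suggestions, not package defaults):
// Balanced settings for typical use: medium quality, 30 fps, system audio only.
await recorder.startRecording("./balanced.mov", {
  quality: "medium",
  frameRate: 30,
  includeSystemAudio: true,
  includeMicrophone: false,
});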
- Output Format: MOV (QuickTime)
- Video Codec: H.264
- Audio Codec: AAC
- Container: QuickTime compatible
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
MIT License - see LICENSE file for details.
- Cursor Tracking: Track mouse position, cursor types, and click events with JSON export
- Window Recording: Automatic coordinate conversion for multi-display setups
- Audio Controls: Separate microphone and system audio controls
- Display Selection: Multi-monitor support with automatic detection
- Smart Filtering: Improved window detection and filtering
- Performance: Optimized native implementation
Made for macOS | Built with AVFoundation | Node.js Ready
Permissions: Ensure your host application's Info.plist declares camera and microphone usage descriptions (see MACOS_PERMISSIONS.md).
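For reference, the camera and microphone usage-description keys in Info.plist look roughly like this (the description strings are placeholders; consult MACOS_PERMISSIONS.md for the full set your app needs):
<!-- Placeholder descriptions; adjust the wording for your app. -->
<key>NSCameraUsageDescription</key>
<string>This app records camera video alongside screen recordings.</string>
<key>NSMicrophoneUsageDescription</key>
<string>This app records microphone audio during screen recordings.</string>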