Quickstart Guide
Deploy your first Physical AI system on make87 in under 5 minutes.
Prerequisites
Before starting, ensure you have access to a Linux machine that meets these requirements:
Hardware Prerequisites
- Operating System: Any 64-bit Linux distribution (Ubuntu, Debian, Raspberry Pi OS recommended)
- Architecture: x86-64 (amd64) or ARM64 (aarch64)
- Connectivity: Internet connection for setup and management
- Permissions: sudo access for node installation
Suitable Hardware:
- Your laptop or desktop PC running Linux
- Raspberry Pi 4/5 with 64-bit OS
- NVIDIA Jetson devices
- Cloud VMs (AWS EC2, Google Compute, Azure)
- Industrial PCs
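If you want a quick sanity check of a machine against the requirements above before installing, a minimal Python sketch like the following can help. It uses only the standard library and is not part of the make87 tooling; treat it as an optional convenience:

```python
import platform
import shutil

# Quick sanity check against the prerequisites listed above.
# Standard library only; this is NOT an official make87 preflight tool.
checks = {
    "64-bit Linux": platform.system() == "Linux"
        and platform.machine() in ("x86_64", "amd64", "aarch64", "arm64"),
    "curl installed (needed to run the install command)": shutil.which("curl") is not None,
    "sudo available (needed for node installation)": shutil.which("sudo") is not None,
}

for name, ok in checks.items():
    print(f"{'OK     ' if ok else 'MISSING'}  {name}")
```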
Step 1: Set Up Your First Node
A Node provides compute resources for running applications. Follow these steps to register your first self-hosted node:
Get Installation Command
- Go to the make87 Dashboard
- Select Node in the top navigation
- Select yourself as the owner above the node section
- Select Import to the right of the search bar
- Request an install command for one of the two options in the Import Self-Hosted Node pop-up window:
  - CPU Client - Standard installation (recommended for most users)
  - GPU Client - Only if you have an NVIDIA GPU and want container GPU acceleration
- Copy the installation command (it contains your unique registration token) and follow the steps in section Install Node Client
Keep Your Token Secure
The installation command contains a unique token. Don't share it with others.
Install Node Client
- Open a terminal on your target machine or SSH into it (make sure cURL is installed)
- Paste and run the installation command
- Enter your password when prompted for sudo permissions
- Wait for the installation to complete (~1 minute), then follow the steps in section Approve Node Registration
Approve Node Registration
- A notification about a node authentication request appears in the top navigation
- Select the notification and follow the approval flow
- Once your device is connected, close this window
- Your Node appears in the Nodes section when approved
Having issues? See Node Management for detailed installation instructions and troubleshooting.
Step 2: Deploy the Quickstart System Template
Now let's deploy a complete AI system using make87's quickstart template. This template includes:
- Virtual Camera: Streams video from a sidewalk recording that continuously loops for testing
- Face Detection AI: Detects faces in real-time using YuNet model
- Image Processing: Raw-to-JPEG conversion for efficient data transfer
- Logging System: Vector and Rerun Viewer for comprehensive monitoring
- Message Shippers: Routes data to the logging system
Create System from Template
- Select System Templates in the top navigation
- Find and select the make87-quickstart template to open it in the System Designer
- Select Deploy in the left sidebar
- Give your system a descriptive name (e.g., "My First AI System")
- Select Create System
You are redirected to the System Designer where you can see all available applications.
Step-by-Step Deployment
We'll deploy this system incrementally to understand how each component works:
Step 1: Basic Face Detection
Let's start with the core functionality:
- Deploy Virtual Camera:
  - Find the virtual-camera application
  - Select the Add node button and select your registered node
  - Select the ▶ (play) button to start the application
- Deploy Face Detection:
  - Find the face-detection-yunet application
  - Select Add node and select your registered node
  - Select the ▶ (play) button to start the application
- View Text Logs:
  - Go to Logs in the sidebar
  - Open the face detection app tab to see text logs of detected faces
  - The virtual-camera is preconfigured with a sidewalk recording that continuously loops
  - If you're curious what the detection app does conceptually, see the sketch after this list
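You don't need to write any code for this step, but the face-detection-yunet app is built around the YuNet model, which OpenCV ships support for. The following standalone sketch shows roughly what that detection step looks like; the model weights path and input image are placeholders, and the deployed application's internals (message handling, configuration) are not shown:

```python
import cv2

# Minimal YuNet face detection sketch (requires a recent opencv-python build
# that includes cv2.FaceDetectorYN). Paths below are placeholders: download the
# YuNet ONNX weights from the OpenCV model zoo and point to any test image.
detector = cv2.FaceDetectorYN.create(
    "face_detection_yunet_2023mar.onnx",  # YuNet model weights (placeholder path)
    "",                                    # no extra config file needed
    (320, 320),                            # initial input size, reset per frame below
    0.6,                                   # confidence score threshold
)

frame = cv2.imread("test_frame.jpg")       # placeholder input image
h, w = frame.shape[:2]
detector.setInputSize((w, h))

_, faces = detector.detect(frame)          # faces is an Nx15 array, or None
for face in (faces if faces is not None else []):
    x, y, bw, bh = face[:4].astype(int)
    score = face[-1]
    print(f"face at ({x}, {y}), size {bw}x{bh}, score {score:.2f}")
```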
Step 2: Advanced Logging with Rerun
Now let's add comprehensive logging capabilities:
- Deploy Logging Infrastructure:
  - Find the vector application and deploy it to your node
  - Find the rerun-viewer application and deploy it to your node
  - Select the ▶ (play) button to start both applications
- Access Rerun Viewer:
  - After both applications have started, select Rerun Viewer under "Dev UIs" in the right sidebar
  - This opens an embedded Rerun viewer containing all text logs, including the face detection results (the sketch after this list shows roughly how such entries are produced)
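For context, the entries you see in the embedded viewer are ordinary Rerun text logs. In the deployed system they travel through the message shippers and vector before reaching rerun-viewer, but the standalone sketch below (using the open-source rerun-sdk Python package, whose API names may vary slightly between versions) illustrates what emitting such logs looks like in principle:

```python
import rerun as rr

# Standalone sketch: emit a few text log entries and view them in a local
# Rerun viewer. This is NOT how the make87 apps ship logs (those go through
# the message shippers and vector); it only illustrates the log format.
rr.init("quickstart_text_log_demo", spawn=True)  # spawn=True opens a local viewer

for frame_idx in range(3):
    rr.log(
        "logs/face_detection",
        rr.TextLog(f"frame {frame_idx}: detected {frame_idx} face(s)",
                   level=rr.TextLogLevel.INFO),
    )
```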
Step 3: Visual Stream Logging
Let's add image streaming to our logging system:
- Deploy Image Processing:
  - Find the raw-to-jpeg application and deploy it to your node
  - Find the make87-messages-shipper connected to raw-to-jpeg and deploy it to your node
  - Start both applications
- Understanding Image Conversion:
  - The raw-to-jpeg conversion reduces data transfer by 10-20x (from 5-10 MB per frame to 300-800 KB)
  - This prevents network bandwidth saturation and enables efficient distributed processing (see the first sketch after this list)
- View Video Stream:
  - Return to the Rerun Viewer
  - You should now see the sidewalk scene streaming into the viewer
  - Notice the new time sequence called header_time - this is the timestamp that virtual-camera puts on each frame
  - This differs from log_time, which may be delayed due to processing times (see the second sketch after this list)
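Two optional sketches make the ideas in this step concrete. First, the bandwidth saving from raw-to-JPEG conversion: the example below uses a synthetic frame and OpenCV's encoder, so the exact ratio differs from the real sidewalk footage, but it shows where the order-of-magnitude reduction comes from:

```python
import cv2
import numpy as np

# Synthetic 1080p BGR frame standing in for one raw camera frame.
# Real footage has more detail, so it compresses less than this toy image.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
cv2.circle(frame, (960, 540), 300, (0, 180, 255), -1)

ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
assert ok

print(f"raw:  {frame.nbytes / 1e6:.1f} MB per frame")
print(f"jpeg: {jpeg.nbytes / 1e6:.2f} MB per frame "
      f"(~{frame.nbytes / jpeg.nbytes:.0f}x smaller)")
```

Second, the difference between header_time and log_time. In the standalone sketch below (again using the rerun-sdk Python package, whose timeline helpers may be named slightly differently depending on SDK version), header_time is set explicitly to the moment the frame was "captured", while log_time is stamped automatically by Rerun when the log call runs, so it lags by the processing delay:

```python
import time
import rerun as rr

rr.init("timestamp_demo", spawn=True)

capture_time = time.time()          # when the (virtual) camera produced the frame
time.sleep(0.2)                     # stand-in for conversion + shipping latency

# header_time: the producer's timestamp, set explicitly on a custom timeline.
rr.set_time_seconds("header_time", capture_time)
# log_time: recorded automatically by Rerun at the moment rr.log() runs,
# so it trails header_time by the simulated 0.2 s of processing above.
rr.log("camera/frame_event", rr.TextLog("frame logged ~0.2 s after capture"))
```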
Step 4: Complete Detection Visualization
Finally, let's visualize the AI detection results:
- Deploy Detection Logging:
  - Find the make87-messages-shipper connected to the face detection app
  - Deploy it to your node and start the application
- View Detection Results:
  - Return to the Rerun Viewer
  - Wait for bounding boxes to appear over detected faces in the video stream (the sketch after this list shows roughly how detections become boxes in the viewer)
  - You now have a complete AI system with real-time visual feedback
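If you're curious how detection results end up as overlays, the standalone sketch below logs an image plus 2D boxes with the rerun-sdk Python package. The frame and box coordinates here are made up for illustration; in the deployed system the message shipper forwards the real YuNet detections instead:

```python
import numpy as np
import rerun as rr

rr.init("detection_overlay_demo", spawn=True)

# Placeholder frame and two made-up face detections in (x, y, w, h) pixel coords.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
detections = np.array([[100, 120, 80, 80], [360, 200, 70, 70]], dtype=np.float32)

# Logging the boxes under the image's entity path overlays them in the viewer.
rr.log("camera/image", rr.Image(frame))
rr.log(
    "camera/image/faces",
    rr.Boxes2D(array=detections, array_format=rr.Box2DFormat.XYWH,
               labels=["face 0.91", "face 0.87"]),
)
```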
🎉 Congratulations!
You've successfully deployed your first Physical AI system on make87! You've learned how to deploy a useful system using modular logging techniques to gain insights. Your system now demonstrates:
- ✅ Real-time video processing with the virtual camera streaming sidewalk footage
- ✅ AI inference at the edge with face detection running on your node
- ✅ Flexible data routing with message shippers connecting components
- ✅ Multi-modal logging combining text logs and visual streams
- ✅ Scalable architecture ready for production deployment
Understanding What You Built
System Architecture
```mermaid
graph LR
    subgraph "Your Node"
        subgraph "Core Applications"
            A[Virtual Camera] --> B[Face Detection AI]
            A --> C[Raw-to-JPEG]
        end
        subgraph "Log Shipping"
            D[Detection Shipper]
            E[Image Shipper]
            F[Vector Logger]
        end
        subgraph "Log Viewing"
            G[Rerun Viewer]
        end
        B --> D
        C --> E
        D --> F
        E --> F
        F --> G
    end
```
Data Flow:
- Virtual camera streams video frames to face detection and image processing
- Face detection AI processes frames and outputs detection results
- Raw-to-JPEG converts images for efficient transfer
- Message shippers route data to the logging system
- Vector aggregates all logs and feeds them to Rerun Viewer
- Rerun Viewer provides real-time visualization of the entire system
Key Concepts Demonstrated
- Modular Applications: Each component serves a specific purpose and can be deployed independently
- Efficient Data Processing: Raw-to-JPEG conversion optimizes bandwidth usage
- Multi-Stream Logging: Separate channels for text logs and image data with different timestamps
- Real-time Visualization: Comprehensive monitoring through Rerun Viewer
- Incremental Deployment: Building complex systems step-by-step for better understanding