Quickstart Guide

Deploy your first Physical AI system on make87 in under 5 minutes.

Prerequisites

Before starting, ensure you have access to a Linux machine that meets these requirements:

Hardware Prerequisites

  • Operating System: Any 64-bit Linux distribution (Ubuntu, Debian, Raspberry Pi OS recommended)
  • Architecture: x86-64 (amd64) or ARM64 (aarch64)
  • Connectivity: Internet connection for setup and management
  • Permissions: sudo access for node installation

Suitable Hardware:

  • Your laptop or desktop PC running Linux
  • Raspberry Pi 4/5 with 64-bit OS
  • NVIDIA Jetson devices
  • Cloud VMs (AWS EC2, Google Compute, Azure)
  • Industrial PCs
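
Before registering a node, you can sanity-check the prerequisites from the target machine. A minimal sketch in Python (it checks only the architecture and cURL availability, not sudo access):

```python
import platform
import shutil

# Architecture must be 64-bit: x86_64 (amd64) or aarch64 (ARM64)
arch = platform.machine()
print("architecture:", arch, "->",
      "OK" if arch in ("x86_64", "aarch64") else "unsupported")

# cURL is needed to run the installation command from the dashboard
curl_path = shutil.which("curl")
print("curl:", curl_path or "not found - install it with your package manager")
```

If the architecture line reports "unsupported", pick a different machine before continuing; the node client only ships for the two 64-bit targets listed above.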

Step 1: Set Up Your First Node

A Node provides compute resources for running applications. Follow these steps to register your first self-hosted node:

Get Installation Command

  1. Go to the make87 Dashboard
  2. Select Node in the top navigation
  3. Select yourself as the owner above the node section
  4. Select Import to the right of the search bar
  5. Request the install command for one of the two options in the Import Self-Hosted Node pop-up window

    • CPU Client - Standard installation (recommended for most users)
    • GPU Client - Only if you have an NVIDIA GPU and want GPU acceleration inside containers
  6. Copy the installation command (contains your unique registration token) and follow the steps in section Install Node Client

Keep Your Token Secure

The installation command contains a unique token. Don't share it with others.

Install Node Client

  1. Open a terminal on your target machine, or SSH into it (make sure cURL is installed)
  2. Paste and run the installation command
  3. Enter your password when prompted for sudo permissions
  4. Wait for the installation to complete (~1 minute), then follow the steps in section Approve Node Registration

Approve Node Registration

  1. A notification about a node authentication request appears in the top navigation
  2. Select the notification and follow the approval flow
  3. Once your device is connected, close the window
  4. Your node appears in the Nodes section once approved

Animation showing node installation process

Having issues? See Node Management for detailed installation instructions and troubleshooting.

Step 2: Deploy the Quickstart System Template

Now let's deploy a complete AI system using make87's quickstart template. This template includes:

  • Virtual Camera: Streams video from a sidewalk recording that continuously loops for testing
  • Face Detection AI: Detects faces in real-time using YuNet model
  • Image Processing: Raw-to-JPEG conversion for efficient data transfer
  • Logging System: Vector and Rerun Viewer for comprehensive monitoring
  • Message Shippers: Routes data to the logging system

Create System from Template

  1. Select System Templates alt text on the top navigation
  2. Find and select the make87-quickstart template to open it in the System Designer
  3. Select Deploy on the left sidebar
  4. Give your system a descriptive name (e.g., "My First AI System")
  5. Select Create System

You are redirected to the System Designer where you can see all available applications.

Step-by-Step Deployment

We'll deploy this system incrementally to understand how each component works:

Step 1: Basic Face Detection

Let's start with the core functionality:

  1. Deploy Virtual Camera:

    • Find the virtual-camera application
    • Select Add node button and select your registered node
    • Select the ▶ (play) button to start the application
  2. Deploy Face Detection:

    • Find the face-detection-yunet application
    • Select Add node and select your registered node
    • Select the ▶ (play) button to start the application
  3. View Text Logs:

    • Go to Logs in the sidebar
    • Open the face detection app tab to see text logs of detected faces
    • The virtual-camera is preconfigured with a sidewalk recording that continuously loops
Deploying virtual-camera app
Opening simple built-in logs

Step 2: Advanced Logging with Rerun

Now let's add comprehensive logging capabilities:

  1. Deploy Logging Infrastructure:

    • Find the vector application and deploy it to your node
    • Find the rerun-viewer application and deploy it to your node
    • Select the ▶ (play) button to start both applications
  2. Access Rerun Viewer:

    • After both applications have started, select Rerun Viewer under "Dev UIs" in the right sidebar
    • This opens an embedded Rerun viewer containing all text logs, including the face detection results
Opening the Rerun Viewer from sidebar

Step 3: Visual Stream Logging

Let's add image streaming to our logging system:

  1. Deploy Image Processing:

    • Find the raw-to-jpeg application and deploy it to your node
    • Find the make87-messages-shipper connected to raw-to-jpeg and deploy it to your node
    • Start both applications
  2. Understanding Image Conversion:

    • The raw-to-jpeg conversion reduces data transfer by 10-20x (from 5-10 MB per frame to 300-800 KB)
    • This prevents network bandwidth saturation and enables efficient distributed processing
  3. View Video Stream:

    • Return to the Rerun Viewer
    • You should now see the sidewalk scene streaming into the viewer
    • Notice the new time sequence called header_time - this is the timestamp that virtual-camera puts on each frame
    • This differs from log_time which may be delayed due to processing times
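
The distinction between the two timestamps can be sketched in plain Python (field and function names here are illustrative, not make87's actual message schema):

```python
import time
from dataclasses import dataclass

@dataclass
class Frame:
    header_time: float  # stamped by the producer (virtual-camera) at capture
    log_time: float     # stamped when the logging system receives the record

def capture() -> Frame:
    # The producer stamps the frame the moment it is created.
    return Frame(header_time=time.time(), log_time=0.0)

def ship(frame: Frame) -> Frame:
    time.sleep(0.05)  # stand-in for JPEG conversion + network transfer
    frame.log_time = time.time()
    return frame

frame = ship(capture())
latency = frame.log_time - frame.header_time
print(f"pipeline latency: {latency * 1000:.0f} ms")
```

This is why header_time is the right timeline for analyzing what the camera saw and when, while log_time only tells you when the record reached the logging system.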
Shipping images to Rerun Viewer
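
The 10-20x figure from the conversion step above is easy to sanity-check with back-of-envelope arithmetic (the frame resolution and JPEG size below are illustrative assumptions, not measured values from the template):

```python
# Raw frame: 1920x1080 pixels, 3 bytes per pixel (RGB) -- an assumed resolution
raw_bytes = 1920 * 1080 * 3          # ~6.2 MB per frame
jpeg_bytes = 400 * 1024              # an assumed ~400 KB JPEG of the same frame

ratio = raw_bytes / jpeg_bytes
print(f"raw:  {raw_bytes / 1e6:.1f} MB")
print(f"jpeg: {jpeg_bytes / 1e3:.0f} KB")
print(f"compression ratio: ~{ratio:.0f}x")   # falls in the 10-20x range
```

At 30 fps, raw frames would require roughly 1.5 Gbit/s of bandwidth, while JPEG stays under 100 Mbit/s; that is why the conversion sits in front of the image shipper.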

Step 4: Complete Detection Visualization

Finally, let's visualize the AI detection results:

  1. Deploy Detection Logging:

    • Find the make87-messages-shipper connected to the face detection app
    • Deploy it to your node and start the application
  2. View Detection Results:

    • Return to the Rerun Viewer
    • Wait for bounding boxes to appear over detected faces in the video stream
    • You now have a complete AI system with real-time visual feedback

🎉 Congratulations!

You've successfully deployed your first Physical AI system on make87! Along the way you built the system incrementally and used modular logging to gain insight into each component. Your system now demonstrates:

  • ✅ Real-time video processing with the virtual camera streaming sidewalk footage
  • ✅ AI inference at the edge with face detection running on your node
  • ✅ Flexible data routing with message shippers connecting components
  • ✅ Multi-modal logging combining text logs and visual streams
  • ✅ Scalable architecture ready for production deployment

Understanding What You Built

System Architecture

```mermaid
graph LR
    subgraph "Your Node"
        subgraph "Core Applications"
            A[Virtual Camera] --> B[Face Detection AI]
            A --> C[Raw-to-JPEG]
        end

        subgraph "Log Shipping"
            D[Detection Shipper]
            E[Image Shipper]
            F[Vector Logger]
        end

        subgraph "Log Viewing"
            G[Rerun Viewer]
        end

        B --> D
        C --> E
        D --> F
        E --> F
        F --> G
    end
```

Data Flow:

  1. Virtual camera streams video frames to face detection and image processing
  2. Face detection AI processes frames and outputs detection results
  3. Raw-to-JPEG converts images for efficient transfer
  4. Message shippers route data to the logging system
  5. Vector aggregates all logs and feeds them to Rerun Viewer
  6. Rerun Viewer provides real-time visualization of the entire system
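
The data flow above can be sketched as a chain of plain functions (all names are illustrative stand-ins, not the actual make87 APIs):

```python
# A toy end-to-end pass through the pipeline, one "frame" at a time.
def virtual_camera(i):
    return {"frame_id": i, "pixels": "raw-frame-bytes"}

def face_detection(frame):
    # Stand-in for the YuNet model: one fake bounding box per frame
    return {"frame_id": frame["frame_id"], "faces": [(10, 20, 64, 64)]}

def raw_to_jpeg(frame):
    return {"frame_id": frame["frame_id"], "jpeg": "compressed-bytes"}

def shipper(record, sink):
    sink.append(record)   # message shippers just route records onward

def vector_aggregate(sink):
    return list(sink)     # Vector collects every shipped record

logs = []
for i in range(3):
    frame = virtual_camera(i)
    shipper(face_detection(frame), logs)   # detection shipper
    shipper(raw_to_jpeg(frame), logs)      # image shipper

aggregated = vector_aggregate(logs)        # this is what feeds the Rerun Viewer
print(len(aggregated), "records aggregated")  # 6 records: 3 detections + 3 images
```

Each function maps to one independently deployable application in the template, which is why you could start, inspect, and reason about them one at a time in the steps above.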

Key Concepts Demonstrated

  • Modular Applications: Each component serves a specific purpose and can be deployed independently
  • Efficient Data Processing: Raw-to-JPEG conversion optimizes bandwidth usage
  • Multi-Stream Logging: Separate channels for text logs and image data with different timestamps
  • Real-time Visualization: Comprehensive monitoring through Rerun Viewer
  • Incremental Deployment: Building complex systems step-by-step for better understanding