
Turn your iPhone into a professional spatial capture tool. Stream depth, color, and point cloud data over the network in real time — no expensive hardware required.

Capture Modes
Switch between modes in real time. Every mode leverages your iPhone's LiDAR sensor and full camera array.

Live RGB camera feed at 60 FPS with real-time LiDAR depth fusion. What you see is what you capture.

High-contrast grayscale feed — ideal for low-light environments and precision spatial scanning.

LiDAR depth visualization with 9 selectable colormaps including thermal, incandescent, and deep sea.

Real-time 3D point cloud with true RGB colors. Configurable frame window and point density up to 12,500 points per frame.
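Each point cloud frame is ultimately a list of XYZ positions with RGB colors. As a rough illustration of what one captured frame looks like when serialized, here is a minimal sketch that writes such points as an ASCII PLY file, the same open format LOTA's standalone export uses. The function name and tuple layout are illustrative, not LOTA's actual API:

```python
def write_ply(path, points):
    """Write an ASCII PLY file of (x, y, z, r, g, b) tuples.

    Coordinates are floats (meters); colors are 0-255 integers.
    This mirrors the standard PLY vertex layout that Blender,
    CloudCompare, and MeshLab all read.
    """
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")
```

A single frame at full density would be up to 12,500 such lines; any PLY-aware tool can open the result directly.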

Network Streaming
Send LiDAR depth, color, point cloud, and camera tracking data over the network in real time. Four protocols, all independently configurable, all running simultaneously.

In Action
Stream live point cloud data from your iPhone straight into TouchDesigner. No capture cards, no expensive rigs — just your phone and a Wi-Fi connection.

LOTA in Use
One iPhone. A Wi-Fi connection. Venue-scale depth visuals projected in real time. No extra hardware required.


ARKit Integration
Stream real-time skeleton data, facial blend shapes, and hand landmarks from ARKit and Vision straight into TouchDesigner, Max/MSP, or any OSC receiver. 91 body joints. 52 face blend shapes. 21 hand landmarks per hand with 2D or 3D coordinates. Zero extra hardware.
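Because the stream is plain OSC, any environment that can read OSC packets can consume it, not just TouchDesigner or Max/MSP. As a sketch, here is a minimal stdlib-only parser for a single OSC message supporting int32, float32, and string arguments; the address in the usage example is hypothetical, not LOTA's actual address scheme:

```python
import struct

def _read_padded_string(buf, offset):
    """Read a null-terminated OSC string padded to a 4-byte boundary."""
    end = buf.index(b"\x00", offset)
    text = buf[offset:end].decode("ascii")
    offset = end + 1
    offset += (-offset) % 4  # skip the remaining padding nulls
    return text, offset

def parse_osc_message(buf):
    """Parse one OSC message into (address, args).

    Handles int32 ('i'), float32 ('f'), and string ('s') type tags,
    which cover typical joint/landmark value streams.
    """
    address, offset = _read_padded_string(buf, 0)
    typetags, offset = _read_padded_string(buf, offset)
    args = []
    for tag in typetags.lstrip(","):
        if tag == "i":
            (value,) = struct.unpack_from(">i", buf, offset)
            offset += 4
        elif tag == "f":
            (value,) = struct.unpack_from(">f", buf, offset)
            offset += 4
        elif tag == "s":
            value, offset = _read_padded_string(buf, offset)
        else:
            raise ValueError(f"unsupported type tag: {tag}")
        args.append(value)
    return address, args
```

Feeding in a UDP datagram such as a hypothetical `/hand/0/wrist` message with three float32 coordinates yields the address plus a `[x, y, z]` list, ready to drive a patch or scene.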

3D Export
Capture posed camera frames with ARKit intrinsics, extrinsics, and LiDAR point clouds. Export COLMAP-compatible binary datasets ready for training with OpenSplat, Nerfstudio, or gsplat — directly from your iPhone.
COLMAP-compatible export — cameras.bin, images.bin, points3D.bin
Standalone PLY export — unlimited point accumulation across your entire session
iCloud sync — pick an export folder and captures sync automatically
Compatible everywhere — Blender, CloudCompare, MeshLab, and any PLY tool
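COLMAP's binary output format is publicly documented, so exported datasets can be inspected with a few lines of code before training. Here is a minimal sketch of reading cameras.bin (COLMAP writes little-endian); restricting it to three camera models is an assumption for brevity:

```python
import struct
from io import BytesIO

# Parameter counts for the COLMAP camera models handled in this sketch.
NUM_PARAMS = {0: 3,   # SIMPLE_PINHOLE: f, cx, cy
              1: 4,   # PINHOLE: fx, fy, cx, cy
              4: 8}   # OPENCV: fx, fy, cx, cy, k1, k2, p1, p2

def read_cameras_bin(fp):
    """Read a COLMAP cameras.bin stream into {camera_id: camera dict}."""
    cameras = {}
    (num_cameras,) = struct.unpack("<Q", fp.read(8))
    for _ in range(num_cameras):
        cam_id, model_id, width, height = struct.unpack("<iiQQ", fp.read(24))
        n = NUM_PARAMS[model_id]
        params = struct.unpack("<" + "d" * n, fp.read(8 * n))
        cameras[cam_id] = {"model_id": model_id, "width": width,
                           "height": height, "params": params}
    return cameras
```

The same record layout (count header, then fixed-size fields plus a model-dependent parameter array) also underlies images.bin and points3D.bin, so sanity-checking an export before handing it to OpenSplat or Nerfstudio is straightforward.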

Accessibility
Professional spatial capture shouldn't require perfect vision, hearing, or motor control. LOTA is designed to meet Apple's App Store accessibility standards, so every feature works for every user.
Every control is labeled and announced. Mode switches, streaming state, and recording status are all spoken aloud, so you never have to guess what’s happening.
Say "stream", "record", or "switch to Depth" and LOTA responds. Every button is discoverable by voice with natural alternative commands.
Text scales to 200% and beyond. Every font uses semantic Dynamic Type styles, and the UI reflows to a grid layout at the largest sizes so nothing gets cut off.
Designed dark from the start. Every screen, menu, and control uses a true dark color scheme for comfortable use in any environment.
Status indicators swap to distinct symbols when Differentiate Without Color is enabled. Shapes, icons, and text labels replace color as the sole differentiator, so nothing is lost.
Increase Contrast swaps blur materials for solid backgrounds and boosts status colors for guaranteed readability over any camera feed.
All transitions respect the system Reduce Motion setting. Visual feedback stays, decorative animation goes.

Stay in the Loop
Be the first to know when LOTA launches.