LOTA app icon

LiDAR Over the Air

Turn your iPhone into a professional spatial capture tool. Stream depth, color, and point cloud data over the network in real time, with no expensive hardware required.

Join the Beta · Coming Soon on the App Store · See what it does

The hardware you already have

Three pages. Stream, track, record.

Swipe between live camera streaming, ARKit motion capture, and 3D dataset recording. Each page hosts its own set of modes, tapping a different combination of sensors your iPhone already ships with: LiDAR, cameras, IMU, and microphone.

Networking

Stream over your local network.

All enabled transports send simultaneously, over the same Wi-Fi network or a wired USB personal-hotspot connection.

  • NDI: Auto-discovered as LOTA - <label> in TouchDesigner, OBS, vMix, Resolume. Per-device labels keep multi-phone rigs distinguishable.
  • TCP / UDP: H.264 video for color and mono modes; raw Float32 depth maps for depth, point cloud, and blob modes.
  • OSC: Camera pose, 18-joint body skeleton, 52 face blend shapes, 21 per-hand landmarks, motion sensors, audio levels, beats, and a 20-band FFT.
  • PLY: Live point cloud as packed binary, 15 bytes per point (a parsing sketch follows this list). Drag-and-drop TouchDesigner receiver included.
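
The record layout isn't spelled out above, but 15 bytes per point is consistent with three float32 coordinates plus three uint8 color channels. A minimal Python parsing sketch under that assumption:

```python
import struct

POINT_FMT = "<3f3B"                      # assumed layout: x, y, z float32 + r, g, b uint8
POINT_SIZE = struct.calcsize(POINT_FMT)  # = 15 bytes

def parse_points(payload: bytes):
    """Yield (x, y, z, r, g, b) tuples from a packed point buffer."""
    for off in range(0, len(payload) - POINT_SIZE + 1, POINT_SIZE):
        yield struct.unpack_from(POINT_FMT, payload, off)
```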

Capture mode

Mono.

A clean grayscale feed at 60 fps. Strips chromatic noise from the camera so depth visualizations and tracking overlays read cleaner downstream. Works on every iPhone.

Depth

Two depth pipelines.

Same nine colormaps. Different sources for different phones.

  • LiDAR Depth: Hardware time-of-flight on iPhone Pro models. Cleanest signal under controlled lighting; reliable to ~5 m on Gen 1 (12–14 Pro), ~10 m on Gen 2 (15–16 Pro).
  • Neural Depth: Depth Anything V2 Small running on the Apple Neural Engine at ~30 ms/frame. Works on every iPhone, no LiDAR required. Picks up where the sensor struggles: bright sun, very long range, glass, mirrors.

Settings

Both modes share a Color Map picker: Black & White, Blue Red, Deep Sea, Heated Metal, Visible Spectrum, and four more presets. Open via the <Mode> Settings button below the status bar. Neural Depth Settings also surfaces an About the Model section crediting ByteDance Research and Apple, confirming inference runs fully on-device.

Capture mode

Point Cloud.

Real-time 3D points with true RGB colors sampled from the camera. The full 256×192 LiDAR depth map is unprojected on the GPU each frame and streamed live to TouchDesigner over PLY at 60 fps.
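
The unprojection itself is the standard pinhole inverse: each depth pixel, combined with the camera intrinsics, becomes a camera-space point. A numpy sketch of the same math on the CPU (the intrinsics below are placeholders, not the device's real calibration):

```python
import numpy as np

def unproject(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Turn an HxW depth map in meters into an (H*W, 3) array of camera-space points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# The 256x192 LiDAR map with placeholder intrinsics:
points = unproject(np.full((192, 256), 2.0, dtype=np.float32),
                   fx=212.0, fy=212.0, cx=128.0, cy=96.0)
```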

Capture mode

Blob Track.

Replaces a Kinect + Blob Track TOP chain with a single iPhone. Carves a configurable depth slab out of the LiDAR scan, finds connected regions, assigns each a stable ID across frames, and streams the result as OSC with field names matching TouchDesigner's blobtrackTOP_Class verbatim.
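
A minimal sketch of the slab-and-label stage using scipy's connected-component labeling; the cross-frame ID matching and the exact blobtrackTOP_Class field names are left out:

```python
import numpy as np
from scipy import ndimage

def find_blobs(depth: np.ndarray, near: float, far: float, min_px: int = 50):
    """Mask a depth slab, label connected regions, return (label, centroid) pairs."""
    mask = (depth >= near) & (depth <= far)   # carve the slab
    labels, count = ndimage.label(mask)       # connected components
    blobs = []
    for i in range(1, count + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size >= min_px:                 # min-size constraint
            blobs.append((i, (xs.mean(), ys.mean())))
    return blobs
```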

Settings

Four subsections mirror TouchDesigner's parameter pages: Detection (Color / Mono / Mask / Binary base styles, ID labels, outline color), Depth (slab range, confidence filter), Constraints (min / max blob size, max move distance), and Revival (re-identify lost blobs by time and distance).

Capture mode

Transcription.

Live on-device speech-to-text. No internet, no cloud.

Built on iOS 26's SpeechAnalyzer, running entirely on the phone. The first-launch flow prefetches the speech model for your locale, so captions appear instantly the first time you open the mode. A 200-bar mirrored waveform replaces the camera feed, recognized words show as live captions, and the whole composition streams through NDI as a black-and-white video source.

  • /lota/speech/word: per-word, trigger-friendly.
  • /lota/speech/partial: running transcript as words arrive.
  • /lota/speech/final: finalized sentences when the recognizer commits.
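
Outside TouchDesigner, any OSC library can listen in. A sketch using the python-osc package (the port is a placeholder; use whatever LOTA is configured to send to):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

dispatcher = Dispatcher()
dispatcher.map("/lota/speech/word", lambda addr, *args: print("word:", *args))
dispatcher.map("/lota/speech/final", lambda addr, *args: print("final:", *args))

BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()  # placeholder port
```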

Settings

Three toggles in Transcription Settings (Send Per-Word, Send Partial, Send Final) apply simultaneously to OSC and TCP/UDP transports. Drop-in TouchDesigner components LOTASpeechTCP and LOTASpeechUDP parse the binary frame format automatically.

Capture mode

Motion.

Phone as a wireless motion-sensor OSC source. Reads the device IMU and streams every value as a per-axis OSC channel, so each axis maps 1:1 to a TouchDesigner CHOP channel. Each active sensor renders as a scrolling line-graph lane, and the whole composition streams through NDI.

  • /lota/motion/accel/x,y,z: gravity-removed acceleration, G-force.
  • /lota/motion/gyro/x,y,z: rotation rate, rad/s.
  • /lota/motion/heading: compass heading, 0–360°.
  • /lota/motion/pressure, /altitude: barometric kPa + relative meters.
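
Because the mapping is per-axis, a generic handler can mirror the whole sensor set as named channels, much like they appear in a CHOP. Another python-osc sketch (port again a placeholder):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

channels = {}  # e.g. channels["/lota/motion/accel/x"] -> latest float

def on_message(address, *args):
    if address.startswith("/lota/motion/") and args:
        channels[address] = args[0]

dispatcher = Dispatcher()
dispatcher.set_default_handler(on_message)
BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()  # placeholder port
```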

Settings

Toggle each sensor independently in Motion Settings: Acceleration on by default; Gyroscope, Compass, and Barometric Pressure off. The Update Rate picker offers 30 / 60 / 100 Hz; 30 Hz matches ARKit, 100 Hz is for latency-critical controllers.

Capture mode

Audio.

Real-time on-device audio analysis as OSC. Taps the mic via AVAudioEngine and runs DSP using Apple's Accelerate/vDSP framework, with no third-party dependencies. Active channels render as scrolling graph lanes and stream through NDI.

  • Levels /lota/audio/{bass,mid,high}: continuous 0–1 energy per band, rolling auto-gain.
  • Beats /lota/audio/drums/{low,mid,high}: binary 0/1 onset triggers, drop straight into a TD Trigger CHOP.
  • Dynamics /lota/audio/burst: fast transient detector, pulse-shaped 0–1.
  • FFT /lota/audio/fft/0 through /19: 20 log-spaced bands, red→blue gradient.
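
Log-spaced banding is standard DSP; the sketch below shows one way to reduce an FFT frame to 20 such bands. The band edges are illustrative, and while the app's ~86 Hz internal analysis rate is consistent with 512-sample hops at 44.1 kHz, its exact parameters aren't stated:

```python
import numpy as np

def fft_bands(frame: np.ndarray, sr: int = 44100, n_bands: int = 20) -> np.ndarray:
    """Collapse one audio frame into n_bands log-spaced magnitude bands."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    edges = np.geomspace(40.0, 16000.0, n_bands + 1)  # illustrative band edges
    bands = np.zeros(n_bands)
    for i in range(n_bands):
        sel = (freqs >= edges[i]) & (freqs < edges[i + 1])
        if sel.any():
            bands[i] = spec[sel].mean()
    return bands
```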

Settings

Toggle each group independently in Audio Settings: Levels on by default; Beats, Dynamics, and FFT off. The Update Rate picker offers 30 / 60 Hz, decimated from the internal ~86 Hz analysis rate.

Swipe left

ARKit Tracking.

Three motion-capture modes, all OSC.

  • Body: 18 key joints from the 91-joint ARKit skeleton on /lota/body/skeleton.
  • Face: 52 named blend shapes via TrueDepth, each at /lota/face/<name>, bundled in one UDP datagram.
  • Hands: 21 landmarks per hand, up to 2 hands, organized by finger at /lota/hand/{left|right}/{finger}.

Settings

Per-mode Settings sheets toggle each overlay (Skeleton, Face, Hand), and 3D Hand Coordinates opts in to LiDAR-projected world-space hand positions on Pro phones.

Swipe right

Gaussian Capture.

Record datasets, materials, and traces.

  • 3D scenes: COLMAP, Nerfstudio, and Nerfstudio + Depth datasets for OpenSplat, gsplat, splatfacto, Instant-NGP. Optional 16-bit PNG depth maps for depth-supervised training.
  • Material: Single-shot PBR map set as a ZIP (basecolor, normal, 16-bit height, AO, roughness, preview, manifest). Drops into Substance, Blender, Unreal, Unity, TouchDesigner. Requires LiDAR.
  • IMU / Audio Trace: Sensor or audio analysis frames over time as CSV / TSV, plus manifest.json and an optional dark-theme PDF report. For Jupyter (see the sketch after this list), Ableton MIDI, and sensor-fusion training sets.
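
For the Jupyter case, a trace drops straight into pandas. The file and column names below are hypothetical; manifest.json documents the real schema:

```python
import pandas as pd

df = pd.read_csv("trace.csv")  # hypothetical filename from an exported trace
df.plot(x="timestamp", y=["accel_x", "accel_y", "accel_z"])  # hypothetical columns
```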

Settings

Per-format Settings sheets: Material (output resolution, AO sample count, OpenGL/DirectX normal convention, delight + roughness scale sliders); and Trace (channel selection, CSV/TSV format, optional PDF report, auto-stop duration cap 5–600 s).

Stay in the Loop

Get Development Updates

Be the first to know when LOTA launches.