
Turn your iPhone into a professional spatial capture tool. Stream depth, color, and point cloud data over the network in real time, with no expensive hardware required.
The hardware you already have
Swipe between live camera streaming, ARKit motion capture, and 3D dataset recording. Each page hosts its own set of modes, tapping a different combination of sensors your iPhone already ships with: LiDAR, cameras, IMU, and microphone.
Networking
Every enabled transport sends simultaneously over the same Wi-Fi network or a wired USB hotspot.
LOTA - <label> in TouchDesigner, OBS, vMix, Resolume. Per-device labels keep multi-phone rigs distinguishable.
Float32 depth maps for depth, point cloud, and blob modes.
Capture mode
A clean grayscale feed at 60 fps. Strips chromatic noise from the camera so depth visualizations and tracking overlays read cleaner downstream. Works on every iPhone.
Depth
Same nine colormaps. Different sources for different phones.
Settings
Both modes share a Color Map picker: Black & White, Blue Red, Deep Sea, Heated Metal, Visible Spectrum, and four more presets. Open via the <Mode> Settings button below the status bar. Neural Depth Settings also surfaces an About the Model section crediting ByteDance Research and Apple, confirming inference runs fully on-device.
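Under the hood, a colormap preset is just a lookup table applied to a normalized Float32 depth map. A minimal sketch of the idea in Python, with matplotlib's built-in maps standing in for the nine presets (the normalization is an assumption, not the app's exact math):

    import numpy as np
    import matplotlib.pyplot as plt

    def colorize(depth, cmap_name="inferno"):
        # Normalize the Float32 depth map to 0-1, then push it through
        # a colormap lookup table -- the same idea as the Color Map presets.
        d = np.nan_to_num(depth)
        norm = (d - d.min()) / max(np.ptp(d), 1e-6)
        rgba = plt.get_cmap(cmap_name)(norm)           # H x W x 4, float 0-1
        return (rgba[..., :3] * 255).astype(np.uint8)  # drop alpha, 8-bit RGB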
Capture mode
Real-time 3D points with true RGB colors sampled from the camera. The full 256×192 LiDAR depth map is unprojected on the GPU each frame and streamed live to TouchDesigner as PLY at 60 fps.
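The unprojection itself is standard pinhole math. A CPU sketch in Python/numpy, assuming intrinsics fx, fy, cx, cy from the camera calibration (the app performs the equivalent per-frame on the GPU):

    import numpy as np

    def unproject(depth, fx, fy, cx, cy):
        # Lift a 256x192 depth map (meters) into 3D camera-space points.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx   # back-project along x
        y = (v - cy) * depth / fy   # back-project along y
        return np.dstack([x, y, depth]).reshape(-1, 3)  # N x 3 points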
Capture mode
Replaces a Kinect + Blob Track TOP chain with a single iPhone. Carves a configurable depth slab out of the LiDAR scan, finds connected regions, assigns each a stable ID across frames, and streams the result as OSC with field names matching TouchDesigner's blobtrackTOP_Class verbatim.
Settings
Four subsections mirror TouchDesigner's parameter pages: Detection (Color / Mono / Mask / Binary base styles, ID labels, outline color), Depth (slab range, confidence filter), Constraints (min / max blob size, max move distance), and Revival (re-identify lost blobs by time and distance).
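As a rough sketch of that pipeline (carve a slab, label connected regions, reuse IDs within a move limit), here is a CPU version using scipy; the real implementation runs on-device, and these names and the pixel-space move threshold are illustrative:

    import numpy as np
    from scipy import ndimage

    def track_blobs(depth, near, far, prev, next_id, max_move=30.0):
        # Carve the depth slab, then label connected regions inside it.
        mask = (depth > near) & (depth < far)
        labels, n = ndimage.label(mask)
        blobs = {}
        for c in ndimage.center_of_mass(mask, labels, range(1, n + 1)):
            c = np.asarray(c)
            # Reuse the nearest previous ID if it moved less than max_move px,
            # otherwise mint a fresh one -- keeps IDs stable across frames.
            match = min(prev.items(),
                        key=lambda kv: np.linalg.norm(kv[1] - c),
                        default=None)
            if match and np.linalg.norm(match[1] - c) < max_move:
                blobs[match[0]] = c
                prev = {k: v for k, v in prev.items() if k != match[0]}
            else:
                blobs[next_id], next_id = c, next_id + 1
        return blobs, next_id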
Capture mode
Live on-device speech-to-text. No internet, no cloud.
iOS 26's SpeechAnalyzer running on the phone. The first-launch flow prefetches the speech model for your locale, so captions appear instantly the first time you open the mode. A 200-bar mirrored waveform replaces the camera, recognized words show as live captions, and the whole composition streams through NDI as a black-and-white video source.
/lota/speech/word: per-word, trigger-friendly.
/lota/speech/partial: running transcript as words arrive.
/lota/speech/final: finalized sentences when the recognizer commits.
Settings
Three toggles in Transcription Settings (Send Per-Word, Send Partial, Send Final) apply simultaneously to OSC and TCP/UDP transports. Drop-in TouchDesigner components LOTASpeechTCP and LOTASpeechUDP parse the binary frame format automatically.
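Outside TouchDesigner, the same addresses are plain OSC. A minimal listener using the third-party python-osc package (the port is whatever you set in the app; 8000 here is an assumption):

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    dispatcher = Dispatcher()
    dispatcher.map("/lota/speech/word", lambda addr, *a: print("word:", *a))
    dispatcher.map("/lota/speech/partial", lambda addr, *a: print("partial:", *a))
    dispatcher.map("/lota/speech/final", lambda addr, *a: print("final:", *a))

    # Blocks forever, printing each message as the phone sends it.
    BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()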
Capture mode
Phone as a wireless motion-sensor OSC source. Reads the device IMU and streams every value as a per-axis OSC channel, so each sensor maps 1:1 to a TouchDesigner CHOP channel. Each active sensor renders as a scrolling line graph lane, and the whole composition streams through NDI.
/lota/motion/accel/x,y,z: gravity-removed acceleration, G-force.
/lota/motion/gyro/x,y,z: rotation rate, rad/s.
/lota/motion/heading: compass heading, 0–360°.
/lota/motion/pressure, /altitude: barometric kPa + relative meters.
Settings
Toggle each sensor independently in Motion Settings: Acceleration on by default; Gyroscope, Compass, and Barometric Pressure off. The Update Rate picker offers 30 / 60 / 100 Hz; 30 Hz matches ARKit, 100 Hz is for latency-critical controllers.
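Because each axis is its own address, the layout is easy to fake when prototyping a patch before a phone is on the network. A sketch that emulates a few motion channels with python-osc (the values and port 9000 are made up):

    import math, time
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)
    t = 0.0
    while True:
        # One OSC message per axis, mirroring the 1:1 channel mapping.
        client.send_message("/lota/motion/accel/x", math.sin(t))       # G
        client.send_message("/lota/motion/gyro/z", 0.1)                # rad/s
        client.send_message("/lota/motion/heading", (t * 40.0) % 360)  # degrees
        t += 1 / 60
        time.sleep(1 / 60)  # 60 Hz, the middle Update Rate option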
Capture mode
Real-time on-device audio analysis as OSC. Taps the mic via AVAudioEngine and runs DSP using Apple's Accelerate/vDSP framework, with no third-party dependencies. Active channels render as scrolling graph lanes and stream through NDI.
/lota/audio/{bass,mid,high}: continuous 0–1 energy per band, rolling auto-gain.
/lota/audio/drums/{low,mid,high}: binary 0/1 onset triggers, drop straight into a TD Trigger CHOP.
/lota/audio/burst: fast transient detector, pulse-shaped 0–1.
/lota/audio/fft/0 through /19: 20 log-spaced bands, red→blue gradient.
Settings
Toggle each group independently in Audio Settings: Levels on by default; Beats, Dynamics, and FFT off. The Update Rate picker offers 30 / 60 Hz, decimated from the internal ~86 Hz analysis rate.
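For reference, log-spaced banding of an FFT is only a few lines of numpy. A sketch of the idea behind /lota/audio/fft/0 through /19; the window size, 40 Hz lower edge, and sample rate are assumptions, not LOTA's exact DSP (the ~86 Hz internal rate is consistent with a 512-sample hop at 44.1 kHz):

    import numpy as np

    def fft_bands(block, sr=44100, n_bands=20):
        # Windowed FFT of one audio block (e.g. 2048 samples).
        spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
        freqs = np.fft.rfftfreq(len(block), 1 / sr)
        # 20 logarithmically spaced band edges from 40 Hz to Nyquist.
        edges = np.geomspace(40.0, sr / 2, n_bands + 1)
        return [float(spectrum[(freqs >= lo) & (freqs < hi)].mean())
                for lo, hi in zip(edges[:-1], edges[1:])]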
Swipe left
Three motion-capture modes, all OSC.
/lota/body/skeleton.
/lota/face/<name> bundled in one UDP datagram.
/lota/hand/{left|right}/{finger}.
Settings
Per-mode Settings sheets toggle each overlay (Skeleton, Face, Hand), and 3D Hand Coordinates opts in to LiDAR-projected world-space hand positions on Pro phones.
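Because the hand addresses encode side and finger in the path, a catch-all handler can fan them into a structure. A sketch using python-osc's default handler (the port, again, is an assumption):

    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    latest = {"left": {}, "right": {}}

    def on_message(address, *args):
        parts = address.split("/")             # e.g. /lota/hand/left/index
        if len(parts) == 5 and parts[2] == "hand":
            latest[parts[3]][parts[4]] = args  # newest values per finger

    dispatcher = Dispatcher()
    dispatcher.set_default_handler(on_message)
    BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()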
Swipe right
Record datasets, materials, and traces.
manifest.json + optional dark-theme PDF report. For Jupyter, Ableton MIDI, sensor-fusion training sets.
Settings
Per-format Settings sheets: Material (output resolution, AO sample count, OpenGL/DirectX normal convention, delight + roughness scale sliders); and Trace (channel selection, CSV/TSV format, optional PDF report, auto-stop duration cap 5–600 s).
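On the Jupyter side, a recording is just files on disk. A sketch of pulling a trace into pandas; the layout follows manifest.json, but the file and column names below are hypothetical:

    import json
    import pandas as pd

    with open("recording/manifest.json") as f:
        manifest = json.load(f)
    print(manifest)

    # CSV or TSV, per the Trace format setting.
    trace = pd.read_csv("recording/trace.csv")
    trace.plot(x="time", y=["accel_x", "accel_y", "accel_z"])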
Stay in the Loop
Be the first to know when LOTA launches.