I design and build humane interfaces, mostly in Swift (iOS & macOS), sometimes in Rust and Python.
My work bridges HCI research, developer tools, and new ways of working with AI.
📍 San Francisco
- Stitch – founding engineer of this open-source tool for designers
- Chroma-Swift – Swift package for Chroma's on-device vector database
- TiktokenSwift – Swift bindings for OpenAI's tiktoken via UniFFI
- Roboflow Swift SDK – first SDK for running Roboflow-trained models on iOS
- AudioKit – helped launch this open-source audio synthesis/analysis framework
- Visual iMessage – what if Siri could describe images in a thread?
- Diffusion Demo – SwiftUI interfaces for Inception Labs' model
- ASL Classifier – detecting ASL signs on-device with CoreML
- Emulating Touché – open-source capacitive sensing with plants & water
- O Soli Mio – radar-powered gestural interfaces for music
- Whistlr – contact sharing over audio on iOS
- Push-to-Talk Chat – lightweight audio chat app
- Plus experiments with BLE sensors, CoreML sound recognition, LED control, and more (archive)
- O Soli Mio: Exploring Millimeter Wave Radar for Musical Interaction (NIME 2017)
- Investigation of the use of Multi-Touch Gestures in Music Interaction (MSc Thesis, University of York)
Send me a note at nicholasarner (at) gmail (dot) com, or find me on Twitter.