R³ – Real-Time Audio Bridging Between Audiocube and External DAWs
By Audiocube.app and Noah Feasey-Kemp (2025)
Introduction
Audiocube is a three-dimensional digital audio workstation (DAW) built on Unity, aiming to spatialize and manipulate sound in a 3D environment. Integrating Audiocube with traditional DAWs (like Ableton Live, Pro Tools, etc.) requires real-time audio I/O streaming between the applications. In practice, this means Audiocube should receive multiple individual audio channels from an external DAW (to position and process them as 3D AudioSources in Unity) and send its own processed audio output back into the DAW for monitoring or further mixing. Achieving this bidirectional link with low latency poses technical challenges. This paper examines the feasibility of such integration, surveys existing technologies (e.g. ReWire, JACK, ASIO drivers, plugin bridges, Dante, AudioGridder, WebRTC), and outlines an architecture for real-time audio streaming without licensing costs. Key considerations include software design for data flow, open-source vs proprietary options, performance constraints, and a potential custom implementation strategy (with pseudocode examples) when off-the-shelf solutions fall short.
Receiving Individual Audio Channels from DAWs
To spatialize DAW tracks in Audiocube’s 3D environment, Audiocube must ingest individual audio channels from the external DAW in real time. Traditional DAWs typically mix down to a stereo output, but many allow routing each track or bus to separate outputs (physical or virtual). The goal is to capture these isolated outputs and feed them into Unity. Several approaches can achieve this:
Virtual Audio Drivers: On each DAW track, the output can be assigned to a virtual audio device that loops back into Audiocube. For example, on macOS one might use Soundflower or BlackHole (which create virtual CoreAudio devices), and on Windows tools like VB-Audio Virtual Cable or JACK can serve a similar role (How to route audio between applications – Ableton ). Audiocube (Unity) would open the virtual device as an input, effectively “listening” to the DAW’s output in real time. Each channel (or stereo pair) from the DAW can be mapped to a separate AudioSource in Unity.
Audio Plugin Sends: Another method is to use a DAW insert plugin that transmits audio. A notable example is Cockos ReaStream, a free VST/AU plugin that sends audio/MIDI between different hosts over a network or locally (ReaRoute ASIO Driver and ReaStream Network Audio Plugin | The REAPER Blog). By inserting such a plugin on individual DAW tracks, each track’s audio can be streamed (via localhost network) to a corresponding receiver in Audiocube. Unity could integrate a small networking client to receive these streams and feed them into its audio engine. This avoids dealing with audio drivers directly and instead leverages the DAW’s plugin infrastructure.
Direct Device Sharing: In some cases, Audiocube could capture the DAW’s output using a shared audio interface. For example, a DAW might output to channels of a multi-channel audio interface, while Audiocube records from those same channels (acting like a multichannel “microphone”). However, standard ASIO drivers on Windows are single-client, meaning only one program can use the interface at a time. Solutions like Propellerhead’s ReWire (now deprecated) historically enabled one DAW to send individual channels to another without extra hardware. ReWire functioned as an inter-app audio bus with sample-accurate timing, but it has been discontinued as of 2020 (Reason Studios, the developer, dropped support, and Live 11 removed ReWire) (ReWire in Live – Ableton ). Newer DAW versions have shifted away from ReWire toward other methods.
In summary, receiving multiple DAW channels in Unity is feasible via virtual audio routing or network streaming. The incoming audio streams can be assigned to Unity AudioSources, allowing each DAW track to behave as a positional sound in Audiocube’s 3D space. The main challenge is ensuring all channels remain synchronized and low-latency, which we address in later sections.
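To make the virtual-driver option above concrete, the sketch below (a minimal illustration under assumed names, not Audiocube’s actual code) opens a loopback device such as BlackHole or VB-Cable through Unity’s Microphone API and plays it back on a positional AudioSource. The device-name filter, sample rate, and one-second capture clip are assumptions:

    // Sketch: play a virtual loopback device as a 3D AudioSource (device name is an assumption)
    using UnityEngine;

    [RequireComponent(typeof(AudioSource))]
    public class LoopbackInputSource : MonoBehaviour {
        public string deviceNameContains = "BlackHole"; // hypothetical device-name filter
        public int sampleRate = 48000;

        void Start() {
            string device = null;
            foreach (var d in Microphone.devices)               // enumerate capture devices exposed by the OS
                if (d.Contains(deviceNameContains)) { device = d; break; }
            if (device == null) { Debug.LogWarning("Virtual audio device not found"); return; }

            var source = GetComponent<AudioSource>();
            source.clip = Microphone.Start(device, true, 1, sampleRate); // 1-second looping capture clip
            source.loop = true;
            while (Microphone.GetPosition(device) <= 0) { }              // wait until capture has started
            source.spatialBlend = 1.0f;                                  // fully 3D positional playback
            source.Play();
        }
    }

This simple capture-clip pattern adds some latency of its own and does not manage read/write-position drift; a production version would monitor the gap between the microphone write position and the AudioSource playback position, but it illustrates how little Unity-side code the virtual-driver route requires.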
Sending Audiocube’s Audio Back to the DAW
Equally important is the return path: after Audiocube processes and spatializes audio, the result should be piped back into the external DAW in real time. This enables users to capture Audiocube’s output in their main DAW (for recording, additional effects, or master mix). Several strategies can accomplish this streaming out of Unity:
Virtual Audio Outputs: Audiocube can output its master mix (or individual sub-mixes) to a virtual audio device that the DAW listens to as an input. For instance, using JACK or Virtual Audio Cable, one can create virtual output channels from Unity that appear as input channels in the DAW (How to route audio between applications – Ableton ). On macOS, tools like Soundflower/BlackHole can expose Audiocube’s output to other apps, and on Windows, drivers like ASIO4All or FlexASIO can be configured to share output streams between programs (Multi-Client Operation Drivers(Windows) – Ableton ). The DAW would simply record from this virtual source as if it were an external instrument.
Plugin Receivers: Similar to the incoming path, a dedicated VST/AU plugin could be used inside the DAW to receive audio from Audiocube. For example, an “Audiocube Return” plugin could connect to the Audiocube application (via network or shared memory) and output the streamed audio to a DAW track. This is analogous to how AudioGridder works: in an AudioGridder setup, the DAW runs a plugin that connects to a separate server application which processes audio and sends it back (AudioGridder – DSP servers using general purpose computers and networks). In our case, Audiocube would act as the processing “server” and the plugin in the DAW would capture Audiocube’s 3D audio output stream. The DAW track with this plugin would then reproduce the spatialized audio in sync with the project. The communication can be bidirectional if needed (the same plugin or parallel plugins could handle send/return), effectively bridging the DAW and Unity’s audio engine.
ReWire Replacement Approaches: Before its deprecation, ReWire handled both audio send and return between two music applications, with the host DAW controlling transport sync. With ReWire gone, some modern workflows replace it by using one DAW as a VST instrument within another. For example, Reason 11 introduced a VST3 plugin version of the entire Reason DAW, eliminating the need for ReWire in order to integrate with other DAWs (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums). By analogy, one could imagine Audiocube itself running as a VST plugin inside the main DAW, thereby sending its audio output directly through the DAW’s plugin channel. However, converting a Unity application into a VST plugin would be non-trivial (involving embedding the Unity engine in a plugin) and might sacrifice the standalone 3D interface. Thus, a more practical variant is the aforementioned audio-network plugin approach, where Audiocube remains a separate app but communicates through a plugin.
Implementing the return path focuses on capturing Unity’s audio mix. Unity’s audio engine allows grabbing the output of an AudioListener or specific AudioSources programmatically. For instance, using Unity’s audio API, Audiocube can route its final mix into a custom script or native audio plugin that transmits the audio to the DAW. Unity’s new AudioStream components (in the Unity Render Streaming package) even provide built-in classes to send an AudioSource/Listener over the network in real time (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ) (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ), which could be repurposed to stream audio back to the DAW environment (e.g. via WebRTC or similar). Whichever method is used, the return feed must be low-latency and high fidelity to be useful in a pro-audio setting.
Existing Technologies for Real-Time Audio Streaming
A variety of existing technologies can facilitate audio streaming between applications. We evaluate each for applicability to Audiocube:
ReWire (Deprecated): ReWire was specifically designed for DAW-to-DAW audio and MIDI routing with sample-accurate sync. It allowed one DAW (the “slave”) to stream multiple channels into another (the “host”) internally. For example, one could run Ableton Live and route individual tracks into Pro Tools via ReWire. Its advantages were tight integration (no manual cable patching) and tempo/transport synchronization. However, as noted, ReWire is now essentially “ghostware” – support was ended by the developer in 2020 (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums). Modern DAWs like Ableton Live 11 have removed ReWire support and suggest alternate routing methods (ReWire in Live – Ableton ). Thus, ReWire is not a viable long-term solution for Audiocube, aside from legacy compatibility.
JACK Audio Connection Kit: JACK is an open-source sound server and API that provides low-latency, real-time audio routing between applications on Windows, macOS, and Linux (Home | JACK Audio Connection Kit). It acts as a virtual patchbay; multiple apps connect to the JACK server and can freely send/receive audio and MIDI between each other. JACK would allow Audiocube and a DAW to share audio interfaces and exchange audio streams with minimal latency. For instance, the DAW could output tracks to JACK ports instead of hardware outputs, and Audiocube could pick those up, then route its output back to the DAW – all via JACK’s internal bus. The strength of JACK is its flexibility and performance: “You can practically create a whole virtual cabling system under your PC” with JACK, and it enables device sharing (multiple apps using the audio interface simultaneously) (Home | JACK Audio Connection Kit). Many Linux audio users rely on JACK for complex routing. On the downside, JACK can be complex to set up, especially for non-technical users on Windows/macOS. It typically requires running a JACK server and manually connecting ports. As one user remarked in a forum, “ReWire is a piece of cake compared to JACK… JACK is great if you have time to experiment… but it’s nothing straightforward” (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums). Thus, while JACK is feasible and cost-free, integrating it into Audiocube’s workflow would likely involve bundling a JACK installer and possibly automating the connection graph for the user – not impossible, but added complexity.
Virtual Audio Drivers (Loopback Devices): These are software drivers that create virtual audio interfaces on the OS. Examples include Soundflower and BlackHole on Mac, iShowU Audio Capture on Mac, and VB-Audio Cable or VoiceMeeter on Windows (How to route audio between applications – Ableton ) (How to route audio between applications – Ableton ). They effectively shuttle audio from one application to another by exposing virtual inputs/outputs that internally connect. Using such a driver, a DAW track can output to “Device X” and Audiocube can use “Device X” as its recording source (for input). Similarly, Audiocube’s output could be set to “Device Y” which the DAW listens to. These drivers are generally free (Soundflower, BlackHole are open-source; VB-Cable basic version is donationware) or low-cost (Loopback, VoiceMeeter Banana/Potato). The advantage is transparency – the OS treats them as normal audio interfaces, so Audiocube wouldn’t need any special code except to select the device as input/output. However, not all virtual drivers support multi-channel audio; some might be stereo only, requiring multiple instances. Also, they rely on the OS mixer in some cases, which can add a bit of latency or cause sample rate conversion if misconfigured. For Windows pro audio, where ASIO is the norm, these WDM/DirectSound-based solutions can incur higher latency. Nonetheless, virtual loopback drivers are a straightforward, cost-free approach and could be recommended to users as a quick solution (e.g. “Use BlackHole with 16 channels to connect Ableton and Audiocube”). The downside is that Audiocube would be dependent on third-party drivers and user configuration, rather than providing a seamless built-in link.
ASIO Multi-Client Drivers: On Windows, ASIO offers low latency by bypassing the Windows mixer, but typically only one program can use an ASIO device at a time. To allow sharing or routing of ASIO audio, some solutions exist. ReaRoute (part of REAPER) is a virtual ASIO driver that provides 16 channels of audio send/receive between REAPER and other apps (ReaRoute ASIO Driver and ReaStream Network Audio Plugin | The REAPER Blog). A user could, for example, use ReaRoute as Ableton’s ASIO driver and route certain channels into REAPER. In Audiocube’s context, ReaRoute could be used if Audiocube’s audio engine could act as a REAPER client – though this tightly couples to REAPER being in the loop. Another approach is ASIO multi-client wrappers like FlexASIO. FlexASIO is an open-source universal driver that can interface with Windows WASAPI; importantly, it “permits multiple applications… to share the same audio driver”, functioning like a bridge that mixes sources in the Windows audio engine (Multi-Client Operation Drivers(Windows) – Ableton ). In practice, one could run both the DAW and Audiocube through FlexASIO, allowing both to output to the same soundcard simultaneously. That doesn’t inherently carry separate channel streams between them, but it at least enables simultaneous use of the audio interface. For actual inter-app routing on Windows via ASIO, VoiceMeeter (which presents ASIO devices) is another solution; it can mix and route multiple software I/O internally and offers virtual ASIO insert drivers (Multi-Client Operation Drivers(Windows) – Ableton ) (Multi-Client Operation Drivers(Windows) – Ableton ). Overall, ASIO-based solutions can achieve low latency and multichannel support, but often require the user to adopt specific tools (Reaper/ReaRoute, FlexASIO config, VoiceMeeter, etc.). They are viable but might not be as cross-platform or easy as other methods.
VST/AU Plugin Bridge: Using audio plugins as the bridge is an attractive approach because it leverages the DAW’s native extensibility. We’ve touched on ReaStream (which uses network) and the idea of an Audiocube plugin. There are also proprietary examples like Wormhole (an older plugin for LAN audio transfer) and Source Elements’ Source-Nexus plugin (which routes audio between apps). A custom Audiocube plugin could be developed in VST3/AU format to send or receive audio. The advantage is a potentially tight integration: the user could insert “Audiocube Send” on any track they want to pipe into Unity, and insert “Audiocube Return” on a track to get audio back from Unity. This method could work on both Windows and Mac (just different plugin formats), and Audiocube’s Unity app would communicate with these plugins (via network socket or shared memory). The disadvantage is the development effort and maintenance of plugins across formats and DAWs. Additionally, using network transport via plugin can introduce slight overhead and latency (likely on the order of one audio buffer or more). Still, this approach can be made very user-friendly and does not rely on external drivers or tools. It’s essentially an in-house ReWire replacement purpose-built for Audiocube. Notably, Reason Studios took this route by offering Reason as a plugin, and others have built similar bridges; for example, Bidule and Vienna Ensemble allow hosting instruments in a separate app and bridging into the DAW via plugins. AudioGridder, as an open-source project, demonstrates that even streaming live audio and plugin GUIs over a network can be done with low enough latency to feel almost as if the plugins were local (AudioGridder – DSP servers using general purpose computers and networks) (AudioGridder – DSP servers using general purpose computers and networks). That gives confidence that a purpose-built plugin bridge for Audiocube is technically feasible.
Network Audio Protocols (Dante and Others): In professional audio networking, Dante by Audinate is a dominant protocol for low-latency audio over IP. Dante’s software like Dante Via allows routing audio between applications on the same computer: it can “isolate and route audio to and from applications, up to 16 bidirectional channels each”, effectively acting as a virtual patchbay with networked audio pipes (Dante Via | Dante). Dante Via or the Dante Virtual Soundcard could connect Audiocube to a DAW as if they were devices on a network (even if on one machine). Dante’s advantages are robustness, the ability to scale to many channels, and very low latency (often a few milliseconds) because it’s designed for real-time performance. However, Dante’s solutions are proprietary and not free – they require licenses. For example, Dante Via has a cost (after a trial period) (Dante Via | Dante). Additionally, using Dante introduces network setup complexity (though on one PC it’s mostly automatic). There are also open standards like RAVENNA/AES67 for network audio, but they are more common in broadcast than in DAWs, and integration in Unity would be complex. Another similar technology is NDI (Network Device Interface), commonly used for video+audio between production apps. NDI could, in theory, carry audio from a DAW to Unity and back (and some users employ NDI to route audio to OBS streaming software), but DAWs don’t natively support NDI, so a plugin or driver would still be needed. In short, network audio protocols like Dante are powerful and low-latency, but because Audiocube targets cost-free solutions, these proprietary options are less attractive unless a user already has that infrastructure.
WebRTC and Streaming Frameworks: WebRTC is the technology behind real-time audio/video in browsers and is geared for low-latency peer-to-peer communication. Unity’s Render Streaming package uses WebRTC under the hood for broadcasting audio and video from a Unity app (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ). This implies Audiocube could use WebRTC to stream audio output to a peer (which could be a custom client in the DAW, or even a browser source). WebRTC is open-source and handles details like buffering, jitter, and codec negotiation (often using Opus codec by default). The benefit is that it’s a standardized method, and Unity already has components to send/receive audio streams (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ). However, WebRTC’s design for internet use means it may introduce more latency than desired in a localhost scenario. It typically operates with at least ~10–20 ms buffers to smooth network jitter. Tests of WebRTC in a local loopback have shown latencies on the order of 15–30 ms in browsers (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News) – acceptable for communication, but on the higher side for tight musical interaction. One could possibly configure WebRTC for uncompressed audio and minimal buffering (and on a local network, packet loss is minimal, so buffers can be smaller). Still, compared to JACK or a direct driver, WebRTC is likely to have a bit more overhead. Also, integrating it would require a receiving client in the DAW, which likely points back to a plugin or external app since DAWs themselves don’t speak WebRTC. In summary, WebRTC could work and is free, but it might be overkill and not as low-latency as dedicated audio pipelines. It’s more relevant if one envisions Audiocube streaming to a web client or over the internet.
Comparison: In choosing a solution for Audiocube, several factors matter: latency, reliability, ease-of-use, cross-platform support, and licensing cost. JACK and virtual audio drivers are immediate solutions that exist today and could be recommended to users willing to set up some external tools – they are open-source or free and can achieve the goal. A plugin-based custom solution offers a tailored and potentially smoother user experience integrated into Audiocube, but requires development effort. Dante and similar pro solutions offer excellent performance but conflict with the cost-free requirement. The following comparison summarizes the pros and cons of the key options:
ReWire: (+) Tight DAW integration (sync + audio); (–) Discontinued, not in new DAWs (ReWire in Live – Ableton ).
JACK: (+) Low-latency, multi-channel, open-source (Home | JACK Audio Connection Kit); (–) Requires manual setup/patching, not beginner-friendly (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums).
Virtual Audio Cable (Soundflower/etc): (+) Simple concept, OS-level integration, free (How to route audio between applications – Ableton ); (–) Setup outside app, potential latency via OS mixer, limited channels (depending on driver).
ASIO ReaRoute/FlexASIO: (+) Low-latency, can use existing DAW features (ReaRoute ASIO Driver and ReaStream Network Audio Plugin | The REAPER Blog) (Multi-Client Operation Drivers(Windows) – Ableton ); (–) Windows-only, needs specific hosts (Reaper) or config, moderate complexity.
Custom VST/AU Bridge: (+) Seamless for end-user (just use plugins), fully in-app control, cross-platform possible; (–) Significant development needed, must manage network/buffer code, and ensure compatibility across DAWs.
Dante Via / Soundcard: (+) Professional-grade, up to 16+ channels, very low latency, robust (Dante Via | Dante); (–) Proprietary, requires license, overkill if user doesn’t already have Dante network.
AudioGridder-like (Plugin+App): (+) Open-source example to follow, proven concept for streaming audio/MIDI and even UIs (AudioGridder – DSP servers using general purpose computers and networks); (–) Meant for offloading plugins rather than entire DAW mixes, but the concept can be adapted.
WebRTC Streaming: (+) Built-in Unity tools, handles network adaptivity, open standard (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ); (–) Additional latency due to network stack, needs custom client in DAW, meant for internet scenarios.
Given these options, a cost-free, cross-platform approach that fits Audiocube’s niche likely involves either leveraging an open audio router (like JACK or virtual drivers) or creating a custom plugin-socket solution. The next sections will discuss the architecture needed and how to address performance and latency, whichever solution is chosen.
Architectural Considerations for Bidirectional Audio Streaming
Designing a robust real-time audio bridge requires careful planning of data flows, synchronization, and system resources. The overarching architecture for Audiocube’s integration can be visualized in terms of audio source and sink paths:
External DAW → Audiocube (Unity): For each audio channel/track to send, the DAW needs an audio output route that doesn’t go to the hardware but into Audiocube. This could be a virtual audio channel or plugin-based stream. Audiocube will have a corresponding audio input module that continuously receives the audio sample data. Within Unity, this data must be buffered and fed into an AudioSource component (or a custom Unity audio callback) so that it becomes part of the 3D audio scene. If multiple channels are coming, each might be mapped to a distinct AudioSource object placed at the desired 3D coordinates. Audiocube’s audio engine (based on Unity) will mix these in real time with spatialization.
Audiocube → External DAW: Audiocube’s overall audio output (the mix down of all 3D sources, or specific busses of them) needs to be captured and sent out. In Unity, one can attach a script to the AudioListener (the component that receives the final mix) or use the OnAudioFilterRead callback on an AudioSource to grab generated samples. Those samples then must be delivered to the DAW. In a virtual driver scenario, Audiocube would simply output to a virtual audio device, so the OS takes care of forwarding those samples to the DAW’s input. In a custom plugin scenario, Audiocube would explicitly transmit the audio data (over a network socket or shared memory), and the DAW plugin would receive and inject it into the DAW’s audio stream.
Data Flow & Protocols: If using a network or custom transport, a key decision is whether to send audio as raw PCM or use compression. Given this is local machine streaming and high fidelity is important, uncompressed PCM frames are ideal to avoid codec latency and quality loss. For example, streaming 32-bit float PCM at 48 kHz for a stereo channel is about 0.384 Mbit/s, easily handled by localhost network or memory. Thus, Audiocube can send audio in small packets corresponding to audio buffers (e.g. 128 samples per packet for low latency). UDP is a suitable protocol for this (used by ReaStream and JACK’s netjack) because it has lower overhead and no built-in retransmission delay (a lost packet in audio is better treated as a glitch than waiting too long). If using TCP, one must be careful to avoid it stalling on packet loss or causing jitter due to Nagel’s algorithm; it could be used if reliability is paramount, but real-time audio often prefers UDP with its own lightweight sync/check. WebRTC internally uses UDP with congestion control and can use Opus encoding by default, but for a local link we would disable heavy compression.
Synchronization: One big architectural concern is keeping Audiocube and the DAW in sync. If the DAW is playing a song at a certain tempo or timeline, Audiocube’s processing should ideally be time-aligned. Audio clock sync is achieved if both apps use the same audio interface clock or if one’s output drives the other’s input in real time. With a direct loop (like virtual cable or plugin send), the DAW’s audio drives Audiocube’s input buffer block by block (e.g., every 128 samples), and Audiocube can process immediately and return audio which the DAW then picks up (typically on the next buffer cycle or so). This pipeline inherently introduces one or more buffers’ worth of latency, but as long as those are consistent, the audio will be steady (just slightly delayed). The latency can be compensated if needed by nudging track delays in the DAW or adjusting Audiocube’s timing (if, for example, Audiocube is also generating visuals or events that need alignment). If tight musical sync (tempo) is needed (say Audiocube triggers events in sync with DAW playback), using Ableton Link or MIDI Clock could be considered. Ableton Link is a technology that syncs timing (tempo and phase) across apps on a network (ReWire in Live – Ableton ); Audiocube could join a Link session to share tempo with the DAW without any cables. This doesn’t carry audio, but ensures any tempo-synced effects or sequencing in Audiocube line up with the DAW’s beat grid.
At the audio buffer level, buffer size and sample rate must be consistent between DAW and Audiocube. A mismatch in sample rate would cause drift or pitch issues – this is usually solved by both using the same audio interface or by software resampling. In a virtual driver scenario, the driver ensures a single sample clock. In a plugin network scenario, Audiocube should follow the DAW’s sample rate (e.g., Audiocube could detect the incoming audio’s rate or have a setting to match the DAW project). If drift still occurs (different clock domains), a strategy is needed to avoid buffer overrun/underrun: e.g., periodically skip or duplicate a sample or use a dynamic resampler to fine-tune. JACK, for instance, forces all clients to use one master clock, avoiding this issue altogether (Home | JACK Audio Connection Kit).
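As a rough illustration of the skip-or-duplicate strategy (a sketch with assumed fill targets, using a simple queue in place of the ring buffers discussed below), the receive side can nudge its buffer back toward a target level once per block:

    // Sketch: absorb small clock drift by dropping or repeating one sample per block
    // when the receive queue wanders outside a tolerance band (illustrative values).
    using System.Collections.Generic;

    static void ReadBlockWithDriftCorrection(Queue<float> receiveQueue, float[] block,
                                             int targetFill = 512, int tolerance = 256) {
        // Sender clock slightly fast: queue keeps growing, so discard one sample this block.
        if (receiveQueue.Count > targetFill + tolerance && receiveQueue.Count > block.Length)
            receiveQueue.Dequeue();

        // Sender clock slightly slow: read one sample fewer and repeat the last one.
        bool stretch = receiveQueue.Count < targetFill - tolerance;
        int samplesToRead = stretch ? block.Length - 1 : block.Length;

        for (int i = 0; i < block.Length; i++) {
            if (i < samplesToRead && receiveQueue.Count > 0)
                block[i] = receiveQueue.Dequeue();
            else
                block[i] = i > 0 ? block[i - 1] : 0f;  // hold last sample (or silence) for the gap
        }
    }

Dropping or repeating a single sample per block is inaudible in most material; a dynamic resampler is the cleaner (but more complex) alternative when clocks differ by more than a few parts per million.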
Threading and Performance: Audiocube (Unity) will be running graphics and game logic threads in addition to audio. Unity’s audio runs on a separate high-priority thread to maintain real-time performance. When bridging audio, we must ensure that any network or I/O operations do not interrupt the audio thread. A common architecture is a double-buffer or ring buffer: one thread receives audio data (from network or driver) and writes it into a ring buffer, and the Unity audio thread reads from this buffer during its audio callback. Similarly, for output, the audio thread writes to a ring buffer that a separate thread reads and sends to the network or driver. This decoupling prevents blocking operations (like socket send/recv) from happening on the critical audio callback, which could cause dropouts if they took too long. The size of these buffers might be one or two audio frames worth to minimize added latency.
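A minimal single-producer/single-consumer ring buffer along these lines might look as follows (an illustrative sketch, not Audiocube’s actual implementation); the network thread calls Write, the Unity audio callback calls Read, and the reader pads with silence on underrun:

    // Sketch of a single-producer/single-consumer float ring buffer (assumed helper, not Audiocube code).
    using System.Threading;

    public class RingBuffer {
        private readonly float[] buffer;
        private int readPos;    // only advanced by the consumer (audio thread)
        private int writePos;   // only advanced by the producer (network/driver thread)

        public RingBuffer(int capacity) { buffer = new float[capacity]; }

        public int AvailableSamples {
            get {
                int diff = Volatile.Read(ref writePos) - Volatile.Read(ref readPos);
                return diff >= 0 ? diff : diff + buffer.Length;
            }
        }

        // Producer: copy received samples in, dropping the remainder if the buffer is full.
        public void Write(float[] src, int count) {
            int w = writePos;
            for (int i = 0; i < count; i++) {
                int next = (w + 1) % buffer.Length;
                if (next == Volatile.Read(ref readPos)) break;   // full: drop rest of this packet
                buffer[w] = src[i];
                w = next;
            }
            Volatile.Write(ref writePos, w);
        }

        // Consumer: fill 'dst', padding with silence on underrun.
        public void Read(float[] dst, int count) {
            int r = readPos;
            for (int i = 0; i < count; i++) {
                if (r == Volatile.Read(ref writePos)) { dst[i] = 0f; continue; }  // underrun
                dst[i] = buffer[r];
                r = (r + 1) % buffer.Length;
            }
            Volatile.Write(ref readPos, r);
        }
    }

Because only one thread ever advances each index, volatile reads and writes are enough here and no locks are taken on the audio thread, which is exactly the decoupling described above.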
Transport and Format: If using a custom protocol, a simple design might include a small header on each audio packet indicating channel ID, sequence number (to detect drops or reorder), and payload of PCM samples. Since this is all real-time, there’s no need for complex file formats or large buffers. Some implementations (like SoundJack or JackTrip for remote jamming) send raw audio over UDP with very small frames to minimize latency. Those lessons can be applied here in a single-machine scenario: use the smallest stable buffer possible, and tune OS socket buffers so they do not introduce extra delay.
In summary, the architecture involves two main components in Audiocube: an Input Stream Handler (gathering audio from DAW sources and feeding Unity audio) and an Output Stream Handler (capturing Unity’s audio and sending to DAW). On the DAW side, if using a plugin, there will be complementary components: a Send Plugin (sends track audio to Audiocube) and a Receive Plugin (plays back audio from Audiocube). Figure 1 (conceptually) would show DAW tracks flowing into Audiocube, and Audiocube’s output flowing back into a DAW master or aux track. All of this must run continuously with minimal buffering to feel “real-time.”
Open-Source vs. Proprietary Implementations
When selecting technologies for this audio bridge, cost and licensing are a concern. The priority is to use open-source or free solutions to avoid additional fees for Audiocube users. Below we categorize options:
Open-Source / Free: JACK (GPL licensed, free) (Home | JACK Audio Connection Kit), Soundflower and BlackHole (both open-source), JackTrip (open-source, for network audio), AudioGridder (MIT licensed) (AudioGridder – DSP servers using general purpose computers and networks), ReaPlugs/ReaStream (freeware), FlexASIO (open-source) (Multi-Client Operation Drivers(Windows) – Ableton ), and WebRTC (BSD-style license) are all free to use. These can be integrated or recommended without licensing cost. For example, Audiocube could bundle JACK on Windows with appropriate credit to allow out-of-the-box routing. Writing a custom plugin or networking layer with a framework like JUCE (dual-licensed under open-source and commercial terms, with a free tier for small developers) is also possible without royalties.
Proprietary: ReWire was proprietary (by Propellerhead) and required license agreements to integrate – moot now that it’s deprecated. Dante (Audinate) is proprietary and requires purchasing software (like Dante Via or Virtual Soundcard) for full functionality. Rogue Amoeba’s Loopback and VB-Audio’s VoiceMeeter (beyond basic edition) are commercial products. Given Audiocube’s needs, we likely avoid these; however, we acknowledge their performance. For instance, a studio that already owns Dante Virtual Soundcard could use it to connect Audiocube with nearly zero hassle – but that’s a niche case. ASIO itself is somewhat proprietary; Steinberg provides an SDK for ASIO under a free license for developers, but writing a custom ASIO driver from scratch is complex. Instead, relying on community-driven drivers (ASIO4All, FlexASIO) avoids having to navigate proprietary driver development.
In weighing open vs. closed, the open solutions provide ample tools but sometimes less polish. Proprietary solutions often come with user-friendly interfaces (e.g., Dante Via’s drag-and-drop routing UI (Dante Via | Dante) (Dante Via | Dante)) and robust support, at the cost of ~$50–100. For Audiocube’s user base, which might include indie creators or students, asking them to pay for an extra audio router is undesirable. Therefore, focusing on an open solution aligns with accessibility. Moreover, an in-house implementation (custom plugin bridge) would be under Audiocube’s control entirely, avoiding external dependencies or licensing issues. The development effort is justified if it provides a smoother experience than piecing together third-party tools.
Performance and Latency Challenges
Real-time audio streaming mandates low latency and minimal jitter. Latency is the delay between audio leaving the DAW and arriving back (through Audiocube) at the DAW’s output. To be unobtrusive, this round-trip should ideally be only a few milliseconds – especially if the musician is monitoring through Audiocube’s processed audio. Professional performers can perceive latency around 6–10 ms as “tight,” while latencies above ~20–30 ms become noticeable and can disrupt timing (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News) (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News). One often-cited guideline is a 7 ms threshold for “truly undetectable” latency in interactive audio (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News) (though achieving 7 ms total may be challenging; under 15 ms is more realistic for our scenario).
Sources of Latency: In the Audiocube-DAW loop, latency comes from several buffering stages: the DAW’s audio buffer (e.g., 128 or 256 samples), any transport buffering (e.g., OS audio buffer or network packetization), Audiocube’s processing buffer (Unity’s audio buffer, typically same size), and then the return path buffers. If each stage is 128 samples at 48 kHz (~2.67 ms) and there are, say, three stages (DAW out, Unity in/out, DAW in), that’s roughly 8 ms base latency. Add any additional safety buffers or OS scheduling delays and it might be ~10–15 ms. Our goal is to minimize extra buffering beyond the necessary audio block sizes.
Buffer Management: Using very small buffers (e.g., 32 samples) can cut latency further, but at high CPU cost and risk of dropouts if the system cannot complete processing that fast. A 128-sample buffer is a common compromise in DAWs for reliable low-latency performance. If JACK is used, one could configure 64-sample or even 32-sample buffers if the system is powerful and audio interface drivers allow it. If using FlexASIO with Windows audio, the default was 20 ms buffer (Multi-Client Operation Drivers(Windows) – Ableton ), which is too high; it can be configured lower, but WASAPI shared mode often has a minimum around 10 ms. Thus, a native ASIO or CoreAudio path is preferred for ultra-low latency.
Operating System Constraints: On Windows, running two audio apps with ASIO can be problematic without multi-client support. If one uses ASIO for the DAW and another for Unity, typically the second app fails to open the device. Solutions mentioned (FlexASIO, VoiceMeeter) essentially force both through one driver to allow sharing (Multi-Client Operation Drivers(Windows) – Ableton ). Another approach is to run the DAW on ASIO and Unity on DirectSound/WASAPI; but then Unity’s audio might have a larger delay (since non-ASIO drivers often have bigger buffers). A unified approach (via a shared backend like JACK or a single driver) avoids this mismatch. On macOS, CoreAudio natively allows multiple apps to use the same device and offers low-latency mixing. Using an Aggregate Device or a Multi-Output Device, one can even combine multiple hardware or virtual devices. So Mac users might find it easier to route audio without special drivers – for instance, setting the system output to Soundflower means any app’s output (including Unity) goes there, and the DAW can take Soundflower as input.
CPU and Thread Priority: Unity and the DAW will both consume CPU. Real-time audio threads should run at high priority to meet deadlines. Recent versions of Windows provide a “Pro Audio” task class via MMCSS (Multimedia Class Scheduler Service) for ASIO threads. Unity’s audio engine uses a separate thread with real-time priority; however, heavy graphical processing in Unity could starve CPU cycles from audio if not optimized. It’s essential to profile Audiocube to ensure the audio thread isn’t overrun. If Audiocube is doing intensive 3D graphics simultaneously, consider giving the audio threads a higher priority boost or keeping Unity’s DSP buffer modest (Unity’s DSP buffer often defaults to 512 samples ≈ 11 ms, which we’d reduce for this use case).
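For reference, Unity’s DSP block size and sample rate can be requested at startup via AudioSettings (a small sketch; 256 samples and 48 kHz are illustrative values that should be validated against dropouts on the target machine):

    // Sketch: request a smaller Unity DSP buffer at startup (256 samples is an assumed target).
    using UnityEngine;

    public class LowLatencyAudioConfig : MonoBehaviour {
        void Awake() {
            AudioConfiguration config = AudioSettings.GetConfiguration();
            config.dspBufferSize = 256;    // smaller block = lower latency, higher CPU/dropout risk
            config.sampleRate = 48000;     // match the DAW project rate to avoid resampling
            AudioSettings.Reset(config);   // note: Reset restarts the audio system and stops playing sources
        }
    }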
Jitter and Dropouts: Network-based streaming introduces the possibility of jitter (variations in packet arrival time). On localhost with UDP, jitter is usually negligible, but if the system is under load, a packet might be delayed. A small jitter buffer (one or two extra packets) can smooth this but adds latency. Solutions like WebRTC use adaptive jitter buffers, but we can keep it static and minimal for local use. Dropouts (lost packets) in local streaming are rare; if using UDP, we might occasionally lose a packet if the buffer overruns, in which case a simple strategy is to fill missed audio with silence or last sample to avoid pops. If using TCP, dropouts per se won’t happen, but a lost packet will stall the stream until retransmitted, which can cause an audible glitch or delay – another reason UDP is often preferred for continuous audio.
Throughput: Audio streaming bandwidth is not a major issue for typical track counts. For example, 16 channels of 48 kHz 32-bit audio is about 16 * 48,000 * 4 bytes ≈ 3.07 MB/s (~25 Mbps). Modern computers handle that easily in memory or over loopback network. The overhead of any protocol (TCP/IP, etc.) is also minor relative to these rates. So even if Audiocube wanted to stream, say, 32 channels in and out, it would be within local network capability. The more important aspect is ensuring timely delivery rather than raw bandwidth.
Monitoring and Feedback: If the user monitors audio in both Audiocube and the DAW simultaneously, there could be echo or phasing if both signals are heard. Typically, one would monitor from one end. For instance, the musician might listen to the DAW’s master output which includes the returned Audiocube audio. In that case, Audiocube might not need to output to speakers at all (it could be “silent” and only send to the DAW). Alternatively, Audiocube could output to speakers and the DAW is only used to record – but then the DAW should mute input monitoring to avoid doubling. These use-case decisions influence how the routing is set up (e.g., whether Audiocube’s output device is a real speaker or a virtual cable only).
In summary, achieving low-latency, reliable streaming requires:
Keeping buffer sizes as low as stable on the system (potentially 128 samples or below).
Aligning sample rates and minimizing unnecessary sample conversions.
Using efficient inter-process transport (shared memory or loopback network with minimal overhead).
Isolating audio I/O on dedicated threads to avoid hiccups from the main Unity loop.
Testing on both platforms (Win/Mac) for any OS-specific latency (e.g., Windows audio vs. CoreAudio differences).
If done correctly, the additional latency introduced by Audiocube’s round trip can be kept to a few milliseconds above the base audio hardware latency, which is acceptable for most real-time music applications.
Custom Implementation Strategy (with Pseudocode)
If existing solutions are inadequate or too cumbersome, a custom audio streaming implementation can be developed for Audiocube. The core idea is to create a lightweight client-server model where the DAW and Audiocube exchange audio via network sockets or shared memory. One practical design is to use a pair of VST/AU plugins as the DAW-side client and embed a networking server in Audiocube (Unity C# or C++ native plugin). Below, we outline such an approach:
1. Audiocube (Unity) Side – Network Audio Server:
Audiocube will run a small server that listens for audio connections. This could be implemented in C# using UDP sockets (for simplicity) or TCP if reliability is preferred. The server needs to handle multiple incoming streams (one per DAW track plugin) and possibly one outgoing stream (to a return plugin). We can identify streams by port number or an identifier in the data. Unity’s audio system will interface with this server via ring buffers.
Data structures: RingBuffer per incoming channel, and one RingBuffer for outgoing mix. Each RingBuffer holds float samples. Size might be a few blocks worth (to account for slight timing differences).
Receiving audio (Unity side): A background thread AudioNetServerThread receives UDP packets. Each packet contains audio samples (e.g., 128 samples * channels). The thread decodes the packet header to find which channel it belongs to, then writes the samples into that channel’s RingBuffer. It signals that new data is available (could use a semaphore or atomic flags).
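A minimal sketch of such a receive thread is shown below (the port number, the 16-byte header layout, and the RingBuffer helper are assumptions for illustration, matching the packet format described later in this section):

    // Sketch: background UDP receiver that demultiplexes packets into per-channel ring buffers.
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;
    using System.Collections.Generic;

    public class AudioNetServer {
        readonly UdpClient udp = new UdpClient(9000);            // assumed listening port
        readonly Dictionary<int, RingBuffer> channels;            // channelId -> buffer feeding an AudioSource
        volatile bool running = true;

        public AudioNetServer(Dictionary<int, RingBuffer> channelBuffers) {
            channels = channelBuffers;
            new Thread(ReceiveLoop) { IsBackground = true, Priority = ThreadPriority.Highest }.Start();
        }

        void ReceiveLoop() {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (running) {
                byte[] packet;
                try { packet = udp.Receive(ref remote); }         // blocks until a datagram arrives
                catch (SocketException) { break; }                 // socket closed on shutdown
                int channelId   = System.BitConverter.ToInt32(packet, 0);
                // int sequence = System.BitConverter.ToInt32(packet, 4); // could detect drops here
                int numSamples  = System.BitConverter.ToInt32(packet, 8);
                int numChannels = System.BitConverter.ToInt32(packet, 12);
                int totalFloats = numSamples * numChannels;
                var samples = new float[totalFloats];
                System.Buffer.BlockCopy(packet, 16, samples, 0, totalFloats * sizeof(float));
                if (channels.TryGetValue(channelId, out var rb)) rb.Write(samples, totalFloats);
            }
        }

        public void Stop() { running = false; udp.Close(); }
    }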
Feeding Unity Audio: For each AudioSource meant to play a DAW stream, attach a custom script with OnAudioFilterRead(float[] data, int channels). Unity calls this every audio frame asking for audio data to output for that source. Our implementation for an incoming stream source would read from the corresponding RingBuffer into data. Pseudocode for the AudioSource callback:
// Pseudocode for Unity AudioSource custom stream
float[] internalBuffer = new float[blockSize]; // e.g., 128 samples
RingBuffer inputBuffer; // assigned to this source’s channel
void OnAudioFilterRead(float[] data, int channels) {
    int N = data.Length;
    if (inputBuffer.AvailableSamples >= N) {
        inputBuffer.Read(data, N); // fill the audio buffer with fresh samples
    } else {
        // Not enough data, fill with silence to avoid clicks
        Array.Clear(data, 0, N);
    }
}
This way, the Unity audio thread pulls whatever audio has arrived from the DAW. If the DAW is running slightly ahead, the ring buffer may hold extra samples (which will accumulate up to a limit), and if it’s running slightly behind, Audiocube will output silence or briefly repeat the last samples (though repeating can sound jittery, so silence or a simple hold is usually better for a tiny gap).
Sending audio (Unity side): To send processed audio back, we target the AudioListener or a specific AudioSource that represents the mix. Unity allows us to tap the AudioListener output via OnAudioFilterRead on a script attached to the AudioListener. In that callback, Unity provides the mixed audio data for that frame. We then send it out via UDP to the DAW’s return plugin. Pseudocode:
// Pseudocode for Unity AudioListener tap
UdpClient udp; // initialized and connected to DAW return plugin address
int sequence = 0;
void OnAudioFilterRead(float[] data, int channels) {
    // data contains the mixed audio for this frame
    byte[] packet = EncodeAudioPacket(data, channels, seq: sequence);
    udp.Send(packet, packet.Length);
    sequence++;
}
Here EncodeAudioPacket would convert the float array to bytes (e.g., 32-bit float PCM) and add a simple header with sequence number etc. We could also apply basic compression or downmix if needed, but likely not – keep it raw for fidelity. The sequence can help the receiver detect if a packet was dropped (sequence jump).
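A possible shape for that helper is sketched below (the 16-byte header layout mirrors the AudioPacketHeader fields used in the plugin pseudocode later on; the exact field order is an assumption):

    // Sketch: serialize one audio block into a UDP payload (header layout is illustrative).
    static byte[] EncodeAudioPacket(float[] data, int channels, int seq, int channelId = 0) {
        int numSamples = data.Length / channels;
        byte[] packet = new byte[16 + data.Length * sizeof(float)];
        System.BitConverter.GetBytes(channelId).CopyTo(packet, 0);   // which stream this block belongs to
        System.BitConverter.GetBytes(seq).CopyTo(packet, 4);         // sequence number for drop detection
        System.BitConverter.GetBytes(numSamples).CopyTo(packet, 8);
        System.BitConverter.GetBytes(channels).CopyTo(packet, 12);
        System.Buffer.BlockCopy(data, 0, packet, 16, data.Length * sizeof(float));
        return packet;
    }

In a real implementation the packet buffer would be pre-allocated and reused, since OnAudioFilterRead runs on the audio thread, where per-block allocations invite garbage-collection hiccups.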
2. DAW Side – Send/Receive Plugins:
We create two plugin types: AudioCubeSend and AudioCubeReturn. These would be implemented in C++ using a framework like JUCE or the VST3 SDK. The send plugin is an audio effect plugin that takes audio in and has no traditional audio out (or passes it through unchanged if we want to also monitor locally). The return plugin is an instrument or effect that produces audio (with no audio input, just network input).
AudioCubeSend (effect plugin): For each block of samples the DAW processes, this plugin will transmit those samples to Audiocube. Using the VST/AU API, we get a pointer to the audio buffer for that block in the process() callback. We then send it via UDP to Audiocube’s server. We can use a unique port or stream ID per instance. One approach is to have the plugin instantiate with a user-selectable ID (if multiple tracks, user sets “send channel 1, 2, 3…” etc., or auto-assign sequentially). Simplified pseudocode in C++ style:
void AudioCubeSend::processAudio(float** inputs, float** outputs, int numSamples, int numChannels) {
    // Prepare packet with audio data
    AudioPacketHeader hdr;
    hdr.channelId = this->channelId;
    hdr.sequence = seqCount++;
    hdr.numSamples = numSamples;
    hdr.numChannels = numChannels;
    // Copy samples interleaved or channel-separated as agreed
    PacketBuffer buffer;
    buffer.write(&hdr, sizeof(hdr));
    buffer.write(inputs[0], numSamples * numChannels * sizeof(float));
    udpSocket.send(buffer.data(), buffer.size());
    // Optionally pass audio through to output (or silence it if we don't want local monitoring)
    for (int ch = 0; ch < numChannels; ++ch) {
        memcpy(outputs[ch], inputs[ch], numSamples * sizeof(float));
    }
}
In this snippet, inputs is the array of input channel buffers. We copy them to a network packet along with a header (with channelId identifying which track). We use seqCount to number packets. We send via an already-connected UDP socket (the socket would be set to target Audiocube’s IP/port, likely localhost:someport). We then copy input to output so the DAW’s audio chain isn’t broken (or we could output silence if we want to remove it from the DAW mix to avoid duplication – depends on use case).
AudioCubeReturn (instrument plugin): This plugin will receive the audio from Audiocube and output it inside the DAW. Typically, in a DAW, an instrument plugin is one that generates sound. We can leverage that by making this plugin launch a background UDP listener that receives Audiocube’s streamed audio. For simplicity, it could join a multicast or just listen on a predetermined port. In the plugin’s audio callback, we fill the output buffer with any received samples. The challenge is that the DAW will call the plugin’s process function regularly (e.g., every 128 samples) and expects audio ready. We should buffer incoming packets in the plugin and match them to the DAW’s timing. A ring buffer here too is useful.
Pseudocode for the return plugin’s audio generation:
void AudioCubeReturn::processAudio(float** inputs, float** outputs, int numSamples, int numChannels) {
    // inputs unused
    // We assume stereo output for simplicity (numChannels = 2).
    // Fill outputs from an internal buffer which is filled by the network thread.
    if (netBuffer.available() >= numSamples * numChannels) {
        netBuffer.read(outputs, numSamples, numChannels);
    } else {
        // Not enough data: output silence (or hold the last sample)
        for (int ch = 0; ch < numChannels; ++ch)
            memset(outputs[ch], 0, numSamples * sizeof(float));
    }
}
Meanwhile, a separate thread in the plugin (started when the plugin is loaded) would be listening for UDP packets from Audiocube:
void NetworkReceiveThread() {
    while (running) {
        int bytesReceived = udpSocket.recv(recvBuffer, MAX_PACKET);
        if (bytesReceived > 0) {
            AudioPacketHeader* hdr = (AudioPacketHeader*)recvBuffer;
            float* audioData = (float*)(recvBuffer + sizeof(AudioPacketHeader));
            int N = hdr->numSamples * hdr->numChannels;
            netBuffer.write(audioData, N); // push into ring buffer
            // (Could add a sequence check here to detect drops and handle them accordingly)
        }
    }
}
This way, the plugin decouples the network reception from the audio callback using netBuffer. The buffer size should be a little larger than one block to allow slight timing difference. The plugin might also need to handle dynamic cases like the user stopping Audiocube (so no data — it should then output silence).
3. Ensuring Low Latency: The custom protocol we described uses network sockets, but on the same machine one could optimize further. For instance, using a named shared memory region for the ring buffer instead of UDP could reduce overhead (no packetization). The DAW plugin and Audiocube could map the same memory and use inter-process mutexes or atomic flags to coordinate read/write. This can achieve very fast transfer (essentially just memory copy). However, implementing cross-process shared memory with proper locking is more complex than UDP, and UDP on localhost is usually very fast (<0.1 ms). So the network approach is acceptable and simpler to implement/maintain.
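For illustration, the Unity side of such a shared-memory bridge could be sketched with .NET memory-mapped files (the map name, capacity, and index layout are assumptions; named mappings like this are Windows-specific in .NET, and the C++ plugin would open the same region through the corresponding OS API). Overrun checks and the read side are omitted for brevity:

    // Sketch: a named shared-memory region holding a ring buffer that both processes can map.
    // Assumed layout: two int32 indices (read, write) followed by the float sample area.
    using System.IO.MemoryMappedFiles;

    public class SharedAudioRing {
        const int HeaderBytes = 8;                       // readPos + writePos
        readonly MemoryMappedViewAccessor view;
        readonly int capacity;

        public SharedAudioRing(string name = "AudiocubeReturnBus", int capacitySamples = 48000) {
            capacity = capacitySamples;
            var mmf = MemoryMappedFile.CreateOrOpen(name, HeaderBytes + capacity * sizeof(float));
            view = mmf.CreateViewAccessor();
        }

        // Producer side: append samples and publish the new write index for the other process.
        public void Write(float[] samples) {
            int w = view.ReadInt32(4);
            foreach (float s in samples) {
                view.Write(HeaderBytes + w * sizeof(float), s);
                w = (w + 1) % capacity;
            }
            view.Write(4, w);                            // publish write position last
        }
    }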
4. Example Walk-through: Suppose the user has two tracks in Ableton they want in Audiocube, and one return track for Audiocube’s mix. They insert two instances of AudioCubeSend (assign ID 1 and 2) on those tracks, and an instance of AudioCubeReturn on a return track. They launch Audiocube, which starts its server listening on, say, port 9000 for incoming audio and sends out on port 9001 for return. As Ableton plays, for each block, Send(1) plugin sends track1 audio with ID=1 to port 9000; Send(2) sends track2 audio with ID=2 to port 9000. Audiocube’s server receives both, puts them in separate ring buffers. Unity’s audio thread pulls from those buffers for AudioSource1 and AudioSource2, which are positioned differently in 3D space. Unity processes spatialization (delay, panning, maybe reverb) and produces a mixed output. In the AudioListener callback, it sends the mix to port 9001. The return plugin in Ableton receives that and fills the return track’s buffer. The user hears the spatialized mix alongside the rest of their project. The round-trip latency might equate to roughly one or two audio blocks delay. If needed, the user can nudge the track delay compensation in Ableton (or Audiocube could potentially report a latency value to the DAW via plugin so it can compensate automatically if the DAW supports plugin delay compensation (PDC) for such routing).
5. Pseudocode Summary: Below is a simplified summary tying it together:
// DAW Send Plugin (pseudo)
processAudio(inputs, outputs, N, ch) {
    AudioPacket pkt;
    pkt.id = channelId;
    pkt.samples = N;
    copy(inputs, inputs + N*ch, pkt.audioData);
    socket.send(pkt);
    passthroughAudio(outputs, inputs, N, ch);
}

// DAW Return Plugin (pseudo)
processAudio(inputs, outputs, N, ch) {
    if (returnBuffer.available() >= N*ch) {
        returnBuffer.read(outputs, N, ch);
    } else {
        zero(outputs, N, ch);
    }
}

NetworkThread() {
    while (listening) {
        pkt = socket.recv();
        returnBuffer.write(pkt.audioData, pkt.samples * numChannels);
    }
}

// Audiocube (Unity) side
AudioInputThread() {
    while (listening) {
        pkt = socket.recv();
        inputBuffer[pkt.id].write(pkt.audioData, pkt.samples * numChannels);
    }
}

AudioSourceScript.OnAudioFilterRead(data, channels) {
    id = this.assignedId;
    if (inputBuffer[id].available() >= data.length) {
        inputBuffer[id].read(data, data.length);
    } else {
        zero(data);
    }
}

AudioListenerScript.OnAudioFilterRead(data, channels) {
    AudioPacket pkt;
    pkt.id = 0; // 0 for master out
    pkt.samples = data.length / channels;
    copy(data, data + data.length, pkt.audioData);
    socket.send(pkt);
}
This pseudocode omits some details (thread safety, initialization, error handling), but conveys the mechanism.
6. Using Unity Native Audio Plugin SDK: As an alternative to the above pure C# approach, one could implement parts of this in Unity’s native audio plugin layer (C++). Unity’s Native Audio Plugin SDK allows creation of custom audio DSP that can have custom IO. For instance, an AudioPlugin could open a socket and exchange audio. This might yield lower latency and better stability (no garbage collection concerns). However, it’s a more advanced route. The high-level design remains the same.
7. Testing and Iteration: A custom solution like this should be tested under realistic conditions: different buffer sizes, heavy CPU load in Unity, various DAWs, etc., to ensure it doesn’t drift or glitch. Logging buffer levels and sequence numbers can help tune the ring buffer size and detect any sync issues.
While a custom implementation is non-trivial, it offers Audiocube a tailor-made bridge that could be integrated into its UI (for example, Audiocube could automatically discover the DAW plugins if they broadcast their presence, similar to how AudioGridder’s plugin auto-discovers servers (AudioGridder – DSP servers using general purpose computers and networks)). This would create a smooth user experience: “Press connect to DAW” in Audiocube, and the audio starts streaming.
Conclusion
Real-time audio I/O bridging between Audiocube and traditional DAWs is technically feasible using a variety of approaches. Existing technologies like virtual audio cables and JACK can achieve the basic functionality today, though with some configuration overhead for the user. More integrated solutions, such as custom VST/AU plugin bridges or leveraging frameworks like WebRTC, can provide a smoother and more automated experience within Audiocube’s ecosystem. Each option comes with trade-offs in latency, complexity, and cost. Our analysis favors open-source, free methods to maintain accessibility – for example, using JACK for a power-user or developing a bespoke plugin-based streaming system for a turnkey setup. The software architecture should be designed around robust real-time principles: consistent sample rates, minimal buffering, and thread-safe audio exchange. By prioritizing low-latency data flow and proper sync, Audiocube can augment traditional DAWs with its 3D audio capabilities in real time, effectively acting as an extension of the DAW’s mixing environment.
In summary, bridging Audiocube with other DAWs can be accomplished via either intelligent use of existing routing tools (with JACK and virtual drivers being strong contenders) or by implementing a custom interconnect system akin to a modern ReWire replacement. The custom route, while requiring development effort, can be tailored to Audiocube’s needs (e.g., multi-channel 3D audio streams) and optimized for performance, as illustrated with the provided pseudocode. Given the rapid evolution of music production workflows, providing a seamless link between Audiocube’s spatial audio and standard DAW workflows will empower producers to incorporate immersive audio techniques without leaving their familiar production environment. With careful attention to latency and synchronization, Audiocube can effectively become a real-time 3D audio plugin for any DAW – enabling new creative possibilities in music and sound design.
References
Ableton. (2020). ReWire in Live – Ableton Knowledge Base. “ReWire has been deactivated as of Live 11. The developer, Reason Studios, ended support for ReWire in 2020.” (ReWire in Live – Ableton )
Ableton. (2020). How to route audio between applications – Ableton Knowledge Base. Lists common virtual audio routing tools: “Various virtual audio-routing protocols exist… VoiceMeeter (Windows), Jack (Windows and Mac), iShowU (Mac), Soundflower (Mac), BlackHole (Mac), Loopback (Mac).” (How to route audio between applications – Ableton )
JACK Audio. (n.d.). Home | JACK Audio Connection Kit. “JACK... is a professional sound server API... to provide real-time, low-latency connections for both audio and MIDI data between applications… enabling device sharing and inter-application audio routing.” (Home | JACK Audio Connection Kit)
Renoise Forums. (2020). About ReWire – what is the future of DAW to DAW connectivity? (Forum discussion). User notes Reason 11 dropped ReWire: “Reason 11 and beyond no longer support ReWire... ReWire is now basically ghostware… no need for ReWire because of Reason as VST.” (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums) (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums)
Renoise Forums. (2020). ReWire vs JACK user experience (Forum discussion). “ReWire is a piece of cake compared to JackAudio... Jack is a great environment if you have time to experiment... but it’s nothing straightforward… nothing as user friendly as ReWire.” (About ReWire - what is the future of DAW to DAW connectivity? - General Discussion - Renoise Forums)
Reaper Blog. (2022). ReaRoute ASIO Driver and ReaStream Network Audio Plugin. “ReaRoute... provides 16 channels of input and output between other ASIO apps and tracks in REAPER. ReaStream is a plugin for streaming audio and MIDI between computers on your network, or between apps using the ReaPlugs VST.” (ReaRoute ASIO Driver and ReaStream Network Audio Plugin | The REAPER Blog)
Ableton. (2022). Multi-Client Operation Drivers (Windows) – Ableton Knowledge Base. “FlexASIO is a universal ASIO driver… can be used for multi-client operation… permits multiple applications to share the same audio driver... making it possible to emulate a typical Windows application that opens an audio device in shared mode.” (Multi-Client Operation Drivers(Windows) – Ableton )
Audinate. (n.d.). Dante Via – Product Page. “Dante Via isolates and routes audio to and from applications, up to 16 bidirectional channels each… you can even send audio from different applications to different locations at once.” (Dante Via | Dante)
AudioGridder. (2020). AudioGridder – DSP servers using general purpose computers and networks. “AudioGridder is a network bridge for audio and MIDI that allows for offloading the DSP processing of audio plugins to remote computers… On your DAW, use the AudioGridder plugin… audio data from your DAW will be streamed over the network, processed on the server and streamed back.” (AudioGridder – DSP servers using general purpose computers and networks) (AudioGridder – DSP servers using general purpose computers and networks)
Unity Technologies. (2021). Unity Render Streaming Documentation – Audio Streaming. Describes Unity’s AudioStreamSender/Receiver: “This component streams the audio rendering results from an AudioListener or AudioSource... The receiver component receives an audio track stream and renders to an AudioSource.” (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 ) (Audio Streaming Component | Unity Render Streaming | 3.1.0-exp.9 )
Hacker News. (2021). JackTrip WebRTC Latency Discussion. User measurement: “Testing in Chrome on Mac... lowest latency I can get is 23ms, in Firefox it’s 14ms (client-side round-trip).” (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News)
Hacker News. (2021). Latency Perception Comment. “In experiments transmitting realtime audio... 10ms frame sizes are acceptable for one-way synchronicity... 7ms total latency is the threshold for truly undetectable processing (take with a grain of salt).” (JackTrip WebRTC: high quality, uncompressed, low-delay audio streaming | Hacker News)