Most streaming platforms make you choose: low latency or high scale. Real-time interaction or broad reach. You pick one delivery method and accept the trade-offs.
We decided early on that Conductor wouldn't force that choice. A pharma advisory board needs real-time Q&A with panelists and also needs to reach 500 viewers reliably. A product launch needs to simulcast to YouTube and LinkedIn while delivering a premium experience on the company's own watch page. These aren't edge cases. They're the default.
So we built three delivery paths into the architecture. Each one exists for a specific reason, serves a specific viewer, and makes a specific trade-off.
Path 1: WebRTC — the real-time path
WebRTC delivers video the way a phone call delivers audio: in real time, with no buffering, no segments, no manifest files. The video frame leaves the compositor and arrives at the viewer's browser in under a second. Often under 500 milliseconds.
This is the path that makes interactive production possible. When a moderator asks the audience a question and the poll results update in real time, the moderator needs to see those results while they're still talking about the topic — not 8 seconds later when the conversation has moved on. When a panelist responds to a viewer question submitted via Q&A, the viewer needs to hear the response while they still remember asking the question.
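To make that concrete, here's roughly what subscribing to the real-time path looks like from the watch page. This is a minimal sketch using Ant Media's JavaScript WebRTCAdaptor, not our production code; the URL, stream ID, and element ID are placeholders, and the option names should be checked against the SDK version you're running.

```typescript
import { WebRTCAdaptor } from "@antmedia/webrtc_adaptor";

// Sketch only: every value below is a placeholder, and option names
// may differ between SDK versions.
const adaptor = new WebRTCAdaptor({
  websocket_url: "wss://stream.example.com:5443/WebRTCApp/websocket",
  mediaConstraints: { video: false, audio: false }, // play-only viewer
  remoteVideoId: "viewer-video", // id of the <video> element on the page
  callback: (info: string) => {
    if (info === "initialized") {
      // Signaling is up; subscribe to the composited stream.
      adaptor.play("show-123");
    }
  },
  callbackError: (error: string) => {
    // In production this is where the HLS fallback would take over.
    console.error("WebRTC playback failed:", error);
  },
});
```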
The trade-off is scale. WebRTC maintains a connection per viewer. Each connection consumes server resources — CPU for packet routing, bandwidth for individual streams. Ant Media can handle thousands of concurrent WebRTC viewers on a single instance, but it's not infinitely scalable the way a CDN is. At 10,000 viewers you start thinking about clustering. At 100,000 you need a different approach.
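The back-of-envelope math makes that ceiling concrete. Every number in this sketch is an illustrative assumption, not a Conductor benchmark:

```typescript
// Rough per-instance capacity estimate for WebRTC delivery.
// All figures are illustrative assumptions.
const BITRATE_MBPS = 2.5;     // one 1080p rendition per viewer
const NIC_CAPACITY_GBPS = 10; // a typical cloud instance's network cap
const HEADROOM = 0.7;         // reserve 30% for signaling and spikes

const maxViewers = Math.floor(
  (NIC_CAPACITY_GBPS * 1000 * HEADROOM) / BITRATE_MBPS
);
console.log(maxViewers); // => 2800: thousands, but nowhere near CDN scale
```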
For Conductor's primary market — pharma advisory boards with 50-500 viewers, corporate town halls with 200-2,000 — WebRTC handles the load comfortably. The viewers get a real-time experience that no HLS-based platform can match.
Path 2: HLS via CloudFront — the scale path
HLS (HTTP Live Streaming) is the workhorse of internet video. Netflix, YouTube, Twitch — every major streaming platform uses some variant of HLS or its cousin DASH. The video is chopped into small segments (typically 2-6 seconds), served from a CDN, and reassembled by the player.
The advantage is scale. A CDN like Amazon CloudFront has 300+ edge locations worldwide. Once the stream is packaged as HLS segments and pushed to CloudFront, every additional viewer is just another HTTP request to the nearest edge node. Going from 1,000 to 100,000 viewers doesn't meaningfully increase the origin server load. The CDN absorbs it.
The trade-off is latency. Even in low-latency mode, HLS adds 2-5 seconds of delay. The segments need to be encoded, packaged, distributed to edge nodes, downloaded by the player, and buffered before playback. This is the physics of the protocol. You can optimize it, but you can't eliminate it.
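Here's an illustrative budget for where those seconds go in a low-latency HLS setup with 2-second segments. The figures are assumptions chosen to make the arithmetic clear, not measurements of IVS:

```typescript
// Illustrative glass-to-glass delay budget for low-latency HLS.
// Every value is an assumption, not a measurement.
const budgetSeconds = {
  encodeAndPackage: 1.0, // encoder lookahead plus segment packaging
  cdnPropagation: 0.5,   // origin-to-edge transfer of the newest part
  playerBuffer: 2.0,     // the player holds roughly one segment
  networkJitter: 0.5,    // safety margin for the viewer's connection
};

const total = Object.values(budgetSeconds).reduce((a, b) => a + b, 0);
console.log(`${total}s glass-to-glass`); // => "4s glass-to-glass"
```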
In our architecture, Ant Media pushes an RTMP feed to Amazon IVS, which packages it as HLS and delivers it via CloudFront. The IVS Player SDK on the watch page handles adaptive bitrate switching — if a viewer's connection degrades, the player drops to a lower quality rendition automatically rather than buffering.
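Wiring up this path on the watch page looks roughly like the sketch below. It assumes the script-tag build of the IVS Player SDK, and the playback URL and element ID are placeholders:

```typescript
// Assumes the IVS Player SDK <script> tag has loaded and exposed
// IVSPlayer as a global; the playback URL is a placeholder.
declare const IVSPlayer: any;

const PLAYBACK_URL = "https://playback.example.com/stream.m3u8";

if (IVSPlayer.isPlayerSupported) {
  const player = IVSPlayer.create();
  player.attachHTMLVideoElement(
    document.getElementById("viewer-video") as HTMLVideoElement
  );
  player.setAutoplay(true);
  // Adaptive bitrate is the player's job from here: it switches
  // renditions on its own when the viewer's bandwidth changes.
  player.load(PLAYBACK_URL);
}
```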
This path is the universal fallback; every viewer has it available. If WebRTC fails (corporate firewall blocking UDP, weak mobile connection, browser that doesn't support it), the watch page switches to HLS seamlessly. The viewer might not even notice — they just get a slightly less real-time experience.
Path 3: Multi-destination RTMP — the reach path
The third path doesn't go to Conductor's watch page at all. It goes everywhere else.
RTMP (Real-Time Messaging Protocol) is the lingua franca of live streaming ingest. YouTube Live, LinkedIn Live, Facebook Live, Twitch, and virtually every other streaming platform accepts RTMP. Ant Media can push the same composited output to multiple RTMP destinations simultaneously.
This means a single Conductor show can stream to the company's branded watch page (via WebRTC and HLS), to YouTube (for public discovery), to LinkedIn (for professional reach), and to a custom RTMP endpoint (for internal distribution or archival) — all at the same time, from one production.
The director doesn't need to set up separate encoders for each destination. They don't need OBS running alongside Conductor. They don't need a third-party multistreaming service. The multi-destination output is built into the infrastructure layer.
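Conceptually, each destination is one REST call. The sketch below assumes Ant Media's v2 REST endpoint for adding RTMP push destinations to a broadcast; the host, app name, and stream keys are placeholders, and the path should be verified against your server's REST documentation:

```typescript
// Push the same broadcast to several RTMP destinations.
// Host, app name, and stream keys are placeholders.
const ANT_MEDIA_REST = "https://stream.example.com:5443/WebRTCApp/rest/v2";
const BROADCAST_ID = "show-123";

const destinations = [
  "rtmp://a.rtmp.youtube.com/live2/YOUR-YOUTUBE-KEY",
  "rtmps://rtmp-api.linkedin.com/live/YOUR-LINKEDIN-KEY",
  "rtmp://archive.internal.example.com/live/compliance-copy",
];

async function addRtmpDestinations(): Promise<void> {
  for (const rtmpUrl of destinations) {
    await fetch(`${ANT_MEDIA_REST}/broadcasts/${BROADCAST_ID}/rtmp-endpoint`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ rtmpUrl }),
    });
  }
}
```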
For pharma, this means a KOL advisory board can stream to the branded watch page for the primary audience while simultaneously streaming to an internal recording endpoint for compliance archival. For entertainment, it means a premiere event can go to the branded page, YouTube, and Twitter simultaneously. One show, every platform.
How the watch page selects the path
The viewer doesn't choose a delivery path. The watch page chooses for them.
On load, the watch page attempts a WebRTC connection to the Ant Media server. If the connection succeeds and the network quality is sufficient, the viewer gets the real-time path. Playback starts in under a second with no buffering.
If WebRTC fails — corporate firewall, UDP blocked, browser limitation — the watch page falls back to HLS via the IVS player. Playback starts with a brief buffer, and the viewer gets the CDN-delivered experience. Still reliable, still high quality, just with a few seconds of latency.
The transition is invisible. There's no "click here for low-latency mode" toggle. There's no quality selector that the viewer has to understand. The infrastructure makes the decision based on what works best for that specific viewer's environment.
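Reduced to code, that decision is a race with a timeout. A simplified sketch, where the two helpers stand in for the Ant Media and IVS wiring shown earlier and the timeout value is an arbitrary assumption:

```typescript
// Hypothetical helpers wrapping the SDK calls sketched in earlier sections.
declare function playViaWebRTC(streamId: string): Promise<void>;
declare function playViaHLS(playbackUrl: string): void;

const WEBRTC_TIMEOUT_MS = 3000; // assumption: give WebRTC 3s to connect

async function startPlayback(streamId: string, hlsUrl: string): Promise<void> {
  const timeout = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("webrtc-timeout")), WEBRTC_TIMEOUT_MS)
  );

  try {
    // Try the real-time path first.
    await Promise.race([playViaWebRTC(streamId), timeout]);
  } catch {
    // Blocked UDP, unsupported browser, or just too slow:
    // fall back to the CDN path without asking the viewer anything.
    playViaHLS(hlsUrl);
  }
}
```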
Why this matters more than it sounds
The three-path architecture is an infrastructure decision, but its impact is felt in the product experience. When a platform has only HLS, the product team works around the latency — polls have to be pre-timed, Q&A feels disconnected, the moderator learns to wait before reacting. When a platform has only WebRTC, the team worries about scale — will it handle 500 viewers? What about 5,000?
Having all three means the product team doesn't have to think about delivery at all. They design interactive features knowing that real-time viewers will have a real-time experience. They promise scale knowing that the CDN handles overflow. They promise reach knowing that multi-destination streaming is a configuration, not a project.
The delivery layer becomes invisible. Which is exactly what infrastructure should be.
What's next
The three paths are architecturally defined. WebRTC via Ant Media and HLS via IVS are both functional. Multi-destination RTMP is configured but hasn't been tested in production yet.
The next step is the StreamingProvider abstraction — a clean separation between ingest endpoint and playback URL that lets us swap CDN providers without touching the director or watch page code. IVS is the right CDN for now. It might not be the right CDN forever. The architecture should be ready for that change when it comes.
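As a sketch, the abstraction may be nothing more than an interface. Every name below is provisional, invented here for illustration:

```typescript
// Provisional shape for the StreamingProvider abstraction.
// The director and watch page only ever see ingest endpoints and
// playback URLs; the CDN behind them is an implementation detail.
interface StreamingProvider {
  /** Where Ant Media should push the composited RTMP feed. */
  getIngestEndpoint(showId: string): Promise<{ url: string; streamKey: string }>;
  /** What the watch page should hand to its HLS player. */
  getPlaybackUrl(showId: string): Promise<string>;
}

// Today's implementation would target IVS; values are placeholders.
class IvsProvider implements StreamingProvider {
  async getIngestEndpoint(showId: string) {
    return { url: "rtmps://ingest.example.com:443/app/", streamKey: "PLACEHOLDER" };
  }
  async getPlaybackUrl(showId: string) {
    return "https://playback.example.com/stream.m3u8";
  }
}
```

Swapping CDNs then means writing one new class with this shape, not touching the director or the watch page.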
The deeper lesson from building three delivery paths is the same lesson from every architectural decision in Conductor: don't make the user choose when the infrastructure can choose for them. The viewer shouldn't know or care whether they're watching via WebRTC or HLS. The director shouldn't know or care whether the stream is going to one destination or five. The infrastructure handles it. The humans do their jobs.
This is part of our ongoing build journal. Follow Conductor Live on LinkedIn for updates, or request early access to try the platform.