Multichannel/multi-zone outputs

Post your Echotopia feature requests here. A good idea is to search or ask first on the Echotopia General Discussions forum to see if the feature already exists.
dionzand
Posts: 4
Joined: 29 Dec 2022 12:34

Hi,

Amazing software already! One feature that would really help my project is the creation of multichannel/multi-zone outputs.
I want to create a multi-zone installation with multiple speakers. Once I have created my soundscape in Echotopia, it would be nice to place multiple Observer objects on the map and feed each of them to a different audio output/speaker.

I believe this feature could also be really useful for multi-zone escape rooms. It would be really cool to set up the soundscape for your escape room, place different speakers throughout the rooms, and have the sounds fade nicely into each other as you move from one room/zone to the other.

I have tried rendering multiple Observer objects to disk and playing them on different speakers, but this runs into a number of problems:
- When you use "random" sounds in your soundscape, the rendered files are different for each speaker, so they do not fade nicely together. (Maybe this can be overcome by adding a "set random seed" option, so every output file has the same "pseudo-randomness"? There is a small sketch of what I mean below this list.)
- Starting multiple audio files simultaneously and feeding them to different speakers is quite a challenge. I have now set something up in Pure Data, where I use an external trigger to start the different audio files and send them to different DAC outputs.
- It takes away the interactivity. It would be nice to have it all in one place. And while we're at it, it would also be really cool if a soundboard could be incorporated in Live mode. But I saw that this is already on the roadmap :D
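
To illustrate the "random seed" idea, here is a minimal Python sketch (the file names and counts are made up): if every Observer render draws its "random" choices from a generator seeded with the same value, the separately rendered files make the same picks and stay consistent across speakers.

```python
import random

def pick_variations(seed, pool, count):
    """Pick "random" sound variations reproducibly from a fixed seed."""
    rng = random.Random(seed)  # independent generator, does not touch global state
    return [rng.choice(pool) for _ in range(count)]

pool = ["birds_a.wav", "birds_b.wav", "birds_c.wav"]

# Two separate renders (e.g. one per Observer) seeded identically
# choose the same sequence, so the files line up across speakers.
render_observer_1 = pick_variations(42, pool, 5)
render_observer_2 = pick_variations(42, pool, 5)
assert render_observer_1 == render_observer_2
```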

Thanks for all your work!
Kind regards,

Dion
Pan Athen
SoundFellas Crew
Posts: 76
Joined: 04 Dec 2021 20:51
Location: Athens / Greece

I can see the benefits of that.

As you stated, multi-zone functionality means that Echotopia would have more than one Observer, rendered in such a way that each Observer could be transmitted to a different set of speakers.

This is very useful in escape rooms or other installations that need more than one position in the world to be experienced simultaneously, with the added benefit that each position can be played back from a different set of speakers.

I would like to hear your thoughts on this, as it is a complicated idea and I want to know more before starting to design a feature like that.

To kickstart the discussion, I can share some questions that I have regarding the use of such a feature.

1. How do escape rooms operate?
2. Will it be one computer playing all that sound? And if yes, then how will it connect to multiple rooms?
3. Maybe it's better to use multiple mini-PCs to play back the different positions for each zone and have one master Echotopia running and controlling the client mini-PCs. That has the benefit of running complex scenes more easily, and also of not having to buy expensive sound cards with 56 channels to serve one escape adventure. It is also easier for us to implement, as it doesn't need to connect with ASIO or other proprietary hardware drivers that would take more time to develop.

Maybe having one master Echotopia controlling other Echotopia clients, each with one Observer positioned in a different location, is the best solution. This could communicate like a multiplayer game, with the master Echotopia functioning as the server and the others as clients. It could also be a helpful functionality for game masters running TTRPGs online, to control the sound that each of their players hears from their own Echotopia without transmitting any audio over the internet.
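
To make the idea a bit more concrete, here is a rough sketch of the kind of master-to-client messaging I have in mind, using the python-osc package. The OSC addresses, ports, and IPs below are made up for illustration; nothing like this exists in Echotopia yet.

```python
# Hypothetical master -> client messaging sketch using python-osc.
# Addresses, ports, and zone names are illustrative only.
from pythonosc.udp_client import SimpleUDPClient

# One client per zone, e.g. a mini-PC behind each room's speakers.
clients = {
    "lobby": SimpleUDPClient("192.168.1.21", 9000),
    "vault": SimpleUDPClient("192.168.1.22", 9000),
}

# The master tells each client where its Observer sits in the shared world.
clients["lobby"].send_message("/observer/position", [12.0, 3.5])
clients["vault"].send_message("/observer/position", [40.0, 8.0])

# A game event (e.g. a puzzle solved) could trigger a cue in one zone only.
clients["vault"].send_message("/cue/trigger", "door_unlocked")
```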
dionzand
Posts: 4
Joined: 29 Dec 2022 12:34

Thanks for replying so fast, and thinking along! I really appreciate it :)
1. How do escape rooms operate?
Usually the puzzles/games are controlled by small boards like Raspberry Pis or Arduinos. These are all connected (over the internet) to the main control PC, which also controls, for instance, the hints, music, and SFX being played in the room.
2. Will it be one computer playing all that sound? And if yes, then how will it connect to multiple rooms?
Ideally, yes. Or at least a main computer to control which sound is played where. Like I mentioned, I now use Pure Data to control which sound is played by which speaker (see for example: https://www.youtube.com/watch?v=kxpWrd8YozE, except I use the dac~ object to specify which channel is used for which sound). I use multiple USB sound cards to expand my outputs (see this project: https://www.raspberrypi.com/news/multip ... pberry-pi/).
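
If it helps, the same channel routing can also be scripted in Python instead of Pure Data. This is only a rough sketch of what my setup does, assuming the sounddevice and soundfile packages, placeholder file names, and a single output device (or an OS-level aggregate of the USB cards) with at least four channels:

```python
import numpy as np
import sounddevice as sd
import soundfile as sf

# Load one rendered Observer file per zone (placeholder names, stereo renders).
zone_a, fs = sf.read("observer_zone_a.wav")
zone_b, _ = sf.read("observer_zone_b.wav")  # assumed same sample rate

# Interleave both renders into one 4-channel buffer.
n = max(len(zone_a), len(zone_b))
buf = np.zeros((n, 4), dtype="float32")
buf[:len(zone_a), 0:2] = zone_a  # channels 1-2 -> zone A speakers
buf[:len(zone_b), 2:4] = zone_b  # channels 3-4 -> zone B speakers

# One call starts all channels together on the default output device
# (pass device=... to pick a specific interface or aggregate device).
sd.play(buf, fs, blocking=True)
```

Starting everything from one buffer keeps the zones sample-aligned, which is exactly what my external trigger in Pure Data is trying to achieve.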
3. Maybe it's better to use multiple mini-PCs to play back the different positions for each zone and have one master Echotopia running and controlling the client mini-PCs. That has the benefit of running complex scenes more easily, and also of not having to buy expensive sound cards with 56 channels to serve one escape adventure. It is also easier for us to implement, as it doesn't need to connect with ASIO or other proprietary hardware drivers that would take more time to develop.

Maybe having one master Echotopia controlling other Echotopia clients, each with one Observer positioned in a different location, is the best solution. This could communicate like a multiplayer game, with the master Echotopia functioning as the server and the others as clients. It could also be a helpful functionality for game masters running TTRPGs online, to control the sound that each of their players hears from their own Echotopia without transmitting any audio over the internet.
I like this idea, since I can also see the added value for online TTRPGs! For multi-zone escape rooms, however, this would mean that each zone requires a separate mini-PC, with low latency... So in that case it may be more favorable to (also) have the option to assign Observer objects to multiple local output channels. But I understand that this may be hard to develop; I have no experience with ASIO... Anyhow, this would not be a deal breaker.

Thanks again for taking the time to brainstorm over this :D

Kind regards,
Dion
Pan Athen
SoundFellas Crew
Posts: 76
Joined: 04 Dec 2021 20:51
Location: Athens / Greece

Thanks for replying so fast, and thinking along! I really appreciate it :)
Thank you too, this is valuable input.
Usually the puzzles/games are controlled by small boards like Raspberry Pis or Arduinos. These are all connected (over the internet) to the main control PC, which also controls, for instance, the hints, music, and SFX being played in the room.
Good to know. Are those communicating using OSC?
Ideally, yes. Or at least a main computer to control which sound is played where. Like I mentioned, I now use Pure Data to control which sound is played by which speaker (see for example: https://www.youtube.com/watch?v=kxpWrd8YozE, except I use the dac~ object to specify which channel is used for which sound). I use multiple USB sound cards to expand my outputs (see this project: https://www.raspberrypi.com/news/multip ... pberry-pi/).
That's a cool Raspberry Pi project I had missed, nice!
I like this idea, since I can also see the added value for online TTRPGs! For multi-zone escape rooms, however, this would mean that each zone requires a separate mini-PC, with low latency... So in that case it may be more favorable to (also) have the option to assign Observer objects to multiple local output channels. But I understand that this may be hard to develop; I have no experience with ASIO... Anyhow, this would not be a deal breaker.
It's something that we are going to look into in depth. We are currently gathering information to help our research on audio I/O. Some of the main points we are researching are: a framework's multiclient capabilities (being able to serve more than one sound card), ASIO and CoreAudio, virtual sound cards (virtual cables, etc.), and the ability to handle an arbitrary number of channels.
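
As a side note, a quick way to see what a machine actually exposes is to list the output devices and their channel counts through PortAudio. This is just a generic sketch using the Python sounddevice package, nothing Echotopia-specific:

```python
import sounddevice as sd

# List every output device the host APIs expose, with its channel count.
for idx, dev in enumerate(sd.query_devices()):
    if dev["max_output_channels"] > 0:
        host = sd.query_hostapis(dev["hostapi"])["name"]  # e.g. ASIO, CoreAudio, WASAPI
        print(f'{idx:3d}  {dev["name"]:<40s}  {host:<12s}  {dev["max_output_channels"]} out')
```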
Thanks again for taking the time to brainstorm over this :D
Are you kidding? There are not many things that I would rather do :-) Thank you for the awesome input.

Right now I am rethinking the next version of the audio output part of the engine. Based on user input over the last months, but also on our experience with DMDJ for over 11 years, there are some things that can be done to take immersive soundscapes to the next level.

Echotopia needs to be able to serve tabletop game masters, streamers, art installations, interior designers and architecture projects, content creators, media producers, and installations (escape rooms, museums, etc.).

The needs seem to be slowly converging. I can see some trends:

1. Ability to share experiences from different locations of a world to different devices locally or remotely.
2. Ability to control location switching from OSC or MIDI.
3. Ability for arbitrary sound channels to be ingested and then rendered in arbitrary speaker configurations. This is already offered automatically, and we plan to keep it for any new formats we add to Echotopia. It has to be a straightforward process for the users.
4. Ability to share experiences from different locations of a world to different channels for different devices locally or remotely.
5. A soundboard to launch special effects, Foley, and loops, and the ability to trigger those from OSC or MIDI (there is a small sketch of the MIDI side after this list).
6. Remote control of the app via mobile devices.
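
To illustrate point 5, here is a rough sketch of the kind of trigger path I mean, using the Python mido package (with a python-rtmidi backend). The note-to-cue mapping and cue names are made up; the real soundboard would of course live inside the app.

```python
import mido

# Hypothetical mapping of incoming MIDI notes to soundboard cues.
CUES = {60: "door_creak", 62: "thunder_hit", 64: "cave_ambience_loop"}

def fire_cue(name):
    print(f"trigger cue: {name}")  # stand-in for the app's actual cue launcher

# Listen on the default MIDI input and fire cues on note-on messages.
with mido.open_input() as port:
    for msg in port:
        if msg.type == "note_on" and msg.velocity > 0 and msg.note in CUES:
            fire_cue(CUES[msg.note])
```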

We also want to be able to ingest and render more immersive formats, like Dolby Atmos beds and ambisonic files (probably up to 3rd order), and to provide some basic processes for psychoacoustic upmixing. Also, center speaker and LFE management for cinematic formats, for those who do audio for media or have a standard speaker configuration in their living rooms.
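
To give a rough sense of the channel counts involved: a full-sphere ambisonic stream of order N carries (N + 1)^2 channels, so 1st order is 4 channels, 2nd order is 9, and 3rd order is 16, per stream, before counting any actual speaker feeds. This is part of why the arbitrary-channel question above matters.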

It's a lot of work if you consider that Echotopia is very different from your standard DAW, as it offers acoustics simulation right out of the box as part of the standard mixing engine. Building those features into a simple DAW would be a difficult job anyway; add on top of that the simulations that already run to keep the rendered sound field realistic, and offering those features may make Echotopia heavy on processing. So this is also an area that needs testing.

But, there is light at the end of the tunnel.

We are working on a new core part of the mixing engine, currently in the design stage, that will open up the mixing side a lot. After that, it will be much easier to work on the I/O part and link the two appropriately, offering users a lot of freedom. Any feedback or information is welcome.

:-)