I have a question: SynthV is highly optimized to reproduce a human voice; 1 instrument (for example, Solaria) = 1 voice. But from your point of view, is there anything that would prevent combining a group of voices into a single instrument, for example a ‘soprano section’? Similarly, what would prevent developing other instruments on this basis? Of course, there would no longer be any notion of lyrics, but with AI, wouldn’t we get a different quality of rendering than current VSTs?
I wonder what multiple voices with perfectly synchronised timing, vibrato and glissandi would sound like? Definitely not ‘natural’. I have a ‘choir’ sample set for a sampler which sounds… usable, but it still needs a few layers with timing differences etc. before it starts to sound convincing to me.
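To illustrate what those timing differences buy you, here’s a minimal sketch of the layering trick in Python (nothing to do with how SynthV actually renders): N copies of one sine ‘voice’, each given a small random onset, detune and vibrato offset. Set the spreads to zero and you get one loud voice; with them enabled, the decorrelation is what reads as a ‘section’. All the numbers (30 ms spread, ±15 cents, etc.) are just plausible guesses, not measured values.

```python
import wave
import numpy as np

SR = 44100          # sample rate (Hz)
DUR = 2.0           # note length (s)
F0 = 440.0          # base pitch (Hz)
N_VOICES = 8        # size of the "section"

rng = np.random.default_rng(0)
t = np.arange(int(SR * DUR)) / SR
mix = np.zeros_like(t)

for _ in range(N_VOICES):
    onset = rng.uniform(0.0, 0.03)         # up to 30 ms timing spread
    detune = rng.uniform(-15, 15)          # +/- 15 cents static pitch spread
    vib_rate = rng.uniform(5.0, 6.5)       # per-singer vibrato speed (Hz)
    vib_depth = rng.uniform(5, 20)         # per-singer vibrato depth (cents)
    vib_phase = rng.uniform(0, 2 * np.pi)  # uncorrelated vibrato phase

    # instantaneous frequency, then integrate to phase for a smooth glide
    cents = detune + vib_depth * np.sin(2 * np.pi * vib_rate * t + vib_phase)
    f = F0 * 2 ** (cents / 1200)
    phase = 2 * np.pi * np.cumsum(f) / SR
    voice = np.sin(phase)
    voice[t < onset] = 0.0                 # each "singer" comes in late
    mix += voice / N_VOICES

# write the ensemble out to listen to it
with wave.open("section.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SR)
    w.writeframes((mix * 32767).astype(np.int16).tobytes())
```

Even with plain sines the result shimmers like a chorus rather than one louder tone, which is exactly why identical layers with perfectly locked timing and vibrato won’t do it.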
I suspect there are tools out there much better suited to rendering musical instruments; even free programs like DecentSampler allow multi-sample, round-robin playback, and the CPU load is much lower as it is just playback rather than rendering from a single data set.
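For anyone unfamiliar with the term, here’s the round-robin idea in a few lines of Python (this is just the concept, not DecentSampler’s actual code, and the file names are made up): each note keeps a small pool of alternate recordings and playback simply cycles through them, so repeated notes never fire the identical sample twice in a row (the ‘machine gun’ effect). Selection is trivial, which is why sample playback is so cheap on the CPU.

```python
from itertools import cycle

class RoundRobinNote:
    def __init__(self, sample_paths):
        # alternate takes of the same note; hypothetical file names
        self._takes = cycle(sample_paths)

    def trigger(self):
        # just pick the next pre-recorded take; no synthesis involved
        return next(self._takes)

c4 = RoundRobinNote(["sop_C4_rr1.wav", "sop_C4_rr2.wav", "sop_C4_rr3.wav"])
print([c4.trigger() for _ in range(5)])
# -> ['sop_C4_rr1.wav', 'sop_C4_rr2.wav', 'sop_C4_rr3.wav',
#     'sop_C4_rr1.wav', 'sop_C4_rr2.wav']
```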
Certainly, just as VSTs sound so different to my trusty old Korg M1. But is that better - or just ‘different’?
Given the cost of producing a human voice for SynthV versus how easily you can produce your own samples for a playback program, I don’t think it would work financially, nor does it interest me personally.
There are about a gazillion VSTs out there for that purpose. Symphonic and choir instruments do sections (out of necessity), but otherwise a single instrument is just a single instrument.