What accessibility or usability improvements would you like to see in Synth V? + User Survey

Hey Dreamtonics community! What accessibility / usability improvements would you like to see in Synthesizer V Studio? In terms of issues, think about what frustrates you, or what you wish took less time and effort when you use Synth V.

What is accessibility?

Digital accessibility means making things like websites, apps and software easier for disabled individuals to use. It is closely linked to usability, UX/UI and web/software development practices. Accessibility is essential for people with disabilities and useful for all.

I’m a Synth V user myself, as well as a Digital Accessibility Specialist apprentice whose special interest is vocal synthesis. I am currently working on my final project, based on accessibility testing / auditing of the Synthesizer V Studio software.

This idea was inspired by a post I saw on the old Synthesizer V forum, made by a user with a visual disability requesting accessibility support for blind and partially sighted people. They said they were a huge fan of singing synthesizers and had always wanted to make content with this program, but sadly couldn't because Synth V has insufficient support for screen readers - an assistive technology, used by many, that reads the text and controls on the screen aloud. I have also seen similar requests from other vocal synth user groups, such as VOCALOID's. This stuck with me, since this program and all the creations made with it give me so much joy; creating with Synth V has been an important part of my life, and it's really such a shame that others are excluded from it.

As a result, I’d like to collect as many opinions and thoughts as I can from the community: partly to support my project plan, but also to gather evidence of the key barriers and pain points experienced by you, the users and customers of Dreamtonics. My aim is to call for greater accessibility and inclusion, so that more people can access, enjoy and create with the Synthesizer V software, regardless of disability. Once I am done evaluating Synth V, I plan to put together a summary resource based on my findings and final report, with best-practice recommendations for accessible software development that will hopefully benefit the wider vocal synth and web/software dev community.

Please leave your thoughts and experiences on this topic. As my project is focused on accessibility, I would appreciate it if you included details about how your particular disability or condition (this includes challenges related to age and mental health conditions) impacts how you use Synth V. Even if you do not consider yourself to be disabled, please still comment on any general usability issues you've faced.

If you have the time, it would help me out greatly if you could fill out this short User Survey I made which goes into more detail. It should take about 5-10 minutes to complete.

Thanks so much for reading! Let me know if you have any questions at all.


Thanks for preparing this survey


Hi there! I might be the person you’re talking about from the old forum. I made a topic about this, I think it was some time last year? I’m not sure. Anyway, sorry for the very late response.

There’s quite a lot of things that could be done to make Synth V accessible, but some things are more complicated than others. A good start would be keyboard shortcuts though. If DT makes SV completely usable with a keyboard without requiring a mouse, that is a huge step by itself. Drawing parameter curves, inserting notes, moving the play-head across the grid by quantize amount, jumping to the menu bar… all these things need to either have keyboard shortcuts or allow you to set them.

After that, things get a little more complicated. I’m assuming DT made the scripting system like this for a reason, but you can’t call external libraries and/or C code within a script. This would be useful for making the program screen reader accessible, since the idea would be to allow people to make calls to a screen reader controller client library to speak things like what measure you’re on, different notes you’re arrowing between, etc. If it’s a matter of security, they would need to add that functionality themselves. From what little I understand, it’s not too difficult. The problem lies in finding accessibility information, since it’s scattered all over the web. Currently, there is no central location to find accessibility information - a problem I am actively working on a solution for. Until then, it’ll always be sort of complicated to find accessibility development info.
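The "controller client library" idea above can be sketched abstractly. Everything below is hypothetical: Synth V's scripting system does not expose anything like this (and its scripts are written in Lua/JavaScript, not Python), and the stub backend stands in for a real client such as NVDA's controller DLL. The point is only the shape such a bridge could take - pick the first screen reader that's running, and route UI events to it as spoken announcements.

```python
# Hypothetical sketch of a screen-reader bridge. StubBackend stands in for a
# real controller client (e.g. the NVDA controller DLL); Announcer is the
# routing layer a host application could expose to scripts.

class StubBackend:
    """Stands in for a real screen reader client library."""
    def __init__(self):
        self.spoken = []          # record of announced strings

    def is_running(self):
        return True               # a real client would probe the screen reader

    def speak(self, text):
        self.spoken.append(text)  # a real client would speak this aloud

class Announcer:
    """Routes announcements to the first available screen reader backend."""
    def __init__(self, backends):
        self.backend = next((b for b in backends if b.is_running()), None)

    def announce(self, text):
        if self.backend is not None:
            self.backend.speak(text)

# Example events a host like Synth V could surface while arrowing around:
stub = StubBackend()
announcer = Announcer([stub])
announcer.announce("Measure 12, beat 1")
announcer.announce("Note C4, quarter note, lyric 'la'")
print(stub.spoken[0])  # -> Measure 12, beat 1
```

In practice the backend list would hold one adapter per screen reader (NVDA, JAWS, VoiceOver), which is exactly why a central place to find each one's client API documentation matters.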

I am a Vocaloid and Synth V producer myself, with about 2-3 years of experience. I’d love for you to get in touch sometime so we can talk about this more, should you be open to that - or anyone else interested in discussing this topic, for that matter. My email is garrettsworld2000 (at) gmail (dot) com. I also use Discord (dangero2000 is my handle). I know this is quite a bit late, but I hope this info has been useful to you.

Hey! Sorry for the late reply, I’ve been away on holiday - in any case, I mega appreciate your detailed response, it’s incredibly useful for me! Especially as I’m not as knowledgeable about scripting and the like. Since I started my project, it has indeed been evident that there is a severe lack of centralised information about non-web software accessibility. Even in terms of auditing/testing procedures or guidance there is very little; I’m currently working from multiple isolated sources myself (WCAG2ICT, ISO 9241-171, etc.).

I would absolutely love to get in touch and chat more about this topic, and welcome the opportunity to work together in creating a central location to find specific accessibility guidance; I was already planning on creating a generalised best practice resource for non-web software. I’ll be in contact via Discord, looking forward to discussing more!

Also, if it was you, thanks so much for filling out my survey! Have a great day 🙂