@annamaeblythe This only provides range in the vocoder, not the acoustic model. The acoustic model's range will always be limited by the data it was trained on, unless you use tone shift with it, which is a big hassle, especially if your data doesn't cover a lot of pitches
@NaysanMLo7 This is fixed by using this vocoder :) !
github.com/openvpi/vocode…
Don't use it on human-recorded models with a decent pitch range, though! It will flatten it completely 😅
Idk if ppl know this or not, but if you're porting UTAU to DiffSinger, please make sure the renders (or whatever final audio you're using for training) cover LOTS of pitches. This prevents your model from being unable to sing in ranges that aren't in the data
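One way to sanity-check this before training is to look at which semitones your recordings actually hit. Here's a minimal sketch, assuming you've already extracted per-frame f0 values in Hz (with whatever pitch tracker you use); the function names, note range (C3 to C5 here), and example frames are all made up for illustration:

```python
import math

def midi_note(f0_hz):
    # Convert a fundamental frequency in Hz to the nearest MIDI note number
    # (A4 = 440 Hz = MIDI note 69).
    return round(69 + 12 * math.log2(f0_hz / 440.0))

def pitch_coverage(f0_values, lo_note=48, hi_note=72):
    # Return the semitones in [lo_note, hi_note] (C3..C5 here) that no
    # voiced frame lands on -- the gaps your model may struggle to sing.
    covered = {midi_note(f) for f in f0_values if f > 0}
    return [n for n in range(lo_note, hi_note + 1) if n not in covered]

# Hypothetical example: frames clustered around A3 (~220 Hz) plus one E4
# frame leave almost the whole two-octave range uncovered.
frames = [220.0, 221.5, 219.0, 330.0]
gaps = pitch_coverage(frames)
```

If `gaps` comes back long, that's the sign to record or render more material at those pitches before training.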
I'm sorry, but I don't get people who stopped listening to vsynths because the pipeline moved to AI and think it's bad just because the vocals are AI-generated 😭
Why the fock does Twitter keep suggesting me apology tweets 😭 I swear every time I get a notification from this account it's some random person saying sorry about something