Hi. Thanks for your new course. I was following it, and at this point I said to myself: I'll test my character. I made synchronized animations, so blending should be seamless. But it isn't. I think there is something wrong with Godot's blending. I believe it is described here: https://github.com/godotengine/godot/issues/23414 and one user even posted a workaround (I haven't tried it yet) - https://www.youtube.com/watch?v=13S76dOIaDc&feature=youtu.be.
Thanks for the long response. Note that all my animations have the same rhythm and length, and the feet positions are more or less synchronised, so they should at least behave much better. But I must admit this may be more complicated than I realize.
Edit: Sorry, but after checking the same animations in Blender, I have to respectfully disagree with your answer. My animations blend seamlessly in Blender's NLA track editor - I just get interpolation between poses, as expected. I understand that Godot cannot do this between unsynced animations, but in my case it should work.
But of course this has nothing to do with the course. I like it a lot, thank you very much :).
The behavior you're observing is normal. Animation blending applies a linear interpolation between two animations. Walk and run have different rhythms and foot positions, so there's a range in the transition that is going to feel off.
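To see why a plain linear blend feels off, here's a minimal sketch (plain Python, not Godot API) that models each clip's foot swing as a sine wave. The frequencies are made-up numbers standing in for a walk and a run with different rhythms; the blend is the same per-value lerp that engine blending performs.

```python
import math

# Hypothetical foot-swing curves: walk and run cycle at different rates.
def walk_foot(t):
    return math.sin(2 * math.pi * 1.0 * t)   # 1 cycle per second (assumed)

def run_foot(t):
    return math.sin(2 * math.pi * 1.6 * t)   # 1.6 cycles per second (assumed)

def blend(t, weight):
    # Linear interpolation between the two animated values,
    # which is what a straight animation blend does each frame.
    return (1.0 - weight) * walk_foot(t) + weight * run_foot(t)

# At a 50% blend the two curves drift in and out of phase over time,
# so the blended foot motion periodically cancels instead of stepping.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(round(blend(t, 0.5), 3))
```

When the phases oppose each other, the two sines partially cancel, which is the "feet sliding through each other" artifact you see mid-transition.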
The fast walk in the YouTube video above also looks off to me; it's not a natural walk. But it's the simplest quick fix to animation blending for basic movement.
A better way to fix the blending with simple blending features is to add one or two animations along the blend scale: a jog, and possibly a fast walk. They bridge the gap in rhythm between the walk and run animations and provide accurate intermediate foot positions. That's how the animation team at Bungie solved this issue on Halo... 3?
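The idea of a blend scale with intermediate clips can be sketched like this. It's a hedged illustration, not Godot's API: the clip names and their positions on the axis are assumptions, and the function just picks the two neighboring clips and a local blend weight, the way a 1-D blend space does.

```python
# Hypothetical 1-D blend axis: position -> clip name.
# With "jog" in the middle, each blend only spans clips of similar rhythm.
clips = [(0.0, "walk"), (0.5, "jog"), (1.0, "run")]

def pick_pair(x):
    """Return the two neighboring clips around x and the local blend weight."""
    x = max(clips[0][0], min(clips[-1][0], x))  # clamp to the axis
    for (a_pos, a_name), (b_pos, b_name) in zip(clips, clips[1:]):
        if a_pos <= x <= b_pos:
            weight = (x - a_pos) / (b_pos - a_pos)
            return a_name, b_name, weight
    return clips[-1][1], clips[-1][1], 0.0

a, b, w = pick_pair(0.7)
print(a, b, round(w, 2))  # blends jog and run, never walk directly into run
```

Because the axis never interpolates walk directly with run, each pairwise blend crosses a much smaller gap in rhythm and foot placement.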
Anyway, a more modern approach is motion matching against a larger library of animation poses, but that's a more recent technology that, I believe, Godot doesn't support yet. And most indie teams don't have the resources to support such a system either, as it relies on building a big library of animation poses that the algorithm can match against the rig's current pose.
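The core lookup behind that matching step can be sketched in a few lines. This is a toy, with made-up pose data: each pose is reduced to a flat list of joint angles and matched by squared Euclidean distance. Real motion-matching systems also compare joint velocities and a future trajectory, and use acceleration structures instead of a linear scan.

```python
# Hedged sketch of pose matching: find the library pose closest to the
# rig's current pose by squared Euclidean distance over joint angles.
def closest_pose(current, library):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(library)), key=lambda i: sq_dist(current, library[i]))

# Hypothetical pose samples, as if extracted from walk/run clips.
library = [
    [0.0, 0.2, -0.1],
    [0.4, 0.1, 0.3],
    [0.1, 0.25, -0.05],
]
print(closest_pose([0.1, 0.2, -0.1], library))  # → 2
```

The expensive part isn't this lookup; it's authoring and sampling a library large enough that a close match always exists, which is why small teams rarely attempt it.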