Provide a paradigm similar to image editing, but for audio
The more I work with audio, the more I feel that the current paradigm is dated and due for a major shift. Having worked with other products that think a bit outside the box, I increasingly see audio as images/vectors: it could easily be treated as such, using tools we are already familiar with from the likes of Photoshop and Illustrator.
For example, to fade out, one could create a new layer (or, better still, a node) and draw a rectangle filled with a gradient representing the volume (in spectrum mode), or any other effect that applies to the audio. The user wouldn't be limited to simple geometric shapes either: they could draw free-hand and use various brushes, just as in Photoshop. What they draw is, in essence, a mask representing how strongly the effect applies to that area.
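To make the idea concrete, here is a rough sketch in Python/NumPy/SciPy of what such a gradient "layer" could mean under the hood (the clip, sample rate, and fade region are made up for illustration): the drawn rectangle becomes a gain mask over the spectrogram that fades the selected region out.

    import numpy as np
    from scipy.signal import stft, istft

    sr = 44100
    audio = np.random.randn(sr * 5)              # placeholder 5-second clip

    # "Spectrum mode": work on the short-time Fourier transform of the clip.
    f, t, Z = stft(audio, fs=sr, nperseg=1024)

    # The gradient-filled rectangle as a gain mask: 1 = untouched, 0 = silent.
    mask = np.ones(Z.shape)
    start, end = np.searchsorted(t, [3.0, 5.0])  # fade over the last two seconds
    mask[:, start:end] = np.linspace(1.0, 0.0, end - start)

    # Apply the "layer" and render back to a waveform.
    _, faded = istft(Z * mask, fs=sr, nperseg=1024)

The same mask array could just as well be painted free-hand or with a brush; the maths of applying it doesn't change.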
As with images, layers could be scaled as the user sees fit, which would then affect the timing and pitch as well as the volume (in waveform mode).
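As a rough illustration (the names and the simple interpolation approach are just my assumption of the most basic case), horizontally scaling a waveform layer amounts to resampling it, which stretches time and shifts pitch together, exactly like scaling an image layer:

    import numpy as np

    def scale_layer(audio, factor):
        # factor > 1 stretches the layer (longer, lower pitch);
        # factor < 1 shrinks it (shorter, higher pitch).
        old_idx = np.arange(len(audio))
        new_idx = np.linspace(0, len(audio) - 1, int(len(audio) * factor))
        return np.interp(new_idx, old_idx, audio)

Scaling vertically would simply multiply the samples, i.e. change the volume.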
Basic things that could be achieved with a simple "alpha" include mixing one audio channel with another, adjusting the volume, or filtering out frequencies (which could be done with gradient layers, etc.).
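In code, that "alpha" is nothing more than a per-sample blend; a minimal sketch (with hypothetical arrays a and b of equal length):

    import numpy as np

    def alpha_mix(a, b, alpha):
        # alpha is in [0, 1] per sample; a gradient layer is just
        # alpha = np.linspace(1.0, 0.0, len(a)), giving a cross-fade.
        return alpha * a + (1.0 - alpha) * b

which is exactly how an alpha channel blends two image layers.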
Colour-selection and smart-selection tools, such as those found in Photoshop, could let a user select regions of specific power or frequency on a spectrum, as well as draw, duplicate, and easily move shapes around.
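A "magic wand"/"colour range" equivalent could be as simple as a threshold on the spectrogram; a rough sketch (the thresholds and frequency band are arbitrary examples):

    import numpy as np
    from scipy.signal import stft

    sr = 44100
    audio = np.random.randn(sr * 5)                # placeholder clip

    f, t, Z = stft(audio, fs=sr, nperseg=1024)
    power_db = 20 * np.log10(np.abs(Z) + 1e-12)    # spectrogram in dB

    # Select all bins between -30 dB and -10 dB within 1-4 kHz,
    # like a marquee combined with a magic-wand selection.
    selection = (power_db > -30) & (power_db < -10)
    selection &= (f[:, None] >= 1000) & (f[:, None] <= 4000)

The resulting boolean mask is the selection: it could be moved, duplicated, attenuated, or handed to any other effect.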
The best thing about this paradigm is that it would offer the same sort of user experience and non-destructive workflow as Photoshop/Illustrator, with the ability to turn individual layers and/or filters on and off to quickly make and review changes. The possibilities are endless.
Note the following related suggestion, which could be considered together with this one: https://adobe-video.uservoice.com/forums/911356-cross-application-workflows/suggestions/35083549-node-based-editing
