This is a new commercial project from a local, Glasgow-based musician, who is looking to develop a prototype for a live-show vocal effects system. The strong potential for use in education as well as live performance makes this a very interesting development.
Below is a reduced version of the complete proposal:
Interpreted Project Outline:
The AES (Audio Control System) is a tool to encourage artists to explore and discover the possibilities afforded by their voices and natural sound. It is to be a device that can be used as freely in a live show or performance as it is as an educational tool.
Via microphone input, the AES will allow users to easily toggle and control various sampling and altering effects, all in real time, through a unique wireless, touchscreen-based controller.
More complex inputs and alterations will be controllable via a desktop-based GUI.
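The wireless controller will speak OSC to the host. As a minimal sketch of what travels over the wire, the following encodes and decodes a single-float OSC message using only the Python standard library; the address `/effect/delay/mix` is a placeholder, not a confirmed AES address.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad to a 4-byte boundary, as OSC requires (always at least one NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def encode_osc(address: str, value: float) -> bytes:
    """Encode an OSC message with one big-endian float argument."""
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +            # type-tag string: one float
            struct.pack(">f", value))

def decode_osc(packet: bytes):
    """Decode an OSC message carrying a single float argument."""
    addr_end = packet.index(b"\x00")
    address = packet[:addr_end].decode()
    # Arguments start after the padded address plus the padded ",f" tag.
    args_at = len(osc_pad(packet[:addr_end])) + 4
    (value,) = struct.unpack(">f", packet[args_at:args_at + 4])
    return address, value
```

In practice the TouchOSC app and the MAX/MSP host handle this encoding themselves; the sketch only shows the shape of the control messages being routed.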
Users will be able to set, recall and toggle their own preset configurations of effects and parameters at any time during playback.
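The set/recall/toggle behaviour could be modelled as below. This is a sketch only; the preset names and parameter keys are hypothetical, and the real host will load/set presets inside MAX/MSP.

```python
class PresetBank:
    """Store, recall, and toggle named effect/parameter configurations."""

    def __init__(self):
        self._presets = {}   # name -> parameter dict
        self._active = None

    def store(self, name, config):
        self._presets[name] = dict(config)

    def recall(self, name):
        self._active = name
        return dict(self._presets[name])

    def toggle(self, a, b):
        """Flip between two stored presets, e.g. verse/chorus settings."""
        self._active = b if self._active == a else a
        return self.recall(self._active)
```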
There are four (4) outlined effects for the first prototype of the AES:
Frequency alteration (boost or crush the input frequency)
Delay (stacking feature)
Time stretch
Freeze (stacking feature)
Both the activation and the order of the effects will be user controllable. E.g. a user will be able to decide the order in which the effects are triggered, as well as whether an effect should be enabled at all.
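The user-controlled ordering and enabling described above can be sketched as a simple effect chain. The effect names and the toy functions are placeholders standing in for the real DSP patches.

```python
class EffectChain:
    """Apply effects in a user-chosen order; disabled effects are skipped."""

    def __init__(self):
        self.effects = {}   # name -> callable(samples) -> samples
        self.order = []     # user-controlled processing order
        self.enabled = set()

    def add(self, name, fn):
        self.effects[name] = fn
        self.order.append(name)

    def process(self, samples):
        for name in self.order:
            if name in self.enabled:
                samples = self.effects[name](samples)
        return samples
```

Reordering is then just a matter of rewriting `order`, and enabling/disabling is membership in `enabled` — which maps naturally onto toggle buttons on the touchscreen controller.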
PFP Outline on Development:
We expect there to be a natural user flow from set up through to performance.
This will initially see the user operating the AES from a desktop position: setting major parameters, configuring input devices and output settings, and ensuring connection with the wireless control device. Once set up, this will naturally progress into wireless, touchscreen interaction.
As such we see interactions within the final AES deliverable split into one of three categories:
Complex – users are allowed more advanced control via the desktop GUI
Simple – users can quickly and easily alter or control from the supplied touchscreen interface
Hidden – interactions processed behind the scenes, outside user control
On operating the AES, users will naturally journey from a higher level of complexity through to no interaction as they move from setup into a performance style of usage.
The AES will have three main areas of operability: 1. Input 2. Process 3. Output
Users may go through multiple iterations of the AES flow during their performance time with it; the stages will remain the same. Unless there is an error, the user will not return to Input without first visiting Process and Output.
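The Input → Process → Output cycle, with its error path back to Input, can be captured as a tiny state machine. The stage names follow the outline above; everything else is an illustrative assumption.

```python
# Stages of the AES flow, in the order the user visits them.
STAGES = ["input", "process", "output"]

def next_stage(current: str, error: bool = False) -> str:
    """Advance through the cycle; an error short-circuits back to input."""
    if error:
        return "input"
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]
```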
To avoid incorrectly ring-fencing the development of the effects, detail will be spared on their actual internal operation (black-box development) until final delivery:
Delay (and Stacking):
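While the shipped effect remains a black box as stated above, a feedback delay of this general family could be sketched as follows; buffer length and feedback amount are illustrative assumptions, not the prototype's actual parameters.

```python
from collections import deque

def feedback_delay(samples, delay_samples, feedback=0.5):
    """Illustrative feedback delay: each input sample is mixed with the
    delayed, attenuated output, so repeats stack and decay over time."""
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)
    out = []
    for x in samples:
        y = x + feedback * buf[0]  # oldest buffered sample feeds back
        buf.append(y)
        out.append(y)
    return out
```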
Freeze (and Stacking):
The initial brief asked for ‘drag and drop’ functionality; however, that may be too much given the proposed timeline.
Version 0 will attempt to remedy the usability by using limited dropdown menus, whereas drag and drop will be a stretch milestone goal of the development.
Users will also be able to define and control preset configurations at the output stage:
Points for clarification:
1. Input – three separately selectable channels, or a combined single mono track interpreted from all input microphones?
2. Effects – how much room for artistic interpretation of the brief? Core function will be maintained in terms of the scoping document.
Project deliverables:
1. Packaged (exe) MAX/MSP host
a. Effects loader/routing
b. Microphone input control
c. Audio output control
d. OSC in/out routing
e. Loading/setting preset configurations
2. TouchOSC control template for use with the TouchOSC app
a. Multi-level controller
3. Various effect patches (source) for use within the host
a. Frequency manipulator (-/+)
b. Delay
c. Time stretch
d. Freeze
There is also an accompanying video exploring the plan and how it will be executed: