Do I Really Need That Input?
I see (and write) a lot of articles about structuring a digital console to manage large numbers of inputs, with various people suggesting different solutions. You can group channels into busses for the added bonus of extra processing while retaining correct gain structure, or simply control them by assigning them to DCA faders. All of these are valid options and you should definitely figure out which workflow works best for you, but I want to explore the option of reducing the number of channels before they ever reach your mixing station. If we manage our channels at the source, there are a lot of benefits further down the line - from reducing stage clutter and minimising phase issues to reducing the number of channels you have to control. Let me walk you through my mental process for deciding what actually hits the inputs of my console.
The first consideration is, of course, the sound. If there is a discrete sound source on stage that I need to amplify for the audience, it has to either have a microphone put in front of it or be connected to a line-level input. The trick is determining what exactly counts as a discrete sound source. For example, a drum kit in a pop or rock genre, where you need a lot of control over the kit's various elements, requires a microphone on each of those elements. But if I am working with a jazz drum kit that has to sound like the listener is standing in front of it, I would consider the entire kit a single sound source, or maybe separate it into the kick drum and the rest of the kit. Adding more microphones to a jazz drum kit would just mean more work without necessarily adding anything constructive to its sound. When I consider how many microphones to use for a specific job, I always think about how detailed the elements of that sound source are and what I need in order to bring them out in the mix. Keeping that in mind while setting up the stage can vastly reduce the number of input channels you have to work with during the mixing process, without compromising the sound you are going for.
The second consideration I keep in mind when deciding how many microphones to use on stage is the proximity of those microphones to the same sound source. If a loud sound source is being picked up by two microphones in its proximity, you will absolutely have to deal with the bleed and the phasing issues that occur between those two inputs. Figuring out whether your microphones are in or out of phase relative to each other takes time, and the proverbial lack of it that we all deal with can also influence how many microphones I use. If I can successfully apply the 3-to-1 rule - keeping the second microphone at least three times as far from the source as the first one is - to minimize the effects of phasing between microphones, then I have no issues with placing them in my mix. If not, I think twice about either an alternative microphone placement or, if at all possible, eliminating one of those microphones completely. That consideration is especially present when dealing with orchestras and choirs. The number of microphones for these large groups can quickly rise to numbers that are hard to manage, so careful deliberation in these situations leads not only to an easier workflow, but also to a cleaner sound in general.
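If you want to see the arithmetic behind the rule, here is a minimal Python sketch. The 0.15 m distance is just an assumed example value, and free-field inverse-square propagation is an idealisation that ignores room reflections, but it shows why the rule works:

```python
import math

def bleed_attenuation_db(direct_distance: float, bleed_distance: float) -> float:
    """Level difference of the bleed under a free-field inverse-square
    assumption: each doubling of distance costs roughly 6 dB, so the
    ratio of the two distances sets the drop."""
    return 20 * math.log10(direct_distance / bleed_distance)

# Assumed example: a mic 0.15 m from its own source, with a second mic
# placed at the minimum spacing the 3-to-1 rule allows.
d_direct = 0.15          # metres from the source to its own mic
d_min = 3 * d_direct     # 3-to-1 rule: the other mic is at least this far away

print(f"Minimum spacing per the 3-to-1 rule: {d_min:.2f} m")
print(f"Bleed level at that spacing: {bleed_attenuation_db(d_direct, d_min):.1f} dB")
# -> roughly -9.5 dB: the bleed arrives far enough down that the comb
#    filtering between the two channels stays largely inaudible in a mix.
```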
What about deliberately doubling microphones on a particular sound source? We commonly see engineers using two microphones on kicks, snares, and guitar cabinets. Here it becomes a judgment call. If your mixing style requires doubled inputs of a particular source, then by all means, use them at your own discretion. I personally use doubled microphones on a kick drum, but usually not simultaneously. I shape the sound of the two microphones in such a way that I can use one of them for heavier beats or up-tempo songs and the other for ballads and softer material. On a digital console, however, you can try creating the same effect by duplicating the input channel and using different settings for different sounds, without any of the phasing problems. On guitar cabinets, as a general rule, I try to avoid using two microphones. If it is a mono cab, it makes no sense to me - the phasing problems usually outweigh any sonic benefit of using two different microphones at different positions. If it is a stereo cabinet, I might consider it, but I don't do it often. It all comes down to how I perceive a live sound system. I try to mix as much as I can in mono, because I want to recreate the same sonic image for everybody in the audience. If the people on the left side of the venue cannot hear the right cabinet of a guitar amp because you have panned it hard right in your stereo mix (and vice versa), then it doesn't make sense to use two different sounds for a stereo effect that only the few people next to the mixing console can actually perceive. Unless you're using that stereo image for a particular effect that needs to happen within a specific song, I would advise against doubling up on microphones on guitar cabinets. Again, if you need that wider sound, you can try duplicating the input and applying specific effects with just one input channel. Personally, I see much greater benefit in reducing phase cancellation and getting a more direct, clearer sound from a single mic on a guitar cabinet than in using two microphones that will have all sorts of phase cancellations between them.
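To illustrate why those cancellations bother me on a mono cab, here is a small sketch listing the frequencies where two copies of the same signal cancel when summed. The 10 cm path difference is an assumed example, and equal levels at both mics is a worst-case simplification:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def comb_notches(path_difference_m: float, count: int = 4) -> list[float]:
    """Frequencies where two equal-level copies of a signal, offset by the
    given path difference, cancel when summed: the delayed copy arrives
    half a wavelength late at odd multiples of c / (2 * delta_d)."""
    base = SPEED_OF_SOUND / (2 * path_difference_m)
    return [(2 * n + 1) * base for n in range(count)]

# Assumed example: the second mic sits 10 cm further from the speaker cone.
for f in comb_notches(0.10):
    print(f"notch at {f:,.0f} Hz")
# -> 1,715 Hz, 5,145 Hz, 8,575 Hz, 12,005 Hz: squarely in the midrange
#    that defines a guitar tone.
```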
The last thing I want to touch upon is questioning what we perceive as a stereo source by default. Nine times out of ten we would consider a keyboard with a stereo output a stereo source. But if you start thinking in terms of mono mixing, as I described in the previous paragraph, then that stereo image is sometimes irrelevant. For example, a piano sound will usually have the low notes panned toward the left side and the high notes toward the right side of the keyboard's stereo output. But if you mix that sound in full stereo on a live sound system, half of your audience might be stuck with only the low-frequency information and the other half with only the high-frequency information. So why not try capturing the full frequency response from a mono output of the keyboard and then also mixing in mono on your live sound system to create a unified sonic image for your entire audience? You can at least try connecting only the mono output of a keyboard if you know it is primarily used for lead or bass synth sounds. These will be mixed in mono by default anyway, so why waste an extra channel on your console? I understand that for acts you have just met at a festival this might not be an applicable solution, but if you are working with a specific band as their front of house engineer, try opening that discussion and experimenting by panning the two stereo channels to the same mono position in the mix to see whether or not this creates a more uniform image for the entire audience at the show.
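If you have a recording of the patch in question, you can even check its mono compatibility offline before that soundcheck discussion. A quick sketch might look like this; the file name is hypothetical, and it assumes the NumPy and SoundFile libraries are installed:

```python
import numpy as np
import soundfile as sf  # assumed available: pip install soundfile

# Hypothetical stereo bounce of the keyboard patch you want to check.
audio, rate = sf.read("keys_stereo.wav")  # shape: (samples, 2)
left, right = audio[:, 0], audio[:, 1]

mono = 0.5 * (left + right)  # simple mono fold-down

def rms_db(x: np.ndarray) -> float:
    """RMS level in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# If the mono sum comes out much quieter than either channel, the patch
# relies on phase tricks that will partially cancel for listeners who
# effectively hear the PA in mono.
print(f"L: {rms_db(left):.1f} dBFS, R: {rms_db(right):.1f} dBFS, "
      f"mono sum: {rms_db(mono):.1f} dBFS")
```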
In a world of crazy channel counts, with consoles that let you process more channels than ever before, simplifying your setup is still a route worth considering. By judiciously scrutinizing your input list, you might realize that you can reduce the number of inputs on your console, the number of microphones and microphone stands on stage, and the set-up time required for the act to get ready for the show. The biggest revelation for me in terms of deciding what to keep and what to discard was the way I started perceiving a live sound system: it went from being an oversized studio monitoring system to a system of speakers aimed where the people are in a room, trying to provide the same image for everybody, regardless of where they are positioned in the venue. Once I stopped thinking about live sound in absolute stereo terms, a whole world of possibilities for reducing the number of inputs on stage became apparent. Hopefully this is a consideration you can try out for yourself to see whether it applies to your own work.