For anyone doing voice-over, there is a lot of work to make sure the voice comes out correctly: in the person, in the environment, in the gear, in the recording, and in the final processing. For an effective voice-over, you need to make sure that all five aspects of the voice-over recording process are as good as you can make them. Some of these are one-time preparations, others are session-specific, and still others require regular practice to maintain good quality.
In the Person
The voice is the most important part of the process; if your voice isn’t up to snuff, then there is absolutely no point in continuing from here.
Now, this is not to say that your voice has to fit a specific model of sound. A gruff voice can be just as useful as a clean one, and the ability to speak clearly can give you an edge over a golden-voiced mumbler. But, this is not something you can just have, it requires work in order to build and refine.
Remember, every single action a human can take requires specific sets of muscles, and speaking is no different. You have the tongue, plus the muscles controlling your lips, jaw, throat, and lungs. The more aware of these muscles you are, the more you can exercise them, and the stronger and more flexible they’ll become. And with stronger muscles, your loudness, frequency range, and tonal flexibility all become much more controllable.
Breathing – Vocal Amplitude
To ensure your voice is as strong as you want, you need to perform breathing exercises. This mostly involves taking in as much air as you can, so that it fills the lungs all the way down to the diaphragm. Then let the breath out, and take in the next lungful. Try to open your throat as widely as you can so that it takes in air faster, filling the lungs faster.
A strong set of lungs allows you to project your voice more. This may not be essential in the studio, but live appearances can definitely benefit from vocal projection. Another aspect to this is that you can speak more between breaths without being rushed. Of course, good breathing also gives you better health in general, which means you can do voice-over for a while longer than someone in poorer health.
Throat – Frequency Range
To ensure your voice has a good range, nothing beats singing, especially songs that exercise the highest and lowest parts of your voice. Look up good singers’ warm-ups; they should give you other exercises to widen your voice’s frequency range. Scales are another method, as is starting at one sound and slowly working your way up to the highest frequencies, then down to the lowest frequencies you can manage. Don’t force the sounds! Vocal cords depend on their flexibility, and if you damage them by forcing your voice, they will heal up and scar, which will reduce your vocal range in the long run.
Voice flexibility can give you more room for characterizations, or to adjust your voice for the script’s tone. Even in normal situations, having a wider vocal range means you have a stronger ability to emphasize in your voice-overs. This is especially important in announcement and advertising, where the peaks and valleys of your voice will emphasize what you are talking about.
Mouth – Phonetic Clarity
Next is working on the muscles in your mouth, including your tongue. There are a number of exercises that can help with this, but the one I recall being useful is to hold a wide object between your teeth and try to speak lines as clearly as possible. This forces you to pay close attention to the positions of your lips and tongue as you work around the foreign object to approximate the sounds you would normally make unobstructed.
The object of this kind of practice is to emphasize enunciation, the practice of making sounds as clearly as possible. By default, humans will speak with as little energy toward speaking as possible. This means that the lips, tongue, and jaw will move as little as possible to produce the sounds they want. When doing voice-over, however, enunciation is important to ensure what you are saying is clearly heard by the recording equipment, and ultimately, the listener.
Brain – Faster and Better Reads
Nothing sharpens your vocal reading capability better than everyone’s favorite voice exercise: Tongue twisters. These phrases and paragraphs are designed to use a series of close-sounding words to trip up the reader, effectively prompting them to read the wrong word in the confusion. Don’t limit yourself to just one tongue twister, read one a few times and move on to the next one. Collect as many as you can for more practice.
Another exercise is a lifestyle choice… read. Read a lot. Read out loud. Your spouse will probably look at you crosswise, the children will probably complain to their friends about that crazy person at home who doesn’t know how to keep their mouth shut, or perhaps the authorities have started covertly fitting you for a white jacket, but keep reading out loud as much as possible.
While doing reads in a home studio may not require a lot of speed, it helps if you can finish a read in as few takes as possible. If you can perform a read perfectly upon first receiving the script (often called a cold read, a staple of news announcers), you become a much more valuable talent than the person who spends two hours fumbling takes before getting the read right.
Now that your voice has become all it can be, let’s get to the second aspect of the process… your studio environment.
In the Environment
If you work at another studio, where professional engineers with company budgets can set up soundproofing and acoustical optimization, then this section will probably not apply to you (in which case, why are you even reading a site about home recording, hmmm?).
The first and last words in studio design are “noise control.” Noise will ruin an otherwise clean recording, whether it is a chair creak, computer drone, children playing outside, or the reflection of your own voice off the walls. So, the goal is to eliminate noise as much as possible. This can be done in several ways.
The Great Enemy of Stray Sound
Sound travels outward in a sphere from its source. When you speak into a mic, your voice is picked up by the microphone, but it also radiates across the rest of the roughly 41,253 square degrees of that sphere. Your head will block the compression waves headed into it, but otherwise, the sound will travel until it wears itself out. If the room has hard walls, the sound will bounce off them, and the microphone will pick it up.
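The square-degree figure for a full sphere comes straight from geometry, and it’s easy to sanity-check (a quick Python sketch):

```python
import math

# A full sphere covers 4*pi steradians; one steradian is (180/pi)^2
# square degrees, so the whole sphere is 4*pi*(180/pi)^2 of them.
square_degrees = 4 * math.pi * (180 / math.pi) ** 2
print(round(square_degrees))  # 41253
```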
And it does not matter which pickup pattern the microphone has; even with a cardioid pattern, the sound can bounce off the wall in front of you, then the one behind you, and hit the microphone anyway… as will the sounds from the ceiling, and from the walls off to the sides, arriving at every angle… it’s like a billiard game where all the balls go in all directions; they will reflect, and a good number will return to the source location, and not necessarily from the same angle.
This is not optimal, because post-processing is very poor at removing audible reflections from a recording. Even noise-removal tools are limited in their ability to handle this, and they can introduce artifacts when they try. So, you want to remove all non-sourced audio before it ever touches the diaphragm of the microphone.
The main key to blocking sound is to soften the compression. In order to do so, the sound has to go through something that can absorb the kinetic energy without passing it on. In other words, soft materials that compress well. Lots of soft materials that compress well; the more the better.
The best means of sound absorption are foams designed specifically for acoustic treatment; they absorb the sound, so little or nothing is reflected. Often, these materials are shaped into patterns that scatter whatever sound does get reflected, weakening it further.
However, considering the cost of such materials, this is probably not the best use of your money if you’re just starting. After all, good equipment is very important for other reasons, so we can use alternative means. And other means do exist, and for a lot less, if you know where and how to look. More importantly, most treatments are designed to control noise produced from instruments; they need to cover a wider range of frequencies, and handle higher decibel levels. For decent voice-over results, the absorption does not need to be as heavy.
Cloth is another means of absorbing sound. The thicker the cloth, the better. If it’s stuffed with another soft material, like cotton, even better. Blankets, mattresses, carpeting… if it’s soft, it will absorb enough sound to help, and the more you have hanging around, the more sound will be absorbed. There are voice artists who will position their recording studios in walk-in closets, because all the hanging clothing will do an excellent job of absorbing the echoes of their recording practices.
Another thing to consider involves objects referred to as “diffusers.” These objects generally have oddly shaped fronts that, when sound is subjected to them, will scatter the sound. This can prevent sound from reverberating back and forth on otherwise parallel walls. If you intend to use diffusers, make sure they cover a good portion of the walls they’re applied to; a small object will scatter a little sound, but the rest will still be a problem.
While the cloth can reduce a lot of noise, it’s less effective against lower frequencies. For those, it is recommended to place bass traps in the room. Bass traps are boxes designed specifically to absorb the lower-frequency sounds. There are walkthroughs online that show how to make bass traps on the cheap if you need them; I may make an article on one once I’ve had the opportunity to make my own.
Keeping Out Unwanted Noise
Insulation in the more traditional sense is a very useful thing to have; if there are any locations in the studio room where air comes in, those locations can also allow sound in. Make sure you have the room checked for insulation, add winter windows (with double-panes), and not only will you be able to record while life is going on outside, but you will be able to do so in relative comfort, since you will be protected from any temperature changes outside… at least, until the family decides they want to spend the winter with you, since it’s so nice and toasty in there.
Of course, the other aspect of temperature, HVAC, can be a chore to deal with in the studio. When an air conditioner or furnace turns on, there are fans that blow the air. Fans, I might add, that can add a drone to the air. These days, there are HVAC units that can have silencers attached, but if you cannot handle the expense, there are other options available.
One option is simply more noise control: put something absorbent between the vent and your microphone, so the sound of the vent is, once again, soaked up by the fabric. Another possibility is to simply turn off the HVAC when you intend to record.
We’ve focused on noise control, now we can get into the part where the voice leaves your throat and gets recorded.
In the Gear
Now that the environment preparations are complete, we can move on to the necessary gear to actually capture the audio and convert it to data so the computer can make use of it.
The first piece of equipment we will need is an audio interface. Most computers come with a basic audio interface, also known as a sound card, but for quality recordings, you will probably want to put some extra expense in a separate device. This has the advantage of allowing you to have a dedicated recording interface, saving your computer’s sound device for normal audio uses. More importantly, an audio device for audio production use will have analog ports compatible with other professional audio gear.
The analog ports are important, because very few, if any, professional audio devices use the 3.5mm plugs common on consumer devices; the more common connectors on professional audio equipment are quarter-inch and XLR.
The first analog connector is the quarter-inch plug, which looks like a much larger version of the 3.5mm plug used by consumer devices. This connector is common on most analog gear, including dynamic microphones. If you plan on using analog filters, mixers, and buses, this is the plug you will most likely encounter.
The other primary connector for audio is the XLR connector, a large socket with a number of prongs inside; for microphones, this is usually three prongs. This connector is commonly used with microphones that carry balanced audio, an electrical technique that cancels interference picked up along the line, allowing longer cable runs with less outside noise. This can help a lot with reducing noise in the recordings. As such, I would recommend focusing on an audio interface with at least one XLR connector if you plan to use an analog microphone.
Recently, microphones have started appearing on the market with the audio interface built in; these microphones have digital ports, usually USB or FireWire. Since the interface is built into the microphone, they will be discussed further later.
Another important consideration is shielding of the device itself; I tend to recommend external interfaces for a couple of reasons, but the most important one is that they keep the audio circuitry away from the electrical noise the computer generates.
Here’s an experiment I want you to try: stop all sound-generating programs on your system; even the desktop sound effects. Then, turn the volume on your speakers all the way to their maximum value. Now, type and move your mouse.
On many built-in interfaces, and some cheaper audio cards, you will hear, in addition to the drone of the system’s electric current, static consistent with your actions. This is especially the case with analog mouse and keyboard connectors. What you are hearing is the electrical signals of your input devices as the computer carries out their commands. These will be picked up by any recording software using that interface, and can easily become noticeable once you start to apply amplification and compression to the audio.
For those using external audio devices, including USB headsets and speakers, these sounds will not be present. For this reason, you want an external audio device for your recording tasks, since it gives you a lower noise floor on the signal side, and therefore a wider usable dynamic range for picking up audio.
The next thing to consider is the number of channels you’ll need. Do you plan to voice alone? Is this for a laptop to process the voices of an entire troupe, each using a different mic? Do you plan to have a MIDI device to trigger sound effects? Will someone be using a keyboard to play music as the project progresses, or will you be using pre-recorded music?
Sometimes, multiple people are fine with a single-input audio interface, since they may be sharing a single microphone, or employing the use of an analog mixing board. Others may have one microphone per person, and need an appropriate number of interface channels. Perhaps you’re recording for a remote project, and don’t need any more channels than the one microphone.
It is helpful to consider this, and save your money by purchasing an audio interface containing only those ports you plan to make use of; extra ports are more likely a waste of money that could be spent on other considerations.
The other considerations are the sample rate and the bit depth. In both cases, the higher the number, the better. However, since you will also need to purchase a microphone to go with the interface, it is important to start small, just slightly over the default levels of a consumer sound card.
The sample rate of an audio interface determines the highest frequency it can record: a digital system can only capture frequencies up to half its sample rate (the Nyquist limit). The human ear can hear sounds from 20 to approximately 20,000 Hertz. The default sample rate of a CD is 44,100 Hertz, which yields only about 2.2 samples per cycle of the highest frequency in human hearing; you can still hear decent highs, but they are not described as finely as at 48,000 or 96,000 Hz. Since this concerns the very top of human hearing, it’s not generally that important, but when performing edits such as pitch shifts or reverb, the small number of samples available to rebuild the wave can leave the result sounding muddy.
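To put numbers on that, here is a quick illustrative sketch of the Nyquist limit and the samples-per-cycle count at the 20 KHz top of human hearing for the common sample rates:

```python
# Nyquist limit and samples-per-cycle at the top of human hearing
for rate in (44_100, 48_000, 96_000):
    nyquist = rate / 2            # highest representable frequency
    per_cycle = rate / 20_000     # samples describing each 20 kHz cycle
    print(f"{rate} Hz: Nyquist {nyquist:.0f} Hz, "
          f"{per_cycle:.2f} samples per 20 kHz cycle")
```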
There is a distinct price difference between 48,000 and 96,000 Hz, and voice-over projects don’t really need the higher frequencies; the soprano’s famous high C (C6) sits at about 1046.5 Hz, just a little over 1/20 of the way to the top frequency of 20 KHz. While the human voice does have a pretty wide range of sounds, the highest frequencies, caused by sibilants, the hissing sound you hear in breath-release letters like “s” and “t”, are in the vicinity of 7 KHz, still in the lower half of the spectrum. Because of this, going for a 96,000 Hz interface or higher would be a waste of money for voice-over projects.
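If you want to check where any note sits, equal temperament gives a simple formula: every semitone multiplies the frequency by the twelfth root of two, anchored at A4 = 440 Hz. A small sketch (the MIDI note numbering is standard, with 69 being A4):

```python
def note_to_hz(midi_note: int) -> float:
    # Equal temperament: each semitone is a factor of 2**(1/12),
    # anchored at A4 (MIDI note 69) = 440 Hz.
    return 440.0 * 2 ** ((midi_note - 69) / 12)

print(round(note_to_hz(84), 1))  # C6, the soprano's high C -> 1046.5
print(round(note_to_hz(45), 1))  # A2 -> 110.0
```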
The bit depth, however, is a different story. As was mentioned in the Audio Conversion article, each sample is an amplitude level, and the bit depth determines the total number of amplitude values available. At a 16-bit depth, the sample value can be anywhere from 0 to 65,535. At a 24-bit depth, it ranges from 0 to 16,777,215. In simpler terms, for every single 16-bit value, 24-bit audio can place the amplitude at one of 256 values between that number and the next one. This allows a more accurate sample of the sound at any frequency. 32-bit interfaces, which define the sound even more finely, tend to start at a 192,000 Hz sample rate; far more expensive than we need at this time.
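Those figures, including the 256 extra steps, all fall straight out of the powers of two (a quick check):

```python
# Amplitude resolution at different bit depths
for bits in (16, 24):
    print(f"{bits}-bit: {2 ** bits:,} levels, "
          f"top value {2 ** bits - 1:,}")

# Each 16-bit step subdivides into 2**(24 - 16) finer 24-bit steps
print(2 ** (24 - 16))  # 256
```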
So, all this together comes up with my recommendation for a starter interface: a 48,000 Hz, 24-bit external audio interface with XLR support. If you plan to use a condenser microphone, make sure the interface includes support for phantom power, which I will explain below.
Now that you have the audio interface, you need a microphone to plug into it. There are several types of microphone; the three major types below are all commonly used by audio professionals, and some are within the average hobbyist’s price range.
Condenser microphones, or “capacitance microphones,” hold an electrical charge across a capacitor, at least one plate of which is the diaphragm itself. As the diaphragm vibrates, the spacing between the plates changes, varying the capacitance and producing small fluctuations in voltage. These fluctuations track the amplitude of the sound wave at each moment, and the interface transforms them into the waveform data the computer will work with.
Condenser microphones are primarily used as studio microphones, due to their sensitivity to sound. This sensitivity means they can make a much more accurate representation of sounds, but it also means that they will be more likely to pick up noise from the environment. For voice-over work, this microphone is generally a popular selection, although it’s not recommended for untreated locations. For amateur work, you might want to wait for this one.
Dynamic Microphones operate through the process of electrical induction; the vibration of the diaphragm actually generates pulses of electricity. These pulses are then transformed into the appropriate waveform. Dynamic microphones are hardy and noise-resistant, and they are much less likely to pick up handling noise, the thumping sounds you hear when a microphone is picked up or bumps against things. For this reason, dynamic microphones are much more popular for stage performances. This is also the reason these microphones are recommended for startup studios lacking the proper sound treatment.
Ribbon microphones are similar to dynamic microphones, in that they also generate electricity as they are affected by sound. However, their method involves suspending a thin ribbon in a magnetic field, and the electricity is generated when the air moves the ribbon. Ribbon microphones are extremely sensitive, as a result of this, but are also very fragile; since the ribbon is unprotected, too much wind can bend it out of shape, eliminating its ability to pick up sound properly. For voice-over work, where the speaker’s breath will be assaulting the microphone, this is pretty risky; this kind of microphone is better when recording the sounds of string and percussion instruments, where the air is vibrated without a direct wind component.
The size of the diaphragm is an important consideration. The human voice spreads over a wide range, all the way from the deeper vibrations of the voice itself to the high-pitched hiss of the sibilant sounds. Smaller-diaphragm microphones will generally pick up the higher frequencies well, but lack pickup of the deeper sounds. Large-diaphragm microphones have a little more play at the center of the diaphragm, allowing for deeper pickup and a warmer sound overall. So, whether dynamic or condenser, make sure your microphone has a larger diaphragm.
In the case of a condenser microphone, you need to make sure that the audio interface supports phantom power, since the condenser is the only type that cannot generate its own electricity. Phantom power is a DC supply, typically 48 volts, sent to the microphone down the same cable that carries the audio; it keeps the capacitor charged and powers the microphone’s internal electronics, so the interface can read the voltage changes produced when sound hits the diaphragm.
If you are using a dynamic or ribbon microphone, you want to make sure that the phantom power is disabled, as they generate their own electricity. This is especially the case in ribbon microphones, as the power can cause damage to the ribbon element.
Another consideration when picking out a microphone is the pickup pattern. The patterns you desire will be determined by the number of people speaking into the microphone, and whether you intend to pick up environmental sounds.
If you are recording solo, a cardioid pattern will be best. Cardioid is so named because the pickup pattern on a circular overhead chart resembles a heart. Cardioid pattern microphones pick up sound in front of the microphone, with less pickup on the sides, and very little at the rear. There are variations that allow for a narrower frontal pattern, but they tend to increase the pickup at the rear, eventually approaching the second pickup pattern.
The figure-eight pickup pattern is good for picking up two people on opposite sides of a microphone. This pattern simply means that the microphone can pick up sounds from directly in front and directly in back of the microphone. This is the natural pickup pattern of the ribbon microphone, due to the fact that the ribbon in question is suspended between the two sides of the microphone; as such, it cannot pick up parallel sounds as easily as sounds coming perpendicular to the suspended ribbon.
The omnidirectional pickup pattern is exactly that: a microphone with omnidirectional pickup can capture sound coming from any direction. Both dynamic and condenser microphones are made with this pattern. It is mostly useful if you want to collect environmental audio; outdoors, it can pick up the ambience of an area for use as background noise in a recording.
Recently, within the last 10 years, microphones have been coming out with built-in audio interfaces that plug directly into a USB or FireWire port. These can make excellent microphones for mobile use, and can even perform as a decent studio microphone. However, there are a couple things you need to take care of.
The microphone should have the capabilities mentioned in the audio interfaces section; if you’re going to get a mic that plugs directly in, it should have a good sampling rate and bit depth. This is very important, as you want the microphone’s quality to be sufficient for a good response. If the microphone is USB, make sure it is using USB 2.0 or later; USB version 2.0 included speed improvements that allow a much faster communication with the computer. Anything less cannot handle the capabilities just mentioned.
When using a condenser or ribbon microphone in your studio, a couple extra accessories are extremely important for making the best recording you can. You can perform the recording without them, and if you’re careful, the sound can come out just fine, but these will allow you to (in one case, literally) breathe easier.
A microphone stand is pretty much a given. Since movement can be picked up by the extremely sensitive condenser and ribbon microphones, a stand holding the microphone in one place will prevent a good deal of handling noise. Bonus points for a stand with a flexible boom, which adds a lot of room for placement.
Another accessory is the shockmount. A shockmount consists of two rings, one inside the other. The inner ring holds the actual microphone, the outer ring is connected to the microphone stand, and the inner ring is connected to the outside ring by flexible strings, arranged in a “spoke” pattern, on both sides of the rings. This suspends the microphone in such a way that any movements of the microphone stand will not be picked up by the microphone, which can be useful when recording voice, since it will almost completely eliminate stand handling noise. Because microphone shapes differ between brands and models, the shockmounts are usually designed with specific microphones in mind.
A pop filter is a screen on a flexible arm that clamps to the microphone stand; the idea is to place the screen between the speaker and the microphone. It lets the sound of plosives (the sudden release of breath in sounds like “b” and “p”) reach the microphone, but disperses the blast of wind that accompanies them, removing the booming noise such wind causes when it hits the microphone directly. For a ribbon microphone, this is especially important, given the risk that the wind’s force poses to the fragile ribbon.
The Musical Instrument Digital Interface (called MIDI) is an entirely different beast than audio. Unlike audio, where sound starts life as a sound wave, MIDI starts its life as note data. And not notes like specific audio frequencies, but notes as their abstract representations (C2, G6, D#4), each with several values (strength, on and off values, and sustain status, for example). MIDI starts as data, and remains data all through its life until it is sent to a sampler or synthesizer, which then generates a sound that matches the note in question.
For the voice-over performer, this can have two benefits.
Inside a studio, this can allow the artist to use a control surface to make adjustments in their DAW. It is a lot easier and more natural to adjust sliders, turn knobs, and press buttons than it is to adjust everything using a mouse and keyboard. With a control surface, each knob, slider, and button has its own note data that can be used by the DAW to control its abilities.
If the artist or troupe is a public performer, MIDI can also be used with SFX triggers to make sound effects during a performance. This makes this even more useful, as public performance tends to have little room for error; sound effects need to be on cue in order for suspension of disbelief to be maintained.
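Under the hood, each of those triggers is just a few bytes on the wire. A note-on message, for instance, is a status byte carrying the channel, followed by the note number and velocity. A minimal sketch of the standard message layout (the helper name is my own invention):

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    # MIDI note-on: status byte 0x90 ORed with the channel (0-15),
    # then the note number and the velocity, each 7 bits (0-127).
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

msg = note_on(channel=0, note=60, velocity=100)  # middle C, firm press
print(msg.hex())  # 903c64
```

A sampler or synthesizer receiving those three bytes looks up the note and starts playing it at the given strength; a matching note-off (status 0x80) stops it.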
In the Recording
Once all the hardware is in place, the artist then needs to start making recordings. It is assumed at this point that all hardware has been installed, and that JACK and Ardour are installed and running. The process of recording with these tools has been covered elsewhere, so we will not repeat it here. However, I will reiterate the only rules you need to know during recording, provided all the above is taken care of.
- Always make a clean recording first.
- Adjust the preamp so the volume never reaches the clipping point.
- Always make a clean recording first.
- Save the original recording before beginning the processing.
- Always make a clean recording first.
- Never edit the original recording.
- Always make a clean recording first!
Yes, it’s a broken record, but it’s broken in the best place; you do not want to have to re-record the original audio, so make sure that your original audio is saved and as close to what the microphone heard as possible; all processing happens after recording.
In the Final Processing
There are a lot of people who use Audacity to record and edit their audio. This is fine for basic work, but it encourages the bad habit of destructive editing. When working with sound, you always want to keep the originals as pristine as possible. Non-destructive editing lets you adjust or remove changes at any time, without rendering and without undoing, because the changes are applied on playback rather than written into the recording.
In Linux, this is pretty easy to do, as you can add plugins to a track’s processing chain in Ardour to adjust the sound of your recording; they do not affect the stored recording at all, they only shape the sound going to the outputs. This is very important, as some effects will be essential to making your voice the best it can be.
The first adjustment should be the gain to the track. One clipped sound can ruin a whole mix, so you want it to be only loud enough that its loudest point will just barely miss the clipping point.
Once you have the track adjusted, it’s time to adjust the frequencies. How you adjust the sound of the voice depends on what effect you are actually looking for. Remember, only use the following steps if they are actually called for. They are best used to compensate for a loss of quality in the digitization process, or to accomplish a specific effect.
You might want to apply a highpass filter first. A highpass filter allows all frequencies above its cutoff to pass unhindered to the next step in the chain, while frequencies underneath are attenuated (made softer).
We will probably want to set the cutoff somewhere within the first 80 Hz; it doesn’t have to be at that mark, but keep it at or below that limit. This will clean out the deep rumble of equipment in the area. It would take a very gifted singer to reach below that mark; the low end of a typical bass singing range sits around 82 Hz (E2), with 110 Hz (A2) being a more common low note. In both cases, such depths take specific intent; the lowest normal speaking voices occur at higher frequencies.
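For a feel of what the filter is doing, here is a bare-bones first-order high-pass in plain Python. This is a teaching sketch, not a substitute for the filters in your DAW, which use steeper slopes:

```python
import math

def highpass(samples, cutoff_hz, sample_rate):
    # First-order RC high-pass: attenuates content below cutoff_hz
    # at a gentle 6 dB per octave.
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# A 50 Hz rumble comes out noticeably quieter through an 80 Hz high-pass
sr = 48_000
rumble = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]
filtered = highpass(rumble, 80, sr)
print(max(abs(v) for v in filtered[sr // 2:]))  # well below 1.0
```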
The next useful step is to boost the frequencies between the above cutoff and around the 110Hz mentioned. This gives the lower registers a helpful little boost, lending the voice more body and a warmer tone, which can make it more pleasant to listen to. And this does not just apply to men; women’s voices may sit higher overall, but they still stretch down into the lower registers.
Following this, you will also want to focus on the higher registers. You don’t generally need a lowpass filter, but a good EQ can help bring out the best in the speaker’s voice. If you want a lighter voice, you might apply some boosts above the 10KHz level, but usually working on the 6-8KHz area will bring more strength and definition to the speech; remember, this is the area where sibilants live, and sibilants and plosives do much of the work of defining the words you hear. The differences between “b” and “p,” or “s” and “th,” are subtle, and when those sounds are strengthened, they become better-defined and easier to recognize. Of course, the enunciation practice described earlier goes a long way as well.
The next step is to adjust the final volume of the track. To let the recording reach the desired volume without the loud parts hitting the clipping point, you can apply dynamic range compression. Dynamic range compression is a filtering effect that reduces everything above a chosen threshold, letting the output rise only one decibel for every so many decibels of input; the amount is expressed as a ratio, so 2:1 means that, past the threshold, the output volume increases one decibel for every two decibels going in. This ratio is a good starting point.
Once the loud parts are compressed, you can use the makeup gain to increase the track’s amplitude until the loudest parts are again very close to, but not passing, the clipping point. Adjust the ratio and makeup until the general track is at the desired loudness.
Additionally, you might want to adjust the softness of the knee and the attack speed of the filtering; the knee softness increases the ratio gradually instead of immediately going to the full ratio, and the attack rate will prevent the compressor from instantly switching on for sounds going over the threshold, as doing so would make the difference noticeable to the listener, even if the volume only barely passed the threshold limit.
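The static math behind all of this fits in a few lines. A sketch with hypothetical threshold, ratio, and makeup numbers; real compressors add the attack, release, and knee smoothing described above:

```python
def compress_db(level_db, threshold_db=-18.0, ratio=2.0, makeup_db=6.0):
    # Downward compression, hard knee: above the threshold, the output
    # rises 1 dB for every `ratio` dB of input; makeup gain then lifts
    # the whole track back up.
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return level_db + makeup_db

print(compress_db(-6.0))   # loud peak: -18 + 12/2 + 6 = -6.0
print(compress_db(-24.0))  # below threshold, only makeup applies: -18.0
```

Notice how a peak 12 dB over the threshold and a quiet passage end up closer together at the output; that squeeze is what lets the makeup gain raise the whole track without clipping.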
Finally, after all the adjustments mentioned above, once you’ve verified that the track is at the desired loudness without going over, you can then link the signal chain’s output to a second track in the DAW and record that as the final. Once the final has been recorded, you can export the final in order to be burned to disc or transformed into an MP3 file.
Well, this should cover everything I know about voice-over recording. For Linux users, be careful to use hardware that is supported by Linux, and in all cases, remember, once again, to record clean first, and then record a second track from the signal chain’s output.
No matter what, have fun, and make something good!