If you’re on a Mac running a contemporary¹ version of OS X with the Firewall engaged and trying to fire up RX 5, you might encounter the following error.
Do you want the application “Neuron Plugin Scanner.app” to accept incoming network connections?
What is Neuron Plugin Scanner? The alert dialog box warns: “Clicking Deny may limit the application’s behavior. This setting can be changed in the Firewall pane of Security & Privacy preferences.”
The only solid reference to this app that I could find is a tiny thread on this forum. A guy named Jonah (possibly this guy?), who claims to work for iZotope Inc. (software developers of the amazing RX 5, Ozone, Iris, etc.), states that Neuron Plugin Scanner is a “helper application to scan your host for plugins to use” and that “it never connects to any other computer, iZotope, the internet, or anything else!”
That seems harmless, but I wanted better proof that Jonah was legit and the app wasn’t something more malicious. I contacted iZotope customer service and got this reply:
Thank you for reaching out! Yes the Neuron Plugin Scanner is related to RX 5. RX 5 has the ability to host 3rd party plugins. Those plugins have to be scanned by the Neuron Scanner before they are instantiated. Allowing this functionality is recommended.
Thank you for your time!
iZotope Customer Care
Verdict: Neuron Plugin Scanner is safe. Simply click Allow and keep making music.
It turns out the app is harmless. But clicking Allow every time you open RX 5 is a pain. Shutting off the Firewall is not wise. So if you want to make this dialog box go away forever, here is what to do.
1. Open Security & Privacy panel in System Preferences
You can find System Preferences under the Apple logo () at the far left of the menu bar. Click on the Security & Privacy icon.
Click the Security & Privacy icon
2. Unlock System Preferences
If your System Preferences are locked, unlock them. Enter your system account password when prompted.
Unlock to enable changes to the Firewall
3. Open Firewall Options
Once you enter your password, the Firewall Options button will no longer be grayed out. Click it.
Click on Firewall Options
4. Find the app Neuron Plugin Scanner in the list
Scroll down until you see Neuron Plugin Scanner. It will have a red dot and the words “Block incoming connections” beside it.
Find the Neuron Plugin Scanner.app
5. Allow Incoming Connections
Toggle the setting for the Neuron Plugin Scanner app to read “Allow incoming connections” and exit out by pressing the OK button.
Set the Neuron Plugin Scanner to “Allow incoming connections”
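If you prefer the Terminal, the same change can be scripted with Apple’s socketfilterfw tool. This is only a sketch: the path to Neuron Plugin Scanner.app below is a guess, so locate the actual .app on your system before running it.

```shell
# macOS Application Firewall CLI (sketch). The SCANNER path is an assumption;
# find the real Neuron Plugin Scanner.app inside your RX 5 install first.
SCANNER="/Applications/iZotope RX 5 Audio Editor.app/Contents/Resources/Neuron Plugin Scanner.app"
FW=/usr/libexec/ApplicationFirewall/socketfilterfw
if [ -x "$FW" ]; then
  sudo "$FW" --add "$SCANNER"        # register the app with the firewall
  sudo "$FW" --unblockapp "$SCANNER" # set it to allow incoming connections
else
  echo "socketfilterfw not found (are you on macOS?)"
fi
```

Either way, the result is the same as flipping the toggle in Firewall Options.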
That should fix the problem. The alert dialog for Neuron Plugin Scanner will no longer pop up…at least until iZotope updates their software or Apple changes their operating system. This is what worked for me with RX 5.00.135 running on OS X 10.10.5. Not all systems are the same. YMMV.
¹Contemporary = a version released within the last few years prior to this article being published. If these issues still exist at some point years from the publish date, I would be surprised. If so, well hello, dear reader from the future. Do we have flying cars in the time you are from?
You want to record audio in the modern age? You don’t have a Zildjillion dollars to be able to record to tape? Even so, it all ends up digital. You need some hard drives.
Five Audio Recording Hard Disk Drive Tips
Hard disk drives aren’t all the same. Picking out the right one can be tough. Here are some things I’ve learned — sometimes the hard way.
1. Heed the DAW makers’ suggestions.
If AVID says that Pro Tools doesn’t support it, don’t expect it to work. Legit DAW makers will post the system requirements for their software/hardware. Look them up. Follow their recommendations and instructions. Spoiler: You’re probably going to have to spend more than you had planned for.
2. Faster is better.
A faster drive reads and writes data faster. And faster reads and writes mean more tracks and/or higher quality.
Traditional hard disk drives have platters that spin. A hard disk drive that spins at 5400 rpm really isn’t fast enough — it’s like red-lining a Geo Metro. 7200 rpm is better. 10,000 rpm better still.
And then there are flash drives, which are way faster than hard disk drives.
There are also seek times to consider, for which lower numbers are better. Seek time is the time in milliseconds it takes a drive’s read/write head to move into position before it can start fetching data.
I have found that drive manufacturers don’t always make these stats readily available. When in doubt, assume the drive doesn’t meet spec (because it likely doesn’t).
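To put rough numbers on “faster is better,” here’s a back-of-the-envelope calculation of the sustained data rate uncompressed PCM audio demands (the track count below is just an example):

```shell
# Per-track data rate for uncompressed PCM audio:
#   bytes/sec = sample_rate * (bit_depth / 8)
# 48 kHz, 24-bit mono: 48000 * 3 = 144000 bytes/s (~0.14 MB/s per track)
rate=$((48000 * 24 / 8))
tracks=32
echo "One track: ${rate} B/s"
echo "${tracks} tracks: $((rate * tracks)) B/s (~$((rate * tracks / 1000000)) MB/s)"
```

Even 32 tracks of 24-bit/48 kHz audio is only a few MB/s sustained; the real killer is that a session makes the drive seek all over the platters, which is where slow spindle speeds and seek times bite.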
3. Data interfaces matter.
Hard drives have come with lots of data interface flavors: PATA, SATA, USB (1, 2, 3), FireWire (400, 800), Thunderbolt, Ethernet, and some are even wireless. The data interface dictates bandwidth, which roughly determines how many tracks you can record at once and how much latency your playback will suffer. More bandwidth means a better recording and mixing experience. Again, check your DAW maker’s system requirements and choose the drive with the fastest and most forward-compatible data interface.
Also make sure your computer can handle the data interface type you’re choosing. And find out if the data port you intend to use on your computer is sharing a bus with any other peripherals in your computer. That can adversely affect your bandwidth, causing a data bottleneck.
4. Bigger isn’t better.
For tracking and mixing, you don’t necessarily need a 3 TB drive. (Unless, of course, you’re recording a 10-piece prog-rock group with 40-minute “works” at 32-bit/192 kHz.) Save the big, slow drives for backups and archiving. Use smaller, faster drives for works in progress. If you have more than one project going at a time, consider using a small drive for each project, so the different project files are not interleaved with each other on the drive. This will speed up read/write times, as the drive will not be jumping around on the platters trying to find the files for the current session. This also saves money, since really fast and really big drives are expensive.
5. Always have a backup.
Have a backup plan, because hard drives fail. All the time. More so than any other part of a computer. Make sure to always back up your work after every session, whether recording, editing, or mixing. And make sure you have an extra drive ready in case one goes down during a session. I can’t stress this enough. Millions of ones and zeros (i.e. your priceless recordings) can go poof at any time — and there’s never a right time for that. Buy more hard drives. Make backups like a chronic. Sleep well.
So there you have it: my top five hard drive tips. Comment below to let me know what you would add to the list.
And enjoy some “Tainted Love” made with old hard disk and floppy disk drives…
After doing a fresh install of Pro Tools and my Waves plugins, this Waves 9.2.100 Preferences dialog window (pictured below) kept popping up every time I fired up Pro Tools.
Checking the “Don’t ask me again” checkbox didn’t seem to be working.
I searched for some solutions on the Google machine and found some forums recommending a complete uninstall and reinstall of all Waves plugins. That didn’t seem necessary. Here’s the fix I used:
Quit Pro Tools.
Trash the entire Waves Preferences folder. The folder is located in the Preferences folder in your user Library folder, not your system Library folder. A quick way to locate the folder is to switch to the Finder and hit Shift+Command+G. A Go to Folder dialog window will pop up. Copy and paste the following line in that field and hit enter.
Put that folder in the trash and empty the trash.
Start Pro Tools.
A window should pop up asking you to select the Waves 9.2 Plug-Ins folder. By default, it should be located in the Waves folder in your Applications folder.
Once you’ve located the folder, click Open.
The Waves 9.2.100 Preferences dialog window should pop up again. The “Don’t ask me again” box should be checked. If not, check it and hit OK.
To test if everything worked, quit Pro Tools and start it again. The Waves dialog window shouldn’t reappear.
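For reference, these steps target the Waves preferences folder inside your user Library. Here’s a quick Terminal check before trashing anything (the folder name “Waves Preferences” is the Waves 9.x default, which is an assumption; verify yours):

```shell
# Confirm the Waves preferences folder exists before trashing it. The name
# "Waves Preferences" is the Waves 9.x default -- an assumption; verify yours.
prefs="$HOME/Library/Preferences/Waves Preferences"
if [ -d "$prefs" ]; then
  echo "Found: $prefs"
else
  echo "Not found: $prefs"
fi
```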
Blame it on entropy or whatever. Things get messed up. Apple’s OS X is no exception.
In the last few months, I started getting this error a lot:
You are opening the application “Pro Tools” for the first time. Are you sure you want to open this application?
Except, it’s not true. I open Pro Tools nearly every day. The alert isn’t very important, but it was beginning to get annoying seeing this pop up every time I wanted to record.
So, a little googling and I found an answer on StackExchange. It involves using the command line on your Mac, which can be a bit scary if you’ve never done that before. But it’s a single command, so you should do just fine. Here’s the quick and dirty summary…
This is where the Matrix is on your Mac. There’s no green falling code or woman in the red dress. There may be Agent Smiths lurking, though.
Open the Terminal application (found in /Applications/Utilities/).
Copy the following command (all of it… the whole long line) and paste it after the prompt.
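The command in question is the LaunchServices database rebuild from that StackExchange answer. A commonly cited form looks like this (the framework path can differ between OS X versions, so treat it as a sketch):

```shell
# Rebuild the LaunchServices database, which resets the stale "first time"
# records. The framework path below is the commonly cited one; it can vary
# between OS X versions, so adjust it if the file isn't there.
LSREG="/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister"
if [ -x "$LSREG" ]; then
  "$LSREG" -kill -r -domain local -domain system -domain user
else
  echo "lsregister not found at: $LSREG"
fi
```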
The process will begin. It may take a minute or two to finish. Do not quit the Terminal application while the command is running.
Eventually the process will complete and another prompt will appear. Now you can quit the Terminal app.
This command resets all of the first-run warnings, so every application that triggers one gets reset. You should see the alert one more time for each of those applications, and then it will go away for good.
Apple’s GarageBand makes it relatively easy to sketch out an audio demo, but it does have some severe, intentional limitations.
One of the biggest drawbacks is the lack of built-in support for exporting MIDI data.
Performances are stored inside the session file in some sort of MIDI fashion, but Apple doesn’t give users an easy way to get that information out. Major bummer. *looks west towards Cupertino, squints eyes, shakes fist in air, mutters under breath*
However, a nice guy named Lars Kobbe has put together a workaround/hack that extracts MIDI data from the reluctant clutches of GarageBand. You can download his GB2MIDI Apple droplet script from his site: MIDI-Export in Apples Garageband. Here’s the direct download: GB2MIDI.ZIP. If that link doesn’t work, I’m hosting a copy of the file on my site here: GB2MIDI.ZIP.
The article is in German, but instructions in English are found near the bottom of the article (just before the comments section). Getting the MIDI data out involves several steps. Here’s my summary of the process.
How to Extract MIDI Data from GarageBand
Join (Command-J) regions of a track you want to export
Convert that region to a loop via Edit > Add to Loop Library (NOTE: In GarageBand 10.1.0 this menu item is now located under File > Add Region to Loop Library )
Find the newly created loop file (an .AIF with MIDI data hidden inside it) in the folder: Macintosh HD (or whatever your system drive is named)/Users/(your home folder)/Library/Audio/Apple Loops/User Loops/SingleFiles/
or the abbreviated: ~/Library/Audio/Apple Loops/User Loops/SingleFiles/
Drop that .AIF file on Lars’ GB2MIDI droplet
Grab the freshly extracted .MID file, which should appear in the same folder where the .AIF loop was. If not, see the comment section below.
Import the .MID file into a respectable DAW (basically almost anything other than GarageBand).
Make next hit record.
That last step is optional, but I say go for it. 😉 Let me know if this helped you.
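If you’d rather poke at that loop folder from the Terminal, this little sketch lists the newest loops (the path is GarageBand’s default User Loops location from the steps above):

```shell
# List the most recent user loops, newest first. The path is GarageBand's
# default User Loops location referenced in the steps above.
LOOPS="$HOME/Library/Audio/Apple Loops/User Loops/SingleFiles"
if [ -d "$LOOPS" ]; then
  ls -t "$LOOPS" | head -5
else
  echo "Folder not found: $LOOPS"
fi
```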
If you’re having trouble locating the loop file, it may be because your Library and/or Users folders are hidden, as later OS X versions have been wont to do.
To unhide the Library folder, open the Terminal application, which is found in the /Applications/Utilities/ folder. At the prompt type the following: chflags nohidden ~/Library/
To unhide the Users folder, type this into Terminal: sudo chflags nohidden /Users
Then enter your administrator password.
Look for the newly unhidden Users folder in your hard drive’s root folder. It should look something like this:
After running “sudo chflags nohidden /Users” you should see the Users folder (highlighted in red in the image above) appear under the root folder of your hard drive (often named “Macintosh HD” by default).
This GarageBand MIDI article has regularly been one of the most popular posts on my site. That means there are a lot of people using GarageBand and discovering its unfortunate MIDI limitations. The best bit of advice I can give to any musician or audio engineer still using GarageBand is STOP. I know that may sound harsh, but GarageBand is intentionally made to be consumer-grade software. If you’re serious about recording, take the time to investigate other DAWs. Find an alternative solution. There are many to choose from and nearly every one of them is less limited than GarageBand. They range from super affordable to “professionally priced.” Here’s a list to get you started. (Some links are affiliated.)
Sometime last year, my friend Autumn Ashley asked if I’d help her complete her next EP. She ran a Kickstarter to raise funds and anyone who contributed got the album early. On Friday, Autumn Ashley’s BATTLEGROUNDS album was finally made available for everyone.
BATTLEGROUNDS by Autumn Ashley
When Autumn first contacted me about BATTLEGROUNDS, she had all the songs written, rough demos recorded, and a handful of local arrangers putting together the individual song scores. She asked if I’d help engineer the recording sessions. As we got into it she asked if I’d also play some instruments and design the artwork.
A few months later, I headed out to Autumn’s place in Connecticut for a week of turning demos and scores into album-ready recorded audio. We tracked friends new and old playing a variety of orchestral instruments in a few different locations.
Pianist Tim Lillis performing nocturnally, on a piano I tuned with a drum key
It was a great learning experience.
Autumn and Scott at the helm while Nate Brown, arranger for the title track “Battlegrounds,” confirms proper execution of his score
I’d do it again in a heartbeat. And that’s why I really appreciate the people that pre-ordered the album and the people that are about to buy the album on iTunes. For a few bucks, you’ll get 5 bloodsweatandtears songs plus you’ll be supporting indie music and local (if you live in Connecticut) artists!
Setting up microphones for recording Autumn Ashley playing acoustic guitar
A friend gave me a Pro Tools session on a thumb drive. I copied the entire session folder to my external hard drive and opened it. After changing the routing to work on my system, everything played back fine. Then I tried to clean up the session.
Every time I attempted to cross fade or consolidate an audio or MIDI region, I would get an error like this:
“Could not complete your request because You do not have appropriate access privileges (-5000)…” Why do you build me up, Buttercup? Capitalize ‘You,’ then award me negative five thousand points…pssh.
Seeing the “access privileges” bit, I figured the problem was probably an operating system issue, not a Pro Tools thing. The session files were indeed set to ‘Read Only,’ which is why I could play back the session, but couldn’t do anything to the regions or fades.
Here’s how to fix the issue.
Close the session. You shouldn’t have your Pro Tools session open while changing its permissions.
Select the session folder in Finder. Make sure the session folder is highlighted, not the files inside the session.
Get Info. Hit Command-I (capital i) or from the Finder menu select File > Get Info. An Info window will pop open.
Change all privileges to ‘Read & Write.’ At the very bottom of the Info window is a box with a list of users and their privileges. They should all be set to ‘Read & Write.’ You may be asked for your user password to unlock and verify the change.
Not listed are NSA permissions, which by default are set to ‘Collect All,’ but, like, totally isn’t a violation of your privacy.
Close the Info window. After making the privilege changes, try reopening your Pro Tools session and editing some regions. If you can, this fix worked for you.
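If you’re comfortable in the Terminal, the same fix can be done with chmod. This is a sketch; the session path below is a placeholder, so substitute your actual session folder:

```shell
# Grant your user read & write on the whole session folder, recursively.
# The path below is a placeholder -- substitute your actual session folder.
SESSION="/Volumes/AudioDrive/My Session Folder"
if [ -d "$SESSION" ]; then
  chmod -R u+rw "$SESSION"
else
  echo "No such folder: $SESSION"
fi
```

The -R flag applies the change recursively, so every audio file and fade inside the session folder gets read & write for your user, which is exactly what the Get Info method does.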
Why does this error occur?
Many common problems that Macs develop are related to file permissions errors. Files are given various permissions to maintain privacy between computer users and prevent users from easily messing up the operating system.
Permissions can get wrecked when disks are removed without being ejected and during unexpected shutdowns. That’s why it is important to always eject disks and shut down your Mac properly.
Permissions can also get messed up during copying and moving of files or while installing software. That appears to be why I experienced this error. During the copying of the files, the permissions were never changed to grant me access. Simple problem, simple fix.
After encountering this problem on several other sessions, I tried another method and found a better (and probably more proper) solution. Try this in addition to or instead of the above fix:
In the problematic Pro Tools session, pop open the Disk Allocation dialog (Setup > Disk Allocation…).
When the dialog window opens, you’ll be presented with a list of all the tracks in your session and the disk location where each track’s audio should live. If you’re having problems creating fade files and getting the sort of error that brought you to this page, then you’ll probably see something like the picture below.
As you can see, not all of the tracks had their disk allocation pointing to the right place. To fix them, select all of the incorrectly allocated tracks, then click and hold the little up/down arrows on the right hand side. A little window will appear and ask you to select a folder. In my case, the session file was looking on my internal system drive instead of my external audio drive. Choose the correct location of your session files and click OK. That should solve the issue. Let me know if this worked for you.
Error dialog windows can be really frustrating. They pop up and demand your attention, when you just want to get to work on something. Sibelius 7 has thrown this missing font error for me a few times:
There are fonts missing. Sibelius 7 will still work without these fonts, but some scores may not display properly. The missing fonts are: Reprise Std, Reprise Special Std, Reprise Title Std, Reprise Stamp Std, Reprise Rehearsal Std, Reprise Script Std, Reprise Text Std
Most likely the fonts aren’t missing, but simply disabled, which makes the fix really easy. Here’s how to re-enable the “missing” fonts.
First, open the application Font Book. This native OS X font manager should be located in your Mac’s Applications folder.
Second, search for the missing fonts. Font Book has a search field in the upper-right corner. Type in the names of the missing fonts.
Enabled fonts are shown in black text. Disabled fonts are grayed out and are labeled “Off” on the right hand side.
In my case, all of my “missing” fonts were part of the Reprise family, so I typed in “reprise” and all of the fonts in question appeared in the filtered list.
Third, enable the fonts. Select the fonts you want to re-enable. Then hit Shift-Command-D. You can also enable fonts by using the menu bar by selecting Edit > Enable Fonts. The fonts should turn black and the “Off” label will disappear.
I see you checking out my wallpaper.
Lastly, close Font Book and reopen Sibelius. If you enabled all the “missing” fonts, you should be good to go. The error shouldn’t pop up this time; however, it may happen again in the future.
Why does this error occur?
I’ve had to run the fix a couple times now. I don’t know why this error seems to reoccur. If you know why those Reprise fonts sometimes disable themselves, please send me an email or comment below.
Being a graphic artist as well, I know that fonts are notorious for becoming corrupt, conflicting with other fonts, and generally being a hassle to manage. You might think being a musician is a good way to get away from graphic design problems, but unfortunately software like Sibelius relies on fonts to display notation. At least the fix for this error is easy to do and only takes a minute.
The fix I posted above seemed to only work for a while. Occasionally, I would have to run the fix again, which is to say, it wasn’t much of a fix. So, I dug in further and found a real, permanent fix.
The issue was with duplicate fonts. The strange bit was that it wasn’t duplicates of the Reprise family, which was the family of fonts that Sibelius said were missing. Instead it was duplicates of various other fonts that Sibelius uses.
By referencing this forum post and this forum post, I figured out which fonts Sibelius requires and, thus, which ones might be causing problems. Then, for clarity’s sake, in the Font Book application I created a new Collection (File > New Collection or ⌘N). After that I did a search for duplicate fonts (Edit > Look for Enabled Duplicates… or ⌘L) and looked in the Sibelius font collection for any that were flagged. Sure enough, about a third of the fonts that Sibelius uses had duplicate copies. One by one, I “resolved” (deleted) the duplicate fonts, then rebooted. Problem solved.
I made a dummy head baffle to test out binaural recording techniques on an upcoming session. The baffle was super simple to make, looks sleek, and works quite well, so I thought I’d share how I made it.
Note: The microphones shown here are not the same brand or model. I recommend using a matched pair of omni mics for the best stereo imaging results.
Before we get into the nitty gritty details, let’s get some questions out of the way first.
What’s a dummy head?
Dummy head is either an insult you used in third grade while playing kickball at recess or the term you use for the baffle placed between two microphones while making a binaural recording.
What’s binaural recording?
Binaural recording is a technique that attempts to record audio in a way that replicates how our human ears encode three-dimensional audio information. It simulates a human head by arranging two microphones (the ears) on either side of an acoustic baffle (the head). The result is recorded audio with a stereo image that, when played back through good headphones, is supposed to sound exactly like “being there.” The dummy head acts as a proxy for your own head in whatever environment it is placed in. You get to hear whatever the dummy head heard.
One of the best known binaural recordings is the inconspicuously named album Binaural by Pearl Jam. Note: If you click that link and buy the album, Amazon will give me a little kickback, which I would totally appreciate. I’m sure Amazon and Pearl Jam’s label would appreciate it too.
What’s a baffle?
In audio jargon, a baffle is an object made of sound absorbing and/or acoustic dampening materials used to block or reduce transmission, reflection, or propagation of sound waves. Baffles are like shields that can prevent or impede sounds. They can be used to isolate a particular sound source from other sound sources in the same room. Baffles are often placed around loud things like drums or guitar amps. Sometimes engineers will place small baffles on the back side of microphones to reduce early reflections and room sounds or give more directionality to an omni microphone.
Shouldn’t a dummy head look like a head?
Binaural purists say that a binaural dummy head baffle must closely resemble a human head to capture all the nuances of how sound reflects off our faces, is absorbed by the mass of our heads, tickles our nose hairs, and gets caught by those biologically amazing curvatures of our outer ears.
The purists might be right, but if we’re going to replicate a human head down to the smallest details, whose head should we use as the model specimen? When I last checked, human heads still come in all kinds of neat shapes and sizes. Sure, we could build something with all sorts of exacting specifications, but I say a board roughly 20 cm by 25 cm that’s covered in felt is Good Enough™.
If you build one and test it out, I think you’ll agree. All we really need to get a decent binaural recording is something roughly head-sized that blocks reflections between two quality microphones.
How to Make a DIY Dummy Head Binaural Baffle
Materials Needed for This Project
Wood Board – Solid or plywood, roughly 20 cm x 25 cm, whatever thickness you want. I happened to have a piece of solid oak lying around. Good enough!
Thick Felt – Enough to cover the board on both sides. You can use multiple layers to get the thickness you want. I had enough thick black felt left over from another project to do three layers on each side. I suppose you can buy this stuff at a fabric store or directly from your local feltsmith.
Short Screws – Pan head wood screws, quantity 8, long enough to secure the felt to the wood without poking out the other side.
Longer Screws – Pan head wood screws, quantity 3-4, for securing the mounting bar to the bottom of the wood.
Before Getting Started
You’ll need a few other things to build this baffle. I used a circular saw to cut the wood, razor blade to cut the felt, power drill/driver with drill bits to pre-drill and drive screws, clamps to hold things together, and a bandage to put on my finger.
This is probably a good time to give the obligatory reminder to be careful when you use power tools. Really that applies to any time you do anything in life. I find it silly that from a legal standpoint it’s necessary to post a disclaimer about the dangers of power tools when writing about them. Cars kill people all the time, but to my knowledge articles about using cars don’t require disclaimers. Anyway…you should probably wear gloves, eye protection, ear plugs, and a respiratory mask. Maybe put on some pants too.
Putting it Together
Measure and cut the board. It should measure about 20 cm x 25 cm. That’s the approximate size of a human head when looking at one from the side. Yes, I used the metric system, because it’s way better than imperial. And no, that does not make me an anti-American, unpatriotic traitor. If you want to use imperial dimensions for human head size, may I suggest starting here?
Cut the felt. The felt should be the exact same dimensions as the board. A razor blade works well for making nice clean cuts. A sharp knife or strong scissors could probably work too.
Make a sandwich. Stack up the layers of felt with the wood sandwiched in the middle. I clamped this together to keep everything in place for the next step.
Attach the felt. Pre-drill through the felt into the wood approximately 2-3 cm in from each of the four corners. Try not to let the wood dust get embedded into the felt, which would look bad. Do this on both sides, but offset the location slightly on each side so the screws from the back side don’t end up hitting the screws from the front side. Drive the short wood screws in deep enough to hold the felt taut, but not too tight. Puckered felt looks unprofessional.
Drill holes in the Microphone Bar. Figure out where you want the long screws to be. Mark those spots on the metal bar and drill holes just slightly larger in diameter than the long wood screws. When drilling metal, a little oil helps to cool the drill bit, making the drilling process easier. You can use cooking oil from the kitchen; it works just as well as anything else. Also, be careful with the metal shavings this produces, which can cause trouble if they get into electronics and/or your body.
Attach the Microphone Bar. Once the holes are drilled in the microphone bar, align the bar to the bottom of the baffle. Mark where the holes are and pre-drill the wood deep enough for the long wood screws. Again, avoid getting the wood dust on the felt. Screw the microphone bar to the baffle.
Ready to Use. Mount the baffle on a microphone stand using the center mounting hole. Use the shorter adjustable arms to place the microphone shockmounts or clips so the microphones’ capsules are approximately in the center of the baffle vertically and horizontally. The microphones should be about 20 cm apart, which is roughly the average distance between a pair of human ears.
So does it work? In testing the dummy head I made, I was really surprised at how accurately the stereo field mapped sounds to the real world. I was kind of expecting it not to work very well. I had two different brands and models of microphones for my test. For the record, the microphones you use to make binaural recordings should be a matched pair with an omni pattern. Other patterns can sort of work too, just not as well.
I’m not posting audio samples here just yet, as I didn’t have the right microphones on hand. But I did build this for an upcoming session, so once that session is done, I’ll post some clips for you to hear just how well a DIY dummy head can work.
I somewhat coincidentally stumbled across an article about a thing called a Jecklin Disk, which is a lot like this dummy head baffle only larger. Check out this Wikipedia article for more about it.
PACE has changed how their customers interface with their infamous iLok. The iLok is a DRM dongle that many software manufacturers use to manage licensing. Formerly, all licenses were managed (mostly just fine) through the ilok.com website, which is now an insufferable “support” site. PACE’s new, prematurely launched system requires users to install the iLok License Manager application on their computer.
Ok, no big deal, right?
I recently purchased several plugins to use in my audio production. I’d love to use these great new plugins, but I can’t because the PACE application is horrible.
In order to use the plugins, I need an iLok 2, which has to have the licenses on it, which must be loaded onto the iLok only by using the iLok License Manager, which won’t even allow me to sign in. This is the error I get.
The session you were using is no longer valid. Press OK to establish a new session.
Pressing OK makes the error go away, but it comes right back when I click “Sign In.” The iLok support site doesn’t list this problem as an issue I can submit a support ticket for. So that’s it. I can’t sign in.
If this were a football game, PACE fumbled at kickoff, bungled the whole first half, refused to answer any questions at halftime, and amazingly the fumbled ball is still loose in the second half.
I think this screen grab from the iLok.com website says perfectly what many digital audio workers are thinking.
A funny thing happened with some of the content on this page. I can’t tell the story just yet, but I bet it’s going to be a good laugh when it’s all over. Interweb lulz.
As promised…a funny story. After poking around my site stats and hits, I discovered someone was hot linking me.
If you’re not familiar with hot linking, it’s like stealing cable TV from a neighbor, except it hurts the neighbor instead of the cable company. I had a bandwidth leech!
Anyway, a very popular, well-respected pro audio plug-in development company (who will remain unnamed, because it ended well) was using an image from my site on their support page. It was the photograph I took of two iLoks, which is featured at the top of this very blog post.
I knew I could do something funny with the hot link and maybe get a free plug-in out of it. So I created this new image to replace the one they were linking to on my server.
The names of people and plug-ins are blurred out to protect both the guilty and the innocent.
This meant that the above image would now show up on their site. Zing!
I had formatted it to look nearly identical to their artist endorsements in hopes that it might ride under the radar, remaining visible on their support page for as long as possible. For a short while this unofficial endorsement was live on their site.
Long story short…I uploaded the image and went to bed.
Surprisingly, less than 12 hours later I received an email from one of the company’s developers. He basically said, “well played,” thanked me for not goatse-ing them (If you don’t know what that is, don’t Google it.), and let me pick out a free plug-in. Woohoo!
Moral of the story: Hot linking costs everyone something.
Side note: The very same iLok 2 that’s in the picture featured in this debacle must have a desire to make me famous/infamous. It is the very same iLok I photographed to use in the satirical movie poster THE SNOWDEN ULTIMATUM, which was featured in Forbes and lots of other places. There’s something strange about that iLok.
A smart guy named Helmut Haas discovered a bunch of cool things about the way our brains decode the sounds we hear to determine the direction from which those sounds originate.
Back in 1949, Mr. Haas found that early reflections of sounds help our brains decipher where the sounds came from. We can tell a noise came from the left not simply because we hear it in our left ear, but also because the sound bounces off a wall to our right and hits our right ear a very short time after it hit our left ear. Almost instantaneously, the brain detects the short time between the two signals and tells us, “Hey, that sound you just heard came from your left. Better turn your head to see what it was!” This happens so quickly that we don’t really even think about it. We just “know” it came from the left.
Haas also recognized that early reflections are basically copies of the initial sound that are delayed slightly. So he started messing with people’s heads, pointing speakers at them and firing sounds with very short delay differences. Then he asked the test subjects which direction the sound seemed to come from.
His conclusion: Not only is it fun to play with sounds, but also 40 ms (milliseconds) is some kind of magic point for our brains. If an echo arrives more than 40 ms after the initial sound, we hear the sounds as separate instances. But if the delays fall within 40 ms of each other, we perceive them together as merely directionality cues of a single sound.
For example, if a sound hits our right ear and the same sound hits our left ear 0.3 ms later, we don’t hear two sounds, we only hear one sound coming from approximately our 1 o’clock position.
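The physics behind these sub-millisecond delays is easy to sanity-check. Here’s a rough sketch (the ear spacing value is my assumption; real heads vary) estimating the maximum interaural time difference, which lands right in the range used below:

```python
# Rough estimate of the maximum interaural time difference (ITD):
# the extra time a sound takes to reach the far ear.
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
EAR_SPACING = 0.21       # m, approximate acoustic path between ears (assumption)

max_itd_ms = EAR_SPACING / SPEED_OF_SOUND * 1000  # convert seconds to ms
print(f"Maximum ITD: {max_itd_ms:.2f} ms")  # about 0.61 ms
```

That ~0.6 ms ceiling is why the Haas panning delays discussed below top out around 0.7 ms.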
Engineers have implemented the Haas effect as an alternative to panning. Most of the time panning works just fine, but it does have limits.
Sometimes panning leaves the location of the audio feeling indeterminate, smeared, mono, or one dimensional. This is why a lot of engineers skip the pan knob altogether and mix LCR.
To effectively localize a track in a stereo field using the Haas effect, engineers have to do a couple things: duplicate the track, pan the two tracks hard left and right, and then apply a delay to only one of the sides. The delay is applied to the side opposite the one from which the sound is intended to be perceived as originating.
Typical delay times for this technique are increments of 0.1 ms from 0.1 to 0.7 ms. This yields linear movement across the stereo field. You can think of it like this chart shows.
Example: Want the sound to come from 9 o’clock on the left? Delay the right side by about 0.4 or 0.5 ms.
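A tiny helper can capture that mapping. This is a sketch under my own assumptions: the function name and the normalized position scale are mine, and it assumes the simple linear 0.1–0.7 ms relationship described above (real charts may differ slightly):

```python
MAX_HAAS_DELAY_MS = 0.7  # upper end of the typical range cited above

def haas_delay(position: float) -> tuple[str, float]:
    """Map a stereo position in [-1.0 (hard left), +1.0 (hard right)]
    to the channel that gets the delay and the delay in milliseconds.
    The opposite side is delayed, pushing the sound toward `position`."""
    if not -1.0 <= position <= 1.0:
        raise ValueError("position must be between -1.0 and 1.0")
    side = "right" if position < 0 else "left"  # delay the opposite side
    delay_ms = round(abs(position) * MAX_HAAS_DELAY_MS, 1)
    return side, delay_ms

# Sound at roughly 9 o'clock (about two-thirds of the way left):
print(haas_delay(-0.65))  # ('right', 0.5)
```

So for the 9 o’clock example, the helper agrees with the rule of thumb above: delay the right channel by roughly half a millisecond.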
After researching the Haas effect, I decided I wanted to try it out in a mix. Since the settings must be very exact, setting it up correctly can be a bit confusing. Presets to the rescue!
I made these presets for the stock Digidesign Mod Delay II plug-in. These presets only work for this specific plug-in and Pro Tools. If there’s interest, maybe I’ll make more presets for other DAWs in the future.
Download this ZIP file, unzip it, and drop the folder and included presets into the Mod Delay II folder inside the Plug-in Settings folder. On a Mac it’s probably located at Library/Application Support/Digidesign/Plug-In Settings/Mod Delay II, but it may be in a different location on your system.
Setting up the tracks
Insert an instance of the Mod Delay II (mono/stereo) plug-in on the mono track you want to Haas-ify. Select the preset you want. No need to duplicate tracks. Bingo.
Understanding how to use the Haas effect properly means you need to understand and pay attention to things like stereo-to-mono compatibility and comb filtering, as well as other stereo field mixing techniques. As with all effects, have fun but be careful not to overdo it. Experiment and do your homework. Then let me know if you learn or discover anything cool. (The cool video that got me thinking about the Haas effect is, sadly, no longer available.)
Ever get this error? Can’t open your session, right? Not only is it a major workflow stopper, but the double punctuation typo at the end is annoying as well.
Luckily, the solution is quite simple.
This is the quick fix that works for me and my particular setup of hardware/software. Your mileage may vary.
Quit Pro Tools
Restart Pro Tools
Open the session that wouldn’t open before
Get back to work
The IT mantra “Have you tried turning it off and on again?” waves the problem away like a magic wand, but why is this problem happening in the first place?
The last time this error occurred for me, I noticed that it was after I had ejected my audio hard drive, removed my iLok, and left Pro Tools open, but put my machine to sleep before Pro Tools could issue the panic message: “Hey! Where’s your iLok, buddy?! That’s it! We’re shutting this whole thing down.” Then when I went to reopen the last session I was working on, boom, the error in question occurred.
I’m guessing that between the time I ejected everything and the time I plugged it all back in and fired it up again, Pro Tools had switched its default sample rate from whatever my Mbox 2 Pro said it was to whatever my MacBook Pro thought it should be. Then when I tried to open a session with a sample rate that didn’t jibe with the current rate, Pro Tools freaked out because it thought it knew what was right, but doesn’t even know anymore, man.
Disclaimer: I don’t actually know how or why the error is occurring. These are just my slightly educated stabs in the dark. If you know anything more about this error, why it happens, and, most importantly, why there’s a typo in it, please leave your thoughts in the comments section below.
Mixing audio is not easy. I’m no expert, but something just struck me…
Maybe making a great mix simply comes down to listening to a song a thousand times and removing all the little things that annoy you until there’s nothing left to dislike. Hopefully the subtraction leaves you with enough material to reveal the goodness of the song. I bet great mixing engineers can get there in fewer than a thousand listens. Maybe there’s more to it. Just a thought.
Sound is basically waves of pressure changes. The exact definition is more complicated, but essentially we perceive sound because our ears decode the frequencies of oscillating movement of particles in gases, liquids, and solids. There are many ways to generate sound waves, such as plucking guitar strings so they vibrate, or hitting a membrane like a drum head.
A long time ago, people discovered that sound could also be made by blowing air through a pipe with an opening on the side, thus inventing the whistle. They also found that a range of tones could be produced by assembling a group of whistles with varying lengths and diameters. Then they attached a controller (called a keyboard or manual) so that one person could “play” this collection of pipes. Their invention is what we now know as the pipe organ.
At the start, pipe organs had only one timbre (a basic whistle sound), but over the next several hundred years, smart inventors and musicians improved the technology. They found ways to emulate lots of other instruments, like brass, woodwinds, percussion, and even human voices. Their hope was to fully replicate those real life instruments.
As features were added, pipe organs evolved into enormous, elaborate, and expensive installations, increasingly more complicated to play and maintain. While these pipe organs were truly amazing inventions, capable of creating complex and beautiful music, they were actually quite poor emulations of the real life instruments they were intended to replace.
Still, we humans are adaptable and we fell in love with the sound of pipe organs, learning to appreciate the instrument for what it was, not what it wasn’t.
Eventually, we discovered electricity and began to harness its power to create electromechanical instruments. Creative minds developed things like vacuum tubes, tone wheels, and transistors. Companies like Hammond and Wurlitzer implemented tone wheels to generate sounds approximating a pipe organ.
However, similar to the pipe organ, this new technology was a brilliant invention that poorly emulated its predecessor. These new organs were affordable alternatives to pipe organs, so in spite of being a bad imitation they became popular with smaller houses of worship. Traveling musicians took advantage of the portability of these smaller organs too, making their sound common in popular jazz, blues, and rock music.
Once again, our ears grew accustomed to the sound of the imitation, developing an affinity for the quirks of its particular aesthetic.
As the march of progress continued, electronics became smaller and more powerful. Engineers found ways to replace the delicate mechanical parts in electric organs, which were subject to wear and tear, with completely electronic sound generators. Lightweight, all electronic keyboard synthesizers used a variety of methods in attempts to replicate the sounds of their heavier electromechanical ancestors.
But just like before, history would repeat itself. The new emulators were incredible technological achievements that fell short of their goal of replacing the old technology. Though they lacked the ability to fully replicate the previous generation, they possessed attributes that eventually found an audience of connoisseurs that valued them not just in spite of their glitches, but because of their unique properties.
Today, we synthesize the sounds of the old technologies with computers and keyboard MIDI controllers. While initially computers could only crudely imitate the old masters, DSP technology is progressing rapidly. CPU speed and available RAM are no longer the main limiting factors. As the computational power ceiling continues to rise higher and higher, software programmers are able to provide increasingly nuanced emulators that can easily fool the listener into believing that the software is actually the real thing.
At this point, if you’re still reading, then you probably can see how this history correlates to the plot of the film Inception. Each new technological breakthrough has been like a deeper dream state, where the simulation moves further and further away from reality.
→ Pipe organs
→ → Electric organs
→ → → Keyboards
→ → → → Software
However, just like in the film, while each level becomes more strange and abstract, the deepest level — Limbo — actually approaches something most like the real thing or maybe even better. Today’s emulators delve into such detail, controlling even the most minute aspects of the sound, that it won’t be long before they easily eclipse the believability of the old technology. In fact, we may already be there.
A few years ago (when the emulators weren’t half as good as they are now), a friend of mine (who has very good ears) dropped by the studio to hear a song I was working on. When the B3 organ kicked in during the chorus, he declared, “That organ sounds great. There’s nothing like the real thing!” Muwhahaha! The smoke and mirrors of software emulation had worked.
Inspiration for This Article
This idea of how keyboard technology relates to Inception came about through a discussion with my friend Hoss. Over the weekend we were working on the keyboard parts for our band Rudisill’s next album Take To Flight. In between takes of an organ part we marveled at the realization that the software he was using was an emulation of an emulation of an emulation — a truly strange scenario.
Follow Rudisill to hear about the new album when it is released.
One of the first lessons in the long, ugly self-education process of teaching yourself to play guitar is how to tune your instrument. When you’re learning something new you’re bound to make mistakes and sometimes those mistakes lead to new discoveries.
My early mistakes while trying to wrangle my guitar into tune accidentally opened the door to exploring alternate or alternative tunings. After realizing that EADGBE or “standard” tuning is not the only way to tune a guitar, I intentionally began playing around with tunings, discovering things like DADGBD (Double Drop D) and EADF♯BE.
Since then, I’ve read about Nick Drake, who some consider to be the godfather of alternate tunings, and learned that you can’t really play Rolling Stones or Led Zeppelin tunes faithfully or easily in standard tuning.
Armed with that knowledge and even more curiosity, I’ve added to my repertoire more tunings like DACGAD, CGAGCE, DGDGBD, DADDAD, and even DDDGDD (thanks to Ben Albright). But perhaps the most interesting tuning I’ve discovered is one I made up.
One day I was thinking about how the B string in standard tuning stands alone. Standard tuning is based on intervals of fourths (or 2½ steps), so the pitch for each string can be found by fretting the next lower pitched string at the fifth fret. For example, fretting the low E string at the fifth fret sounds the note A, which is the note of the next higher string. And the A string can be fretted at the fifth fret to give a D. This works for all of the strings on the guitar except the B string. To find the pitch of the B string the G string must be fretted on the fourth fret, which produces a major third.
This break in the pattern bothered me. Sure, standard tuning is a solid, time-tested system with many good reasons for why it is the way it is, but I wondered what would happen if I used the fourth fret to tune all the way across.
What came out of that little experiment is a weird tuning that I often use: FAC♯FAA. I call it my two-step tuning, not because it’s good for songs with a two step feel, but because each string is two steps higher than the previous string.
Feel free to use this tuning, but don’t blame me for broken strings. 😉
Like standard tuning, I allowed one string to be an exception to the rule. If I had continued the pattern across, the high E should have been another C♯, but it proved difficult to make chord shapes this way. I thought I’d drop the string to A instead. This created a nice unison effect, but the string was too loose and easily fell out of tune. So I replaced the high E string with a string of the same gauge as the B string. And taa-daa! A new tuning!
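The interval arithmetic behind both tunings can be sketched in a few lines. This is just an illustration of the pattern described above (note names are simplified to sharps only, and octaves are ignored):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def build_tuning(lowest: str, intervals: list[int]) -> list[str]:
    """Build a tuning from the lowest string's note name and the number
    of semitones between each adjacent pair of strings."""
    pitches = [NOTES.index(lowest)]
    for step in intervals:
        pitches.append((pitches[-1] + step) % 12)  # wrap around the octave
    return [NOTES[p] for p in pitches]

# Standard tuning: fourths (5 semitones) everywhere except the
# G-to-B major third (4 semitones).
print(build_tuning("E", [5, 5, 5, 4, 5]))   # E A D G B E
# Two-step tuning: major thirds all the way up, with the top string
# dropped to a unison A as described above (0 semitones).
print(build_tuning("F", [4, 4, 4, 4, 0]))   # F A C# F A A
```

It makes the lone exception in each tuning easy to spot: the 4 in standard tuning’s run of 5s, and the 0 at the top of the two-step tuning.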
But sadly, I couldn’t write much of anything with it.
An open strum produced an augmented triad, an interesting but somewhat unsettling chord (take a major chord and sharp the fifth, i.e. C-E-G♯). Plucking each string in succession revealed a tritonic scale of major thirds, which is not a scale Western ears (mine included) are accustomed to hearing in musical contexts. When all the notes of a scale are equidistant from one another, it becomes very difficult to determine the key. The scale is the same no matter where you start. John Coltrane used this peculiar aspect of major thirds to create a disorienting progression of chords now known as Coltrane changes.
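That symmetry is easy to demonstrate: transpose the pitch classes of an augmented triad up a major third (four semitones) and you get the exact same set of notes back. A quick sketch (pitch-class numbering and function name are my own):

```python
def transpose(pitch_classes: set[int], semitones: int) -> set[int]:
    """Transpose a set of pitch classes (0 = C ... 11 = B) modulo the octave."""
    return {(p + semitones) % 12 for p in pitch_classes}

# F augmented triad from the open strings: F (5), A (9), C# (1)
f_aug = {5, 9, 1}
print(transpose(f_aug, 4) == f_aug)  # True: up a major third, same chord

# An ordinary major triad is NOT symmetric this way:
c_major = {0, 4, 7}  # C E G
print(transpose(c_major, 4) == c_major)  # False
```

Because the chord maps onto itself under transposition, the ear has no anchor for which note is the root, which is exactly why the key feels impossible to pin down.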
None of the familiar chord shapes and scale patterns of standard tuning carried over to this new tuning either. My brain was flummoxed by its own invention. Having created something interesting, but not knowing what to do with it, I set it aside.
Sometime later I worked a summer as a truck driver for a fireworks company. I decided to take my guitar on the road with me to see if I could crack this tuning’s code. My truck route took me near where my friend Brian Fetter lived. Instead of sitting in a hotel, I was able to hang out with him for the evening. It was at his apartment that this tuning produced its first tune, a song called “If Ever In Doubt.”
For a long time, that was the only song that I could find in that tuning. I often referred to it as my “If Ever In Doubt” tuning. Over time the tuning and I became more comfortable with each other. A handful of songs have come to life through it. My latest album All Is Sideways features several of these songs (including the title track).
Reasons to Try Alternate Tunings
Create unique vibes standard tuning can’t make
Drone-like effects with open strings
Strange chords can be played with easier fingerings
Forces you to think about the sound instead of resorting to familiar shapes and muscle memory
In that article, I gave 50 technical questions as “homework” for the musician that wants to get better at being a musician. The broad list covers a lot of little things that musicians really ought to know, but think they don’t need to know.
While we could easily get sidetracked judging ourselves based on whether we can answer those specific questions or not, the real issue I’m hoping to address is our attitudes about learning.
Learning is tough. Really tough. It takes dedication, willingness, and humility to learn new things. It’s not surprising that we make a lot of excuses to avoid it.
Excuses, excuses, excuses
Over the years, I have cited lots of reasons for why I wasn’t progressing as a musician, but they were simply excuses. Here are a few of my mental blocks.
1. My fingers are too fat.
Back in high school I picked up the guitar because I wanted to write songs. After a year or two of trying to learn how to play, I told Nathan Hamlin, my trusted friend and songwriting partner, that my fingers were too fat to play guitar well. His response?
Scott, my dad Vance has huge sausage fingers and he can play guitar better than I can. You have no excuse.
Nathan was right. I stopped making excuses and learned how to play guitar. Now people ask me to play guitar for them.
Still want to make excuses? Phil Keaggy has only 9 digits, Chad James has only one hand, and Mark Goffeney has no hands, but it hasn’t stopped any of them from playing guitar.
2. I need a better guitar.
For years I was convinced that if I just had a more expensive guitar, I too could play like a pro. Wrong.
In college I met Ben Albright, a guy who was known for his guitar prowess. Time and time again, I watched as he would pick up the same crappy instrument I had just laid down and play something inspiring. Clearly the guitar was not the problem.
The roadblock was in my mind. There was a reason I couldn’t make a guitar sing like Ben could. Besides not putting in the many hours of practice that he had, I had already decided that I couldn’t make great music without great instruments.
In a previous post called “How to Get Perfect Guitar Tone,” I included a video clip from It Might Get Loud of Jack White building and then playing a makeshift guitar on his front porch. The improvised “guitar” he makes proves his point that great music is possible even if the instrument is not very good.
I can’t blame my guitar.
3. I need better recording equipment.
We live in such a wonderful time. Recording has never been more accessible, affordable, or high quality.
My soon-to-be released album All Is Sideways was recorded in locations all over the U.S. over the past 3 years. Some of the songs have more than 50 layered tracks. I was privileged to be able to record with talented players on great instruments with really nice microphones and preamps into a sweet computer.
The funny thing I have to remind myself is that some of the greatest albums of all time have been made with much less. The Beatles recorded their highly complex Sgt. Pepper’s Lonely Hearts Club Band with a pair of 4-track tape machines.
Compared to the tools we have available to us today, musicians and engineers of the past worked with sticks and stones. Men have flown to outer space and back in rocket ships with computers on board that pale in comparison to the iPods in our pockets. Yet somehow we’ve convinced ourselves that to make an album like Led Zeppelin’s IV today, we need million dollar systems with all the latest technology.
In case you missed all the promotional efforts on Facebook and Twitter, in 2011 I released my version of “Go Tell It On The Mountain” as a free download. Try one of the following links to get the song now.
Many thanks go to Lynn Graber of The Recording House for offering to record this Christmas song for free as part of his Christmas 2011 compilation. Six other artists recorded songs with Lynn. I’ve embedded their tracks below for you to enjoy.
As for my recording, I had a lot of fun working with Lynn at his swanky studio. We experimented with new microphone placement and techniques while recording the upright piano. I also was able to track harmonica using an Alesis iO Dock with an iPad and the Ground Up AudioAmps & Cabs iOS app.
“Go Tell It On The Mountain” by Scott Troyer
“O Come, O Come Emmanuel” by Autumn Ashley
“Some Children See Him” by Nathan Metz
“Emmanuel” by Larisa Grisham
“What Child Is This?” by Vanessa Ann Grisham
“Oh Holy Night” by Escaping Yesterday
“Free (A Christmas Song)” by Troy Erbe
In 1907, John W. Work, Jr. published a collection called Folk Song of the American Negro, which contained the first publication of “Go Tell It On The Mountain.” For those listening closely to my version of the song, some of the lyrics have been modified from the original. I altered a few of the words and added a couple lines. Some may want to stone me for changing a classic, but I believe the changes to be improvements that are faithful to our best understanding of the gospel. Review the lyrics on the discography page to see if you can find the changes I made. Let me know what you think via the comments section below.
Go Tell It
This song may seem old-fashioned or out-of-date, but here’s the thing: there are places in the world where people have never heard that “Jesus Christ is born.” They may know the name Jesus Christ (possibly as it is used as a profanity in movies or TV), or they may have limited information (or even disinformation) about this Messiah guy. In spite of the nearly omnipresent accessibility of the internet and prevalence of computers, smart phones, and iDevices, there are still many people uninformed about the central character of the Christian faith. Often, governments prevent their people from receiving information about Christianity or persecute their citizens for spreading the information.
One of the most notorious of these regions of the world is North Korea. With the recent passing of dictator Kim Jong-Il, the North Korean government is likely to change its policies in regards to religious practice. Please read this article from Vernon Brewer, president of WorldHelp, to find out how you can “go tell it on the mountain.” Then donate via this link.
I met my maker. I made him cry.
And on my shoulder he asked me why
His people won’t fly through the storm.
I said: ‘Listen up man they don’t even know you’re born.’
It’s an interesting concept. The wars between analog and digital rage on because they are systems separated by technologies that both have pros and cons. As technology progresses, what new pros and cons will we have to debate against older systems? Initially I answered with the following:
Realizing there’s much more to this debate than just a tweet, I thought I’d talk more about it here.
We Need Better Words to Describe How We’ll Make Music in the Future
In my original tweet, I used the phrase “Cerebral vs. Digital” to describe the future debate I imagine will happen. Maybe my choice of opposites wasn’t perfect. Better words can probably be found. This concept of diametrics I have in mind could be expressed in a variety of ways.
Cerebral vs. Physical
Solitary vs. Collaborative
Internal vs. External
Each of those word combinations describes the same contrast of ideas. But how best to describe it?
The New System of Mind Music
In the (maybe not so distant) future, musicians will have the ability to directly output music from their heads. Technology will be developed that will allow artists to simply think/imagine/hear the music in their heads and output it as audio and/or notation. This cerebrally generated “audio feed” could be routed (maybe even wirelessly) to a recording device to be documented, distributed, and sold. Theoretically, this process could happen as a live performance. The signal could be routed to a sound system for a concert, to an internet connection for worldwide streaming, or even directly injected (almost telepathically) into the head of a “listener” outfitted with the proper “receiver” device.
The possibilities are fantastic. Composers could direct an entire imaginary orchestra as they hear it in their minds. Dancers could dance to their own music in real time. Musicians could play exactly what they intend to play. Singers could sing in whatever voices they can imagine. Handicapped artists suddenly would be unrestricted by their handicaps.
This technological breakthrough in music will follow a path familiar to video games. With the Wii, Nintendo brought wireless motion-sensing accelerometer action to everyday people. The developers of Guitar Hero and Rock Band banked a lot of cash by making it really easy to “play” popular music without having to learn an instrument. Microsoft’s Kinect for Xbox removed the need for a controller, allowing the person to become the controller. I don’t know who will create the first mind-controlled music technology, but somebody’s going to do it.
Cool meant something totally different back then. Don’t judge.
As with any change, it’s going to get worse before it gets better. Unfortunately, music will experience yet another Regrettable Period in which we have to learn how to use this new technology properly. I predict some gross and unsavory abuse of the technology, much like the ubiquity of terrible synthesizers in the 1980s or prevalence of auto-tuned vocals since Cher started believing in life after love. But some lucky artist is going to enjoy the honor of being known as the one that mastered this wonderful new system, thus becoming the “Grand Master Flash of whatever-this-thing-may-become-known-as.” Someone will figure out how to use it right, but it might take some time. In the meantime, wear earplugs.
Why We’ll Argue About This
At first, this newfangled gadgetry will be heralded as the end of “real” music and musicianship. The critics will say it’s too easy and not authentic music. Traditional composers and invested players will complain that no one has to learn how to write or play anymore. And much in the same way that digital was derided as a poor substitute for analog, purists will say that this cerebral form loses something in the process. Those arguments all might be right, but there may be a bigger issue lurking.
Trapped “In The Box”
When the process of making music becomes entirely internalized it will be really great because of its purity and singularity of thought, but will it simultaneously suffer from lack of external influences? When digital recording became popular, the question was often asked by one artist or engineer to another: “Was this all done ‘in the box?’” – meaning: was the audio signal created, mixed, and mastered on the same computer? Early on, music created entirely in this fashion lacked the beneficial effects that analog systems inherently imparted upon the audio signal. Today, the line has been blurred by better technology, so it’s harder to tell if something was recorded analog or digital. Only engineers with “golden ears” can hear the difference (even then I suspect shenanigans). At any rate, the question still remains: What benefits will be lost due to the signal remaining “in the box” of your head?
Potential Musical Influences
People – The camaraderie, inspiration, ideas, criticism, differing views, and friction found when people work together often makes for better music. Being alone can lead to dead ends and boring or bad music. Collaboration can make beautiful things.
Hardware – Though they are inanimate objects, the instruments and devices used to make music come with their own inspirations, challenges, rewards, frustrations to overcome, and occasional good glitches. Sometimes a piece of gear has to be conquered and relinquishes its magic upon defeat.
Criticism – The critic is the archenemy of the artist, but every good story needs a villain. Without judgment, no work is ever as good as it can be. Words are often revealed for their folly only after they’ve left the head.
Movement – Music and movement are very strongly related. When making music, movement is both part of the instigation of sound and a reaction to the sound being created. Performance and dance are like cousins. So if movement is not necessary for the creation of music, what effect will that have on the final product?
Good Things Will Happen
A lot of things can go wrong in this new system, but a lot of things can go right too. Eventually we’ll work out the kinks. We’ll figure out the typical pitfalls. We’ll master this medium like we have with all the others. One day amazing music will be generated using nothing but musicians’ brains. I’d bet it will be the direct output of some ridiculously young Mozart’s mind that will blow us all away. Perhaps this new interface will teach us something about how our brains work. Maybe it will allow us to communicate more precisely on ever deeper levels. What if it develops into a new universal language? Hmm.
The audio device buffer underflowed. If this occurs frequently, try decreasing the “H/W Buffer Size” in the Playback Engine panel or remove other devices from the audio firewire bus. (-6085)
Occasionally this error pops up in Pro Tools, usually after I return from a meal in the middle of a long recording or mixing session. The session file will only play back audio for a second or less and then the error message pops up. Apparently, Pro Tools 9 is a workaholic and doesn’t like taking lunch breaks, at least when running on the particular combination of MacBook Pro, Mbox 2 Pro, and Western Digital hard drive that I’m using.
Following the directions to decrease the “H/W Buffer Size” in the Playback Engine panel doesn’t seem to help. In fact, not only does decreasing the buffer size seem contrary to the suggested way to solve a buffer underrun, but it then sometimes throws this error message:
A CPU overload occured. If this happens often, try increasing the “H/W Buffer Size” in the Playback Engine Dialog, or removing some plug-ins. (-6101)
I’ve tried a lot of things and the problem seems to be related to the hard drive and firewire ports. Here’s how I fix it.
Save and Close the session.
Quit Pro Tools.
Eject the hard drive used for recording audio.
Unplug the audio hard drive and Mbox 2 Pro (or the audio interface you’re using).
Wait 10 seconds.
Reconnect the audio hard drive and audio interface.
Restart Pro Tools.
Reopen the session and press Play.
If the session plays back without stopping, then it worked. If not, then I don’t know what to tell you, which reminds me of a “Deep Thought” by Jack Handey.
If you ever crawl inside an old hollow log and go to sleep, and while you’re in there some guys come and seal up both ends and then put it on a truck and take it to another city, boy, I don’t know what to tell you.
Hopefully this solution worked for you. Let me know if you’ve had the same problem, what hardware you are running and if this solved the problem.
This is cool. My inner nerd had to come out and dance for a bit. This is a video by Kyle Jones, a designer, animator and illustrator from Nashville. Check out his website here and follow him on Twitter. He decided to record himself playing guitar using his iPhone from inside the guitar. Genius. Rejoice with me, all you audio and science loving geeks.
Pro Tools hardware is either not installed or used by another program.
If you thought that having Pro Tools 9 installed meant no more “Hey, Mr. Engineer Genius, where’s your fancy hardware?” errors, then this nagging error probably came as a surprise. It did for me. Since installing Pro Tools 9, my workflow has allowed me to jump between my Mbox 2 Pro, Mbox 2 Micro, and MacBook Pro’s built-in sound card. This has been really handy while trying to finish up my album on the road. But, apparently, all that hardware hopping can leave the playback engine stuck in some funky states that don’t work well, if at all. See my previous post “FIX: Pro Tools could not set sample rate to specified value” for a similar issue.
Obviously, the problem has something to do with the playback engine. Since the error dialog only offers an ‘OK’ button, which closes Pro Tools, there doesn’t seem to be a way to work around the problem. There is not even a way to know what hardware Pro Tools is expecting.
I found a simple solution via this Sweetwater forum. The answer given there details how to get Pro Tools running on a PC, but I found that it worked for Macs too and without having to install any drivers. The fix is kind of like booting Pro Tools in safe mode. Simply hold the ‘N’ key while starting up Pro Tools. This will bypass the normal startup sequence and open the Playback Engine window. Now you can select the correct playback engine and continue using Pro Tools.
In my situation, Pro Tools was looking for the last connected device (my Mbox 2 Pro), but since it wasn’t available it opted for the next available option: my MacBook Pro’s line input, which doesn’t make a very good playback engine.
Let me know if this fix worked for you.
This problem may have been fixed in the Pro Tools 9.0.2 update that came out yesterday, though I’ve not been able to look through the 9.0.2 Readme file in detail or to test this out on the updated software. I’ll update this page when I find out more.
A couple of weeks ago, my friend David, a young and very talented musician/singer/songwriter, asked me the following question.
Hey, how many GB of hard drive space do you recommend for recording on a laptop?
To which I responded:
The recommended practice for digital recording is to record to an external hard drive instead of the internal drive. This is done for performance reasons. Recording to an external drive keeps your data separate from the rest of your computer data, allowing the computer to use the internal drive for the dedicated purpose of running the operating system. This also makes your recording data more portable for taking it to a studio and prevents trouble if you ever need to send your computer in for service (the recording data stays with you).
It is also recommended to use an additional external drive as a backup, so if anything goes wrong with one drive you won’t lose everything. Ideally, you would have two identical drives. They can be any size, but should be the same size. A typical song (2–5 minutes, with 4–5 instruments and multiple takes for each instrument/voice) at 24-bit resolution and a 48k sample rate will take up approximately 1–3 GB. If you’re short on hard drive space, the unused takes can be deleted after the tracks are finalized, which reduces the size of the song and gives you more room for additional songs. But as cheap as hard drives are these days, getting a decent-sized drive shouldn’t be a problem.
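That 1–3 GB per song figure checks out with simple arithmetic. Here’s a quick sketch; the song length, instrument count, and take count below are my own illustrative assumptions, not numbers from any particular project:

```python
# Estimate session size for uncompressed 24-bit / 48 kHz mono audio.
# The song/track/take numbers below are illustrative assumptions.

BYTES_PER_SAMPLE = 3       # 24-bit PCM = 3 bytes per sample
SAMPLE_RATE = 48_000       # samples per second

def track_mb(minutes: float) -> float:
    """Size in MB of one mono track of the given length."""
    return minutes * 60 * SAMPLE_RATE * BYTES_PER_SAMPLE / 1_000_000

song_minutes = 4
instruments = 5
takes_per_instrument = 4   # keeping alternate takes multiplies storage

total_mb = track_mb(song_minutes) * instruments * takes_per_instrument
print(f"One mono track: {track_mb(song_minutes):.1f} MB")
print(f"Whole song with takes: {total_mb / 1000:.2f} GB")
```

One mono track comes out to roughly 35 MB, and 20 takes of it to about 0.7 GB. Stereo tracks, longer songs, or a few more takes per part push that into the 1–3 GB range easily.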
The cost of external drives for computer-based recording is much cheaper than the cost of memory cards for hard disk recorders.
With all that in mind, I recommend buying 2 of the largest hard drives you can get within the budget you have. Remember, these drives should be the same size and used ONLY for your recordings.
Western Digital has good drives for reasonable prices.*
Modern recording takes lots of hard drive space. It’s easy to eat up several GB on a song of average length and track depth. I’ve filled a drive or two already with various recording sessions, Photoshop files, and media. Over the weekend I had to pick up another drive just so I could finish my upcoming album. I went to the nearest big box electronics shop and picked up the biggest drive with the best price. What I found was the Western Digital 2 TB My Book Studio LX. The size should be enough for the next year or so (let’s hope!) and the simple grey metal design suits my preference for the minimalist Mac aesthetic. Surprisingly, this is the first drive I’ve purchased that came preformatted for Mac OS. I know that some drives come advertised as such, but this was just a standard off-the-shelf one-kind-fits-all drive. Maybe this indicates a shift in the Apple/PC market share?
The only thing that bothers me about WD is their pre-installed SmartWare software. It’s a huge can of donkey sauce. This multi-function bloatware takes up over half a GB of space, is loaded into the drive firmware (so it cannot just be formatted away), appears as a separate VCD that pops up every time you connect the drive, and cannot be completely removed without voiding the warranty. The only option WD gives the user is to download two more software packages: one that updates the firmware so you can run the second package, which allows you to hide the VCD. Blehhhh…
The whole point I want to make is this:
Dear Western Digital,
I like you and your drives. I like the design, reliability, and affordability of your drives. I can’t stand your SmartWare. Please stop making it. If you can’t do that, then please make it an opt-in thing. If you feel you really, truly, just absolutely must preinstall it (instead of offering it available as a free download), then at least make it easy to permanently remove with just one or two clicks. I do not want to download more software to remove software I already don’t want. Thank you.
A regular and loyal customer,
While removing the VCD completely is possible and would be my preferred solution, doing so voids the warranty, which is extremely valuable should the drive ever fail. So in my opinion, doing something to void the warranty on the device that stores my invaluable data is a bad idea. Until WD decides that such action no longer voids the warranty, I cannot recommend this.
How to Hide SmartWare
WD doesn’t make it easy to hide the VCD. There are two major steps. You’ll need to download the firmware update for your particular drive and the VCD Manager. Visit the WD Product Updates page to find out how to hide the VCD for your specific device and OS.
…at least not in a permanently defined state. It is always changing depending on context. There’s not a one-size-fits-all solution for guitar tone and the guy who is showing you exactly how to get “perfect” tone is either demonstrating his idea of a good sound for a very particular context or selling you something. Let the buyer beware!
I’ve seen a zildjillion YouTube videos and magazine articles in which an “expert” outlines in very fine detail the “preferred” gear or “professional” way to play/mic/mix. They have shown me how to dial in that Clapton tone, place ribbon mics like Eno, mix a hit song like the Lord-Alge brothers, mod my guitar and amp like SRV, and even dress like a rockstar. In each circumstance I think, “Yes, that might just work. I could sound like that, if I do everything else exactly the same way as Mr. Famous Rockstarpants.”
They have it right. It truly is the small stuff that matters. In fact, all these tiny details matter so much and there is such a vast quantity of them, that replicating such performances is nearly inconceivable. Every part of the signal chain plays a role – from player to instrument to amp to room to microphone to preamp and all the cables, power supplies, recording/storage media, surfaces, and recording/mixing/mastering engineers in between. Even weather, location, and moods can make a difference.
Needless to say, it’s nearly impossible to replicate that one sound by that one artist on that one record. So many factors are involved in the making of a sound, that in many cases the original artist that recorded it might not be able to make that precise sound again, even when given identical circumstances. (I’d like to point out that perhaps the very reason we enjoy certain sounds is because a beautiful moment was captured – something unique that will never happen again – and trying to recreate it verbatim would somehow make it less amazing. Frankenstein’s monster wasn’t very pretty, was he? I digress.)
“We all have idols. Play like anyone you care about, but try to be yourself while you’re doing so.” – quote attributed to B. B. King
And The Good News
Proper tone (the right tone at the right time) can be bought. You can pay for it with practice and critical listening. Good equipment is nice, but not necessary, as Jack White demonstrates so well in It Might Get Loud.
After upgrading to the newly released Pro Tools 9, I couldn’t open sessions or create new ones. I got this error: “Could not complete the Open Session… command because Pro Tools could not set sample rate to specified value.” I hunted around on the web and various forums, but couldn’t find a solution that fit. I found several items relating to Windows and Pro Tools 8, but nothing for a Mac running Pro Tools 9. After messing around a bit I figured out the problem was with my playback engine. Here’s how I solved it. Let me know if it works for you too.
Open the Playback Engine dialog under the Setup menu item.
From the menu bar select Setup > Playback Engine… to open the Playback Engine dialog window.
The fix is easy. Simply select the right playback engine. Your options may differ based on your setup.
In my case, I would usually edit with my Mbox 2 Micro, but since Pro Tools 9 gives us so many more options for hardware compatibility, I selected Built-in Output. I was able to edit some vocal takes using my MacBook Pro’s speakers instead of pulling out my headphones. Nice!