Just recently our data backup drive at work failed, so we needed to purchase a new one – a Quantum SuperLoader3. This device was very similar to the previous one, so we knew we would simply be able to plug it into our host controller and have things operational right away. And things would have worked perfectly well if our host controller hadn’t blown out the RAID and died on us. This meant I had to recreate the tape backup script, as we had no real backup of the host server.
Daisy-chained cron scripts
Since we have multiple databases and multiple services whose data needs to be backed up, we have several scripts that run nightly in a chain. First one script pulls data from one server, then it calls another script, which pulls from several more servers, and all of that data is written to our network file system (NFS). These scripts run in succession because each one cannot start until the previous has completed, and guessing at when a process will finish is fraught with error. The last script in the backup process calls the script that runs the tape archive, which I had recreated and placed on our SuperLoader host. So: two different servers, both needing access to one script and the same data.
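As a rough sketch of what such a chain looks like, each script does its own pull and then hands off to the next one only after it finishes. The script names, paths, and hosts below are all hypothetical placeholders, not our actual setup:

```shell
#!/bin/sh
# backup_db.sh -- hypothetical first link in the nightly chain.
# Only this script appears in the crontab, e.g.:
#   0 1 * * * /usr/local/bin/backup_db.sh >> /var/log/backup.log 2>&1

# Pull data from the first server onto the shared NFS mount
# (host and paths are assumptions for illustration)
rsync -a backupuser@db1:/var/backups/ /mnt/nfs/backups/db1/ || exit 1

# Only once that transfer has completed, hand off to the next
# script in the chain -- no guessing at timing required.
exec /usr/local/bin/backup_services.sh
```

The advantage over separate cron entries is that the hand-off happens exactly when the previous step finishes, whether that takes five minutes or an hour.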
Setting up the environment
First I set up the drive mapping to the same directory on the NFS that our backups were stored in so that we wouldn’t have to bother with transferring the files via scp and could just read from the same location. I then set up the script on the tape drive host to run with the proper permissions so that it could access the mtx and mt controllers. The final step was to make sure that the script could be called without a password. To do that, I set up RSA key pairs, and used the following command in the final script before tape backup:
ssh -t user@tape_drive_controller sudo -u root /home/script_location/write_to_tape.sh
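For completeness, setting up the passwordless access amounts to generating a key pair on the calling server and installing the public key on the tape host. A minimal sketch, with placeholder usernames and hostnames:

```shell
# On the calling (backup) server: generate an RSA key pair
# with no passphrase, so scripts can use it unattended
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key on the tape drive host so ssh
# stops prompting for a password
ssh-copy-id user@tape_drive_controller

# Verify the key is accepted before wiring it into the script
ssh user@tape_drive_controller true
```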
This worked from the command line, so I thought, “Great, I’ll check everything in the morning.” It didn’t work. Instead, I found the following error in my log:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
Tracking down the error
The fact that it worked on the command line, but not when called from a script, confused me, since I was requesting a pseudo-terminal with the -t flag – a requirement when you use sudo -u in the command on the command line. And at first I jumped into Google searches without really reading and trying to understand the error. Lovely, right? I ran across multiple forums that contained twice as many suggestions for what was wrong. Then I read the error again, and realized that the clue was in “stdin is not a terminal”, since I have one script calling another. Calling it from the command line gives me the pseudo-terminal allocation because I’m already logged in via a tty. But when calling from one script to another, there is no tty session, so it fails — especially when the sudoers file is set to require tty*. I had two options: turn Defaults requiretty off in the sudoers file, which wouldn’t be too swift, or force tty allocation even without a local tty, which I discovered in the ssh man page. I chose the second option. The command in the script should have read:
ssh -tt user@tape_drive_controller sudo -u root /home/script_location/write_to_tape.sh
The extra t flag is what forced it to work: “Multiple -t options force tty allocation, even if ssh has no local tty.” I’m sure there are much more Linux-y reasons for why I needed to force a tty connection – one, I think, being that the tape server produces command-line output that needs to be written into the log file on the calling server; hence the requirement for a pseudo-tty.
*(From the sudoers man page: “requiretty: If set, sudo will only run when the user is logged in to a real tty. When this flag is set, sudo can only be run from a login session and not via other means such as cron(8) or cgi-bin scripts.”)
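For reference, the option I rejected would have meant touching the sudoers file on the tape host. A sketch of what that looks like (the username is a placeholder; sudoers should always be edited via visudo):

```shell
# /etc/sudoers fragment -- edit with visudo, never directly.

# The global setting that triggers "you must have a tty to run sudo":
Defaults    requiretty

# Rejected option: exempt just the backup user from the tty requirement,
# rather than disabling it system-wide
# Defaults:backupuser    !requiretty
```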