Can't get crontab to run backup script properly

theRealBassist@lemmy.world to Linux@lemmy.ml – 19 points –

Hello everyone!

I'm running a few different services off of my Ubuntu VM on ProxMox, and they've all been running great for about 6 months now. However, I'm trying to set up some better backups of the individual services, so I wrote a bash script to do that for me and to delete older backups once I accumulate enough.

All of that works 100% fine. Like, absolutely no issues with the script when I run it myself. However, I cannot for the life of me get crontab to run it.

If I run sudo ./folder/directory/backup.sh then everything runs perfectly. However, if I set up my crontab with 0 * * * * ./folder/directory/backup.sh I get absolutely nothing.

I have also tried setting the crontab entry with sudo, with sh, with sudo sh, and each of those combinations without the dot in front of the path to the shell script.

Does anyone have any idea what I am doing wrong?

Thank you so much for any help

Update: I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened. Still not really sure what's going on.


I used to have a problem like that, and I had to put #!/bin/bash at the top of the script
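
In case it helps, a minimal sketch of what that looks like (the backup command below is just a made-up placeholder, not the OP's actual script):

#!/bin/bash
# first line of the script; without it, cron may run the file with plain sh
tar -czf /tmp/example-backup.tar.gz /path/to/data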

The crontab has no concept of . meaning the current directory. Try with the full path to the script. You might also need a user field (but you might not if it's a user's crontab as opposed to the system one).

See if those help and report back either way.

The crontab has no concept of . meaning the current directory.

Not quite true. . exists in all directories, so it will work in any application. But it raises the question of which directory cron is running in. Probably not what you expect, definitely not your user's home dir, and you probably should not rely on it. So you should not use relative paths in a crontab, even if you can get them to work. Best to just stick to absolute paths, or explicitly cd to the right location beforehand (either on the same cron line or in the script it calls).
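
For example, either of these in a user crontab (the paths here are illustrative only):

# absolute path, no reliance on cron's working directory
0 * * * * /home/username/folder/directory/backup.sh

# or cd explicitly on the same cron line, then run the script
0 * * * * cd /home/username/folder/directory && ./backup.sh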

That's probably the issue: crontab has a different working directory, so calling the script with a relative path won't work.

Just use the full path to the script, something like /home/username/folder/directory/backup.sh and it'll probably just work.

I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened.

Too many numbers/stars

MM HH DAY MON DAYOFWEEK USER /full/path/to/script >/path/to/logfile.log 2>&1

(The log part is optional)
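
Applied to the line from your update, that would look something like this (the log path is just an example):

0 * * * * root /mnt/nas/freshrss/backups/backup.sh >/var/log/freshrss-backup.log 2>&1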

So, right now I'm trying the system crontab instead of my user crontab.

Just to reiterate from my post, though: I have tried the full path. I was giving example paths. I should have been more explicit that by just "using dot" I meant both relative and absolute paths.

All paths have been full paths from the get-go, though I did try cd-ing into the folder and running it with a relative path. My hope at this point is that it's somehow a permissions issue, as my storage setup is a bit odd with TrueNAS Scale running as a VM on ProxMox. Permissions with Docker are usually hell, and I have to run literally everything that touches my NAS as root to get the permissions to play nicely, so it would make sense here that it's just the permissions being upset and preventing access to the files.

I set a backup to run on the hour, so I'll report back with whatever happens.

So, right now I’m trying the system crontab instead of my user crontab.

Yes, you should never use sudo inside a user's crontab. If you want to run as root then use the system crontab.

I would also encourage looking at systemd timers. They are more verbose than crontab, but far easier to debug and to see what is going on. They work off services, so they automatically log to journald like all other services, and you can easily see when they last ran, whether they succeeded, and when they will next run with systemctl list-timers. All things you can do with cron, but they require a lot more setup yourself.

Yes, you should never use sudo inside a user's crontab. If you want to run as root then use the system crontab.

I appreciate the advice! I had never really heard about the distinction between the system crontab and user crontabs. While it makes sense in retrospect, I am entirely self-taught about this stuff, and nowhere I had looked had ever mentioned that there were two separate crontabs.

I would also encourage looking at systemd timers

Do you happen to know of a good resource to learn about those off the top of your head? I appreciate the suggestion!

The Arch wiki is always a good place to look. There are a lot of introductory blog posts around that I have not read, so I cannot recommend any, but there is plenty to look at if you need more information or a more beginner-friendly guide than the Arch wiki.

The freedesktop manuals are also worth a look for the more advanced things you can do with timers, but they are not really required for basic use. They just detail all the settings you have available and are much more of a reference than a guide.

I have edited /etc/crontab with the following 0 * * * * * root /mnt/nas/freshrss/backups/backup.sh. After waiting for the crontab to fire off, nothing happened.

There's an extra *. There should be 5 time fields, but there's a zero followed by 5 *s. If that's not what's causing it, the next spot I'd check is the output in the cron logs. I'm not sure where that is on Ubuntu, though; it might be in /var/log/messages or in the systemd journal. Cron sometimes sends mail when there's an error, too, so checking the user's mail might give you some clues as well.
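
For what it's worth, on Ubuntu I'd expect the places to check to be something like this (exact locations can vary by release and configuration):

# cron's own log lines, tagged CRON, usually land in syslog
grep CRON /var/log/syslog

# or ask the systemd journal for the cron service
journalctl -u cron

# any error mail cron sent to the user
cat /var/mail/$USER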

Looking at all the replies here, I don't see anyone asking what's in your script. Is everything in the script pointing to full folder paths?

Try adding this at the end of the entry:

1>/home/{username}/crontabscript.out 2>/home/{username}/crontabscript.err

Replace {username} with your login id. See what you get in those files when it runs. That might give you some better clues as to what's going on.
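
Putting it together, the whole entry in a user crontab might look like this (username and script path are placeholders):

0 * * * * /home/username/folder/directory/backup.sh 1>/home/username/crontabscript.out 2>/home/username/crontabscript.err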

0 * * * * * root /mnt/nas/freshrss/backups/backup.sh

Why do you have root in there? If you need something to run as root do sudo crontab -e and edit the root user's crontab accordingly. The user shouldn't be specified in the crontab directly.
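
A minimal sketch of what I mean (using the script path from your post):

# edit root's own crontab
sudo crontab -e

# entries there use only the 5 time fields, with no user column
0 * * * * /mnt/nas/freshrss/backups/backup.sh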

The crontab that is found at /etc/crontab very specifically states that it has a user field. I will readily admit that I might be misunderstanding it, but that feels pretty explicit to me.

What distro are you using? I haven't seen /etc/crontab in quite a while with the advent of the /etc/cron.d directory. That said, crontab -e will handle this stuff for you.

Edit: I see, Ubuntu. I'm not too familiar with what they're doing over there. I have an /etc/cron.d dir on my Arch boxes. Some other stuff to check, though: does any cron job run at all? If not, is the service running? You could also redirect this script's output to a file under /tmp or something to check whether it's running and what might be going wrong. Beyond that, check the systemd logs for any errors.
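
For example (I believe Ubuntu's cron service is just called cron; the /tmp path is only a scratch location):

# is the cron daemon running at all?
systemctl status cron

# temporary /etc/crontab entry that captures output so you can see whether the job fires
0 * * * * root /mnt/nas/freshrss/backups/backup.sh >/tmp/backup-cron.log 2>&1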

There could be two other things that I can think of:

Permissions, maybe: try "sudo chmod +x /path/yourscript.sh" to make your script explicitly executable.

Also, the environment cron runs in may be different from the one you get when you run the script yourself as root or as a user. So you should always use the full path to every command in your script, like "/bin/tar" instead of just "tar". To find out where things are, you can use "whereis tar", and it will tell you whether it's in /bin, /usr/bin, or elsewhere.
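
A rough sketch of both suggestions combined (the tar example is hypothetical, not the OP's actual script contents):

# make the script executable
sudo chmod +x /mnt/nas/freshrss/backups/backup.sh

# find the absolute path of a command used inside the script
whereis tar

# then call it by that absolute path inside the script, e.g.
/usr/bin/tar -czf /mnt/nas/freshrss/backups/freshrss-$(date +%F).tar.gz /path/to/freshrss/data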

I know it is not really what was asked, but cron is a pain in the ass to handle and manage. I am not sure if it is officially deprecated yet, but I would migrate everything to systemd timers instead; they are so much better. They provide configuration tools and properly integrated logging and troubleshooting tools.

Just create a service file of type oneshot which runs your backup script, and a timer unit with the same base name. Set the timer to hourly, place both files in /etc/systemd/system, do a daemon-reload, and enable the timer. You can check the status or the journal for output, and use list-timers to see the schedule and whether or not it ran.
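
A minimal sketch of that pair, assuming the script path from the post and a made-up unit name of freshrss-backup:

# /etc/systemd/system/freshrss-backup.service
[Unit]
Description=Back up FreshRSS

[Service]
Type=oneshot
ExecStart=/mnt/nas/freshrss/backups/backup.sh

# /etc/systemd/system/freshrss-backup.timer
[Unit]
Description=Run the FreshRSS backup hourly

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target

# then activate it
sudo systemctl daemon-reload
sudo systemctl enable --now freshrss-backup.timer
systemctl list-timers freshrss-backup.timer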

Usually, when programs run fine in a user context but not as an automated process, it is due to environment differences. Most importantly PATH, which can be solved by using absolute paths for programs. Another very common problem is the system's MAC implementation; that tends to come up more often with SELinux, but you might still want to check your AppArmor configuration and (audit) logs.
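
If you want to rule AppArmor out, something like this (log locations vary by setup):

# list loaded profiles and their enforcement mode
sudo aa-status

# look for recent denials in the kernel log / journal
sudo dmesg | grep -i apparmor
sudo journalctl -k | grep -iE 'apparmor|denied'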

If you want to stick with cron, also make sure to read the mails (/var/mail/root by default), because most cron implementations dump their output/logs there.
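
For example (MAILTO is a standard cron variable; the username is a placeholder):

# read the mail cron has already sent to root
sudo less /var/mail/root

# or set this at the top of the crontab so future output goes to a user you actually check
MAILTO=username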

Try this: journalctl -xb -u cronie. It will show you any errors.

Cron does not like dots in the filename of your script. Does it work if it is just "backup" instead of "backup.sh"?

Have a look at the most upvoted comment.