Absolutely, or maybe curl. It works if you can get the file's URL and if the site you're downloading from will let you (some won't). Usually you can right-click on the link, choose "Copy Link Location", and then run wget <paste>.
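For example (the URL here is made up; substitute whatever Copy Link Location gave you):

  # wget: -c resumes a partial download if part of the file is already there
  wget -c 'https://example.com/path/to/some.iso'

  # curl equivalent: -O keeps the remote filename, -C - resumes automatically
  curl -O -C - 'https://example.com/path/to/some.iso'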
You can also start the download (or pick up a failed one), go to Tools -> Downloads, right-click the file in there, and do Copy Link Location. In the past few years some sites won't let you do this, because they somehow know it's not your browser anymore; then you're doomed to download with Firefox. I used to do this on dialup at 10 MB/hour at best. I'd make a file called url.txt in the directory where I wanted the download (they come in handy later sometimes too), with just the URL in it, then call wget from a script that loops until it's done. I call it getit, chmod +x it so it's executable, and keep it somewhere in the path like /usr/local/bin (quick setup example below, after the quoted message):
-------
#!/bin/sh
# Wget gives up on being unable to resolve a host and quits, returning an
# error.  This retries until no error is returned.
# Problem is:
#   416 Requested Range Not Satisfiable
#   The file is already fully retrieved; nothing to do.
# also returns non-zero, so it sometimes gets into a loop at the end.
while true
do
  wget -c --no-check-certificate -i url.txt && break
  echo "Restarting wget"
  sleep 2
done
-------

On 7/1/19, gru...@mailfence.com <gru...@mailfence.com> wrote:
>> I had a hell of a time pulling it as I did a google search, and when the
>> download had started, google was still in the firefox address bar, and
>> I had to restart a timed-out fail several hundred times. So I'm doing it
>> again, and it's coming in at an average of 1 meg/sec this time. I got the
>> sha512 and info files too.
>
> try wget
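The promised setup example (directory name and URL are just placeholders, not a real download):

  mkdir -p ~/downloads/big-iso && cd ~/downloads/big-iso
  echo 'https://example.com/path/to/some.iso' > url.txt
  getit

wget reads the URL from url.txt (that's the -i url.txt), and -c makes it pick up where it left off after every restart, so the loop just keeps hammering until the file is complete.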