I want to save web pages for offline reading. Currently I save them using Firefox. For bulk saving I want to automate the process with a script (or would a web site copier like webhttrack be a better fit?). From the terminal I can save the .html file of a URL (using wget URL), but I can't view the saved page properly because the images, .js files, etc. are missing.
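For reference, this is what I run now, plus my guess at the flags that would also grab what the page needs (flag names are from the wget man page; I haven't confirmed this actually produces a viewable offline copy):

    # What I run now: saves only the bare HTML
    wget https://askubuntu.com/posts/1

    # My guess at a fuller download, from skimming the wget man page:
    #   -p  (--page-requisites)   also fetch the images, CSS and JS the page uses
    #   -k  (--convert-links)     rewrite links so the saved copy works offline
    #   -E  (--adjust-extension)  save pages with an .html extension where needed
    #   -H  (--span-hosts)        allow requisites hosted on other domains
    wget -p -k -E -H https://askubuntu.com/posts/1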
Sometimes I also want to save a sequence of numbered pages in one shot (like mirroring), e.g. https://askubuntu.com/posts/1, https://askubuntu.com/posts/2, https://askubuntu.com/posts/3, https://askubuntu.com/posts/4 ..
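For that case I imagine a shell loop or brace expansion would do it; a rough, untested sketch of what I have in mind (using the example range above):

    # Brace expansion: fetch posts 1 through 4 in one shot
    wget -p -k -E https://askubuntu.com/posts/{1..4}

    # Or an explicit loop, easier to adapt to other ranges
    for i in $(seq 1 4); do
        wget -p -k -E "https://askubuntu.com/posts/$i"
    done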
How can I bulk-save web pages together with all the files needed to view them properly?