Feed Crawler
============
Download all links from a feed using httrack. This is the engine behind the
"Cache" feature of the Semantic Scuttle instance at https://links.sarava.org.
Usage
-----
Place this script somewhere on the system and set up a cron job like this:
`*/5 * * * * /var/sites/arquivo/httracker/httracker &> /dev/null`
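The script's body is not shown here, but the crawl step it performs can be
sketched roughly as follows. This is an illustrative sketch only, not the
actual httracker code: the feed is assumed to be RSS, and the function names,
the destination path, and the exact httrack invocation are assumptions.

```python
"""Illustrative sketch of the crawl step: fetch a feed, extract its
links, and hand each one to httrack for mirroring."""
import subprocess
import urllib.request
import xml.etree.ElementTree as ET


def feed_links(feed_xml):
    # Collect the text of every <link> element in the feed document.
    root = ET.fromstring(feed_xml)
    return [el.text for el in root.iter("link") if el.text]


def crawl(feed_url, dest):
    # Download the feed, then mirror each linked page with httrack.
    # "-O" sets httrack's output path; other options are omitted here.
    feed = urllib.request.urlopen(feed_url).read()
    for url in feed_links(feed):
        subprocess.run(["httrack", url, "-O", dest], check=True)
```

Run periodically from cron (as in the crontab line above), each invocation
would re-fetch the feed and mirror any links it contains into the cache
directory.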
TODO
----
- Include all sites already downloaded by scuttler.
- Support for other fetchers like youtube-dl and quvi.
- Rename project and repository to "httruta".