new component - upload to NAS

This commit is contained in:
Remy Moll
2022-07-23 17:21:00 +02:00
parent 79e3f54955
commit 8e46f30f07
29 changed files with 132 additions and 63 deletions


@@ -1,41 +1,65 @@
# COSS_ARCHIVING
A utility to
* fetch article requests from slack
* generate pdfs for them
* compress them
* send them via slack + email
* upload them to the COSS NAS
... fully automatically. Run it now, thank me later.
---
## Running - Docker compose
The included `docker-compose` file is now necessary for easy orchestration of the various services.
All relevant passthroughs and mounts are specified through the env-file, for which I configured 4 versions:
* production
* debug (development in general)
* upload
* check
These files will have to be adapted to your individual setup but won't change significantly once set up.
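Such an env-file is just a list of `KEY=value` pairs that `docker compose` substitutes into the service definitions. The following is only an illustrative sketch with made-up variable names; the real keys are whatever the `docker-compose` file in this repository expects:

```sh
# env/production -- hypothetical example, adapt names and paths to the compose file
DEBUG=false                         # the utility's sandbox switch, see the mode overview below
CONTAINER_DATA=/mnt/coss/archiving  # host directory mounted into the container for downloads
# X11-related variables are only needed for check mode (see below)
```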
### Overview of the modes
The production mode performs all automatic actions and therefore does not require any manual intervention. It queries the slack workspace, adds the new requests to the database, downloads all files and metadata, uploads the urls to archive.org and sends out the downloaded article. As a last step the newly created file is synced to the COSS-NAS.
The debug mode is more sophisticated and allows for big code changes without the need to rebuild the image: it directly mounts the code-directory into the container. As a failsafe the environment variable `DEBUG=true` is set. The whole utility is then run in a sandbox environment (slack-channel, database, email) so that Dirk is not affected by any mishaps.
The check mode is less sophisticated, but it displays the downloaded articles on the host for visual verification. This requires X11 passthrough.
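The X11 passthrough amounts to handing the container access to the host's display. A generic host-side sketch (not this repository's exact configuration) looks like this:

```sh
# let local containers connect to the host's X server
xhost +local:
# in addition, DISPLAY is passed into the container via the env-file,
# and the X socket (typically /tmp/.X11-unix) is bind-mounted by the compose file
```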
Upload mode is much simpler: it goes over the existing database and retries the upload to archive.org for those articles where it has not yet occurred (archive.org is slow, and the other operations usually finish before that queue is consumed).
* For normal `production` mode, run:
`docker compose --env-file env/production up`
* For `debug` mode, you will likely want interactivity, so you need to run:
`docker compose --env-file env/debug up -d && docker compose --env-file env/debug exec news_fetch bash && docker compose --env-file env/debug down`
which should automatically shut down the containers once you are done (`ctrl+d` to exit the container shell). If not, re-run `docker compose --env-file env/debug down` manually.
> Note:
> The live-mounted code is now under `/code`. Note that the `DEBUG=true` environment variable is still set. If you want to test things on production, run `export DEBUG=false`. Running `python runner.py` will then execute the newly written code, but with the production database and storage. (A short sketch of this in-container workflow follows after this list.)
* For `check` mode, some env-variables are also changed and you still require interactivity. You don't need the geckodriver service, however. The simplest way is to run
`docker compose --env-file env/check run news_fetch`
* Finally, `upload` mode requires neither interactivity nor additional services. Simply run:
`docker compose --env-file env/upload run news_fetch`
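As mentioned in the debug note above, a typical iteration inside the debug container is nothing more than the following (purely illustrative):

```sh
cd /code            # the live-mounted source
python runner.py    # runs against the sandbox environment, since DEBUG=true
# only if you really want to test against production data:
export DEBUG=false
python runner.py    # new code, but production database and storage
```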
## Building
> The software (firefox, selenium, python) changes frequently. For non-breaking changes it is useful to regularly do a clean build of the docker image! This is also crucial for updating the code itself.
In docker, simply run:
`docker build -t auto_news --no-cache .`
where the `Dockerfile` has to be in the working directory.
In docker compose, run
`docker compose --env-file env/production build`
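For example, a clean rebuild followed by a production run can be chained in one line; the `--no-cache` flag works for `docker compose build` just as it does for plain `docker build`:

```sh
docker compose --env-file env/production build --no-cache \
  && docker compose --env-file env/production up
```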
@@ -43,10 +67,6 @@ In docker compose, run
## Roadmap:
[_] handle paywalled sites like faz, spiegel, ... through their dedicated sites (see nexisuni.com for instance), which are available through the ETH network