32 Commits

Author SHA1 Message Date
ced4d63382 Merge pull request 'Linting src in backend' (#40) from backend/linting-src into main
Some checks failed
Build and deploy the backend to production / Build and push image (push) Successful in 1m59s
/ push-to-remote (push) Failing after 13s
Build and deploy the backend to production / Deploy to production (push) Successful in 14s
Reviewed-on: #40
2024-12-01 17:01:32 +00:00
70a93c7143 shorter lines
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m27s
Run linting on the backend code / Build (pull_request) Failing after 36s
Run testing on the backend code / Build (pull_request) Failing after 1m25s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 15s
2024-11-30 18:27:52 +01:00
eec3be5122 adding pylint control to disable E0401 error
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m36s
Run linting on the backend code / Build (pull_request) Failing after 29s
Run testing on the backend code / Build (pull_request) Failing after 1m10s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-30 18:00:18 +01:00
41e2746d82 more testing and better pylint score
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m48s
Run linting on the backend code / Build (pull_request) Failing after 29s
Run testing on the backend code / Build (pull_request) Failing after 1m13s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 47s
2024-11-30 17:55:33 +01:00
4f169c483e first pylint correction
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m26s
Run linting on the backend code / Build (pull_request) Failing after 30s
Run testing on the backend code / Build (pull_request) Successful in 2m12s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 15s
2024-11-22 11:55:25 +01:00
02e9b13a98 Merge pull request 'auto test report' (#39) from backend/auto-test-report into main
Reviewed-on: #39
2024-11-22 10:09:15 +00:00
1b955f249e skipping boundaries
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m41s
Run linting on the backend code / Build (pull_request) Failing after 30s
Run testing on the backend code / Build (pull_request) Successful in 2m18s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-22 11:03:13 +01:00
d35ff30864 better details handling
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m31s
Run linting on the backend code / Build (pull_request) Failing after 30s
Run testing on the backend code / Build (pull_request) Failing after 1m47s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 15s
2024-11-21 15:36:20 +01:00
b56647f12e pylint update and better tour viz
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m41s
Run linting on the backend code / Build (pull_request) Failing after 30s
Run testing on the backend code / Build (pull_request) Failing after 1m47s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 38s
2024-11-21 15:28:12 +01:00
a718b883d7 pylint update
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m19s
Run linting on the backend code / Build (pull_request) Failing after 26s
Run testing on the backend code / Build (pull_request) Failing after 1m45s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-21 13:55:13 +01:00
29a0c715dd corrected report path
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m9s
Run linting on the backend code / Build (pull_request) Failing after 26s
Run testing on the backend code / Build (pull_request) Successful in 1m59s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-20 17:21:54 +01:00
f939b1bd6a bellecour set 60 minutes to test
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m31s
Run linting on the backend code / Build (pull_request) Failing after 26s
Run testing on the backend code / Build (pull_request) Successful in 1m59s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 17s
2024-11-20 16:49:08 +01:00
840eb40247 auto test report
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m57s
Run linting on the backend code / Build (pull_request) Failing after 25s
Run testing on the backend code / Build (pull_request) Failing after 1m11s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 20s
2024-11-20 16:47:51 +01:00
881f6a901d Merge pull request 'fix/backend/veiwpoint-nodes-and-churches' (#38) from fix/backend/veiwpoint-nodes-and-churches into main
Reviewed-on: #38
2024-11-18 16:09:19 +00:00
2810d93f98 migrated tests
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m56s
Run linting on the backend code / Build (pull_request) Failing after 25s
Run testing on the backend code / Build (pull_request) Failing after 1m9s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 15s
2024-11-18 16:52:01 +01:00
4305b21329 first fixes 2024-11-18 15:39:20 +01:00
e18a9c63e6 Merge pull request 'feature/backend/better_time_management' (#34) from feature/backend/better_time_management into main
Reviewed-on: #34
2024-11-06 13:36:51 +00:00
5fcadbe8d8 extended backend readme
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m8s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-05 18:05:34 +01:00
5afb646381 Update backend/README.md
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m37s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 15s
2024-11-05 14:46:06 +00:00
d0e837377b Update backend/README.md
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m12s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-05 14:45:47 +00:00
d94c69c545 somewhat better durations
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m39s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-04 19:59:52 +01:00
9e595ad933 fixed the error
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m40s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-04 17:32:38 +01:00
53d56f3e30 remove cmakelists from vscode settings
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m41s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-11-04 16:55:23 +01:00
f39d02f967 better readme setup backend
All checks were successful
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m15s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 14s
2024-10-29 12:14:12 +01:00
94a7adac6c Merge pull request 'build id fixes' (#32) from fix/frontend/yet-another-fastlane-fix into main
Some checks failed
Build and deploy the backend to production / Deploy to production (push) Has been cancelled
Build and deploy the backend to production / Build and push image (push) Has been cancelled
/ push-to-remote (push) Successful in 13s
Reviewed-on: #32
2024-10-22 14:24:23 +00:00
4d99715447 build id fixes
Some checks failed
Build and release debug APK / Build APK (pull_request) Has been cancelled
2024-10-22 16:23:53 +02:00
48555e7429 Merge pull request 'adjust path' (#31) from fix/frontend/ci-adjustments into main
Some checks failed
Build and deploy the backend to production / Deploy to production (push) Has been cancelled
Build and deploy the backend to production / Build and push image (push) Has been cancelled
/ push-to-remote (push) Failing after 11s
Reviewed-on: #31
2024-10-22 13:52:32 +00:00
8b24876fd1 adjust path
Some checks failed
Build and release debug APK / Build APK (pull_request) Has been cancelled
2024-10-22 15:52:03 +02:00
c832461f29 Merge pull request 'fixes for fastlane and gitea actions' (#30) from fix/frontend/ci-adjustments into main
Reviewed-on: #30
2024-10-22 13:26:02 +00:00
6f1a019d4f fixes for fastlane and gitea actions
All checks were successful
Build and release debug APK / Build APK (pull_request) Successful in 7m36s
Build and deploy the backend to production / Build and push image (push) Successful in 1m44s
/ push-to-remote (push) Successful in 12s
Build and deploy the backend to production / Deploy to production (push) Successful in 15s
2024-10-22 15:11:38 +02:00
e6ccb7078b Merge pull request 'also upload aab' (#29) from fix/frontend/fastlane-config into main
Some checks are pending
Build and deploy the backend to production / Deploy to production (push) Blocked by required conditions
Build and deploy the backend to production / Build and push image (push) Waiting to run
/ push-to-remote (push) Successful in 12s
Reviewed-on: #29
2024-10-22 12:51:22 +00:00
84839c5a02 also upload aab
Some checks failed
Build and release APK / Build APK (pull_request) Has been cancelled
2024-10-22 14:50:59 +02:00
36 changed files with 3397 additions and 769 deletions


@@ -0,0 +1,34 @@
on:
  pull_request:
    branches:
      - main
    paths:
      - backend/**

name: Run linting on the backend code

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: https://gitea.com/actions/checkout@v4
      - name: Install dependencies
        run: |
          apt-get update && apt-get install -y python3 python3-pip
          pip install pipenv
      - name: Install packages
        run: |
          ls -la
          # only install dev-packages
          pipenv install --categories=dev-packages
          pipenv run pip freeze
        working-directory: backend
      - name: Run linter
        run: pipenv run pylint src
        working-directory: backend
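Commit eec3be5122 above mentions silencing E0401 with an inline pylint control. As a hedged sketch, such a pragma looks like this in a module (the import shown is illustrative, not taken from the repository):

```python
# The CI runner resolves imports differently from the container image,
# so the import-error check (E0401) is silenced for this line only.
from structs.landmark import Landmark  # pylint: disable=import-error
```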


@@ -0,0 +1,40 @@
on:
  pull_request:
    branches:
      - main
    paths:
      - backend/**

name: Run testing on the backend code

jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - uses: https://gitea.com/actions/checkout@v4
      - name: Install dependencies
        run: |
          apt-get update && apt-get install -y python3 python3-pip
          pip install pipenv
      - name: Install packages
        run: |
          ls -la
          # install all packages, including dev-packages
          pipenv install --dev
          pipenv run pip freeze
        working-directory: backend
      - name: Run Tests
        run: pipenv run pytest src --html=report.html --self-contained-html
        working-directory: backend
      - name: Upload HTML report
        if: always()
        uses: https://gitea.com/actions/upload-artifact@v3
        with:
          name: pytest-html-report
          path: backend/report.html
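The same test step can be reproduced outside the runner; a minimal sketch, assuming pytest and pytest-html are installed in the pipenv environment and the script is run from `backend/`:

```python
# Equivalent to: pipenv run pytest src --html=report.html --self-contained-html
import pytest

exit_code = pytest.main(["src", "--html=report.html", "--self-contained-html"])
print(f"pytest exited with code {exit_code}")
```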


@@ -6,7 +6,7 @@ on:
     - frontend/**
-name: Build and release APK
+name: Build and release debug APK
 jobs:
   build:
@@ -55,7 +55,7 @@ jobs:
         ls -lah android
       working-directory: ./frontend
-    - run: flutter build apk --release --split-per-abi --build-number=${{ gitea.run_number }}
+    - run: flutter build apk --debug --split-per-abi --build-number=${{ gitea.run_number }}
       working-directory: ./frontend
     - name: Upload APKs to artifacts

.vscode/launch.json (vendored, 6 lines changed)

@@ -14,9 +14,9 @@
       "DEBUG": "true"
     },
     "args": [
-      "--app-dir",
-      "src",
-      "main:app",
+      // "--app-dir",
+      // "src",
+      "src.main:app",
       "--reload",
     ],
     "jinja": true,

.vscode/settings.json (vendored, new file, 3 lines)

@@ -0,0 +1,3 @@
{
  "cmake.ignoreCMakeListsMissing": true
}

backend/.pylintrc (new file, 648 lines)

@@ -0,0 +1,648 @@
[MAIN]
# Analyse import fallback blocks. This can be used to support both Python 2 and
# 3 compatible code, which means that the block might have code that exists
# only in one or another interpreter, leading to false positives when analysed.
analyse-fallback-blocks=no
# Clear in-memory caches upon conclusion of linting. Useful if running pylint
# in a server-like mode.
clear-cache-post-run=no
# Load and enable all available extensions. Use --list-extensions to see a list
# all available extensions.
#enable-all-extensions=
# In error mode, messages with a category besides ERROR or FATAL are
# suppressed, and no reports are done by default. Error mode is compatible with
# disabling specific errors.
#errors-only=
# Always return a 0 (non-error) status code, even if lint errors are found.
# This is primarily useful in continuous integration scripts.
#exit-zero=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code.
extension-pkg-allow-list=
# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code. (This is an alternative name to extension-pkg-allow-list
# for backward compatibility.)
extension-pkg-whitelist=
# Return non-zero exit code if any of these messages/categories are detected,
# even if score is above --fail-under value. Syntax same as enable. Messages
# specified are enabled, while categories only check already-enabled messages.
fail-on=
# Specify a score threshold under which the program will exit with error.
fail-under=10
# Interpret the stdin as a python script, whose filename needs to be passed as
# the module_or_package argument.
#from-stdin=
# Files or directories to be skipped. They should be base names, not paths.
ignore=CVS
# Add files or directories matching the regular expressions patterns to the
# ignore-list. The regex matches against paths and can be in Posix or Windows
# format. Because '\\' represents the directory delimiter on Windows systems,
# it can't be used as an escape character.
ignore-paths=
# Files or directories matching the regular expression patterns are skipped.
# The regex matches against base names, not paths. The default value ignores
# Emacs file locks
ignore-patterns=^\.#
# List of module names for which member attributes should not be checked and
# will not be imported (useful for modules/projects where namespaces are
# manipulated during runtime and thus existing member attributes cannot be
# deduced by static analysis). It supports qualified module names, as well as
# Unix pattern matching.
ignored-modules=
# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use, and will cap the count on Windows to
# avoid hangs.
jobs=1
# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
limit-inference-results=100
# List of plugins (as comma separated values of python module names) to load,
# usually to register additional checkers.
load-plugins=
# Pickle collected data for later comparisons.
persistent=yes
# Resolve imports to .pyi stubs if available. May reduce no-member messages and
# increase not-an-iterable messages.
prefer-stubs=no
# Minimum Python version to use for version dependent checks. Will default to
# the version used to run pylint.
py-version=3.12
# Discover python modules and packages in the file system subtree.
recursive=no
# Add paths to the list of the source roots. Supports globbing patterns. The
# source root is an absolute path or a path relative to the current working
# directory used to determine a package namespace for modules located under the
# source root.
source-roots=
# When enabled, pylint would attempt to guess common misconfiguration and emit
# user-friendly hints instead of false-positive error messages.
suggestion-mode=yes
# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
unsafe-load-any-extension=no
# In verbose mode, extra non-checker-related info will be displayed.
#verbose=
[BASIC]
# Naming style matching correct argument names.
argument-naming-style=snake_case
# Regular expression matching correct argument names. Overrides argument-
# naming-style. If left empty, argument names will be checked with the set
# naming style.
#argument-rgx=
# Naming style matching correct attribute names.
attr-naming-style=snake_case
# Regular expression matching correct attribute names. Overrides attr-naming-
# style. If left empty, attribute names will be checked with the set naming
# style.
#attr-rgx=
# Bad variable names which should always be refused, separated by a comma.
bad-names=foo,
bar,
baz,
toto,
tutu,
tata
# Bad variable names regexes, separated by a comma. If names match any regex,
# they will always be refused
bad-names-rgxs=
# Naming style matching correct class attribute names.
class-attribute-naming-style=any
# Regular expression matching correct class attribute names. Overrides class-
# attribute-naming-style. If left empty, class attribute names will be checked
# with the set naming style.
#class-attribute-rgx=
# Naming style matching correct class constant names.
class-const-naming-style=UPPER_CASE
# Regular expression matching correct class constant names. Overrides class-
# const-naming-style. If left empty, class constant names will be checked with
# the set naming style.
#class-const-rgx=
# Naming style matching correct class names.
class-naming-style=PascalCase
# Regular expression matching correct class names. Overrides class-naming-
# style. If left empty, class names will be checked with the set naming style.
#class-rgx=
# Naming style matching correct constant names.
const-naming-style=UPPER_CASE
# Regular expression matching correct constant names. Overrides const-naming-
# style. If left empty, constant names will be checked with the set naming
# style.
#const-rgx=
# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1
# Naming style matching correct function names.
function-naming-style=snake_case
# Regular expression matching correct function names. Overrides function-
# naming-style. If left empty, function names will be checked with the set
# naming style.
#function-rgx=
# Good variable names which should always be accepted, separated by a comma.
good-names=i,
j,
k,
ex,
Run,
_
# Good variable names regexes, separated by a comma. If names match any regex,
# they will always be accepted
good-names-rgxs=
# Include a hint for the correct naming format with invalid-name.
include-naming-hint=no
# Naming style matching correct inline iteration names.
inlinevar-naming-style=any
# Regular expression matching correct inline iteration names. Overrides
# inlinevar-naming-style. If left empty, inline iteration names will be checked
# with the set naming style.
#inlinevar-rgx=
# Naming style matching correct method names.
method-naming-style=snake_case
# Regular expression matching correct method names. Overrides method-naming-
# style. If left empty, method names will be checked with the set naming style.
#method-rgx=
# Naming style matching correct module names.
module-naming-style=snake_case
# Regular expression matching correct module names. Overrides module-naming-
# style. If left empty, module names will be checked with the set naming style.
#module-rgx=
# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=
# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=^_
# List of decorators that produce properties, such as abc.abstractproperty. Add
# to this list to register other decorators that produce valid properties.
# These decorators are taken in consideration only for invalid-name.
property-classes=abc.abstractproperty
# Regular expression matching correct type alias names. If left empty, type
# alias names will be checked with the set naming style.
#typealias-rgx=
# Regular expression matching correct type variable names. If left empty, type
# variable names will be checked with the set naming style.
#typevar-rgx=
# Naming style matching correct variable names.
variable-naming-style=snake_case
# Regular expression matching correct variable names. Overrides variable-
# naming-style. If left empty, variable names will be checked with the set
# naming style.
#variable-rgx=
[CLASSES]
# Warn about protected attribute access inside special methods
check-protected-access-in-special-methods=no
# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,
__new__,
setUp,
asyncSetUp,
__post_init__
# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make,os._exit
# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls
# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs
[DESIGN]
# List of regular expressions of class ancestor names to ignore when counting
# public methods (see R0903)
exclude-too-few-public-methods=
# List of qualified class names to ignore when counting class parents (see
# R0901)
ignored-parents=
# Maximum number of arguments for function / method.
max-args=5
# Maximum number of attributes for a class (see R0902).
max-attributes=7
# Maximum number of boolean expressions in an if statement (see R0916).
max-bool-expr=5
# Maximum number of branch for function / method body.
max-branches=12
# Maximum number of locals for function / method body.
max-locals=15
# Maximum number of parents for a class (see R0901).
max-parents=7
# Maximum number of positional arguments for function / method.
max-positional-arguments=5
# Maximum number of public methods for a class (see R0904).
max-public-methods=20
# Maximum number of return / yield for function / method body.
max-returns=6
# Maximum number of statements in function / method body.
max-statements=50
# Minimum number of public methods for a class (see R0903).
min-public-methods=2
[EXCEPTIONS]
# Exceptions that will emit a warning when caught.
overgeneral-exceptions=builtins.BaseException,builtins.Exception
[FORMAT]
# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=
# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
# tab).
indent-string=' '
# Maximum number of characters on a single line.
max-line-length=105
# Maximum number of lines in a module.
max-module-lines=1000
# Allow the body of a class to be on the same line as the declaration if body
# contains single statement.
single-line-class-stmt=no
# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no
[IMPORTS]
# List of modules that can be imported at any level, not just the top level
# one.
allow-any-import-level=
# Allow explicit reexports by alias from a package __init__.
allow-reexport-from-package=no
# Allow wildcard imports from modules that define __all__.
allow-wildcard-with-all=no
# Deprecated modules which should not be used, separated by a comma.
deprecated-modules=
# Output a graph (.gv or any supported image format) of external dependencies
# to the given file (report RP0402 must not be disabled).
ext-import-graph=
# Output a graph (.gv or any supported image format) of all (i.e. internal and
# external) dependencies to the given file (report RP0402 must not be
# disabled).
import-graph=
# Output a graph (.gv or any supported image format) of internal dependencies
# to the given file (report RP0402 must not be disabled).
int-import-graph=
# Force import order to recognize a module as part of the standard
# compatibility libraries.
known-standard-library=
# Force import order to recognize a module as part of a third party library.
known-third-party=enchant
# Couples of modules and preferred modules, separated by a comma.
preferred-modules=
[LOGGING]
# The type of string formatting that logging methods do. `old` means using %
# formatting, `new` is for `{}` formatting.
logging-format-style=old
# Logging modules to check that the string format arguments are in logging
# function parameter format.
logging-modules=logging
[MESSAGES CONTROL]
# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, CONTROL_FLOW, INFERENCE, INFERENCE_FAILURE,
# UNDEFINED.
confidence=HIGH,
CONTROL_FLOW,
INFERENCE,
INFERENCE_FAILURE,
UNDEFINED
# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then re-enable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W".
disable=raw-checker-failed,
bad-inline-option,
locally-disabled,
file-ignored,
suppressed-message,
useless-suppression,
deprecated-pragma,
use-symbolic-message-instead,
use-implicit-booleaness-not-comparison-to-string,
use-implicit-booleaness-not-comparison-to-zero,
import-error
# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time (only on the command line, not in the configuration file where
# it should appear only once). See also the "--disable" option for examples.
enable=
[METHOD_ARGS]
# List of qualified names (i.e., library.method) which require a timeout
# parameter e.g. 'requests.api.get,requests.api.post'
timeout-methods=requests.api.delete,requests.api.get,requests.api.head,requests.api.options,requests.api.patch,requests.api.post,requests.api.put,requests.api.request
[MISCELLANEOUS]
# List of note tags to take in consideration, separated by a comma.
notes=FIXME,
XXX,
TODO
# Regular expression of note tags to take in consideration.
notes-rgx=
[REFACTORING]
# Maximum number of nested blocks for function / method body
max-nested-blocks=5
# Complete name of functions that never returns. When checking for
# inconsistent-return-statements if a never returning function is called then
# it will be considered as an explicit return statement and no message will be
# printed.
never-returning-functions=sys.exit,argparse.parse_error
# Let 'consider-using-join' be raised when the separator to join on would be
# non-empty (resulting in expected fixes of the type: ``"- " + " -
# ".join(items)``)
suggest-join-with-non-empty-separator=yes
[REPORTS]
# Python expression which should return a score less than or equal to 10. You
# have access to the variables 'fatal', 'error', 'warning', 'refactor',
# 'convention', and 'info' which contain the number of messages in each
# category, as well as 'statement' which is the total number of statements
# analyzed. This score is used by the global evaluation report (RP0004).
evaluation=max(0, 0 if fatal else 10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10))
# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details.
msg-template=
# Set the output format. Available formats are: text, parseable, colorized,
# json2 (improved json format), json (old json format) and msvs (visual
# studio). You can also give a reporter class, e.g.
# mypackage.mymodule.MyReporterClass.
#output-format=
# Tells whether to display a full report or only the messages.
reports=no
# Activate the evaluation score.
score=yes
[SIMILARITIES]
# Comments are removed from the similarity computation
ignore-comments=yes
# Docstrings are removed from the similarity computation
ignore-docstrings=yes
# Imports are removed from the similarity computation
ignore-imports=yes
# Signatures are removed from the similarity computation
ignore-signatures=yes
# Minimum lines number of a similarity.
min-similarity-lines=4
[SPELLING]
# Limits count of emitted suggestions for spelling mistakes.
max-spelling-suggestions=4
# Spelling dictionary name. No available dictionaries : You need to install
# both the python package and the system dependency for enchant to work.
spelling-dict=
# List of comma separated words that should be considered directives if they
# appear at the beginning of a comment and should not be checked.
spelling-ignore-comment-directives=fmt: on,fmt: off,noqa:,noqa,nosec,isort:skip,mypy:
# List of comma separated words that should not be checked.
spelling-ignore-words=
# A path to a file that contains the private dictionary; one word per line.
spelling-private-dict-file=
# Tells whether to store unknown words to the private dictionary (see the
# --spelling-private-dict-file option) instead of raising a message.
spelling-store-unknown-words=no
[STRING]
# This flag controls whether inconsistent-quotes generates a warning when the
# character used as a quote delimiter is used inconsistently within a module.
check-quote-consistency=no
# This flag controls whether the implicit-str-concat should generate a warning
# on implicit string concatenation in sequences defined over several lines.
check-str-concat-over-line-jumps=no
[TYPECHECK]
# List of decorators that produce context managers, such as
# contextlib.contextmanager. Add to this list to register other decorators that
# produce valid context managers.
contextmanager-decorators=contextlib.contextmanager
# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E1101 when accessed. Python regular
# expressions are accepted.
generated-members=
# Tells whether to warn about missing members when the owner of the attribute
# is inferred to be None.
ignore-none=yes
# This flag controls whether pylint should warn about no-member and similar
# checks whenever an opaque object is returned when inferring. The inference
# can return multiple potential results while evaluating a Python object, but
# some branches might not be evaluated, which results in partial inference. In
# that case, it might be useful to still emit no-member and other checks for
# the rest of the inferred objects.
ignore-on-opaque-inference=yes
# List of symbolic message names to ignore for Mixin members.
ignored-checks-for-mixins=no-member,
not-async-context-manager,
not-context-manager,
attribute-defined-outside-init
# List of class names for which member attributes should not be checked (useful
# for classes with dynamically set attributes). This supports the use of
# qualified names.
ignored-classes=optparse.Values,thread._local,_thread._local,argparse.Namespace
# Show a hint with possible names when a member name was not found. The aspect
# of finding the hint is based on edit distance.
missing-member-hint=yes
# The minimum edit distance a name should have in order to be considered a
# similar match for a missing member name.
missing-member-hint-distance=1
# The total number of similar names that should be taken in consideration when
# showing a hint for a missing member.
missing-member-max-choices=1
# Regex pattern to define which classes are considered mixins.
mixin-class-rgx=.*[Mm]ixin
# List of decorators that change the signature of a decorated function.
signature-mutators=
[VARIABLES]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid defining new builtins when possible.
additional-builtins=
# Tells whether unused global variables should be treated as a violation.
allow-global-unused-variables=yes
# List of names allowed to shadow builtins
allowed-redefined-builtins=
# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,
_cb
# A regular expression matching the name of dummy variables (i.e. expected to
# not be used).
dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
# Argument names that match this expression will be ignored.
ignored-argument-names=_.*|^ignored_|^unused_
# Tells whether we should check for unused import in __init__ files.
init-import=no
# List of qualified module names which can have objects that can redefine
# builtins.
redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
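The `evaluation` expression in the [REPORTS] section above is what produces the "pylint score" the commit messages refer to; a small sketch of the arithmetic (the message counts are made up):

```python
# Reproduces the default pylint evaluation formula from the [REPORTS] section.
def pylint_score(fatal, error, warning, refactor, convention, statement):
    if fatal:
        return 0.0
    penalty = (5 * error + warning + refactor + convention) / statement * 10
    return max(0.0, 10.0 - penalty)

# Hypothetical run: 2 errors and 3 warnings across 100 statements.
print(pylint_score(0, 2, 3, 0, 0, 100))  # 10 - (13 / 100) * 10 = 8.7
```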


@@ -4,6 +4,14 @@ verify_ssl = true
 name = "pypi"

 [dev-packages]
 pylint = "*"
+pytest = "*"
+tomli = "*"
+httpx = "*"
+exceptiongroup = "*"
+pytest-html = "*"
+typing-extensions = "*"
+dill = "*"

 [packages]
 numpy = "*"

backend/Pipfile.lock (generated, 1336 lines changed): diff suppressed because it is too large


@@ -1,12 +1,37 @@
# Backend
This repository contains the backend code for the application. It utilizes FastAPI that allows to quickly create a RESTful API that exposes the endpoints of the route optimizer.
This repository contains the backend code for the application. It utilizes **FastAPI** to quickly create a RESTful API that exposes the endpoints of the route optimizer.
## Getting Started
- The code of the python application is located in the `src` directory.
- Package management is handled with `pipenv` and the dependencies are listed in the `Pipfile`.
- Since the application is aimed to be deployed in a container, the `Dockerfile` is provided to build the image.
### Directory Structure
- The code for the Python application is located in the `src` directory.
- Package management is handled with **pipenv**, and the dependencies are listed in the `Pipfile`.
- Since the application is designed to be deployed in a container, the `Dockerfile` is provided to build the image.
### Setting Up the Development Environment
To set up your development environment using **pipenv**, follow these steps:
1. Install `pipenv` by running:
```bash
sudo apt install pipenv
```
2. Create and activate a virtual environment:
```bash
pipenv shell
```
3. Install the dependencies listed in the `Pipfile`:
```bash
pipenv install
```
4. The virtual environment will be created under:
```bash
~/.local/share/virtualenvs/...
```
### Deployment
To deploy the backend Docker container, we use Kubernetes. Modifications to the backend are automatically pushed to a two-stage environment through the CI pipeline. See [deployment/README](deployment/README.md) for further information.
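With the environment set up, the API can be launched locally the same way the updated `launch.json` above does (a sketch, assuming uvicorn is available in the pipenv environment; `src.main:app` matches the launch configuration):

```python
# Minimal local entry point mirroring the VS Code launch configuration.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("src.main:app", reload=True)
```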

backend/conftest.py (new file, 47 lines)

@@ -0,0 +1,47 @@
import pytest

pytest_plugins = ["pytest_html"]


def pytest_html_report_title(report):
    """modifying the title of html report"""
    report.title = "Backend Testing Report"


def pytest_html_results_table_header(cells):
    cells.insert(2, "<th>Detailed trip</th>")
    cells.insert(3, "<th>Trip Duration</th>")
    cells.insert(4, "<th>Target Duration</th>")
    # rename the column containing execution times to avoid confusion
    cells[5] = "<th>Execution time</th>"


def pytest_html_results_table_row(report, cells):
    trip_details = getattr(report, "trip_details", "N/A")        # Default to "N/A" if no trip data
    trip_duration = getattr(report, "trip_duration", "N/A")      # Default to "N/A" if no trip data
    target_duration = getattr(report, "target_duration", "N/A")  # Default to "N/A" if no trip data
    cells.insert(2, f"<td>{trip_details}</td>")
    cells.insert(3, f"<td>{trip_duration}</td>")
    cells.insert(4, f"<td>{target_duration}</td>")


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    report.description = str(item.function.__doc__)

    # Attach trip_details if it exists
    if hasattr(item, "trip_details"):
        report.trip_details = " - ".join(item.trip_details)  # Convert list to string
    else:
        report.trip_details = "N/A"  # Default if trip_details is not set

    # Attach trip_duration if it exists
    if hasattr(item, "trip_duration"):
        report.trip_duration = item.trip_duration + " min"
    else:
        report.trip_duration = "N/A"  # Default if duration is not set

    # Attach target_duration if it exists
    if hasattr(item, "target_duration"):
        report.target_duration = item.target_duration + " min"
    else:
        report.target_duration = "N/A"  # Default if duration is not set
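For these hooks to display real values, a test has to attach the attributes to its own pytest item; a minimal sketch, assuming the tests reach the item through the built-in `request` fixture:

```python
# Hypothetical test showing how the custom report columns are populated;
# pytest_runtest_makereport reads these attributes off the item.
def test_trip_report_columns(request):
    """Example trip whose metadata ends up in the HTML report."""
    request.node.trip_details = ["start", "Bellecour", "finish"]
    request.node.trip_duration = "55"    # conftest appends " min"
    request.node.target_duration = "60"  # conftest appends " min"
    assert int(request.node.trip_duration) <= int(request.node.target_duration)
```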

backend/report.html (new file, 1094 lines): diff suppressed because one or more lines are too long

backend/src/__init__.py (new, empty file)


@@ -1,6 +1,9 @@
-import logging.config
-from pathlib import Path
+"""Module allowing to access the parameters of route generation"""
+import logging
+import os
+from pathlib import Path

 LOCATION_PREFIX = Path('src')
 PARAMETERS_DIR = LOCATION_PREFIX / 'parameters'
@@ -9,12 +12,10 @@ LANDMARK_PARAMETERS_PATH = PARAMETERS_DIR / 'landmark_parameters.yaml'
 OPTIMIZER_PARAMETERS_PATH = PARAMETERS_DIR / 'optimizer_parameters.yaml'

 cache_dir_string = os.getenv('OSM_CACHE_DIR', './cache')
 OSM_CACHE_DIR = Path(cache_dir_string)

-import logging
 # if we are in a debug session, set verbose and rich logging
 if os.getenv('DEBUG', "false") == "true":
     from rich.logging import RichHandler


@@ -1,14 +1,16 @@
import logging
from fastapi import FastAPI, Query, Body, HTTPException
"""Main app for backend api"""
from structs.landmark import Landmark
from structs.preferences import Preferences
from structs.linked_landmarks import LinkedLandmarks
from structs.trip import Trip
from utils.landmarks_manager import LandmarkManager
from utils.optimizer import Optimizer
from utils.refiner import Refiner
from persistence import client as cache_client
import logging
from fastapi import FastAPI, HTTPException
from .structs.landmark import Landmark
from .structs.preferences import Preferences
from .structs.linked_landmarks import LinkedLandmarks
from .structs.trip import Trip
from .utils.landmarks_manager import LandmarkManager
from .utils.optimizer import Optimizer
from .utils.refiner import Refiner
from .persistence import client as cache_client
logger = logging.getLogger(__name__)
@@ -20,26 +22,54 @@ refiner = Refiner(optimizer=optimizer)
@app.post("/trip/new")
def new_trip(preferences: Preferences, start: tuple[float, float], end: tuple[float, float] | None = None) -> Trip:
'''
def new_trip(preferences: Preferences,
start: tuple[float, float],
end: tuple[float, float] | None = None) -> Trip:
"""
Main function to call the optimizer.
:param preferences: the preferences specified by the user as the post body
:param start: the coordinates of the starting point as a tuple of floats (as url query parameters)
:param end: the coordinates of the finishing point as a tuple of floats (as url query parameters)
:return: the uuid of the first landmark in the optimized route
'''
Args:
preferences : the preferences specified by the user as the post body
start : the coordinates of the starting point
end : the coordinates of the finishing point
Returns:
(uuid) : The uuid of the first landmark in the optimized route
"""
if preferences is None:
raise HTTPException(status_code=406, detail="Preferences not provided")
if preferences.shopping.score == 0 and preferences.sightseeing.score == 0 and preferences.nature.score == 0:
raise HTTPException(status_code=406, detail="All preferences are 0.")
raise HTTPException(status_code=406,
detail="Preferences not provided or incomplete.")
if (preferences.shopping.score == 0 and
preferences.sightseeing.score == 0 and
preferences.nature.score == 0) :
raise HTTPException(status_code=406,
detail="All preferences are 0.")
if start is None:
raise HTTPException(status_code=406, detail="Start coordinates not provided")
raise HTTPException(status_code=406,
detail="Start coordinates not provided")
if not (-90 <= start[0] <= 90 and -180 <= start[1] <= 180):
raise HTTPException(status_code=423,
detail="Start coordinates not in range")
if end is None:
end = start
logger.info("No end coordinates provided. Using start=end.")
start_landmark = Landmark(name='start', type='start', location=(start[0], start[1]), osm_type='start', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
end_landmark = Landmark(name='finish', type='finish', location=(end[0], end[1]), osm_type='end', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
start_landmark = Landmark(name='start',
type='start',
location=(start[0], start[1]),
osm_type='start',
osm_id=0,
attractiveness=0,
must_do=True,
n_tags = 0)
end_landmark = Landmark(name='finish',
type='finish',
location=(end[0], end[1]),
osm_type='end',
osm_id=0,
attractiveness=0,
must_do=True,
n_tags = 0)
# Generate the landmarks from the start location
landmarks, landmarks_short = manager.generate_landmarks_list(
@@ -50,20 +80,22 @@ def new_trip(preferences: Preferences, start: tuple[float, float], end: tuple[fl
# insert start and finish to the landmarks list
landmarks_short.insert(0, start_landmark)
landmarks_short.append(end_landmark)
# First stage optimization
try:
base_tour = optimizer.solve_optimization(preferences.max_time_minute, landmarks_short)
except ArithmeticError:
raise HTTPException(status_code=500, detail="No solution found")
except TimeoutError:
raise HTTPException(status_code=500, detail="Optimzation took too long")
except ArithmeticError as exc:
raise HTTPException(status_code=500, detail="No solution found") from exc
except TimeoutError as exc:
raise HTTPException(status_code=500, detail="Optimzation took too long") from exc
# Second stage optimization
refined_tour = refiner.refine_optimization(landmarks, base_tour, preferences.max_time_minute, preferences.detour_tolerance_minute)
refined_tour = refiner.refine_optimization(landmarks, base_tour,
preferences.max_time_minute,
preferences.detour_tolerance_minute)
linked_tour = LinkedLandmarks(refined_tour)
# upon creation of the trip, persistence of both the trip and its landmarks is ensured
# upon creation of the trip, persistence of both the trip and its landmarks is ensured.
trip = Trip.from_linked_landmarks(linked_tour, cache_client)
return trip
@@ -71,17 +103,36 @@ def new_trip(preferences: Preferences, start: tuple[float, float], end: tuple[fl
#### For already existing trips/landmarks
@app.get("/trip/{trip_uuid}")
def get_trip(trip_uuid: str) -> Trip:
"""
Look-up the cache for a trip that has been previously generated using its identifier.
Args:
trip_uuid (str) : unique identifier for a trip.
Returns:
(Trip) : the corresponding trip.
"""
try:
trip = cache_client.get(f"trip_{trip_uuid}")
return trip
except KeyError:
raise HTTPException(status_code=404, detail="Trip not found")
except KeyError as exc:
raise HTTPException(status_code=404, detail="Trip not found") from exc
@app.get("/landmark/{landmark_uuid}")
def get_landmark(landmark_uuid: str) -> Landmark:
"""
Returns a Landmark from its unique identifier.
Args:
landmark_uuid (str) : unique identifier for a Landmark.
Returns:
(Landmark) : the corresponding Landmark.
"""
try:
landmark = cache_client.get(f"landmark_{landmark_uuid}")
return landmark
except KeyError:
raise HTTPException(status_code=404, detail="Landmark not found")
except KeyError as exc:
raise HTTPException(status_code=404,
detail="Landmark not found") from exc


@@ -45,7 +45,6 @@ sightseeing:
  - gallery
  - artwork
  - aquarium
  historic: ''
  amenity:
  - planetarium
@@ -72,8 +71,10 @@ sightseeing:
  - castle
  - museum
museums:
  tourism:
  - museum
  - aquarium
# to be used later on
restauration:


@@ -1,11 +1,12 @@
 city_bbox_side: 7500 #m
 radius_close_to: 50
-church_coeff: 0.5
+church_coeff: 0.9
 nature_coeff: 1.25
 overall_coeff: 10
 tag_exponent: 1.15
 image_bonus: 10
 viewpoint_bonus: 15
-wikipedia_bonus: 6
+wikipedia_bonus: 4
 name_bonus: 3
 N_important: 40
 pay_bonus: -1


@@ -3,4 +3,4 @@ detour_corridor_width: 300
 average_walking_speed: 4.8
 max_landmarks: 10
 max_landmarks_refiner: 30
-overshoot: 1.8
+overshoot: 1.1


@@ -1,28 +1,74 @@
from pymemcache.client.base import Client
from pymemcache import serde
"""Module used for handling cache"""
import constants
from pymemcache.client.base import Client
from .constants import MEMCACHED_HOST_PATH
class DummyClient:
"""
A dummy in-memory client that mimics the behavior of a memcached client.
This class is designed to simulate the behavior of the `pymemcache.Client`
for testing or development purposes. It stores data in a Python dictionary
and provides methods to set, get, and update key-value pairs.
Attributes:
_data (dict): A dictionary that holds the key-value pairs.
Methods:
set(key, value, **kwargs):
Stores the given key-value pair in the internal dictionary.
set_many(data, **kwargs):
Updates the internal dictionary with multiple key-value pairs.
get(key, **kwargs):
Retrieves the value associated with the given key from the internal
dictionary.
"""
_data = {}
def set(self, key, value, **kwargs):
def set(self, key, value, **kwargs): # pylint: disable=unused-argument
"""
Store a key-value pair in the internal dictionary.
Args:
key: The key for the item to be stored.
value: The value to be stored under the given key.
**kwargs: Additional keyword arguments (unused).
"""
self._data[key] = value
def set_many(self, data, **kwargs):
def set_many(self, data, **kwargs): # pylint: disable=unused-argument
"""
Update the internal dictionary with multiple key-value pairs.
Args:
data: A dictionary containing key-value pairs to be added.
**kwargs: Additional keyword arguments (unused).
"""
self._data.update(data)
def get(self, key, **kwargs):
def get(self, key, **kwargs): # pylint: disable=unused-argument
"""
Retrieve the value associated with the given key.
Args:
key: The key for the item to be retrieved.
**kwargs: Additional keyword arguments (unused).
Returns:
The value associated with the given key if it exists.
"""
return self._data[key]
if constants.MEMCACHED_HOST_PATH is None:
if MEMCACHED_HOST_PATH is None:
client = DummyClient()
else:
client = Client(
constants.MEMCACHED_HOST_PATH,
timeout = 1,
allow_unicode_keys = True,
encoding = 'utf-8',
serde = serde.pickle_serde
MEMCACHED_HOST_PATH,
timeout=1,
allow_unicode_keys=True,
encoding='utf-8'
)
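Both clients expose the same surface, so calling code does not need to know which backend is active; a minimal round-trip sketch (the import path is an assumption):

```python
from src.persistence import client  # DummyClient or memcached-backed Client

# With the real memcached client the value comes back as bytes;
# DummyClient returns it unchanged.
client.set("landmark_demo", "Fourviere")
print(client.get("landmark_demo"))
```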


@@ -1,10 +1,41 @@
"""Definition of the Landmark class to handle visitable objects across the world."""
from typing import Optional, Literal
from uuid import uuid4
from pydantic import BaseModel, Field
from uuid import uuid4
# Output to frontend
class Landmark(BaseModel) :
"""
A class representing a landmark or point of interest (POI) in the context of a trip.
The Landmark class is used to model visitable locations, such as tourist attractions,
natural sites, shopping locations, and start/end points in travel itineraries. It
holds information about the landmark's attributes and supports comparisons and
calculations, such as distance between landmarks.
Attributes:
name (str): The name of the landmark.
type (Literal): The type of the landmark, which can be one of ['sightseeing', 'nature',
'shopping', 'start', 'finish'].
location (tuple): A tuple representing the (latitude, longitude) of the landmark.
osm_type (str): The OpenStreetMap (OSM) type of the landmark.
osm_id (int): The OpenStreetMap (OSM) ID of the landmark.
attractiveness (int): A score representing the attractiveness of the landmark.
n_tags (int): The number of tags associated with the landmark.
image_url (Optional[str]): A URL to an image of the landmark.
website_url (Optional[str]): A URL to the landmark's official website.
description (Optional[str]): A text description of the landmark.
duration (Optional[int]): The estimated time to visit the landmark (in minutes).
name_en (Optional[str]): The English name of the landmark.
uuid (str): A unique identifier for the landmark, generated by default using uuid4.
must_do (Optional[bool]): Whether the landmark is a "must-do" attraction.
must_avoid (Optional[bool]): Whether the landmark should be avoided.
is_secondary (Optional[bool]): Whether the landmark is secondary or less important.
time_to_reach_next (Optional[int]): Estimated time (in minutes) to reach the next landmark.
next_uuid (Optional[str]): UUID of the next landmark in sequence (if applicable).
"""
# Properties of the landmark
name : str
@@ -26,27 +57,63 @@ class Landmark(BaseModel) :
# Additional properties depending on specific tour
must_do : Optional[bool] = False
must_avoid : Optional[bool] = False
is_secondary : Optional[bool] = False # TODO future
is_secondary : Optional[bool] = False
time_to_reach_next : Optional[int] = 0
next_uuid : Optional[str] = None
def __str__(self) -> str:
time_to_next_str = f", time_to_next={self.time_to_reach_next}" if self.time_to_reach_next else ""
is_secondary_str = f", secondary" if self.is_secondary else ""
"""
String representation of the Landmark object.
Returns:
str: A formatted string with the landmark's type, name, location, attractiveness score,
time to the next landmark (if available), and whether the landmark is secondary.
"""
t_to_next_str = f", time_to_next={self.time_to_reach_next}" if self.time_to_reach_next else ""
is_secondary_str = ", secondary" if self.is_secondary else ""
type_str = '(' + self.type + ')'
if self.type in ["start", "finish", "nature", "shopping"] : type_str += '\t '
return f'Landmark{type_str}: [{self.name} @{self.location}, score={self.attractiveness}{time_to_next_str}{is_secondary_str}]'
if self.type in ["start", "finish", "nature", "shopping"] :
type_str += '\t '
return (f'Landmark{type_str}: [{self.name} @{self.location}, '
f'score={self.attractiveness}{t_to_next_str}{is_secondary_str}]')
def distance(self, value: 'Landmark') -> float:
"""
Calculates the squared distance between this landmark and another.
Args:
value (Landmark): Another Landmark object to calculate the distance to.
Returns:
float: The squared Euclidean distance between the two landmarks.
"""
return (self.location[0] - value.location[0])**2 + (self.location[1] - value.location[1])**2
def __hash__(self) -> int:
"""
Generates a hash for the Landmark based on its name.
Returns:
int: The hash of the landmark.
"""
return hash(self.name)
def __eq__(self, value: 'Landmark') -> bool:
"""
Checks equality between two Landmark objects based on UUID, OSM ID, and name.
Args:
value (Landmark): Another Landmark object to compare.
Returns:
bool: True if the landmarks are equal, False otherwise.
"""
# eq and hash must be consistent
# in particular, if two objects are equal, their hash must be equal
# uuid and osm_id are just shortcuts to avoid comparing all the properties
# if they are equal, we know that the name is also equal and in turn the hash is equal
return self.uuid == value.uuid or self.osm_id == value.osm_id or (self.name == value.name and self.distance(value) < 0.001)
return (self.uuid == value.uuid or
self.osm_id == value.osm_id or
(self.name == value.name and self.distance(value) < 0.001))
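A short sketch of the equality semantics the docstring describes (names, coordinates and the import path are illustrative):

```python
from src.structs.landmark import Landmark  # import path is an assumption

# Two records of the same physical landmark: different OSM ids, but the
# names match and the squared distance is far below the 0.001 threshold.
a = Landmark(name='Basilique de Fourviere', type='sightseeing',
             location=(45.7622, 4.8222), osm_type='way', osm_id=1,
             attractiveness=800, n_tags=20)
b = Landmark(name='Basilique de Fourviere', type='sightseeing',
             location=(45.7623, 4.8223), osm_type='way', osm_id=2,
             attractiveness=750, n_tags=18)
print(a.distance(b))  # squared Euclidean distance, about 2e-8
print(a == b)         # True
```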


@@ -1,21 +1,30 @@
"""Linked and ordered list of Landmarks that represents the visiting order."""
from .landmark import Landmark
from utils.get_time_separation import get_time
from ..utils.get_time_separation import get_time
class LinkedLandmarks:
"""
A list of landmarks that are linked together, e.g. in a route.
Each landmark serves as a node in the linked list, but since we expect these to be consumed through the rest API, a pythonic reference to the next landmark is not well suited. Instead we use the uuid of the next landmark to reference the next landmark in the list. This is not very efficient, but appropriate for the expected use case ("short" trips with only a few landmarks).
Each landmark serves as a node in the linked list, but since we expect
these to be consumed through the rest API, a pythonic reference to the next
landmark is not well suited. Instead we use the uuid of the next landmark
to reference the next landmark in the list. This is not very efficient,
but appropriate for the expected use case
("short" trips with onyl few landmarks).
"""
_landmarks = list[Landmark]
total_time: int = 0
def __init__(self, data: list[Landmark] = None) -> None:
"""
Initialize a new LinkedLandmarks object. This expects an ORDERED list of landmarks, where the first landmark is the starting point and the last landmark is the end point.
Initialize a new LinkedLandmarks object. This expects an ORDERED list of landmarks,
where the first landmark is the starting point and the last landmark is the end point.
Args:
data (list[Landmark], optional): The list of landmarks that are linked together. Defaults to None.
data (list[Landmark], optional): The list of landmarks that are linked together.
Defaults to None.
"""
self._landmarks = data if data else []
self._link_landmarks()
@@ -23,7 +32,8 @@ class LinkedLandmarks:
def _link_landmarks(self) -> None:
"""
Create the links between the landmarks in the list by setting their .next_uuid and the .time_to_next attributes.
Create the links between the landmarks in the list by setting their
.next_uuid and the .time_to_next attributes.
"""
# Mark secondary landmarks as such
@@ -35,30 +45,34 @@ class LinkedLandmarks:
time_to_next = get_time(landmark.location, self._landmarks[i + 1].location)
landmark.time_to_reach_next = time_to_next
self.total_time += time_to_next
self.total_time += landmark.duration
self._landmarks[-1].next_uuid = None
self._landmarks[-1].time_to_reach_next = 0
def update_secondary_landmarks(self) -> None:
"""
Mark landmarks with lower importance as secondary.
"""
# Extract the attractiveness scores and sort them in descending order
scores = sorted([landmark.attractiveness for landmark in self._landmarks], reverse=True)
# Determine the 10th highest score
if len(scores) >= 10:
threshold_score = scores[9]
else:
# If there are fewer than 10 landmarks, use the lowest score in the list as the threshold
# If there are fewer than 10 landmarks, use the lowest score as the threshold
threshold_score = min(scores) if scores else 0
# Update 'is_secondary' for landmarks with attractiveness below the threshold score
for landmark in self._landmarks:
if landmark.attractiveness < threshold_score and landmark.type not in ["start", "finish"]:
if (landmark.attractiveness < threshold_score and landmark.type not in ["start", "finish"]):
landmark.is_secondary = True
def __getitem__(self, index: int) -> Landmark:
return self._landmarks[index]
def __str__(self) -> str:
return f"LinkedLandmarks [{' ->'.join([str(landmark) for landmark in self._landmarks])}]"


@@ -1,12 +1,26 @@
from pydantic import BaseModel
"""Defines the Preferences used as input for trip generation."""
from typing import Optional, Literal
from pydantic import BaseModel
class Preference(BaseModel) :
"""
Type of preference.
Attributes:
type: what kind of landmark type.
score: how important that type is.
"""
type: Literal['sightseeing', 'nature', 'shopping', 'start', 'finish']
score: int # score could be from 1 to 5
# Input for optimization
class Preferences(BaseModel) :
""""
Full collection of preferences needed to generate a personalized trip.
"""
# Sightseeing / History & Culture (Musées, bâtiments historiques, opéras, églises)
sightseeing : Preference
@@ -16,5 +30,5 @@ class Preferences(BaseModel) :
# Shopping (rather steer towards shopping areas / commercial streets)
shopping : Preference
max_time_minute: Optional[int] = 6*60
max_time_minute: Optional[int] = 3*60
detour_tolerance_minute: Optional[int] = 0
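A sketch of a fully populated `Preferences` object, matching what the `/trip/new` endpoint expects as its body (scores and the import path are illustrative):

```python
from src.structs.preferences import Preferences, Preference  # path assumed

prefs = Preferences(
    sightseeing=Preference(type='sightseeing', score=5),
    nature=Preference(type='nature', score=3),
    shopping=Preference(type='shopping', score=1),
)
print(prefs.max_time_minute)  # 180, the new 3*60 default
```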


@@ -1,17 +1,31 @@
"""Definition of the Trip class."""
import uuid
from pydantic import BaseModel, Field
from pymemcache.client.base import Client
from .linked_landmarks import LinkedLandmarks
import uuid
class Trip(BaseModel):
""""
A Trip represents the final guided tour that can be passed to frontend.
Attributes:
uuid: unique identifier for this particular trip.
total_time: duration of the trip (in minutes).
first_landmark_uuid: unique identifier of the first Landmark to visit.
Methods:
from_linked_landmarks: create a Trip from LinkedLandmarks object.
"""
uuid: str = Field(default_factory=uuid.uuid4)
total_time: int
first_landmark_uuid: str
@classmethod
def from_linked_landmarks(self, landmarks: LinkedLandmarks, cache_client: Client) -> "Trip":
def from_linked_landmarks(cls, landmarks: LinkedLandmarks, cache_client: Client) -> "Trip":
"""
Initialize a new Trip object and ensure it is stored in the cache.
"""
@@ -22,8 +36,11 @@ class Trip(BaseModel):
# Store the trip in the cache
cache_client.set(f"trip_{trip.uuid}", trip)
# make sure to await the result (noreply=False). Otherwise the cache might not be inplace when the trip is actually requested
cache_client.set_many({f"landmark_{landmark.uuid}": landmark for landmark in landmarks}, expire=3600, noreply=False)
# Make sure to await the result (noreply=False).
# Otherwise the cache might not be in place when the trip is actually requested.
cache_client.set_many({f"landmark_{landmark.uuid}": landmark for landmark in landmarks},
expire=3600, noreply=False)
# is equivalent to:
# for landmark in landmarks:
# cache_client.set(f"landmark_{landmark.uuid}", landmark, expire=3600)

View File

@@ -1,79 +0,0 @@
import logging
import yaml
from utils.landmarks_manager import LandmarkManager
from utils.optimizer import Optimizer
from utils.refiner import Refiner
from structs.landmark import Landmark
from structs.linked_landmarks import LinkedLandmarks
from structs.preferences import Preferences, Preference
logger = logging.getLogger(__name__)
def test(start_coords: tuple[float, float], finish_coords: tuple[float, float] = None) -> list[Landmark]:
manager = LandmarkManager()
optimizer = Optimizer()
refiner = Refiner(optimizer=optimizer)
preferences = Preferences(
sightseeing=Preference(type='sightseeing', score = 5),
nature=Preference(type='nature', score = 5),
shopping=Preference(type='shopping', score = 5),
max_time_minute=100,
detour_tolerance_minute=0
)
# Create start and finish
if finish_coords is None :
finish_coords = start_coords
start = Landmark(name='start', type='start', location=start_coords, osm_type='', osm_id=0, attractiveness=0, n_tags = 0)
finish = Landmark(name='finish', type='finish', location=finish_coords, osm_type='', osm_id=0, attractiveness=0, n_tags = 0)
#finish = Landmark(name='finish', type=LandmarkType(landmark_type='finish'), location=(48.8777055, 2.3640967), osm_type='finish', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
#start = Landmark(name='start', type=LandmarkType(landmark_type='start'), location=(48.847132, 2.312359), osm_type='start', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
#finish = Landmark(name='finish', type=LandmarkType(landmark_type='finish'), location=(48.843185, 2.344533), osm_type='finish', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
#finish = Landmark(name='finish', type=LandmarkType(landmark_type='finish'), location=(48.847132, 2.312359), osm_type='finish', osm_id=0, attractiveness=0, must_do=True, n_tags = 0)
# Generate the landmarks from the start location
landmarks, landmarks_short = manager.generate_landmarks_list(
center_coordinates = start_coords,
preferences = preferences
)
# Store data to file for debug purposes
# write_data(landmarks, "landmarks_Strasbourg.txt")
# Insert start and finish to the landmarks list
landmarks_short.insert(0, start)
landmarks_short.append(finish)
# First stage optimization
base_tour = optimizer.solve_optimization(max_time=preferences.max_time_minute, landmarks=landmarks_short)
# Second stage using linear optimization
refined_tour = refiner.refine_optimization(all_landmarks=landmarks, base_tour=base_tour, max_time = preferences.max_time_minute, detour = preferences.detour_tolerance_minute)
linked_tour = LinkedLandmarks(refined_tour)
total_time = 0
logger.info("Optimized route : ")
for l in linked_tour :
logger.info(f"{l}")
logger.info(f"Estimated length of tour : {linked_tour.total_time} mintutes and visiting {len(linked_tour._landmarks)} landmarks.")
# with open('linked_tour.yaml', 'w') as f:
# yaml.dump(linked_tour.asdict(), f)
return linked_tour
# test(tuple((48.8344400, 2.3220540))) # Café Chez César
# test(tuple((48.8375946, 2.2949904))) # Point random
# test(tuple((47.377859, 8.540585))) # Zurich HB
# test(tuple((45.758217, 4.831814))) # Lyon Bellecour
test(tuple((48.5848435, 7.7332974))) # Strasbourg Gare
# test(tuple((48.2067858, 16.3692340))) # Vienne

View File

@@ -0,0 +1,62 @@
"""Collection of tests to ensure correct handling of invalid input."""
from fastapi.testclient import TestClient
import pytest
from ..main import app
@pytest.fixture(scope="module")
def invalid_client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"start,preferences,status_code",
[
# Invalid case: no preferences at all.
([48.8566, 2.3522], {}, 422),
# Invalid cases: incomplete preferences.
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no shopping
"nature": {"type": "nature", "score": 5},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no nature
"shopping": {"type": "shopping", "score": 5},
}, 422),
([48.084588, 7.280405], {"nature": {"type": "nature", "score": 5}, # no sightseeing
"shopping": {"type": "shopping", "score": 5},
}, 422),
# Invalid cases: non-existent coordinates
([91, 181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 423),
([-91, 181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 423),
([91, -181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 423),
([-91, -181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 423),
]
)
def test_input(invalid_client, start, preferences, status_code): # pylint: disable=redefined-outer-name
"""
Test new trip creation with different sets of preferences and locations.
"""
response = invalid_client.post(
"/trip/new",
json={
"preferences": preferences,
"start": start
}
)
assert response.status_code == status_code

View File

@@ -0,0 +1,98 @@
"""Collection of tests to ensure correct implementation and track progress. """
from fastapi.testclient import TestClient
import pytest
from .test_utils import landmarks_to_osmid, load_trip_landmarks, log_trip_details
from ..main import app
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
def test_turckheim(client, request): # pylint: disable=redefined-outer-name
"""
Test n°1 : Custom test in Turckheim to ensure small villages are also supported.
Args:
client: the TestClient fixture used to call the app.
request: the pytest request fixture, used to attach trip details to the report.
"""
duration_minutes = 15
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.084588, 7.280405]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# checks :
assert response.status_code == 200 # check for successful planning
assert isinstance(landmarks, list) # check that the return type is a list
assert duration_minutes*0.8 < int(result['total_time']) < duration_minutes*1.2
assert len(landmarks) > 2 # check that there is something to visit
def test_bellecour(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°2 : Custom test in Lyon centre to ensure proper decision making in crowded area.
Args:
client: the TestClient fixture used to call the app.
request: the pytest request fixture, used to attach trip details to the report.
"""
duration_minutes = 30
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [45.7576485, 4.8330241]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
osm_ids = landmarks_to_osmid(landmarks)
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# checks :
assert response.status_code == 200 # check for successful planning
assert duration_minutes*0.8 < int(result['total_time']) < duration_minutes*1.2
assert 136200148 in osm_ids # check for Cathédrale St. Jean in trip
# def test_new_trip_single_prefs(client):
# response = client.post(
# "/trip/new",
# json={
# "preferences": {"sightseeing": {"type": "sightseeing", "score": 1},
# "nature": {"type": "nature", "score": 1},
# "shopping": {"type": "shopping", "score": 1},
# "max_time_minute": 360,
# "detour_tolerance_minute": 0},
# "start": [48.8566, 2.3522]
# }
# )
# assert response.status_code == 200
# def test_new_trip_matches_prefs(client):
# pass

View File

@@ -0,0 +1,90 @@
"""Helper methods for testing."""
from typing import List
from fastapi import HTTPException
from ..structs.landmark import Landmark
def landmarks_to_osmid(landmarks: List[Landmark]) -> List[int] :
"""
Convert the list of landmarks into a list containing their osm ids for quick landmark checking.
Args :
landmarks (list): the list of landmarks
Returns :
ids (list) : the list of corresponding OSM ids
"""
ids = []
for landmark in landmarks :
ids.append(landmark.osm_id)
return ids
def fetch_landmark(client, landmark_uuid: str):
"""
Fetch landmark data from the API based on the landmark UUID.
Args:
landmark_uuid (str): The UUID of the landmark.
Returns:
dict: Landmark data fetched from the API.
"""
response = client.get(f"/landmark/{landmark_uuid}")
if response.status_code != 200:
raise HTTPException(status_code=999,
detail=f"Failed to fetch landmark with UUID {landmark_uuid}: {response.status_code}")
json_data = response.json()
if "detail" in json_data:
raise HTTPException(status_code=999, detail=json_data["detail"])
return json_data
def load_trip_landmarks(client, first_uuid: str) -> List[Landmark]:
"""
Load all landmarks for a trip using the response from the API.
Args:
first_uuid (str) : The first UUID of the landmark.
Returns:
landmarks (list) : A list containing all landmarks for the trip.
"""
landmarks = []
next_uuid = first_uuid
while next_uuid is not None:
landmark_data = fetch_landmark(client, next_uuid)
# # Convert UUIDs to strings explicitly
# landmark_data = {
# key: str(value) if isinstance(value, UUID) else value
# for key, value in landmark_data.items()
# }
landmarks.append(Landmark(**landmark_data)) # Create Landmark objects
next_uuid = landmark_data.get('next_uuid') # Prepare for the next iteration
return landmarks
def log_trip_details(request, landmarks: List[Landmark], duration: int, target_duration: int) :
"""
Shows the detailed trip in the HTML test report.
Args:
request:
landmarks (list): the ordered list of visited landmarks
duration (int): the total duration of this trip
target_duration (int): the target duration of this trip
"""
trip_string = [f"{landmark.name} ({landmark.attractiveness} | {landmark.duration}) - {landmark.time_to_reach_next}" for landmark in landmarks]
# Pass additional info to pytest for reporting
request.node.trip_details = trip_string
request.node.trip_duration = str(duration) # result['total_time']
request.node.target_duration = str(target_duration)
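
Attributes stashed on request.node are not rendered by pytest-html on their own; a conftest hook has to copy them from the test item onto the report object. A sketch of that plumbing (an assumption, since the repo's conftest is not part of this diff):

import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # forward the attributes set on request.node by log_trip_details
    report.trip_details = getattr(item, "trip_details", None)
    report.trip_duration = getattr(item, "trip_duration", None)
    report.target_duration = getattr(item, "target_duration", None)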

View File

@@ -1,9 +1,9 @@
import yaml
from math import sin, cos, sqrt, atan2, radians
import constants
from ..constants import OPTIMIZER_PARAMETERS_PATH
with constants.OPTIMIZER_PARAMETERS_PATH.open('r') as f:
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
DETOUR_FACTOR = parameters['detour_factor']
AVERAGE_WALKING_SPEED = parameters['average_walking_speed']
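
The math imports and the two parameters above point to the usual haversine-based walking-time estimate. A sketch of what get_time plausibly computes, stated as an assumption since the function body is not shown in this diff:

EARTH_RADIUS_KM = 6371  # matches the 6371000 m used in the Refiner below

def get_time_sketch(p1: tuple[float, float], p2: tuple[float, float]) -> int:
    """Estimated walking time in minutes between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2)**2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2)**2
    distance_km = EARTH_RADIUS_KM * 2 * atan2(sqrt(a), sqrt(1 - a))
    # straight-line distance inflated by the detour factor, divided by the
    # walking speed (km/h), converted to minutes
    return round(distance_km * DETOUR_FACTOR / AVERAGE_WALKING_SPEED * 60)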

View File

@@ -5,10 +5,11 @@ import logging
from OSMPythonTools.overpass import Overpass, overpassQueryBuilder
from OSMPythonTools.cachingStrategy import CachingStrategy, JSON
from structs.preferences import Preferences
from structs.landmark import Landmark
from ..structs.preferences import Preferences
from ..structs.landmark import Landmark
from .take_most_important import take_most_important
import constants
from ..constants import AMENITY_SELECTORS_PATH, LANDMARK_PARAMETERS_PATH, OPTIMIZER_PARAMETERS_PATH, OSM_CACHE_DIR
# silence the overpass logger
logging.getLogger('OSMPythonTools').setLevel(level=logging.CRITICAL)
@@ -27,10 +28,10 @@ class LandmarkManager:
def __init__(self) -> None:
with constants.AMENITY_SELECTORS_PATH.open('r') as f:
with AMENITY_SELECTORS_PATH.open('r') as f:
self.amenity_selectors = yaml.safe_load(f)
with constants.LANDMARK_PARAMETERS_PATH.open('r') as f:
with LANDMARK_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
self.max_bbox_side = parameters['city_bbox_side']
self.radius_close_to = parameters['radius_close_to']
@@ -39,18 +40,19 @@ class LandmarkManager:
self.overall_coeff = parameters['overall_coeff']
self.tag_exponent = parameters['tag_exponent']
self.image_bonus = parameters['image_bonus']
self.name_bonus = parameters['name_bonus']
self.wikipedia_bonus = parameters['wikipedia_bonus']
self.viewpoint_bonus = parameters['viewpoint_bonus']
self.pay_bonus = parameters['pay_bonus']
self.N_important = parameters['N_important']
with constants.OPTIMIZER_PARAMETERS_PATH.open('r') as f:
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
self.walking_speed = parameters['average_walking_speed']
self.detour_factor = parameters['detour_factor']
self.overpass = Overpass()
CachingStrategy.use(JSON, cacheDir=constants.OSM_CACHE_DIR)
CachingStrategy.use(JSON, cacheDir=OSM_CACHE_DIR)
def generate_landmarks_list(self, center_coordinates: tuple[float, float], preferences: Preferences) -> tuple[list[Landmark], list[Landmark]]:
@@ -61,7 +63,7 @@ class LandmarkManager:
and current location. It scores and corrects these landmarks, removes duplicates, and then selects the most important
landmarks based on a predefined criterion.
Parameters:
Args:
center_coordinates (tuple[float, float]): The latitude and longitude of the center location around which to search.
preferences (Preferences): The user's preference settings that influence the landmark selection.
@@ -94,6 +96,8 @@ class LandmarkManager:
if preferences.shopping.score != 0:
score_function = lambda score: score * 10 * preferences.shopping.score / 5
current_landmarks = self.fetch_landmarks(bbox, self.amenity_selectors['shopping'], preferences.shopping.type, score_function)
# set the visit duration for all shopping activities:
for landmark in current_landmarks : landmark.duration = 45
all_landmarks.update(current_landmarks)
@@ -200,18 +204,29 @@ class LandmarkManager:
"""
return_list = []
if landmarktype == 'nature' : query_conditions = []
else : query_conditions = ['count_tags()>5']
# caution: when given a list of selectors, Overpass searches for elements matching ALL of them simultaneously
# we therefore split the selectors into separate queries and merge the results
for sel in dict_to_selector_list(amenity_selector):
self.logger.debug(f"Current selector: {sel}")
query_conditions = ['count_tags()>5']
element_types = ['way', 'relation']
if 'viewpoint' in sel :
query_conditions = []
element_types.append('node')
query = overpassQueryBuilder(
bbox = bbox,
elementType = ['way', 'relation'],
elementType = element_types,
# selector can in principle be a list already,
# but it generates the intersection of the queries
# we want the union
selector = sel,
conditions = ['count_tags()>5'],
conditions = query_conditions, # except for nature....
includeCenter = True,
out = 'body'
)
@@ -227,18 +242,23 @@ class LandmarkManager:
name = elem.tag('name')
location = (elem.centerLat(), elem.centerLon())
osm_type = elem.type() # Add type: 'way' or 'relation'
osm_id = elem.id() # Add OSM id
# TODO: exclude these from the get go
# skip if imprecise location
# handle imprecise and unnamed locations
if name is None or location[0] is None:
continue
if osm_type == 'node' and 'viewpoint' in elem.tags().values():
name = 'Viewpoint'
name_en = 'Viewpoint'
location = (elem.lat(), elem.lon())
else :
continue
# skip if part of another building
if 'building:part' in elem.tags().keys() and elem.tag('building:part') == 'yes':
continue
osm_type = elem.type() # Add type: 'way' or 'relation'
osm_id = elem.id() # Add OSM id
elem_type = landmarktype # Add the landmark type, e.g. 'sightseeing'
n_tags = len(elem.tags().keys()) # Add number of tags
score = n_tags**self.tag_exponent # Add score
@@ -246,59 +266,78 @@ class LandmarkManager:
image_url = None
name_en = None
# remove specific tags
# Adjust scoring, browse through tag keys
skip = False
for tag in elem.tags().keys():
if "pay" in tag:
# payment options are a good sign
for tag_key in elem.tags().keys():
if "pay" in tag_key:
# payment options are misleading and should not count towards the score.
score += self.pay_bonus
if "disused" in tag:
if "disused" in tag_key:
# skip disused amenities
skip = True
break
if "wiki" in tag:
if "boundary" in tag_key:
# skip "areas" like administrative boundaries and stuff
skip = True
break
if "historic" in tag_key and elem.tag('historic') in ['manor', 'optical_telegraph', 'pound', 'shieling', 'wayside_cross']:
# skip useless amenities
skip = True
break
if "name" in tag_key :
score += self.name_bonus
if "wiki" in tag_key:
# wikipedia entries count more
score += self.wikipedia_bonus
if "viewpoint" in tag:
score += self.viewpoint_bonus
duration = 10
if "image" in tag:
if "image" in tag_key:
# images must count more
score += self.image_bonus
if elem_type != "nature":
if "leisure" in tag and elem.tag('leisure') == "park":
if "leisure" in tag_key and elem.tag('leisure') == "park":
elem_type = "nature"
if landmarktype != "shopping":
if "shop" in tag:
if "shop" in tag_key:
skip = True
break
if tag == "building" and elem.tag('building') in ['retail', 'supermarket', 'parking']:
if tag_key == "building" and elem.tag('building') in ['retail', 'supermarket', 'parking']:
skip = True
break
if tag in ['website', 'contact:website']:
website_url = elem.tag(tag)
if tag == 'image':
# Extract image, website and english name
if tag_key in ['website', 'contact:website']:
website_url = elem.tag(tag_key)
if tag_key == 'image':
image_url = elem.tag('image')
if tag =='name:en':
if tag_key =='name:en':
name_en = elem.tag('name:en')
if skip:
continue
# Don't visit random apartments
if 'apartments' in elem.tags().values():
continue
score = score_function(score)
if "place_of_worship" in elem.tags().values():
score = score * self.church_coeff
duration = 15
duration = 10
if 'viewpoint' in elem.tags().values() :
# viewpoints must count more
score += self.viewpoint_bonus
duration = 10
elif "museum" in elem.tags().values():
score = score * self.church_coeff
elif "museum" in elem.tags().values() or "aquarium" in elem.tags().values() or "planetarium" in elem.tags().values():
duration = 60
else:

View File

@@ -4,9 +4,9 @@ import numpy as np
from scipy.optimize import linprog
from collections import defaultdict, deque
from structs.landmark import Landmark
from ..structs.landmark import Landmark
from .get_time_separation import get_time
import constants
from ..constants import OPTIMIZER_PARAMETERS_PATH
@@ -26,7 +26,7 @@ class Optimizer:
def __init__(self) :
# load parameters from file
with constants.OPTIMIZER_PARAMETERS_PATH.open('r') as f:
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
self.detour_factor = parameters['detour_factor']
self.average_walking_speed = parameters['average_walking_speed']
@@ -487,7 +487,7 @@ class Optimizer:
# Raise error if no solution is found
if not res.success :
raise ArithmeticError("No solution could be found, the problem is overconstrained. Please adapt your must_dos")
raise ArithmeticError("No solution could be found, the problem is overconstrained. Try with a longer trip (>30 minutes).")
# If there is a solution, we're good to go, just check for connectedness
order, circles = self.is_connected(res.x)

View File

@@ -2,11 +2,12 @@ import yaml, logging
from shapely import buffer, LineString, Point, Polygon, MultiPoint, concave_hull
from math import pi
from typing import List
from structs.landmark import Landmark
from ..structs.landmark import Landmark
from . import take_most_important, get_time_separation
from .optimizer import Optimizer
import constants
from ..constants import OPTIMIZER_PARAMETERS_PATH
@@ -24,7 +25,7 @@ class Refiner :
self.optimizer = optimizer
# load parameters from file
with constants.OPTIMIZER_PARAMETERS_PATH.open('r') as f:
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
self.detour_factor = parameters['detour_factor']
self.detour_corridor_width = parameters['detour_corridor_width']
@@ -37,11 +38,11 @@ class Refiner :
Create a corridor around the path connecting the landmarks.
Args:
landmarks (list[Landmark]): the landmark path around which to create the corridor
width (float): Width of the corridor in meters.
landmarks (list[Landmark]) : the landmark path around which to create the corridor
width (float) : width of the corridor in meters.
Returns:
Geometry: A buffered geometry object representing the corridor around the path.
Geometry: a buffered geometry object representing the corridor around the path.
"""
corrected_width = (180*width)/(6371000*pi)
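
The line above converts a corridor width in metres into degrees of latitude on a spherical Earth (R = 6 371 000 m): one degree of latitude spans R*pi/180 metres, so width_deg = 180*width / (R*pi). A quick check for a 1 km corridor:

from math import pi
width = 1000                            # metres
print((180 * width) / (6371000 * pi))   # ~0.008993 degrees of latitude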
@@ -133,6 +134,21 @@ class Refiner :
i += 1
return tour
def integrate_landmarks(self, sub_list: List[Landmark], main_list: List[Landmark]) :
"""
Inserts the 'sub_list' of Landmarks into the 'main_list', leaving the ends untouched.
Args:
sub_list : the list of Landmarks to be inserted inside of the 'main_list'.
main_list : the original list with start and finish.
Returns:
the full list.
"""
sub_list.append(main_list[-1]) # add finish back
return main_list[:-1] + sub_list # create full set of possible landmarks
def find_shortest_path_through_all_landmarks(self, landmarks: list[Landmark]) -> tuple[list[Landmark], Polygon]:
@@ -253,6 +269,11 @@ class Refiner :
except :
better_tour_poly = concave_hull(MultiPoint(coords)) # Create concave hull with "core" of tour leaving out start and finish
xs, ys = better_tour_poly.exterior.xy
"""
ERROR HERE :
Exception has occurred: AttributeError
'LineString' object has no attribute 'exterior'
"""
# reverse the xs and ys
@@ -315,26 +336,30 @@ class Refiner :
self.logger.info(f"Using {len(minor_landmarks)} minor landmarks around the predicted path")
# full set of visitable landmarks
full_set = base_tour[:-1] + minor_landmarks # create full set of possible landmarks (without finish)
full_set.append(base_tour[-1]) # add finish back
# Full set of visitable landmarks.
full_set = self.integrate_landmarks(minor_landmarks, base_tour) # could probably be optimized with less overhead
# get a new tour
# Generate a new tour with the optimizer.
new_tour = self.optimizer.solve_optimization(
max_time = max_time + detour,
landmarks = full_set,
max_landmarks = self.max_landmarks_refiner
)
# If unsuccessful optimization, use the base_tour.
if new_tour is None:
self.logger.warning("No solution found for the refined tour. Returning the initial tour.")
new_tour = base_tour
# If only one landmark, return it.
if len(new_tour) < 4 :
return new_tour
# Find shortest path using the nearest neighbor heuristic
# Find shortest path using the nearest neighbor heuristic.
better_tour, better_poly = self.find_shortest_path_through_all_landmarks(new_tour)
# Fix the tour using Polygons if the path looks weird
# Fix the tour using Polygons if the path looks weird.
# Conditions : circular trip and invalid polygon.
if base_tour[0].location == base_tour[-1].location and not better_poly.is_valid :
better_tour = self.fix_using_polygon(better_tour)

View File

@@ -1,9 +1,9 @@
from structs.landmark import Landmark
from ..structs.landmark import Landmark
def take_most_important(landmarks: list[Landmark], n_important) -> list[Landmark]:
"""
Given a list of landmarks, return the n_important most important landmarks
Parameters:
Args:
landmarks: list[Landmark] - list of landmarks
n_important: int - number of most important landmarks to return
Returns:

View File

@@ -37,7 +37,7 @@ jobs:
REF_NAME: ${{ github.ref_name }}
run:
# remove the 'v' prefix from the tag name (${REF_NAME//v} strips every 'v', not only a leading one)
echo "VERSION_NAME=${REF_NAME//v}" >> $GITHUB_ENV
echo "BUILD_NAME=${REF_NAME//v}" >> $GITHUB_ENV
- name: Load secrets from github
run: |
@@ -53,4 +53,6 @@ jobs:
- name: Run fastlane lane
run: bundle exec fastlane deploy_testing
working-directory: android
# the environment variable VERSION_NAME is implicitly available
env:
BUILD_NUMBER: ${{ github.run_number }}
# BUILD_NAME is implicitly available

View File

@@ -30,14 +30,19 @@ if (flutterVersionName == null) {
def secretPropertiesFile = rootProject.file('secrets.properties')
def fallbackPropertiesFile = rootProject.file('fallback.properties')
def secretProperties = new Properties()
if (secretPropertiesFile.exists()) {
secretPropertiesFile.withReader('UTF-8') { reader ->
secretProperties.load(reader)
}
} else if (fallbackPropertiesFile.exists()) {
fallbackPropertiesFile.withReader('UTF-8') { reader ->
secretProperties.load(reader)
}
} else {
throw new GradleException("Secrets file secrets.properties not found")
throw new GradleException("Secrets file (secrets.properties, fallback.properties) not found")
}

View File

@@ -1 +1,3 @@
# This file mirrors the state of secrets.properties as a reference for the developer.
# It also serves as a fallback for build.gradle.
MAPS_API_KEY=Key

View File

@@ -5,22 +5,28 @@ default_platform(:android)
platform :android do
desc "Deploy a new version as a preview version"
desc "Deploy a new version to closed testing"
lane :deploy_testing do
version_name = ENV["VERSION_NAME"]
build_name = ENV["BUILD_NAME"]
build_number = ENV["BUILD_NUMBER"]
sh(
"flutter",
"build",
"appbundle",
"--release",
"--build-name=#{version_name}",
"--build-name=#{build_name}",
"--build-number=#{build_number}",
)
upload_to_play_store(
track: 'alpha',
skip_upload_apk: true,
skip_upload_changelogs: true,
aab: "../build/app/outputs/bundle/release/app-release.aab",
# this is the default output of flutter build ... --release
# in particular the build folder lies in the flutter root folder
# this is the parent folder for the android folder
)
end
@@ -28,6 +34,7 @@ platform :android do
lane :deploy_release do
gradle(
task: "clean assembleRelease",
# todo update to a flutter call
properties: {
# loaded from environment
"android.injected.version.name" => ENV["VERSION_NAME"],
@@ -37,6 +44,10 @@ platform :android do
track: "production",
skip_upload_apk: true,
skip_upload_changelogs: true,
aab: "../build/app/outputs/bundle/release/app-release.aab",
# this is the default output of flutter build ... --release
# in particular the build folder lies in the flutter root folder
# this is the parent folder for the android folder
)
end
end