31 Commits

Author SHA1 Message Date
54f541382e integrated supabase in payment process
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m13s
Run linting on the backend code / Build (pull_request) Successful in 3m9s
Run testing on the backend code / Build (pull_request) Failing after 2m32s
Build and deploy the backend to staging / Deploy to staging (pull_request) Failing after 35s
2025-10-09 14:31:56 +02:00
29ac462725 forgot the credits lol
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 48s
Run linting on the backend code / Build (pull_request) Successful in 3m9s
Run testing on the backend code / Build (pull_request) Failing after 1m59s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
2025-10-08 17:33:58 +02:00
d374dc333f changed unit_price to float
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 6m14s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
Run linting on the backend code / Build (pull_request) Successful in 1m50s
Run testing on the backend code / Build (pull_request) Failing after 4m1s
2025-10-08 17:31:42 +02:00
ab03cee3e3 strong base for payment handling
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 50s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
Run testing on the backend code / Build (pull_request) Failing after 2m32s
Run linting on the backend code / Build (pull_request) Successful in 2m39s
2025-10-08 17:30:07 +02:00
f86174bc11 overhaul of paypal handler WIP
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 56s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
Run linting on the backend code / Build (pull_request) Successful in 1m54s
Run testing on the backend code / Build (pull_request) Failing after 2m33s
2025-10-04 17:03:36 +02:00
3bdcdea850 overhaul of paypal handler WIP
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 3m44s
Run linting on the backend code / Build (pull_request) Successful in 23s
Run testing on the backend code / Build (pull_request) Failing after 3m7s
Build and deploy the backend to staging / Deploy to staging (pull_request) Failing after 35s
2025-10-02 13:59:07 +02:00
5549f8b0e5 added todo
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m42s
Run linting on the backend code / Build (pull_request) Successful in 3m12s
Run testing on the backend code / Build (pull_request) Failing after 3m20s
Build and deploy the backend to staging / Deploy to staging (pull_request) Failing after 29s
2025-09-25 22:00:13 +02:00
b201dfe97c moved the rest of endpoints to individual routers
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 5m45s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
Run linting on the backend code / Build (pull_request) Successful in 2m57s
Run testing on the backend code / Build (pull_request) Failing after 4m6s
2025-09-25 21:41:33 +02:00
b65d184f48 used .env for supabase secrets
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Failing after 52s
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been skipped
Run linting on the backend code / Build (pull_request) Successful in 50s
Run testing on the backend code / Build (pull_request) Failing after 3m8s
2025-09-25 21:30:46 +02:00
16b35ab5af added .env dataclass 2025-09-25 21:30:32 +02:00
011671832a removed main from uv init 2025-09-25 21:30:12 +02:00
f2237bd721 added .env 2025-09-25 21:29:58 +02:00
bf8b64aacf i am stoopid
Some checks failed
Run testing on the backend code / Build (pull_request) Has been cancelled
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been cancelled
Build and deploy the backend to staging / Build and push image (pull_request) Has been cancelled
Run linting on the backend code / Build (pull_request) Successful in 19s
2025-07-27 18:32:24 +02:00
44cd983fb8 fixed linter 4 real
Some checks failed
Build and deploy the backend to staging / Deploy to staging (pull_request) Has been cancelled
Build and deploy the backend to staging / Build and push image (pull_request) Has been cancelled
Run testing on the backend code / Build (pull_request) Has been cancelled
Run linting on the backend code / Build (pull_request) Failing after 16s
2025-07-27 18:31:30 +02:00
89c95063dd fixed linter
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m23s
Run linting on the backend code / Build (pull_request) Failing after 15s
Run testing on the backend code / Build (pull_request) Failing after 38s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 30s
2025-07-27 18:27:49 +02:00
e41d3f5e3a added supabase routes and payment handling
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m34s
Run linting on the backend code / Build (pull_request) Failing after 18s
Run testing on the backend code / Build (pull_request) Failing after 38s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 1m5s
2025-07-27 18:18:24 +02:00
f5cedbc5a0 fixed README 2025-07-27 17:33:06 +02:00
88dc5dd323 removed reports from tracking 2025-07-27 17:27:57 +02:00
c6bb0cddb7 Added field validation for preferences
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m31s
Run linting on the backend code / Build (pull_request) Failing after 19s
Run testing on the backend code / Build (pull_request) Failing after 19m47s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 33s
2025-07-27 17:22:38 +02:00
9ccf68d983 fixed the toilets and works with uv now
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m58s
Run linting on the backend code / Build (pull_request) Failing after 20s
Run testing on the backend code / Build (pull_request) Failing after 22m6s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 1m8s
2025-07-27 17:13:11 +02:00
132aa5a19b changed to no dev when building the docker image
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m24s
Run linting on the backend code / Build (pull_request) Failing after 21s
Run testing on the backend code / Build (pull_request) Failing after 22m41s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 36s
2025-07-26 12:44:41 +02:00
19b0c37a97 fixed the missing dependency in the refiner and changed the test run to using uv 2025-07-26 12:44:12 +02:00
ecdef605a7 cleanup and removed pipenv files 2025-07-26 12:41:58 +02:00
e2a918112b changed to uv fo managing dependencies 2025-07-26 12:41:15 +02:00
96b0718081 removed unused landmark attributes
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m39s
Run linting on the backend code / Build (pull_request) Successful in 31s
Run testing on the backend code / Build (pull_request) Failing after 49s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 38s
2025-07-13 17:47:12 +02:00
d9e5d9dac6 fixed dependcu
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m8s
Run linting on the backend code / Build (pull_request) Successful in 29s
Run testing on the backend code / Build (pull_request) Failing after 46s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 28s
2025-07-13 17:45:13 +02:00
b0f9d31ee2 Implement backend API for landmarks, trip optimization, and toilet locations
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m49s
Run linting on the backend code / Build (pull_request) Successful in 30s
Run testing on the backend code / Build (pull_request) Failing after 45s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 32s
- Added landmarks_router.py to handle landmark retrieval based on user preferences and location.
- Implemented optimization_router.py for trip optimization, including handling preferences and landmarks.
- Created toilets_router.py to fetch toilet locations within a specified radius from a given location.
- Enhanced error handling and logging across all new endpoints.
- Generated a comprehensive report.html for test results and environment details.
2025-07-13 17:43:24 +02:00
54bc9028ad simplified test pipeline
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m38s
Run linting on the backend code / Build (pull_request) Successful in 27s
Run testing on the backend code / Build (pull_request) Failing after 17m36s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 35s
2025-07-02 21:59:07 +02:00
37926e68ec fixed typo in invalid inputs
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 2m24s
Run linting on the backend code / Build (pull_request) Successful in 27s
Run testing on the backend code / Build (pull_request) Failing after 20m39s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 32s
2025-07-02 21:58:47 +02:00
e2d3d29956 working split
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 1m46s
Run linting on the backend code / Build (pull_request) Successful in 2m31s
Run testing on the backend code / Build (pull_request) Failing after 12m37s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 29s
2025-06-22 14:24:00 +02:00
6921ab57f8 added more structure
Some checks failed
Build and deploy the backend to staging / Build and push image (pull_request) Successful in 3m29s
Run linting on the backend code / Build (pull_request) Successful in 27s
Run testing on the backend code / Build (pull_request) Failing after 12m29s
Build and deploy the backend to staging / Deploy to staging (pull_request) Successful in 34s
2025-06-21 18:54:42 +02:00
47 changed files with 4870 additions and 3525 deletions


@@ -15,18 +15,18 @@ jobs:
      - uses: https://gitea.com/actions/checkout@v4

-      - name: Install dependencies
+      - name: Install pylint
        run: |
          apt-get update && apt-get install -y python3 python3-pip
-          pip install pipenv
+          pip install pylint

-      - name: Install packages
-        run: |
-          ls -la
-          # only install dev-packages
-          pipenv install --categories=dev-packages
-        working-directory: backend
+      # - name: Install packages
+      #   run: |
+      #     ls -la
+      #     # only install dev-packages
+      #     uv sync
+      #   working-directory: backend

      - name: Run linter
-        run: pipenv run pylint src --fail-under=9
+        run: pylint src --fail-under=9
        working-directory: backend


@@ -18,17 +18,17 @@ jobs:
      - name: Install dependencies
        run: |
          apt-get update && apt-get install -y python3 python3-pip
-          pip install pipenv
+          pip install uv

      - name: Install packages
        run: |
          ls -la
          # install all packages, including dev-packages
-          pipenv install --dev
+          uv sync
        working-directory: backend

      - name: Run Tests
-        run: pipenv run pytest src --html=report.html --self-contained-html --log-cli-level=DEBUG
+        run: uv run pytest src --html=report.html --self-contained-html --log-cli-level=DEBUG
        working-directory: backend

      - name: Upload HTML report
- name: Upload HTML report

backend/.gitignore

@@ -2,7 +2,7 @@
 cache_XML/

 # secrets
-*secrets.yaml
+*.env

 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -12,6 +12,9 @@ __pycache__/
 # C extensions
 *.so

+# Pytest html reports
+*.html
+
 # Distribution / packaging
 .Python
 build/

backend/.python-version

@@ -0,0 +1 @@
3.12.9


@@ -1,11 +1,29 @@
-FROM python:3.11-slim
+FROM python:3.12-slim-bookworm
+
+# The installer requires curl (and certificates) to download the release archive
+RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
+
+# Download the latest installer
+ADD https://astral.sh/uv/install.sh /uv-installer.sh
+
+# Run the installer then remove it
+RUN sh /uv-installer.sh && rm /uv-installer.sh
+
+# Ensure the installed binary is on the `PATH`
+ENV PATH="/root/.local/bin/:$PATH"

+# Set the working directory
 WORKDIR /app

-COPY Pipfile Pipfile.lock .
-RUN pip install pipenv
-RUN pipenv install --deploy --system
+# Copy uv files
+COPY pyproject.toml pyproject.toml
+COPY uv.lock uv.lock
+COPY .python-version .python-version
+
+# Sync the venv
+RUN uv sync --frozen --no-cache --no-dev
+
+# Copy application files
 COPY src src

 EXPOSE 8000
@@ -17,4 +35,4 @@ ENV MEMCACHED_HOST_PATH=none
 ENV LOKI_URL=none
 # explicitly use a string instead of an argument list to force a shell and variable expansion
-CMD fastapi run src/main.py --port 8000 --workers $NUM_WORKERS
+CMD uv run fastapi run src/main.py --port 8000 --workers $NUM_WORKERS


@@ -1,27 +0,0 @@
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[dev-packages]
pylint = "*"
pytest = "*"
tomli = "*"
httpx = "*"
exceptiongroup = "*"
pytest-html = "*"
typing-extensions = "*"
dill = "*"
[packages]
numpy = "*"
fastapi = "*"
pydantic = "*"
shapely = "*"
pymemcache = "*"
fastapi-cli = "*"
scikit-learn = "*"
loki-logger-handler = "*"
pulp = "*"
scipy = "*"
requests = "*"

backend/Pipfile.lock

File diff suppressed because it is too large.


@@ -6,31 +6,31 @@ This repository contains the backend code for the application. It utilizes **Fas
 ### Directory Structure
 - The code for the Python application is located in the `src` directory.
-- Package management is handled with **pipenv**, and the dependencies are listed in the `Pipfile`.
+- Package management is handled with **uv**, and the dependencies are listed in the `pyproject.toml` file.
 - Since the application is designed to be deployed in a container, the `Dockerfile` is provided to build the image.

 ### Setting Up the Development Environment
-To set up your development environment using **pipenv**, follow these steps:
+To set up your development environment using **uv**, follow these steps:

-1. Install `pipenv` by running:
+1. Make sure you find yourself in the `backend` directory:
    ```bash
-   sudo apt install pipenv
+   cd backend
    ```

-2. Create and activate a virtual environment:
+1. Install `uv` by running:
    ```bash
-   pipenv shell
+   curl -LsSf https://astral.sh/uv/install.sh | sh
    ```

-3. Install the dependencies listed in the `Pipfile`:
+3. Install the dependencies listed in `pyproject.toml` and create the virtual environment at the same time:
    ```bash
-   pipenv install
+   uv sync
    ```

 4. The virtual environment will be created under:
    ```bash
-   ~/.local/share/virtualenvs/...
+   backend/.venv/...
    ```

 ### Deployment


@@ -1,363 +0,0 @@
[
{
"name": "Chinatown",
"type": "shopping",
"location": [
45.7554934,
4.8444852
],
"osm_type": "way",
"osm_id": 996515596,
"attractiveness": 129,
"n_tags": 0,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": {},
"description": null,
"duration": 30,
"name_en": null,
"uuid": "285d159c-68ee-4b37-8d71-f27ee3d38b02",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Galeries Lafayette",
"type": "shopping",
"location": [
45.7627107,
4.8556833
],
"osm_type": "way",
"osm_id": 1069872743,
"attractiveness": 197,
"n_tags": 11,
"image_url": null,
"website_url": "http://www.galerieslafayette.com/",
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "28f1bc30-10d3-4944-8861-0ed9abca012d",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Muji",
"type": "shopping",
"location": [
45.7615971,
4.8543781
],
"osm_type": "way",
"osm_id": 1044165817,
"attractiveness": 259,
"n_tags": 14,
"image_url": null,
"website_url": "https://www.muji.com/fr/",
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": "Muji",
"uuid": "957f86a5-6c00-41a2-815d-d6f739052be4",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "HEMA",
"type": "shopping",
"location": [
45.7619133,
4.8565239
],
"osm_type": "way",
"osm_id": 1069872750,
"attractiveness": 156,
"n_tags": 9,
"image_url": null,
"website_url": "https://fr.westfield.com/lapartdieu/store/HEMA/www.hema.fr",
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "8dae9d3e-e4c4-4e80-941d-0b106e22c85b",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Cordeliers",
"type": "shopping",
"location": [
45.7622752,
4.8337998
],
"osm_type": "node",
"osm_id": 5545183519,
"attractiveness": 813,
"n_tags": 0,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": {},
"description": null,
"duration": 30,
"name_en": null,
"uuid": "ba02adb5-e28f-4645-8c2d-25ead6232379",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Halles de Lyon Paul Bocuse",
"type": "shopping",
"location": [
45.7628282,
4.8505601
],
"osm_type": "relation",
"osm_id": 971529,
"attractiveness": 272,
"n_tags": 12,
"image_url": null,
"website_url": "https://www.halles-de-lyon-paulbocuse.com/",
"wiki_url": "fr:Halles de Lyon-Paul Bocuse",
"keywords": {
"importance": "national",
"height": null,
"place_type": "marketplace",
"date": null
},
"description": "Halles de Lyon Paul Bocuse is a marketplace of national importance.",
"duration": 30,
"name_en": null,
"uuid": "bbd50de3-aa91-425d-90c2-d4abfd1b4abe",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Grand Bazar",
"type": "shopping",
"location": [
45.7632141,
4.8361975
],
"osm_type": "way",
"osm_id": 82399951,
"attractiveness": 93,
"n_tags": 7,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "3de9131c-87c5-4efb-9fa8-064896fb8b29",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Shopping Area",
"type": "shopping",
"location": [
45.7673452,
4.8438683
],
"osm_type": "node",
"osm_id": 0,
"attractiveness": 156,
"n_tags": 0,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": {},
"description": null,
"duration": 30,
"name_en": null,
"uuid": "df2482a8-7e2e-4536-aad3-564899b2fa65",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Cour Oxyg\u00e8ne",
"type": "shopping",
"location": [
45.7620905,
4.8568873
],
"osm_type": "way",
"osm_id": 132673030,
"attractiveness": 63,
"n_tags": 5,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "ed134f76-9a02-4bee-9c10-78454f7bc4ce",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "P\u00f4le de Commerces et de Loisirs Confluence",
"type": "shopping",
"location": [
45.7410414,
4.8171031
],
"osm_type": "way",
"osm_id": 440270633,
"attractiveness": 259,
"n_tags": 14,
"image_url": null,
"website_url": "https://www.confluence.fr/",
"wiki_url": null,
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "dd7e2f5f-0e60-4560-b903-e5ded4b6e36a",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Grand H\u00f4tel-Dieu",
"type": "shopping",
"location": [
45.7586955,
4.8364597
],
"osm_type": "relation",
"osm_id": 300128,
"attractiveness": 546,
"n_tags": 22,
"image_url": null,
"website_url": "https://grand-hotel-dieu.com",
"wiki_url": "fr:H\u00f4tel-Dieu de Lyon",
"keywords": {
"importance": "international",
"height": null,
"place_type": "building",
"date": "C17"
},
"description": "Grand H\u00f4tel-Dieu is an internationally famous building. It was constructed in C17.",
"duration": 30,
"name_en": null,
"uuid": "a91265a8-ffbd-44f7-a7ab-3ff75f08fbab",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Westfield La Part-Dieu",
"type": "shopping",
"location": [
45.761331,
4.855676
],
"osm_type": "way",
"osm_id": 62338376,
"attractiveness": 546,
"n_tags": 22,
"image_url": null,
"website_url": "https://fr.westfield.com/lapartdieu",
"wiki_url": "fr:La Part-Dieu (centre commercial)",
"keywords": null,
"description": null,
"duration": 30,
"name_en": null,
"uuid": "7d60316f-d689-4fcf-be68-ffc09353b826",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
},
{
"name": "Ainay",
"type": "shopping",
"location": [
45.7553105,
4.8312084
],
"osm_type": "node",
"osm_id": 5545126047,
"attractiveness": 132,
"n_tags": 0,
"image_url": null,
"website_url": null,
"wiki_url": null,
"keywords": {},
"description": null,
"duration": 30,
"name_en": null,
"uuid": "ad214f3d-a4b9-4078-876a-446caa7ab01c",
"must_do": false,
"must_avoid": false,
"is_secondary": false,
"time_to_reach_next": 0,
"next_uuid": null,
"is_viewpoint": false,
"is_place_of_worship": false
}
]

backend/pyproject.toml

@@ -0,0 +1,58 @@
[project]
name = "backend"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"annotated-types==0.7.0 ; python_full_version >= '3.8'",
"anyio==4.8.0 ; python_full_version >= '3.9'",
"certifi==2024.12.14 ; python_full_version >= '3.6'",
"charset-normalizer==3.4.1 ; python_full_version >= '3.7'",
"click==8.1.8 ; python_full_version >= '3.7'",
"dotenv>=0.9.9",
"fastapi==0.115.7 ; python_full_version >= '3.8'",
"fastapi-cli==0.0.7 ; python_full_version >= '3.8'",
"h11==0.14.0 ; python_full_version >= '3.7'",
"httptools==0.6.4",
"idna==3.10 ; python_full_version >= '3.6'",
"joblib==1.4.2 ; python_full_version >= '3.8'",
"loki-logger-handler==1.1.0 ; python_full_version >= '2.7'",
"markdown-it-py==3.0.0 ; python_full_version >= '3.8'",
"mdurl==0.1.2 ; python_full_version >= '3.7'",
"numpy==2.2.2 ; python_full_version >= '3.10'",
"paypalrestsdk>=1.13.3",
"pulp==2.9.0 ; python_full_version >= '3.7'",
"pydantic==2.10.6 ; python_full_version >= '3.8'",
"pydantic-core==2.27.2 ; python_full_version >= '3.8'",
"pygments==2.19.1 ; python_full_version >= '3.8'",
"pymemcache==4.0.0 ; python_full_version >= '3.7'",
"python-dotenv==1.0.1",
"pyyaml==6.0.2",
"requests==2.32.3 ; python_full_version >= '3.8'",
"rich==13.9.4 ; python_full_version >= '3.8'",
"rich-toolkit==0.13.2 ; python_full_version >= '3.8'",
"scikit-learn==1.6.1 ; python_full_version >= '3.9'",
"scipy==1.15.1 ; python_full_version >= '3.10'",
"shapely==2.0.6 ; python_full_version >= '3.7'",
"shellingham==1.5.4 ; python_full_version >= '3.7'",
"sniffio==1.3.1 ; python_full_version >= '3.7'",
"starlette==0.45.3 ; python_full_version >= '3.9'",
"supabase>=2.16.0",
"threadpoolctl==3.5.0 ; python_full_version >= '3.8'",
"typer==0.15.1 ; python_full_version >= '3.7'",
"typing-extensions==4.12.2 ; python_full_version >= '3.8'",
"urllib3==2.3.0 ; python_full_version >= '3.9'",
"uvicorn[standard]==0.34.0 ; python_full_version >= '3.9'",
"uvloop==0.21.0",
"watchfiles==1.0.4",
"websockets==14.2",
]
[dependency-groups]
dev = [
"httpx>=0.28.1",
"ipykernel>=6.30.0",
"pytest>=8.4.1",
"pytest-html>=4.1.1",
]
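Since the new `pyproject.toml` pins exact versions, a quick way to inspect the pinned set is Python's stdlib `tomllib` (available from Python 3.11, so covered by `requires-python >= 3.12`). A minimal sketch, assuming it is run from the `backend` directory:

```python
# Minimal sketch: list the pinned runtime dependencies from pyproject.toml.
import tomllib

with open("pyproject.toml", "rb") as f:
    project = tomllib.load(f)

for dep in project["project"]["dependencies"]:
    # Strip the environment marker after ';', e.g. "fastapi==0.115.7"
    print(dep.split(";")[0].strip())
```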

File diff suppressed because one or more lines are too long


@@ -28,6 +28,11 @@ This folder defines the commonly used data structures used within the project. T
 ### src/tests
 This folder contains unit tests and test cases for the application's various modules. It is used to ensure the correctness and stability of the code.

+Run the unit tests with the following command:
+```bash
+uv run pytest src --log-cli-level=DEBUG --html=report.html --self-contained-html
+```
+
 ### src/utils
 The `utils` folder contains utility classes and functions that provide core functionality for the application. The main component in this folder is the `LandmarkManager`, which is central to the process of fetching and organizing landmarks.


@@ -1,4 +1,6 @@
"""Module used for handling cache"""
import hashlib
from pymemcache import serde
from pymemcache.client.base import Client
@@ -73,3 +75,62 @@ else:
encoding='utf-8',
serde=serde.pickle_serde
)
#### Cache for payment architecture
def make_credit_cache_key(user_id: str, order_id: str) -> str:
"""
Generate a cache key from user_id and order_id using md5.
Args:
user_id (str): The user's ID.
order_id (str): The PayPal order ID.
Returns:
str: A unique cache key.
"""
# Concatenate and hash to avoid collisions and keep key size small
raw_key = f"{user_id}:{order_id}"
return hashlib.md5(raw_key.encode('utf-8')).hexdigest()
class CreditCache:
"""
Handles storing and retrieving credits to grant for a user/order.
Methods:
set_credits(user_id, order_id, credits):
Store the credits for a user/order.
get_credits(user_id, order_id):
Retrieve the credits for a user/order.
"""
@staticmethod
def set_credits(user_id: str, order_id: str, credits_to_grant: int) -> None:
"""
Store the credits to be granted for a user/order.
Args:
user_id (str): The user's ID.
order_id (str): The PayPal order ID.
credits_to_grant (int): The amount of credits to grant.
"""
cache_key = make_credit_cache_key(user_id, order_id)
client.set(cache_key, credits_to_grant)
@staticmethod
def get_credits(user_id: str, order_id: str) -> int | None:
"""
Retrieve the credits to be granted for a user/order.
Args:
user_id (str): The user's ID.
order_id (str): The PayPal order ID.
Returns:
int | None: The credits to grant, or None if not found.
"""
cache_key = make_credit_cache_key(user_id, order_id)
return client.get(cache_key)
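A sketch of how the two payment steps might use this cache; the user and order IDs are placeholders, and the import path is assumed from the module layout visible in the other diffs:

```python
from src.cache import CreditCache  # import path assumed

# Step 1: on order creation, remember how many credits the order is worth.
CreditCache.set_credits(user_id="user-123", order_id="5O190127TN364715T", credits_to_grant=50)

# Step 2: on capture, look the amount back up and grant it only if the key exists.
credits = CreditCache.get_credits(user_id="user-123", order_id="5O190127TN364715T")
if credits is not None:
    print(f"Granting {credits} credits")  # e.g. via the Supabase client
```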


@@ -0,0 +1,23 @@
"""This module is for loading variables from the environment and passes them throughout the code using the Environment dataclass"""
import os
from dataclasses import dataclass
from dotenv import load_dotenv
# Load variables from environment
load_dotenv(override=True)
@dataclass
class Environment :
# Load supabase secrets
supabase_url = os.environ['SUPABASE_URL']
supabase_admin_key = os.environ['SUPABASE_ADMIN_KEY']
supabase_test_user_id = os.environ['SUPABASE_TEST_USER_ID']
# Load paypal secrets
paypal_id_sandbox = os.environ['PAYPAL_ID_SANDBOX']
paypal_key_sandbox = os.environ['PAYPAL_KEY_SANDBOX']
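Because the attributes are class-level `os.environ[...]` lookups, they are resolved when the module is imported, so a missing variable fails fast with a `KeyError` rather than at first use. A minimal consumption sketch (the import path follows the PayPal handler's `from ..configuration.environment import Environment`; the printed values are placeholders):

```python
from src.configuration.environment import Environment  # path assumed from the PayPal handler diff

env = Environment()
print(env.supabase_url)       # e.g. https://<project-ref>.supabase.co  (placeholder)
print(env.paypal_id_sandbox)  # sandbox client ID used against api-m.sandbox.paypal.com
```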


@@ -146,7 +146,7 @@ class ClusterManager:
             self.valid = False

         else :
-            self.logger.debug(f"Detected 0 {cluster_type} clusters.")
+            self.logger.debug(f"Found 0 {cluster_type} clusters.")
             self.valid = False


@@ -4,7 +4,6 @@ import yaml
from ..structs.preferences import Preferences
from ..structs.landmark import Landmark
from ..utils.take_most_important import take_most_important
from .cluster_manager import ClusterManager
from ..overpass.overpass import Overpass, get_base_info
from ..utils.bbox import create_bbox
@@ -23,7 +22,7 @@ class LandmarkManager:
church_coeff: float # coeff to adjust score of churches
nature_coeff: float # coeff to adjust score of parks
overall_coeff: float # coeff to adjust weight of tags
n_important: int # number of important landmarks to consider
# n_important: int # number of important landmarks to consider
def __init__(self) -> None:
@@ -42,7 +41,7 @@ class LandmarkManager:
self.wikipedia_bonus = parameters['wikipedia_bonus']
self.viewpoint_bonus = parameters['viewpoint_bonus']
self.pay_bonus = parameters['pay_bonus']
self.n_important = parameters['N_important']
# self.n_important = parameters['N_important']
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
@@ -55,7 +54,12 @@ class LandmarkManager:
self.logger.info('LandmarkManager successfully initialized.')
def generate_landmarks_list(self, center_coordinates: tuple[float, float], preferences: Preferences) -> tuple[list[Landmark], list[Landmark]]:
def generate_landmarks_list(
self,
center_coordinates: tuple[float, float],
preferences: Preferences,
allow_clusters: bool = True
) -> list[Landmark] :
"""
Generate and prioritize a list of landmarks based on user preferences.
@@ -63,16 +67,17 @@ class LandmarkManager:
and current location. It scores and corrects these landmarks, removes duplicates, and then selects the most important
landmarks based on a predefined criterion.
Args:
center_coordinates (tuple[float, float]): The latitude and longitude of the center location around which to search.
preferences (Preferences): The user's preference settings that influence the landmark selection.
Parameters :
center_coordinates (tuple[float, float]): The latitude and longitude of the center location around which to search.
preferences (Preferences): The user's preference settings that influence the landmark selection.
allow_clusters (bool, optional) : If set to False, no clusters will be fetched. Mainly used for the option to fetch landmarks nearby.
Returns:
tuple[list[Landmark], list[Landmark]]:
- A list of all existing landmarks.
- A list of the most important landmarks based on the user's preferences.
"""
self.logger.debug('Starting to fetch landmarks...')
self.logger.info(f'Starting to fetch landmarks around {center_coordinates}...')
max_walk_dist = int((preferences.max_time_minute/2)/60*self.walking_speed*1000/self.detour_factor)
radius = min(max_walk_dist, int(self.max_bbox_side/2))
@@ -89,10 +94,11 @@ class LandmarkManager:
all_landmarks.update(current_landmarks)
self.logger.info(f'Found {len(current_landmarks)} sightseeing landmarks')
if allow_clusters :
# special pipeline for historic neighborhoods
neighborhood_manager = ClusterManager(bbox, 'sightseeing')
historic_clusters = neighborhood_manager.generate_clusters()
all_landmarks.update(historic_clusters)
neighborhood_manager = ClusterManager(bbox, 'sightseeing')
historic_clusters = neighborhood_manager.generate_clusters()
all_landmarks.update(historic_clusters)
# list for nature
if preferences.nature.score != 0:
@@ -113,16 +119,19 @@ class LandmarkManager:
landmark.duration = 30
all_landmarks.update(current_landmarks)
# special pipeline for shopping malls
shopping_manager = ClusterManager(bbox, 'shopping')
shopping_clusters = shopping_manager.generate_clusters()
all_landmarks.update(shopping_clusters)
if allow_clusters :
# special pipeline for shopping malls
shopping_manager = ClusterManager(bbox, 'shopping')
shopping_clusters = shopping_manager.generate_clusters()
all_landmarks.update(shopping_clusters)
landmarks_constrained = take_most_important(all_landmarks, self.n_important)
# DETAILS HERE
# self.logger.info(f'All landmarks generated : {len(all_landmarks)} landmarks around {center_coordinates}, and constrained to {len(landmarks_constrained)} most important ones.')
self.logger.info(f'Found {len(all_landmarks)} landmarks in total.')
return all_landmarks, landmarks_constrained
return sorted(all_landmarks, key=lambda x: x.attractiveness, reverse=True)
def set_landmark_score(self, landmark: Landmark, landmarktype: str, preference_level: int) :
"""
@@ -178,6 +187,7 @@ class LandmarkManager:
# caution, when applying a list of selectors, overpass will search for elements that match ALL selectors simultaneously
# we need to split the selectors into separate queries and merge the results
# TODO: this can be multi-threaded once the Overpass rate-limit is not a problem anymore
for sel in dict_to_selector_list(amenity_selector):
# self.logger.debug(f"Current selector: {sel}")
@@ -236,6 +246,17 @@ class LandmarkManager:
continue
tags = elem.get('tags')
n_tags=len(tags)
# Skip this landmark if not suitable
if tags.get('building:part') is not None :
continue
if tags.get('disused') is not None :
continue
if tags.get('boundary') is not None :
continue
if tags.get('shop') is not None and landmarktype != 'shopping' :
continue
# Convert this to Landmark object
landmark = Landmark(name=name,
@@ -244,180 +265,36 @@ class LandmarkManager:
osm_id=id,
osm_type=osm_type,
attractiveness=0,
n_tags=len(tags))
n_tags=n_tags)
# Browse through tags to add information to landmark.
for key, value in tags.items():
# Extract useful information for score calculation later down the road.
landmark.image_url = tags.get('image')
landmark.website_url = tags.get('website')
landmark.wiki_url = tags.get('wikipedia')
landmark.name_en = tags.get('name:en')
# Skip this landmark if not suitable.
if key == 'building:part' and value == 'yes' :
break
if 'disused:' in key :
break
if 'boundary:' in key :
break
if 'shop' in key and landmarktype != 'shopping' :
break
# if value == 'apartments' :
# break
# Fill in the other attributes.
if key == 'image' :
landmark.image_url = value
if key == 'website' :
landmark.website_url = value
if value == 'place_of_worship' :
# Check for place of worship
if tags.get('place_of_worship') is not None :
landmark.is_place_of_worship = True
if key == 'wikipedia' :
landmark.wiki_url = value
if key == 'name:en' :
landmark.name_en = value
if 'building:' in key or 'pay' in key :
landmark.n_tags -= 1
landmark.name_en = tags.get('place_of_worship')
# Set the duration. Needed for the optimization.
if tags.get('amenity') in ['aquarium', 'planetarium'] or tags.get('tourism') in ['aquarium', 'museum', 'zoo']:
landmark.duration = 60
elif tags.get('tourism') == 'viewpoint' :
landmark.is_viewpoint = True
landmark.duration = 10
elif tags.get('building') == 'cathedral' :
landmark.is_place_of_worship = False
landmark.duration = 10
# Set the duration.
if value in ['museum', 'aquarium', 'planetarium'] :
landmark.duration = 60
elif value == 'viewpoint' :
landmark.is_viewpoint = True
landmark.duration = 10
elif value == 'cathedral' :
landmark.is_place_of_worship = False
landmark.duration = 10
landmark.description, landmark.keywords = self.description_and_keywords(tags)
# Compute the score and add landmark to the list.
self.set_landmark_score(landmark, landmarktype, preference_level)
landmarks.append(landmark)
continue
return landmarks
def description_and_keywords(self, tags: dict):
"""
Generates a description and a set of keywords for a given landmark based on its tags.
Params:
tags (dict): A dictionary containing metadata about the landmark, including its name,
importance, height, date of construction, and visitor information.
Returns:
description (str): A string description of the landmark.
keywords (dict): A dictionary of keywords with fields such as 'importance', 'height',
'place_type', and 'date'.
"""
# Extract relevant fields
name = tags.get('name')
importance = tags.get('importance', None)
n_visitors = tags.get('tourism:visitors', None)
height = tags.get('height')
place_type = self.get_place_type(tags)
date = self.get_date(tags)
if place_type is None :
return None, None
# Start the description.
if importance is None :
if len(tags.keys()) < 5 :
return None, None
if len(tags.keys()) < 10 :
description = f"{name} is a well known {place_type}."
elif len(tags.keys()) < 17 :
importance = 'national'
description = f"{name} is a {place_type} of national importance."
else :
importance = 'international'
description = f"{name} is an internationally famous {place_type}."
else :
description = f"{name} is a {place_type} of {importance} importance."
if height is not None and date is not None :
description += f" This {place_type} was constructed in {date} and is ca. {height} meters high."
elif height is not None :
description += f" This {place_type} stands ca. {height} meters tall."
elif date is not None:
description += f" It was constructed in {date}."
# Format the visitor number
if n_visitors is not None :
n_visitors = int(n_visitors)
if n_visitors < 1000000 :
description += f" It welcomes {int(n_visitors/1000)} thousand visitors every year."
else :
description += f" It welcomes {round(n_visitors/1000000, 1)} million visitors every year."
# Set the keywords.
keywords = {"importance": importance,
"height": height,
"place_type": place_type,
"date": date}
return description, keywords
def get_place_type(self, data):
"""
Determines the type of the place based on available tags such as 'amenity', 'building',
'historic', and 'leisure'. The priority order is: 'historic' > 'building' (if not generic) >
'amenity' > 'leisure'.
Params:
data (dict): A dictionary containing metadata about the place.
Returns:
place_type (str): The determined type of the place, or None if no relevant type is found.
"""
amenity = data.get('amenity', None)
building = data.get('building', None)
historic = data.get('historic', None)
leisure = data.get('leisure')
if historic and historic != "yes":
return historic
if building and building not in ["yes", "civic", "government", "apartments", "residential", "commericial", "industrial", "retail", "religious", "public", "service"]:
return building
if amenity:
return amenity
if leisure:
return leisure
return None
def get_date(self, data):
"""
Extracts the most relevant date from the available tags, prioritizing 'construction_date',
'start_date', 'year_of_construction', and 'opening_date' in that order.
Params:
data (dict): A dictionary containing metadata about the place.
Returns:
date (str): The most relevant date found, or None if no date is available.
"""
construction_date = data.get('construction_date', None)
opening_date = data.get('opening_date', None)
start_date = data.get('start_date', None)
year_of_construction = data.get('year_of_construction', None)
# Prioritize based on availability
if construction_date:
return construction_date
if start_date:
return start_date
if year_of_construction:
return year_of_construction
if opening_date:
return opening_date
return None
def dict_to_selector_list(d: dict) -> list:
"""
Convert a dictionary of key-value pairs to a list of Overpass query strings.
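The diff cuts off inside this docstring. Purely as an illustration of the stated contract (a hypothetical sketch, not the repository's implementation), a function matching it could look like:

```python
def dict_to_selector_list(d: dict) -> list:
    # Hypothetical: {'amenity': ['marketplace', 'fountain']} ->
    # ['"amenity"="marketplace"', '"amenity"="fountain"'] (Overpass-style selectors).
    selectors = []
    for key, value in d.items():
        if isinstance(value, list):
            selectors.extend(f'"{key}"="{v}"' for v in value)
        else:
            selectors.append(f'"{key}"="{value}"')
    return selectors
```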


@@ -0,0 +1,123 @@
"""Main app for backend api"""
import logging
import time
import random
from fastapi import HTTPException, APIRouter
from ..structs.landmark import Landmark
from ..structs.preferences import Preferences, Preference
from .landmarks_manager import LandmarkManager
# Setup the logger and the Landmarks Manager
logger = logging.getLogger(__name__)
manager = LandmarkManager()
# Initialize the API router
router = APIRouter()
@router.post("/get/landmarks")
def get_landmarks(
preferences: Preferences,
start: tuple[float, float],
) -> list[Landmark]:
"""
Function that returns all available landmarks given some preferences and a start position.
Args:
preferences : the preferences specified by the user as the post body
start : the coordinates of the starting point
Returns:
list[Landmark] : The full list of fetched landmarks
"""
if preferences is None:
raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
if (preferences.shopping.score == 0 and
preferences.sightseeing.score == 0 and
preferences.nature.score == 0) :
raise HTTPException(status_code=406, detail="All preferences are 0.")
if start is None:
raise HTTPException(status_code=406, detail="Start coordinates not provided")
if not (-90 <= start[0] <= 90 and -180 <= start[1] <= 180):
raise HTTPException(status_code=422, detail="Start coordinates not in range")
logger.info(f"Requested new trip generation. Details:\n\tCoordinates: {start}\n\tTime: {preferences.max_time_minute}\n\tSightseeing: {preferences.sightseeing.score}\n\tNature: {preferences.nature.score}\n\tShopping: {preferences.shopping.score}")
start_time = time.time()
# Generate the landmarks from the start location
landmarks = manager.generate_landmarks_list(
center_coordinates = start,
preferences = preferences
)
if len(landmarks) == 0 :
raise HTTPException(status_code=500, detail="No landmarks were found.")
t_generate_landmarks = time.time() - start_time
logger.info(f'Fetched {len(landmarks)} landmarks in \t: {round(t_generate_landmarks,3)} seconds')
return landmarks
@router.post("/get-nearby/landmarks/{lat}/{lon}")
def get_landmarks_nearby(
lat: float,
lon: float
) -> list[Landmark] :
"""
Suggests nearby landmarks based on a given latitude and longitude.
This endpoint returns a curated list of roughly 8 to 12 landmarks around the given geographical coordinates. It uses fixed preferences for
sightseeing, shopping, and nature, with a maximum time constraint of 30 minutes to limit the number of landmarks returned.
Args:
lat (float): Latitude of the user's current location.
lon (float): Longitude of the user's current location.
Returns:
list[Landmark]: A list of selected nearby landmarks.
"""
logger.info(f'Fetching landmarks nearby ({lat}, {lon}).')
# Define fixed preferences:
prefs = Preferences(
sightseeing = Preference(
type='sightseeing',
score=5
),
shopping = Preference(
type='shopping',
score=2
),
nature = Preference(
type='nature',
score=5
),
max_time_minute=30,
detour_tolerance_minute=0,
)
# Find the landmarks around the location
landmarks_around = manager.generate_landmarks_list(
center_coordinates = (lat, lon),
preferences = prefs,
allow_clusters=False,
)
if len(landmarks_around) == 0 :
raise HTTPException(status_code=500, detail="No landmarks were found.")
# select 8 - 12 landmarks from there
if len(landmarks_around) > 8 :
n_imp = random.randint(2,5)
rest = random.randint(8 - n_imp, min(12, len(landmarks_around))-n_imp)
print(f'len = {len(landmarks_around)}\nn_imp = {n_imp}\nrest = {rest}')
landmarks_around = landmarks_around[:n_imp] + random.sample(landmarks_around[n_imp:], rest)
logger.info(f'Found {len(landmarks_around)} landmarks to suggest nearby ({lat}, {lon}).')
# logger.debug('Suggested landmarks :\n\t' + '\n\t'.join(f'{landmark}' for landmark in landmarks_around))
return landmarks_around
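For manual testing, the two routes can be exercised with `httpx` (already a dev dependency in `pyproject.toml`). A sketch assuming a local server on port 8000; the payload mirrors the `Preferences` model used in the fixed-preferences route above:

```python
import httpx

BASE = "http://localhost:8000"  # assumed local dev server

prefs = {
    "sightseeing": {"type": "sightseeing", "score": 5},
    "shopping": {"type": "shopping", "score": 2},
    "nature": {"type": "nature", "score": 5},
    "max_time_minute": 120,
    "detour_tolerance_minute": 0,
}

# Main pipeline entry point: both parameters travel in the JSON body.
resp = httpx.post(f"{BASE}/get/landmarks", json={"preferences": prefs, "start": [45.76, 4.84]})
resp.raise_for_status()
landmarks = resp.json()

# NEARBY feature: coordinates as path parameters, no body needed.
nearby = httpx.post(f"{BASE}/get-nearby/landmarks/45.76/4.84").json()
```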


@@ -33,14 +33,14 @@ def configure_logging():
         # silence the chatty logs loki generates itself
         logging.getLogger('urllib3.connectionpool').setLevel(logging.WARNING)

         # no need for time since it's added by loki or can be shown in kube logs
-        logging_format = '%(name)s - %(levelname)s - %(message)s'
+        logging_format = '%(name)-55s - %(levelname)-7s - %(message)s'

     else:
         # if we are in a debug (local) session, set verbose and rich logging
         from rich.logging import RichHandler
         logging_handlers = [RichHandler()]
         logging_level = logging.DEBUG if is_debug else logging.INFO
-        logging_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
+        logging_format = '%(asctime)s - %(name)-55s - %(levelname)-7s - %(message)s'
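The only functional change here is fixed-width padding: `%(name)-55s` left-aligns the logger name in a 55-character column and `%(levelname)-7s` does the same for the level, so logs from different modules line up. A quick check:

```python
import logging

logging.basicConfig(format='%(name)-55s - %(levelname)-7s - %(message)s')
logging.getLogger('src.optimization.optimization_router').warning('aligned')
# Prints something like:
# src.optimization.optimization_router                    - WARNING - aligned
```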


@@ -1,28 +1,18 @@
"""Main app for backend api"""
import logging
import time
from contextlib import asynccontextmanager
from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi import FastAPI
from .logging_config import configure_logging
from .structs.landmark import Landmark
from .structs.preferences import Preferences
from .structs.linked_landmarks import LinkedLandmarks
from .structs.trip import Trip
from .landmarks.landmarks_manager import LandmarkManager
from .toilets.toilet_routes import router as toilets_router
from .optimization.optimizer import Optimizer
from .optimization.refiner import Refiner
from .overpass.overpass import fill_cache
from .cache import client as cache_client
from .toilets.toilets_router import router as toilets_router
from .optimization.optimization_router import router as optimization_router
from .landmarks.landmarks_router import router as landmarks_router
from .payments.payment_router import router as payment_router
from .trips.trips_router import router as trips_router
logger = logging.getLogger(__name__)
manager = LandmarkManager()
optimizer = Optimizer()
refiner = Refiner(optimizer=optimizer)
@asynccontextmanager
async def lifespan(app: FastAPI):
@@ -33,194 +23,37 @@ async def lifespan(app: FastAPI):
logger.info("Shutting down logging")
# Create the fastapi app
app = FastAPI(lifespan=lifespan)
# Fetches the global list of landmarks given preferences and start/end coordinates. Two routes
# Call with "/get/landmarks/" for main entry point of the trip generation pipeline.
# Call with "/get-nearby/landmarks/" for the NEARBY feature.
app.include_router(landmarks_router)
# Optimizes the trip given preferences. Second step in the main trip generation pipeline.
# Call with "/optimize/trip"
app.include_router(optimization_router)
# Fetches toilets near given coordinates.
# Call with "/get/toilets" for fetching toilets around coordinates.
app.include_router(toilets_router)
@app.post("/trip/new")
def new_trip(preferences: Preferences,
start: tuple[float, float],
end: tuple[float, float] | None = None,
background_tasks: BackgroundTasks = None) -> Trip:
"""
Main function to call the optimizer.
Args:
preferences : the preferences specified by the user as the post body
start : the coordinates of the starting point
end : the coordinates of the finishing point
Returns:
(uuid) : The uuid of the first landmark in the optimized route
"""
if preferences is None:
raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
if (preferences.shopping.score == 0 and
preferences.sightseeing.score == 0 and
preferences.nature.score == 0) :
raise HTTPException(status_code=406, detail="All preferences are 0.")
if start is None:
raise HTTPException(status_code=406, detail="Start coordinates not provided")
if not (-90 <= start[0] <= 90 or -180 <= start[1] <= 180):
raise HTTPException(status_code=422, detail="Start coordinates not in range")
if end is None:
end = start
logger.info("No end coordinates provided. Using start=end.")
logger.info(f"Requested new trip generation. Details:\n\tCoordinates: {start}\n\tTime: {preferences.max_time_minute}\n\tSightseeing: {preferences.sightseeing.score}\n\tNature: {preferences.nature.score}\n\tShopping: {preferences.shopping.score}")
start_landmark = Landmark(name='start',
type='start',
location=(start[0], start[1]),
osm_type='start',
osm_id=0,
attractiveness=0,
duration=0,
must_do=True,
n_tags = 0)
end_landmark = Landmark(name='finish',
type='finish',
location=(end[0], end[1]),
osm_type='end',
osm_id=0,
attractiveness=0,
duration=0,
must_do=True,
n_tags=0)
start_time = time.time()
# Generate the landmarks from the start location
landmarks, landmarks_short = manager.generate_landmarks_list(
center_coordinates = start,
preferences = preferences
)
if len(landmarks) == 0 :
raise HTTPException(status_code=500, detail="No landmarks were found.")
# insert start and finish to the landmarks list
landmarks_short.insert(0, start_landmark)
landmarks_short.append(end_landmark)
t_generate_landmarks = time.time() - start_time
logger.info(f'Fetched {len(landmarks)} landmarks in \t: {round(t_generate_landmarks,3)} seconds')
start_time = time.time()
# First stage optimization
try:
base_tour = optimizer.solve_optimization(preferences.max_time_minute, landmarks_short)
except Exception as exc:
logger.error(f"Trip generation failed: {str(exc)}")
raise HTTPException(status_code=500, detail=f"Optimization failed: {str(exc)}") from exc
t_first_stage = time.time() - start_time
start_time = time.time()
# Second stage optimization
# TODO : only if necessary (not enough landmarks for ex.)
try :
refined_tour = refiner.refine_optimization(landmarks, base_tour,
preferences.max_time_minute,
preferences.detour_tolerance_minute)
except Exception as exc :
logger.warning(f"Refiner failed. Proceeding with base trip {str(exc)}")
refined_tour = base_tour
t_second_stage = time.time() - start_time
logger.debug(f'First stage optimization\t: {round(t_first_stage,3)} seconds')
logger.debug(f'Second stage optimization\t: {round(t_second_stage,3)} seconds')
logger.info(f'Total computation time\t: {round(t_first_stage + t_second_stage,3)} seconds')
linked_tour = LinkedLandmarks(refined_tour)
# upon creation of the trip, persistence of both the trip and its landmarks is ensured.
trip = Trip.from_linked_landmarks(linked_tour, cache_client)
logger.info(f'Generated a trip of {trip.total_time} minutes with {len(refined_tour)} landmarks in {round(t_generate_landmarks + t_first_stage + t_second_stage,3)} seconds.')
logger.debug('Detailed trip :\n\t' + '\n\t'.join(f'{landmark}' for landmark in refined_tour))
background_tasks.add_task(fill_cache)
return trip
# Include the payment router for interacting with paypal sdk.
# See src/payment/payment_router.py for more information on how to call.
# Call with "/orders/new" to initiate a payment with an order request (step 1)
# Call with "/orders/{order_id}/{user_id}capture" to capture a payment and grant the user the due credits (step 2)
app.include_router(payment_router)
#### For already existing trips/landmarks
@app.get("/trip/{trip_uuid}")
def get_trip(trip_uuid: str) -> Trip:
"""
Look-up the cache for a trip that has been previously generated using its identifier.
# Endpoint for putting together a trip, fetching landmarks by UUID and updating trip times. Three routes
# Call with "/trip/{trip_uuid}" for getting trip by UUID.
# Call with "/landmark/{landmark_uuid}" for getting landmark by UUID.
# Call with "/trip//trip/recompute-time/{trip_uuid}/{removed_landmark_uuid}" for updating trip times.
app.include_router(trips_router)
Args:
trip_uuid (str) : unique identifier for a trip.
Returns:
(Trip) : the corresponding trip.
"""
try:
trip = cache_client.get(f"trip_{trip_uuid}")
return trip
except KeyError as exc:
logger.error(f"Failed to fetch trip with UUID {trip_uuid}: {str(exc)}")
raise HTTPException(status_code=404, detail="Trip not found") from exc
@app.get("/landmark/{landmark_uuid}")
def get_landmark(landmark_uuid: str) -> Landmark:
"""
Returns a Landmark from its unique identifier.
Args:
landmark_uuid (str) : unique identifier for a Landmark.
Returns:
(Landmark) : the corresponding Landmark.
"""
try:
landmark = cache_client.get(f"landmark_{landmark_uuid}")
return landmark
except KeyError as exc:
logger.error(f"Failed to fetch landmark with UUID {landmark_uuid}: {str(exc)}")
raise HTTPException(status_code=404, detail="Landmark not found") from exc
@app.post("/trip/recompute-time/{trip_uuid}/{removed_landmark_uuid}")
def update_trip_time(trip_uuid: str, removed_landmark_uuid: str) -> Trip:
"""
Updates the reaching times of a given trip when removing a landmark.
Args:
landmark_uuid (str) : unique identifier for a Landmark.
Returns:
(Landmark) : the corresponding Landmark.
"""
# First, fetch the trip in the cache.
try:
trip = cache_client.get(f'trip_{trip_uuid}')
except KeyError as exc:
logger.error(f"Failed to update trip with UUID {trip_uuid} (trip not found): {str(exc)}")
raise HTTPException(status_code=404, detail='Trip not found') from exc
landmarks = []
next_uuid = trip.first_landmark_uuid
# Extract landmarks
try :
while next_uuid is not None:
landmark = cache_client.get(f'landmark_{next_uuid}')
# Filter out the removed landmark.
if next_uuid != removed_landmark_uuid :
landmarks.append(landmark)
next_uuid = landmark.next_uuid # Prepare for the next iteration
except KeyError as exc:
logger.error(f"Failed to update trip with UUID {trip_uuid} : {str(exc)}")
raise HTTPException(status_code=404, detail=f'landmark {next_uuid} not found') from exc
# Re-link every thing and compute times again
linked_tour = LinkedLandmarks(landmarks)
trip = Trip.from_linked_landmarks(linked_tour, cache_client)
return trip


@@ -0,0 +1,164 @@
"""API entry point for the trip optimization."""
import logging
import time
import yaml
from fastapi import HTTPException, APIRouter, BackgroundTasks, Body
from .optimizer import Optimizer
from .refiner import Refiner
from ..supabase.supabase import SupabaseClient
from ..structs.landmark import Landmark
from ..structs.preferences import Preferences
from ..structs.linked_landmarks import LinkedLandmarks
from ..structs.trip import Trip
from ..overpass.overpass import fill_cache
from ..cache import client as cache_client
from ..constants import OPTIMIZER_PARAMETERS_PATH
# Setup the Logger, Optimizer and Refiner
logger = logging.getLogger(__name__)
optimizer = Optimizer()
refiner = Refiner(optimizer=optimizer)
supabase = SupabaseClient()
# Initialize the API router
router = APIRouter()
@router.post("/optimize/trip")
def optimize_trip(
user_id: str = Body(...),
preferences: Preferences = Body(...),
landmarks: list[Landmark] = Body(...),
start: tuple[float, float] = Body(...),
end: tuple[float, float] | None = Body(None),
background_tasks: BackgroundTasks = None
) -> Trip:
"""
Main function to call the optimizer.
Args:
user_id (str) : the id of the user requesting the trip, used to check and decrement the credit balance.
preferences (Preferences) : the preferences specified by the user as the post body.
start (tuple[float, float]) : the coordinates of the starting point.
end (tuple[float, float], optional) : the coordinates of the finishing point.
background_tasks (BackgroundTasks) : necessary to fill the cache after the trip has been returned.
Returns:
(Trip) : the optimized trip, with linked landmarks and total time.
"""
# Check for valid user balance
try:
if not supabase.check_balance(user_id=user_id):
logger.warning('Insufficient credits to perform this action.')
raise HTTPException(status_code=418, detail='Insufficient credits')
except SyntaxError as se :
logger.error(f'SyntaxError: {se}')
raise HTTPException(status_code=400, detail=str(se)) from se
except ValueError as ve :
logger.error(f'ValueError: {ve}')
raise HTTPException(status_code=406, detail=str(ve)) from ve
except Exception as exc:
logger.error(f'Unexpected error: {exc}')
raise HTTPException(status_code=500, detail=f"Internal Server Error: {str(exc)}") from exc
# Check for invalid input
if preferences is None:
raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
if len(landmarks) == 0 :
raise HTTPException(status_code=406, detail="No landmarks provided for computing the trip.")
if (preferences.shopping.score == 0 and
preferences.sightseeing.score == 0 and
preferences.nature.score == 0) :
raise HTTPException(status_code=406, detail="All preferences are 0.")
if start is None:
raise HTTPException(status_code=406, detail="Start coordinates not provided")
if not (-90 <= start[0] <= 90 and -180 <= start[1] <= 180):
raise HTTPException(status_code=422, detail="Start coordinates not in range")
if end is None:
end = start
logger.info("No end coordinates provided. Using start=end.")
# Start the timer
start_time = time.time()
logger.info(f"Requested new trip generation. Details:\n\tCoordinates: {start}\n\tTime: {preferences.max_time_minute}\n\tSightseeing: {preferences.sightseeing.score}\n\tNature: {preferences.nature.score}\n\tShopping: {preferences.shopping.score}")
start_landmark = Landmark(
name='start',
type='start',
location=(start[0], start[1]),
osm_type='start',
osm_id=0,
attractiveness=0,
duration=0,
must_do=True,
n_tags = 0
)
end_landmark = Landmark(
name='finish',
type='finish',
location=(end[0], end[1]),
osm_type='end',
osm_id=0,
attractiveness=0,
duration=0,
must_do=True,
n_tags=0
)
# From the parameters load the length at which to truncate the landmarks list.
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
n_important = parameters['N_important']
# Truncate to the most important landmarks for a shorter list
landmarks_short = landmarks[:n_important]
# insert start and finish to the shorter landmarks list
landmarks_short.insert(0, start_landmark)
landmarks_short.append(end_landmark)
# First stage optimization
try:
base_tour = optimizer.solve_optimization(preferences.max_time_minute, landmarks_short)
except Exception as exc:
logger.error(f"Trip generation failed: {str(exc)}")
raise HTTPException(status_code=500, detail=f"Optimization failed: {str(exc)}") from exc
t_first_stage = time.time() - start_time
start_time = time.time()
# Second stage optimization
try :
refined_tour = refiner.refine_optimization(
landmarks, base_tour,
preferences.max_time_minute,
preferences.detour_tolerance_minute
)
except Exception as exc :
logger.warning(f"Refiner failed. Proceeding with base trip {str(exc)}")
refined_tour = base_tour
t_second_stage = time.time() - start_time
logger.debug(f'First stage optimization\t: {round(t_first_stage,3)} seconds')
logger.debug(f'Second stage optimization\t: {round(t_second_stage,3)} seconds')
logger.info(f'Total computation time\t: {round(t_first_stage + t_second_stage,3)} seconds')
linked_tour = LinkedLandmarks(refined_tour)
# upon creation of the trip, persistence of both the trip and its landmarks is ensured.
trip = Trip.from_linked_landmarks(linked_tour, cache_client)
logger.info(f'Optimized a trip of {trip.total_time} minutes with {len(refined_tour)} landmarks in {round(t_first_stage + t_second_stage,3)} seconds.')
logger.info('Detailed trip :\n\t' + '\n\t'.join(f'{landmark}' for landmark in refined_tour))
# Add the cache fill as background task
background_tasks.add_task(fill_cache)
# Finally, decrement the user balance
supabase.decrement_credit_balance(user_id=user_id)
return trip
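
For orientation, here is a minimal client-side sketch of the two-step flow this endpoint assumes (first fetch candidate landmarks, then optimize). The routes and payload shapes mirror the tests further down in this diff; the user id is a placeholder.

# Sketch of the two-step trip generation flow, mirroring the tests below.
from fastapi.testclient import TestClient
from ..main import app  # same import path the test files use

client = TestClient(app)
prefs = {
    "sightseeing": {"type": "sightseeing", "score": 5},
    "nature": {"type": "nature", "score": 0},
    "shopping": {"type": "shopping", "score": 0},
    "max_time_minute": 120,
    "detour_tolerance_minute": 0,
}
start = [45.7576485, 4.8330241]  # Lyon, Bellecour

# Step 1: fetch candidate landmarks around the starting point.
landmarks = client.post("/get/landmarks", json={"preferences": prefs, "start": start}).json()

# Step 2: hand them to the optimizer together with the paying user's id.
trip = client.post(
    "/optimize/trip",
    json={"user_id": "<some-user-id>", "preferences": prefs, "landmarks": landmarks, "start": start},
).json()
print(trip["total_time"], trip["first_landmark_uuid"])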

View File

@@ -6,7 +6,6 @@ from shapely import buffer, LineString, Point, Polygon, MultiPoint, concave_hull
from ..structs.landmark import Landmark
from ..utils.get_time_distance import get_time
from ..utils.take_most_important import take_most_important
from .optimizer import Optimizer
from ..constants import OPTIMIZER_PARAMETERS_PATH
@@ -238,7 +237,7 @@ class Refiner :
if self.is_in_area(area, landmark.location) and landmark.name not in visited_names:
second_order_landmarks.append(landmark)
return take_most_important(second_order_landmarks, int(self.max_landmarks_refiner*0.75))
return sorted(second_order_landmarks, key=lambda x: x.attractiveness, reverse=True)[:int(self.max_landmarks_refiner*0.75)]
# Try to fix the shortest path using shapely

View File

@@ -402,6 +402,8 @@ def fill_cache():
n_files = 0
total = 0
overpass.logger.info('Trip successfully returned, starting to fill cache.')
with os.scandir(OSM_CACHE_DIR) as it:
for entry in it:
if entry.is_file() and entry.name.startswith('hollow_'):

View File

@@ -7,5 +7,4 @@ tag_exponent: 1.15
image_bonus: 1.1
viewpoint_bonus: 10
wikipedia_bonus: 1.25
N_important: 60
pay_bonus: -1

View File

@@ -6,4 +6,5 @@ max_landmarks_refiner: 20
overshoot: 0.0016
time_limit: 1
gap_rel: 0.025
max_iter: 80
max_iter: 80
N_important: 60

View File

@@ -0,0 +1,353 @@
import json
import logging
from typing import Literal
from datetime import datetime, timedelta
import requests
from pydantic import BaseModel, Field, field_validator
from ..configuration.environment import Environment
from ..cache import CreditCache, make_credit_cache_key
# Initialize the logger
logger = logging.getLogger(__name__)
# Define the base URL, might move that to toml file
BASE_URL_PROD = 'https://api-m.paypal.com'
BASE_URL_SANDBOX = 'https://api-m.sandbox.paypal.com'
class BasketItem(BaseModel):
"""
Represents a single item in the user's basket.
Attributes:
id (str): The unique identifier for the item.
quantity (int): The number of units of the item.
"""
id: str
quantity: int
class Item(BaseModel):
"""
Represents an item available in the shop.
Attributes:
id (str): The unique identifier for the item.
name (str): The name of the item.
description (str): The description of the item.
unit_price (float): The unit price of the item.
"""
id: str
name: str
description: str
unit_price: float
unit_credits: int
def item_from_sql(item_id: str):
"""
Fetches an item from the database by its ID.
Args:
item_id (str): The unique identifier for the item.
Returns:
Item: The item object retrieved from the database.
"""
# TODO: Replace with actual SQL fetch logic
return Item(
id = '12345678',
name = 'test_item',
description = 'lorem ipsum',
unit_price = 0.1,
unit_credits = 5
)
class OrderRequest(BaseModel):
"""
Represents an order request from the frontend.
Attributes:
user_id (str): The ID of the user placing the order.
basket (list[BasketItem]): List of basket items.
currency (str): The currency code for the order.
created_at (datetime): Timestamp when the order was created.
updated_at (datetime): Timestamp when the order was last updated.
items (list[Item]): List of item details loaded from the database.
total_price (float): Total price of the order.
"""
user_id: str
basket: list[BasketItem]
currency: Literal['CHF', 'EUR', 'USD']
created_at: datetime = Field(default_factory=datetime.now)
updated_at: datetime = Field(default_factory=datetime.now)
items: list[Item] = Field(default_factory=list)
total_price: float | None = None
total_credits: int | None = None
@field_validator('basket')
def validate_basket(cls, v):
"""Validates the basket items.
Args:
v (list): List of basket items.
Raises:
ValueError: If basket does not contain valid BasketItem objects.
Returns:
list: The validated basket.
"""
if not v or not all(isinstance(i, BasketItem) for i in v):
raise ValueError('Basket must contain BasketItem objects')
return v
def load_items_and_price(self):
# This should be automatic upon initialization of the class
"""
Loads item details from database and calculates the total price as well as the total credits to be granted.
"""
self.items = []
self.total_price = 0
self.total_credits = 0
for basket_item in self.basket:
item = item_from_sql(basket_item.id)
self.items.append(item)
self.total_price += item.unit_price * basket_item.quantity # increment price
self.total_credits += item.unit_credits * basket_item.quantity # increment credit balance
def to_paypal_items(self):
"""
Converts items to the PayPal API item format.
Returns:
list: List of items formatted for PayPal API.
"""
item_list = []
for basket_item, item in zip(self.basket, self.items):
item_list.append({
'id': item.id,
'name': item.name,
'description': item.description,
'quantity': str(basket_item.quantity),
'unit_amount': {
'currency_code': self.currency,
'value': str(item.unit_price)
}
})
return item_list
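
As a quick sanity check, a sketch of how OrderRequest derives its totals and the PayPal item payload; it relies on the stubbed item_from_sql above, so every basket id resolves to the same test item. Note that str() on a float price can yield values like '0.30000000000000004', which PayPal's money format rejects, one more argument for storing prices as integer cents or Decimal rather than float.

# Sketch: build an order from a basket and inspect the derived fields.
order = OrderRequest(
    user_id='user-123',  # placeholder user id
    basket=[BasketItem(id='12345678', quantity=3)],
    currency='CHF',
)
order.load_items_and_price()
assert order.total_credits == 15                  # 3 x 5 credits from the stub item
print(order.total_price)                          # 0.30000000000000004 (float artefact)
print(order.to_paypal_items()[0]['unit_amount'])  # {'currency_code': 'CHF', 'value': '0.1'}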
# Payment handler class for managing PayPal payments
class PaypalClient:
"""
Handles PayPal payment operations.
Attributes:
sandbox (bool): Whether to use the sandbox environment.
id (str): PayPal client ID.
key (str): PayPal client secret.
base_url (str): Base URL for PayPal API.
_token_cache (dict): Cache for the PayPal OAuth access token.
"""
_token_cache = {
"access_token": None,
"expires_at": 0
}
def __init__(
self,
sandbox_mode: bool = False
):
"""
Initializes the handler.
Args:
sandbox_mode (bool): Whether to use sandbox credentials.
"""
self.logger = logging.getLogger(__name__)
self.sandbox = sandbox_mode
# PayPal keys
if sandbox_mode :
self.id = Environment.paypal_id_sandbox
self.key = Environment.paypal_key_sandbox
self.base_url = BASE_URL_SANDBOX
else :
self.id = Environment.paypal_id_prod
self.key = Environment.paypal_key_prod
self.base_url = BASE_URL_PROD
def _get_access_token(self) -> str | None:
"""
Gets (and caches) a PayPal access token.
Returns:
str | None: The access token if successful, None otherwise.
"""
now = datetime.now()
# Check if token is still valid
if (
self._token_cache["access_token"] is not None
and self._token_cache["expires_at"] > now
):
self.logger.info('Returning (cached) access token.')
return self._token_cache["access_token"]
# Request new token
validation_data = {'grant_type': 'client_credentials'}
try:
# pass the request
validation_response = requests.post(
url = f'{self.base_url}/v1/oauth2/token',
data = validation_data,
auth =(self.id, self.key)
)
except Exception as exc:
self.logger.error(f'Error while requesting access token: {exc}')
return None
data = validation_response.json()
access_token = data.get("access_token")
expires_in = int(data.get("expires_in", 3600)) # seconds, default 1 hour
# Cache the token and its expiry
self._token_cache["access_token"] = access_token
self._token_cache["expires_at"] = now + timedelta(seconds=expires_in - 60) # buffer 1 min
self.logger.info('Returning (new) access token.')
return access_token
def order(self, order_request: OrderRequest):
"""
Creates a new PayPal order.
Args:
order_request (OrderRequest): The order request.
Returns:
dict | None: PayPal order response JSON, or None if failed.
"""
# Fetch details of order from mart database and compute total credits and price
order_request.load_items_and_price()
# Prepare payload for post request to paypal API
order_data = {
'intent': 'CAPTURE',
'purchase_units': [
{
'items': order_request.to_paypal_items(),
'amount': {
'currency_code': order_request.currency,
'value': str(order_request.total_price),
'breakdown': {
'item_total': {
'currency_code': order_request.currency,
'value': str(order_request.total_price)
}
}
}
}
],
# TODO: add these to anydev website
'application_context': {
'return_url': 'https://anydev.info',
'cancel_url': 'https://anydev.info'
}
}
# Get the access_token:
access_token = self._get_access_token()
try:
order_response = requests.post(
url = f'{self.base_url}/v2/checkout/orders',
headers = {'Authorization': f'Bearer {access_token}'},
json = order_data,
)
except Exception as exc:
self.logger.error(f'Error creating PayPal order: {exc}')
return None
# Raise if the API returned an error status code.
order_response.raise_for_status()
# TODO Now that we have the order ID, we can inscribe the details in sql database using the order id given by paypal
# DB for storing the transactions:
# order_id (key): json.loads(order_response.text)["id"]
# user_id : order_request.user_id
# created_at : order_request.created_at
# status : PENDING
# basket (json) : OrderDetails.jsonify()
# total_price : order_request.total_price
# currency : order_request.currency
# updated_at : order_request.created_at
# Create a cache item for credits to be granted to user
CreditCache.set_credits(
user_id = order_request.user_id,
order_id = json.loads(order_response.text)["id"],
credits_to_grant = order_request.total_credits)
return order_response.json()
# Capture the payment for a previously approved order
def capture(self, user_id: str, order_id: str):
"""
Captures payment for a PayPal order.
Args:
user_id (str): The ID of the user who placed the order.
order_id (str): The PayPal order ID.
Returns:
dict | None: PayPal capture response JSON, or None if failed.
"""
# Get the access_token:
access_token = self._get_access_token()
try:
capture_response = requests.post(
url = f'{self.base_url}/v2/checkout/orders/{order_id}/capture',
headers = {'Authorization': f'Bearer {access_token}'},
json = {},
)
except Exception as exc:
self.logger.error(f'Error capturing PayPal order: {exc}')
return None
# Raise exception if API call failed
capture_response.raise_for_status()
# print(capture_response.text)
# TODO: update status to PAID in sql database
# where order_id (key) = order_id
# status : 'PAID'
# updated_at : datetime.now()
# Not sure yet if/how to implement that
def cancel(self):
pass
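
Putting the pieces together, a hedged sketch of the intended order-then-capture flow (sandbox mode, assuming sandbox credentials are present in Environment; the buyer approves the order on PayPal's hosted page between the two calls):

# Sketch: create an order, let the buyer approve it, then capture it.
pp = PaypalClient(sandbox_mode=True)
order = OrderRequest(
    user_id='user-123',  # placeholder
    basket=[BasketItem(id='12345678', quantity=1)],
    currency='CHF',
)
created = pp.order(order_request=order)
if created is not None:
    # The buyer approves via the 'approve' link in created['links'],
    # then the frontend calls back and we capture:
    pp.capture(user_id='user-123', order_id=created['id'])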

View File

@@ -0,0 +1,159 @@
import logging
from typing import Literal
from fastapi import APIRouter, HTTPException
from ..payments import PaypalClient, OrderRequest
from ..supabase.supabase import SupabaseClient
from ..cache import CreditCache, make_credit_cache_key
# Create a PayPal & Supabase client
paypal_client = PaypalClient(sandbox_mode=False)
supabase = SupabaseClient()
# Initialize the API router
router = APIRouter()
# Initialize the logger
logger = logging.getLogger(__name__)
@router.post("/orders/new")
def create_order(
user_id: str,
basket: list,
currency: Literal['CHF', 'EUR', 'USD']
):
"""
Creates a new PayPal order.
Args:
user_id (str): The ID of the user placing the order.
basket (list): The basket items.
currency (str): The currency code.
Returns:
dict: The PayPal order details.
"""
# Create order :
order = OrderRequest(
user_id = user_id,
basket=basket,
currency=currency
)
# Process the order and return the details
return paypal_client.order(order_request = order)
@router.post("/orders/{order_id}/{user_id}capture")
def capture_order(order_id: str, user_id: str):
"""
Captures payment for an existing PayPal order.
Args:
order_id (str): The PayPal order ID.
Returns:
dict: The PayPal capture response.
"""
# Capture the payment
result = paypal_client.capture(user_id=user_id, order_id=order_id)
# Grant the user the correct amount of credits:
credits = CreditCache.get_credits(user_id, order_id)
if credits:
supabase.increment_credit_balance(
user_id=user_id,
amount=credits
)
logger.info(f'Payment capture succeeded: incrementing balance of user {user_id} by {credits}.')
else:
logger.error(f'Capture payment failed. Could not find cache key for user {user_id} and order {order_id}')
return result
# import logging
# import paypalrestsdk
# from fastapi import HTTPException, APIRouter
# from ..supabase.supabase import SupabaseClient
# from .payment_handler import PaymentRequest, PaymentHandler
# # Set up logging and supabase
# logger = logging.getLogger(__name__)
# supabase = SupabaseClient()
# # Configure PayPal SDK
# paypalrestsdk.configure({
# "mode": "sandbox", # Use 'live' for production
# "client_id": "YOUR_PAYPAL_CLIENT_ID",
# "client_secret": "YOUR_PAYPAL_SECRET"
# })
# # Define the API router
# router = APIRouter()
# @router.post("/purchase/credits")
# def purchase_credits(payment_request: PaymentRequest):
# """
# Handles token purchases. Calculates the number of tokens based on the amount paid,
# updates the user's balance, and processes PayPal payment.
# """
# payment_handler = PaymentHandler(payment_request)
# # Create PayPal payment and get the approval URL
# approval_url = payment_handler.create_paypal_payment()
# return {
# "message": "Purchase initiated successfully",
# "payment_id": payment_handler.payment_id,
# "credits": payment_request.credit_amount,
# "approval_url": approval_url,
# }
# @router.get("/payment/success")
# def payment_success(paymentId: str, PayerID: str):
# """
# Handles successful PayPal payment.
# """
# payment = paypalrestsdk.Payment.find(paymentId)
# if payment.execute({"payer_id": PayerID}):
# logger.info("Payment executed successfully")
# # Retrieve transaction details from the database
# result = supabase.table("pending_payments").select("*").eq("payment_id", paymentId).single().execute()
# if not result.data:
# raise HTTPException(status_code=404, detail="Transaction not found")
# # Extract the necessary information
# user_id = result.data["user_id"]
# credit_amount = result.data["credit_amount"]
# # Update the user's balance
# supabase.increment_credit_balance(user_id, amount=credit_amount)
# # Optionally, delete the pending payment entry since the transaction is completed
# supabase.table("pending_payments").delete().eq("payment_id", paymentId).execute()
# return {"message": "Payment completed successfully"}
# else:
# logger.error(f"Payment execution failed: {payment.error}")
# raise HTTPException(status_code=500, detail="Payment execution failed")
# @router.get("/payment/cancel")
# def payment_cancel():
# """
# Handles PayPal payment cancellation.
# """
# return {"message": "Payment was cancelled"}

View File

@@ -0,0 +1,101 @@
import json
import requests
from ..configuration.environment import Environment
# DOCUMENTATION AT : https://developer.paypal.com/api/rest/requests/
# username and password
username = Environment.paypal_id_sandbox
password = Environment.paypal_key_sandbox
######## STEP 1: Validation ########
# url for validation post request
validation_url = "https://api-m.sandbox.paypal.com/v1/oauth2/token"
validation_url_prod = "https://api-m.paypal.com/v1/oauth2/token"
# payload for the post request
validation_data = {'grant_type': 'client_credentials'}
# pass the request
validation_response = requests.post(
url=validation_url,
data=validation_data,
auth=(username, password)
)
# todo check status code + try except. Status code 201 ?
print(validation_response.text)
access_token = json.loads(validation_response.text)["access_token"]
######## STEP 2: Create Order ########
# url for post request
order_url = "https://api-m.sandbox.paypal.com/v2/checkout/orders"
order_url_prod = "https://api-m.paypal.com/v2/checkout/orders"
# payload for the request
order_data = {
"intent": "CAPTURE",
"purchase_units": [
{
"items": [
{
"name": "AnyWay Credits",
"description": "50 pack of credits",
"quantity": 1,
"unit_amount": {
"currency_code": "CHF",
"value": "5.00"
}
}
],
"amount": {
"currency_code": "CHF",
"value": "5.00",
"breakdown": {
"item_total": {
"currency_code": "CHF",
"value": "5.00"
}
}
}
}
],
"application_context": {
"return_url": "https://anydev.info",
"cancel_url": "https://anydev.info"
}
}
order_response = requests.post(
url=order_url,
headers={"Authorization": f"Bearer {access_token}"}, ## need access token here?
json=order_data,
auth=(username, password)
)
# todo check status code + try except
print(order_response.text)
order_id = json.loads(order_response.text)["id"]
######## STEP 3: capture payment
# url for post request
capture_url = f"https://api-m.sandbox.paypal.com/v2/checkout/orders/{order_id}/capture"
capture_url_prod = f"https://api-m.paypal.com/v2/checkout/orders/{order_id}/capture"
capture_response = requests.post(
url=capture_url,
json={},
auth=(username, password)
)
# todo check status code + try except
print(capture_response.text)
# order_id = json.loads(response.text)["id"]

View File

@@ -49,8 +49,8 @@ class Landmark(BaseModel) :
image_url : Optional[str] = None
website_url : Optional[str] = None
wiki_url : Optional[str] = None
keywords: Optional[dict] = {}
description : Optional[str] = None
# keywords: Optional[dict] = {}
# description : Optional[str] = None
duration : Optional[int] = 5
name_en : Optional[str] = None

View File

@@ -2,6 +2,7 @@
from .landmark import Landmark
from ..utils.get_time_distance import get_time
from ..utils.description import description_and_keywords
class LinkedLandmarks:
"""
@@ -35,18 +36,23 @@ class LinkedLandmarks:
Create the links between the landmarks in the list by setting their
.next_uuid and the .time_to_next attributes.
"""
# Mark secondary landmarks as such
self.update_secondary_landmarks()
for i, landmark in enumerate(self._landmarks[:-1]):
# Set uuid of the next landmark
landmark.next_uuid = self._landmarks[i + 1].uuid
# Adjust time to reach and total time
time_to_next = get_time(landmark.location, self._landmarks[i + 1].location)
landmark.time_to_reach_next = time_to_next
self.total_time += time_to_next
self.total_time += landmark.duration
# Fill in the keywords and description. GOOD IDEA, BAD EXECUTION, tags aren't available anymore at this stage
# landmark.description, landmark.keywords = description_and_keywords(tags)
self._landmarks[-1].next_uuid = None
self._landmarks[-1].time_to_reach_next = 0

View File

@@ -1,7 +1,7 @@
"""Defines the Preferences used as input for trip generation."""
from typing import Optional, Literal
from pydantic import BaseModel
from pydantic import BaseModel, field_validator
class Preference(BaseModel) :
@@ -15,6 +15,13 @@ class Preference(BaseModel) :
type: Literal['sightseeing', 'nature', 'shopping', 'start', 'finish']
score: int # score could be from 1 to 5
@field_validator("type")
@classmethod
def validate_type(cls, v):
if v not in {'sightseeing', 'nature', 'shopping', 'start', 'finish'}:
raise ValueError(f"Invalid type: {v}")
return v
# Input for optimization
class Preferences(BaseModel) :
@@ -32,3 +39,16 @@ class Preferences(BaseModel) :
max_time_minute: Optional[int] = 3*60
detour_tolerance_minute: Optional[int] = 0
def model_post_init(self, __context):
"""
Method to validate proper initialization of individual Preferences.
Raises ValueError if the Preference type does not match with the field name.
"""
if self.sightseeing.type != 'sightseeing':
raise ValueError(f'The sightseeing preference cannot be {self.sightseeing.type}.')
if self.nature.type != 'nature':
raise ValueError(f'The nature preference cannot be {self.nature.type}.')
if self.shopping.type != 'shopping':
raise ValueError(f'The shopping preference cannot be {self.shopping.type}.')
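
A short sketch of what this cross-field validation accepts and rejects:

# Sketch: the post-init hook rejects a Preference whose type does not
# match its field name.
ok = Preferences(
    sightseeing=Preference(type='sightseeing', score=5),
    nature=Preference(type='nature', score=0),
    shopping=Preference(type='shopping', score=3),
)
try:
    Preferences(
        sightseeing=Preference(type='nature', score=5),  # mismatched on purpose
        nature=Preference(type='nature', score=0),
        shopping=Preference(type='shopping', score=3),
    )
except ValueError as err:
    print(err)  # The sightseeing preference cannot be nature.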

View File

@@ -0,0 +1,168 @@
import os
import logging
import yaml
from fastapi import HTTPException, status
from supabase import create_client, Client, ClientOptions
from ..constants import PARAMETERS_DIR
from ..configuration.environment import Environment
# Silence the supabase logger
logging.getLogger("httpx").setLevel(logging.CRITICAL)
logging.getLogger("hpack").setLevel(logging.CRITICAL)
logging.getLogger("httpcore").setLevel(logging.CRITICAL)
class SupabaseClient:
logger = logging.getLogger(__name__)
def __init__(self):
self.SUPABASE_URL = Environment.supabase_url
self.SUPABASE_ADMIN_KEY = Environment.supabase_admin_key
self.SUPABASE_TEST_USER_ID = Environment.supabase_test_user_id
self.supabase = create_client(
self.SUPABASE_URL,
self.SUPABASE_ADMIN_KEY,
options=ClientOptions(schema='public')
)
self.logger.info('Supabase client initialized.')
def check_balance(self, user_id: str) -> bool:
"""
Checks if the user has enough 'credit' for generating a new trip.
Args:
user_id (str): The ID of the current user.
Returns:
bool: True if the balance is positive, False otherwise.
"""
try:
# Query the public.credits table to get the user's credits
response = (
self.supabase.table("credits")
.select('*')
.eq('id', user_id)
.single()
.execute()
)
except Exception as e:
code = getattr(e, 'code', None)
if code == '22P02' :
self.logger.error(f"Failed querying credits : {str(e)}")
raise SyntaxError(f"Failed querying credits : {str(e)}") from e
if code == 'PGRST116' :
self.logger.error(f"User not found : {str(e)}")
raise ValueError(f"User not found : {str(e)}") from e
self.logger.error(f"An unexpected error occurred while checking user balance : {str(e)}")
raise Exception(f"An unexpected error occurred while checking user balance : {str(e)}") from e
# Proceed to check the user's credit balance
credits = response.data['credit_amount']
self.logger.debug(f'Credits of user {user_id}: {credits}')
if credits > 0:
self.logger.info(f'Credit balance is positive for user {user_id}. Proceeding with trip generation.')
return True
self.logger.warning(f'Insufficient balance for user {user_id}. Trip generation cannot proceed.')
return False
def decrement_credit_balance(self, user_id: str, amount: int=1) -> bool:
"""
Decrements the user's credit balance by 1.
Args:
user_id (str): The ID of the current user.
"""
try:
# Query the public.credits table to get the user's current credits
response = (
self.supabase.table("credits")
.select('*')
.eq('id', user_id)
.single()
.execute()
)
except Exception as e:
code = getattr(e, 'code', None)
if code == '22P02' :
self.logger.error(f"Failed decrementing credits : {str(e)}")
raise SyntaxError(f"Failed decrementing credits : {str(e)}") from e
if code == 'PGRST116' :
self.logger.error(f"User not found : {str(e)}")
raise ValueError(f"User not found : {str(e)}") from e
self.logger.error(f"An unexpected error occurred while decrementing user balance : {str(e)}")
raise Exception(f"An unexpected error occurred while decrementing user balance : {str(e)}") from e
current_credits = response.data['credit_amount']
updated_credits = current_credits - amount
# Update the user's credits in the table
update_response = (
self.supabase.table('credits')
.update({'credit_amount': updated_credits})
.eq('id', user_id)
.execute()
)
# Check if the update was successful
if update_response.data:
self.logger.debug('Credit balance successfully decremented.')
return True
else:
raise Exception("Error decrementing credit balance.")
def increment_credit_balance(self, user_id: str, amount: int=1) -> bool:
"""
Increments the user's credit balance by 1.
Args:
user_id (str): The ID of the current user.
"""
try:
# Query the public.credits table to get the user's current credits
response = (
self.supabase.table("credits")
.select('*')
.eq('id', user_id)
.single()
.execute()
)
except Exception as e:
code = getattr(e, 'code', None)
if code == '22P02' :
self.logger.error(f"Failed incrementing credits : {str(e)}")
raise SyntaxError(f"Failed incrementing credits : {str(e)}") from e
if code == 'PGRST116' :
self.logger.error(f"User not found : {str(e)}")
raise ValueError(f"User not found : {str(e)}") from e
self.logger.error(f"An unexpected error occurred while incrementing user balance : {str(e)}")
raise Exception(f"An unexpected error occurred while incrementing user balance : {str(e)}") from e
current_credits = response.data['credit_amount']
updated_credits = current_credits + amount
# Update the user's credits in the table
update_response = (
self.supabase.table('credits')
.update({'credit_amount': updated_credits})
.eq('id', user_id)
.execute()
)
# Check if the update was successful
if update_response.data:
self.logger.debug('Credit balance successfully incremented.')
return True
else:
raise Exception("Error incrementing credit balance.")

View File

@@ -19,30 +19,50 @@ def invalid_client():
([48.8566, 2.3522], {}, 422),
# Invalid cases: incomplete preferences.
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no shopping
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 5}, # no shopping pref
"nature": {"type": "nature", "score": 5},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no nature
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 5}, # no nature pref
"shopping": {"type": "shopping", "score": 5},
}, 422),
([48.084588, 7.280405], {"nature": {"type": "nature", "score": 5}, # no sightseeing
([48.084588, 7.280405], {"nature": {"type": "nature", "score": 5}, # no sightseeing pref
"shopping": {"type": "shopping", "score": 5},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 1}, # mixed up preferences types. TODO: i suggest reducing the complexity by remove the Preference object.
"nature": {"type": "shopping", "score": 1},
"shopping": {"type": "shopping", "score": 1},
}, 422),
([48.084588, 7.280405], {"doesnotexist": {"type": "sightseeing", "score": 2}, # non-existing preferences types
"nature": {"type": "nature", "score": 2},
"shopping": {"type": "shopping", "score": 2},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 3}, # non-existing preferences types
"nature": {"type": "doesntexisteither", "score": 3},
"shopping": {"type": "shopping", "score": 3},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": -1}, # negative preference value
"nature": {"type": "doesntexisteither", "score": 4},
"shopping": {"type": "shopping", "score": 4},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 10}, # too high preference value
"nature": {"type": "doesntexisteither", "score": 4},
"shopping": {"type": "shopping", "score": 4},
}, 422),
# Invalid cases: unexisting coords
([91, 181], {"sightseeing": {"type": "nature", "score": 5},
([91, 181], {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
([-91, 181], {"sightseeing": {"type": "nature", "score": 5},
([-91, 181], {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
([91, -181], {"sightseeing": {"type": "nature", "score": 5},
([91, -181], {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
([-91, -181], {"sightseeing": {"type": "nature", "score": 5},
([-91, -181], {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
@@ -53,8 +73,8 @@ def test_input(invalid_client, start, preferences, status_code): # pylint: dis
Test new trip creation with different sets of preferences and locations.
"""
response = invalid_client.post(
"/trip/new",
json={
"/get/landmarks",
json ={
"preferences": preferences,
"start": start
}

View File

@@ -1,343 +0,0 @@
"""Collection of tests to ensure correct implementation and track progress. """
import time
from fastapi.testclient import TestClient
import pytest
from .test_utils import load_trip_landmarks, log_trip_details
from ..main import app
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
def test_turckheim(client, request): # pylint: disable=redefined-outer-name
"""
Test n°1 : Custom test in Turckheim to ensure small villages are also supported.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 20
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 0},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.084588, 7.280405]
# "start": [45.74445023349939, 4.8222687890538865]
# "start": [45.75156398104873, 4.827154464827647]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# checks :
assert response.status_code == 200 # check for successful planning
assert isinstance(landmarks, list) # check that the return type is a list
assert len(landmarks) > 2 # check that there is something to visit
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
# assert 2!= 3
def test_bellecour(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°2 : Custom test in Lyon centre to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 120
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [45.7576485, 4.8330241]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_cologne(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°3 : Custom test in Cologne to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 240
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [50.942352665, 6.957777972392]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_strasbourg(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°4 : Custom test in Strasbourg to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 180
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.5846589226, 7.74078715721]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_zurich(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°5 : Custom test in Zurich to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 180
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [47.377884227, 8.5395114066]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_paris(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°6 : Custom test in Paris (les Halles) centre to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 200
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.85468881798671, 2.3423925755998374]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_new_york(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°7 : Custom test in New York to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 600
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [40.72592726802, -73.9920434795]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_shopping(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°8 : Custom test in Lyon centre to ensure shopping clusters are found.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 240
response = client.post(
"/trip/new",
json={
"preferences": {"sightseeing": {"type": "sightseeing", "score": 0},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [45.7576485, 4.8330241]
}
)
result = response.json()
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"

View File

@@ -0,0 +1,46 @@
"""Collection of tests to ensure correct implementation and track progress of the get_landmarks_nearby feature. """
from fastapi.testclient import TestClient
import pytest
from ..main import app
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"location,status_code",
[
([45.7576485, 4.8330241], 200), # Lyon, France
([41.4020572, 2.1818985], 200), # Barcelona, Spain
([59.3293, 18.0686], 200), # Stockholm, Sweden
([43.6532, -79.3832], 200), # Toronto, Canada
([38.7223, -9.1393], 200), # Lisbon, Portugal
([6.5244, 3.3792], 200), # Lagos, Nigeria
([17.3850, 78.4867], 200), # Hyderabad, India
([30.0444, 31.2357], 200), # Cairo, Egypt
([50.8503, 4.3517], 200), # Brussels, Belgium
([35.2271, -80.8431], 200), # Charlotte, USA
([10.4806, -66.9036], 200), # Caracas, Venezuela
([9.51074, -13.71118], 200), # Conakry, Guinea
]
)
def test_nearby(client, location, status_code): # pylint: disable=redefined-outer-name
"""
Test n°1 : Verify handling of invalid input.
Args:
client:
request:
"""
response = client.post(f"/get-nearby/landmarks/{location[0]}/{location[1]}")
suggestions = response.json()
# checks :
assert response.status_code == status_code # check for successful planning
assert isinstance(suggestions, list) # check that the return type is a list
assert len(suggestions) > 0

View File

@@ -0,0 +1,46 @@
"""Collection of tests to ensure correct implementation and track progress of paypal payments."""
from fastapi.testclient import TestClient
import pytest
from ..main import app
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"location,status_code",
[
([45.7576485, 4.8330241], 200), # Lyon, France
([41.4020572, 2.1818985], 200), # Barcelona, Spain
([59.3293, 18.0686], 200), # Stockholm, Sweden
([43.6532, -79.3832], 200), # Toronto, Canada
([38.7223, -9.1393], 200), # Lisbon, Portugal
([6.5244, 3.3792], 200), # Lagos, Nigeria
([17.3850, 78.4867], 200), # Hyderabad, India
([30.0444, 31.2357], 200), # Cairo, Egypt
([50.8503, 4.3517], 200), # Brussels, Belgium
([35.2271, -80.8431], 200), # Charlotte, USA
([10.4806, -66.9036], 200), # Caracas, Venezuela
([9.51074, -13.71118], 200), # Conakry, Guinea
]
)
def test_nearby(client, location, status_code): # pylint: disable=redefined-outer-name
"""
Test n°1 : Verify handling of invalid input.
Args:
client:
request:
"""
response = client.post(f"/get-nearby/landmarks/{location[0]}/{location[1]}")
suggestions = response.json()
# checks :
assert response.status_code == status_code # check for successful planning
assert isinstance(suggestions, list) # check that the return type is a list
assert len(suggestions) > 0

View File

@@ -18,7 +18,7 @@ def client():
[
({}, None, 422), # Invalid case: no location at all.
([443], None, 422), # Invalid cases: invalid location.
([443, 433], None, 422), # Invalid cases: invalid location.
([443, 433], None, 422), # Invalid cases: invalid location.
]
)
def test_invalid_input(client, location, radius, status_code): # pylint: disable=redefined-outer-name
@@ -30,7 +30,7 @@ def test_invalid_input(client, location, radius, status_code): # pylint: disa
request:
"""
response = client.post(
"/toilets/new",
"/get/toilets",
params={
"location": location,
"radius": radius
@@ -58,7 +58,7 @@ def test_no_toilets(client, location, status_code): # pylint: disable=redefin
request:
"""
response = client.post(
"/toilets/new",
"/get/toilets",
params={
"location": location
}
@@ -87,7 +87,7 @@ def test_toilets(client, location, status_code): # pylint: disable=redefined-
request:
"""
response = client.post(
"/toilets/new",
"/get/toilets",
params={
"location": location,
"radius" : 600

View File

@@ -0,0 +1,101 @@
"""Collection of tests to ensure correct implementation and track progress."""
import os
import time
import yaml
from fastapi.testclient import TestClient
import pytest
from .test_utils import load_trip_landmarks, log_trip_details
from ..supabase.supabase import SupabaseClient
from ..structs.preferences import Preferences, Preference
from ..constants import PARAMETERS_DIR
from ..main import app
# Create a supabase client
supabase = SupabaseClient()
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"sightseeing, shopping, nature, max_time_minute, start_coords, end_coords",
[
# Edge cases
(0, 0, 5, 240, [45.7576485, 4.8330241], None), # Lyon, Bellecour - test nature only
# Realistic
(5, 0, 0, 20, [48.0845881, 7.2804050], None), # Turckheim
(5, 5, 5, 120, [45.7576485, 4.8330241], None), # Lyon, Bellecour
(5, 2, 5, 240, [50.9423526, 6.9577780], None), # Cologne, centre
(3, 5, 0, 180, [48.5846589226, 7.74078715721], None), # Strasbourg, centre
(2, 4, 5, 180, [47.377884227, 8.5395114066], None), # Zurich, centre
(5, 0, 5, 200, [48.85468881798671, 2.3423925755998374], None), # Paris, centre
(5, 5, 5, 600, [40.72592726802, -73.9920434795], None), # New York, Lower Manhattan
]
)
def test_trip(client, request, sightseeing, shopping, nature, max_time_minute, start_coords, end_coords):
start_time = time.time() # Start timer
prefs = Preferences(
sightseeing=Preference(type='sightseeing', score=sightseeing),
shopping=Preference(type='shopping', score=shopping),
nature=Preference(type='nature', score=nature),
max_time_minute=max_time_minute,
detour_tolerance_minute=0,
)
start = start_coords
end = end_coords
# Step 1: request the list of landmarks in the vicinity of the starting point
response = client.post(
"/get/landmarks",
json={
"preferences": prefs.model_dump(),
"start": start_coords,
"end": end_coords,
}
)
assert response.status_code == 200
landmarks = response.json()
# Step 2: Feed the landmarks to the optimizer to compute the trip
response = client.post(
"/optimize/trip",
json={
"user_id": supabase.SUPABASE_TEST_USER_ID,
"preferences": prefs.model_dump(),
"landmarks": landmarks,
"start": start,
"end": end,
}
)
assert response.status_code == 200
# Increment the user balance again
supabase.increment_credit_balance(
supabase.SUPABASE_TEST_USER_ID,
amount=1
)
# Parse the response
result = response.json()
# print(result)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], prefs.max_time_minute)
# checks :
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert prefs.max_time_minute*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {prefs.max_time_minute}"
assert prefs.max_time_minute*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {prefs.max_time_minute}"

View File

@@ -1,9 +1,12 @@
"""Helper methods for testing."""
import time
import logging
from functools import wraps
from fastapi import HTTPException
from ..structs.landmark import Landmark
from ..cache import client as cache_client
from ..structs.landmark import Landmark
from ..structs.preferences import Preferences, Preference
def landmarks_to_osmid(landmarks: list[Landmark]) -> list[int] :
@@ -91,3 +94,34 @@ def log_trip_details(request, landmarks: list[Landmark], duration: int, target_d
request.node.trip_details = trip_string
request.node.trip_duration = str(duration) # result['total_time']
request.node.target_duration = str(target_duration)
def trip_params(
sightseeing: int,
shopping: int,
nature: int,
max_time_minute: int,
start_coords: tuple[float, float] = None,
end_coords: tuple[float, float] = None,
):
def decorator(test_func):
@wraps(test_func)
def wrapper(client, request):
prefs = Preferences(
sightseeing=Preference(type='sightseeing', score=sightseeing),
shopping=Preference(type='shopping', score=shopping),
nature=Preference(type='nature', score=nature),
max_time_minute=max_time_minute,
detour_tolerance_minute=0,
)
start = start_coords
end = end_coords
# Inject into test function
return test_func(client, request, prefs, start, end)
return wrapper
return decorator
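
The trip_params helper above looks intended to deduplicate the per-city tests; here is a sketch of the intended usage. One caveat: functools.wraps exposes the wrapped five-argument signature, so pytest may try to resolve prefs, start and end as fixtures, meaning the decorator likely needs a signature override before it works as-is.

# Sketch: intended usage of trip_params (untested, see caveat above).
@trip_params(sightseeing=5, shopping=0, nature=0, max_time_minute=20,
             start_coords=(48.084588, 7.280405))
def test_turckheim_short(client, request, prefs, start, end):
    response = client.post(
        "/get/landmarks",
        json={"preferences": prefs.model_dump(), "start": start, "end": end},
    )
    assert response.status_code == 200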

View File

@@ -70,6 +70,8 @@ class ToiletsManager:
toilets_list = self.to_toilets(result)
self.logger.debug(f'Found {len(toilets_list)} toilets around {self.location}')
return toilets_list

View File

@@ -1,16 +1,20 @@
"""Defines the endpoint for fetching toilet locations."""
"""API entry point for fetching toilet locations."""
from fastapi import HTTPException, APIRouter, Query
from ..structs.toilets import Toilets
from .toilets_manager import ToiletsManager
from ..structs.toilets import Toilets
# Define the API router
# Initialize the API router
router = APIRouter()
@router.post("/toilets/new")
def get_toilets(location: tuple[float, float] = Query(...), radius: int = 500) -> list[Toilets] :
@router.post("/get/toilets")
def get_toilets(
location: tuple[float, float] = Query(...),
radius: int = 500
) -> list[Toilets] :
"""
Endpoint to find toilets within a specified radius from a given location.
@@ -34,5 +38,6 @@ def get_toilets(location: tuple[float, float] = Query(...), radius: int = 500) -
toilets_list = toilets_manager.generate_toilet_list()
except KeyError as exc:
raise HTTPException(status_code=404, detail="No toilets found") from exc
return toilets_list

View File

@@ -0,0 +1,104 @@
import logging
from fastapi import HTTPException, APIRouter
from ..structs.landmark import Landmark
from ..structs.linked_landmarks import LinkedLandmarks
from ..structs.trip import Trip
from ..landmarks.landmarks_manager import LandmarkManager
from ..optimization.optimizer import Optimizer
from ..optimization.refiner import Refiner
from ..cache import client as cache_client
logger = logging.getLogger(__name__)
manager = LandmarkManager()
optimizer = Optimizer()
refiner = Refiner(optimizer=optimizer)
# Initialize the API router
router = APIRouter()
#### For already existing trips/landmarks
@router.get("/trip/{trip_uuid}")
def get_trip(trip_uuid: str) -> Trip:
"""
Look-up the cache for a trip that has been previously generated using its identifier.
Args:
trip_uuid (str) : unique identifier for a trip.
Returns:
(Trip) : the corresponding trip.
"""
try:
trip = cache_client.get(f"trip_{trip_uuid}")
return trip
except KeyError as exc:
logger.error(f"Failed to fetch trip with UUID {trip_uuid}: {str(exc)}")
raise HTTPException(status_code=404, detail="Trip not found") from exc
# Fetch a landmark from memcached by its uuid
@router.get("/landmark/{landmark_uuid}")
def get_landmark(landmark_uuid: str) -> Landmark:
"""
Returns a Landmark from its unique identifier.
Args:
landmark_uuid (str) : unique identifier for a Landmark.
Returns:
(Landmark) : the corresponding Landmark.
"""
try:
landmark = cache_client.get(f"landmark_{landmark_uuid}")
return landmark
except KeyError as exc:
logger.error(f"Failed to fetch landmark with UUID {landmark_uuid}: {str(exc)}")
raise HTTPException(status_code=404, detail="Landmark not found") from exc
# Update the times between landmarks when removing an item from the list
@router.post("/trip/recompute-time/{trip_uuid}/{removed_landmark_uuid}")
def update_trip_time(trip_uuid: str, removed_landmark_uuid: str) -> Trip:
"""
Updates the reaching times of a given trip when removing a landmark.
Args:
landmark_uuid (str) : unique identifier for a Landmark.
Returns:
(Landmark) : the corresponding Landmark.
"""
# First, fetch the trip in the cache.
try:
trip = cache_client.get(f'trip_{trip_uuid}')
except KeyError as exc:
logger.error(f"Failed to update trip with UUID {trip_uuid} (trip not found): {str(exc)}")
raise HTTPException(status_code=404, detail='Trip not found') from exc
landmarks = []
next_uuid = trip.first_landmark_uuid
# Extract landmarks
try :
while next_uuid is not None:
landmark = cache_client.get(f'landmark_{next_uuid}')
# Filter out the removed landmark.
if next_uuid != removed_landmark_uuid :
landmarks.append(landmark)
next_uuid = landmark.next_uuid # Prepare for the next iteration
except KeyError as exc:
logger.error(f"Failed to update trip with UUID {trip_uuid} : {str(exc)}")
raise HTTPException(status_code=404, detail=f'landmark {next_uuid} not found') from exc
# Re-link everything and compute the times again
linked_tour = LinkedLandmarks(landmarks)
trip = Trip.from_linked_landmarks(linked_tour, cache_client)
return trip
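
From the client's side, the recompute endpoint is a single POST; a sketch, with uuids taken from an earlier /optimize/trip response:

# Sketch: drop one landmark from an existing trip and re-link the rest.
from fastapi.testclient import TestClient
from ..main import app  # same import path the test files use

client = TestClient(app)
trip_uuid = "..."      # trip uuid from a previous response (placeholder)
removed_uuid = "..."   # uuid of the landmark to drop (placeholder)
updated = client.post(f"/trip/recompute-time/{trip_uuid}/{removed_uuid}").json()
# updated['total_time'] now excludes the removed stop and its travel legs.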

View File

@@ -0,0 +1,123 @@
"""Add more information about the landmarks by writing a short description and keywords. """
def description_and_keywords(tags: dict):
"""
Generates a description and a set of keywords for a given landmark based on its tags.
Params:
tags (dict): A dictionary containing metadata about the landmark, including its name,
importance, height, date of construction, and visitor information.
Returns:
description (str): A string description of the landmark.
keywords (dict): A dictionary of keywords with fields such as 'importance', 'height',
'place_type', and 'date'.
"""
# Extract relevant fields
name = tags.get('name')
importance = tags.get('importance', None)
n_visitors = tags.get('tourism:visitors', None)
height = tags.get('height')
place_type = get_place_type(tags)
date = get_date(tags)
if place_type is None :
return None, None
# Start the description.
if importance is None :
if len(tags.keys()) < 5 :
return None, None
if len(tags.keys()) < 10 :
description = f"{name} is a well known {place_type}."
elif len(tags.keys()) < 17 :
importance = 'national'
description = f"{name} is a {place_type} of national importance."
else :
importance = 'international'
description = f"{name} is an internationally famous {place_type}."
else :
description = f"{name} is a {place_type} of {importance} importance."
if height is not None and date is not None :
description += f" This {place_type} was constructed in {date} and is ca. {height} meters high."
elif height is not None :
description += f" This {place_type} stands ca. {height} meters tall."
elif date is not None:
description += f" It was constructed in {date}."
# Format the visitor number
if n_visitors is not None :
n_visitors = int(n_visitors)
if n_visitors < 1000000 :
description += f" It welcomes {int(n_visitors/1000)} thousand visitors every year."
else :
description += f" It welcomes {round(n_visitors/1000000, 1)} million visitors every year."
# Set the keywords.
keywords = {"importance": importance,
"height": height,
"place_type": place_type,
"date": date}
return description, keywords
def get_place_type(tags):
"""
Determines the type of the place based on available tags such as 'amenity', 'building',
'historic', and 'leisure'. The priority order is: 'historic' > 'building' (if not generic) >
'amenity' > 'leisure'.
Params:
tags (dict): A dictionary containing metadata about the place.
Returns:
place_type (str): The determined type of the place, or None if no relevant type is found.
"""
amenity = tags.get('amenity', None)
building = tags.get('building', None)
historic = tags.get('historic', None)
leisure = tags.get('leisure')
if historic and historic != "yes":
return historic
if building and building not in ["yes", "civic", "government", "apartments", "residential", "commercial", "industrial", "retail", "religious", "public", "service"]:
return building
if amenity:
return amenity
if leisure:
return leisure
return None
def get_date(tags):
"""
Extracts the most relevant date from the available tags, prioritizing 'construction_date',
'start_date', 'year_of_construction', and 'opening_date' in that order.
Params:
tags (dict): A dictionary containing metadata about the place.
Returns:
date (str): The most relevant date found, or None if no date is available.
"""
construction_date = tags.get('construction_date', None)
opening_date = tags.get('opening_date', None)
start_date = tags.get('start_date', None)
year_of_construction = tags.get('year_of_construction', None)
# Prioritize based on availability
if construction_date:
return construction_date
if start_date:
return start_date
if year_of_construction:
return year_of_construction
if opening_date:
return opening_date
return None
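
To make the heuristic concrete, a sketch with invented tags (run in the same module; all values are illustrative):

# Sketch: invented tags for a cathedral-like landmark.
tags = {
    'name': 'Test Cathedral',
    'historic': 'cathedral',
    'height': '100',
    'start_date': '1248',
    'tourism:visitors': '6000000',
}
description, keywords = description_and_keywords(tags)
print(description)
# Test Cathedral is a well known cathedral. This cathedral was constructed
# in 1248 and is ca. 100 meters high. It welcomes 6.0 million visitors every year.
print(keywords)
# {'importance': None, 'height': '100', 'place_type': 'cathedral', 'date': '1248'}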

View File

@@ -1,17 +0,0 @@
"""Helper function to return only the major landmarks from a large list."""
from ..structs.landmark import Landmark
def take_most_important(landmarks: list[Landmark], n_important) -> list[Landmark]:
"""
Given a list of landmarks, return the n_important most important landmarks
Args:
landmarks: list[Landmark] - list of landmarks
n_important: int - number of most important landmarks to return
Returns:
list[Landmark] - list of the n_important most important landmarks
"""
# Sort landmarks by attractiveness (descending)
sorted_landmarks = sorted(landmarks, key=lambda x: x.attractiveness, reverse=True)
return sorted_landmarks[:n_important]

backend/uv.lock (generated, 1889 lines): file diff suppressed because it is too large.

report.html (1091 lines): file diff suppressed because it is too large.