8 Commits

dd277287af  basis for paypal transactions  (2025-02-17 11:54:03 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 1m48s
- Run linting on the backend code / Build (pull_request): Failing after 29s
- Run testing on the backend code / Build (pull_request): Failing after 50s
- Build and release debug APK / Build APK (pull_request): Failing after 3m50s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 27s

f258df8e72  better endpoints  (2025-02-17 10:27:02 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 2m15s
- Run linting on the backend code / Build (pull_request): Successful in 27s
- Run testing on the backend code / Build (pull_request): Failing after 48s
- Build and release debug APK / Build APK (pull_request): Failing after 4m38s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 25s

fd091a9ccc  fixed print  (2025-02-14 17:19:27 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 1m49s
- Run linting on the backend code / Build (pull_request): Successful in 30s
- Run testing on the backend code / Build (pull_request): Failing after 50s
- Build and release debug APK / Build APK (pull_request): Failing after 3m27s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 25s

f81c28f2ac  user creation and deletetion endpoint  (2025-02-14 16:47:37 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 1m49s
- Run linting on the backend code / Build (pull_request): Successful in 46s
- Run testing on the backend code / Build (pull_request): Failing after 48s
- Build and release debug APK / Build APK (pull_request): Failing after 3m35s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 26s

361b2b1f42  full passed tests  (2025-02-14 10:44:09 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 2m12s
- Run linting on the backend code / Build (pull_request): Successful in 28s
- Run testing on the backend code / Build (pull_request): Failing after 50s
- Build and release debug APK / Build APK (pull_request): Failing after 3m46s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 25s

16918369d7  corrected supabase api communication  (2025-02-14 10:33:57 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 2m26s
- Run linting on the backend code / Build (pull_request): Successful in 27s
- Run testing on the backend code / Build (pull_request): Failing after 49s
- Build and release debug APK / Build APK (pull_request): Failing after 3m22s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 25s

2c49480966  supabase implementation  (2025-02-12 21:36:15 +01:00)
Some checks failed:
- Build and deploy the backend to staging / Build and push image (pull_request): Successful in 1m42s
- Run linting on the backend code / Build (pull_request): Successful in 27s
- Run testing on the backend code / Build (pull_request): Failing after 50s
- Build and release debug APK / Build APK (pull_request): Failing after 3m6s
- Build and deploy the backend to staging / Deploy to staging (pull_request): Successful in 24s

3a9ef4e7d3  starting to implement paywall logic  (2025-02-11 17:19:03 +01:00)
Some checks failed:
- Run testing on the backend code / Build (pull_request): Has been cancelled
- Build and deploy the backend to staging / Deploy to staging (pull_request): Has been cancelled
- Build and deploy the backend to staging / Build and push image (pull_request): Has been cancelled
- Run linting on the backend code / Build (pull_request): Has been cancelled
51 changed files with 4019 additions and 2933 deletions


@@ -18,17 +18,17 @@ jobs:
       - name: Install dependencies
         run: |
           apt-get update && apt-get install -y python3 python3-pip
-          pip install uv
+          pip install pipenv
       - name: Install packages
         run: |
           ls -la
           # install all packages, including dev-packages
-          uv sync
+          pipenv install --dev
         working-directory: backend
       - name: Run Tests
-        run: uv run pytest src --html=report.html --self-contained-html --log-cli-level=DEBUG
+        run: pipenv run pytest src --html=report.html --self-contained-html --log-cli-level=DEBUG
         working-directory: backend
       - name: Upload HTML report

.vscode/launch.json (vendored, 12 lines changed)

@@ -21,15 +21,21 @@
             ]
         },
         {
-            "name": "Backend - tester",
+            "name": "Backend - test",
             "type": "debugpy",
             "request": "launch",
-            "program": "src/tester.py",
+            "module": "pytest",
+            "args": [
+                "src/tests",
+                "--log-cli-level=DEBUG",
+                "--html=report.html",
+                "--self-contained-html"
+            ],
             "env": {
                 "DEBUG": "true"
             },
             "cwd": "${workspaceFolder}/backend"
         },
         // frontend - flutter app
         {
             "name": "Frontend - debug",

backend/.gitignore (vendored, 3 lines changed)

@@ -12,9 +12,6 @@ __pycache__/
 # C extensions
 *.so
 
-# Pytest html reports
-*.html
-
 # Distribution / packaging
 .Python
 build/


@@ -1 +0,0 @@
-3.12.9


@@ -1,29 +1,11 @@
-FROM python:3.12-slim-bookworm
-
-# The installer requires curl (and certificates) to download the release archive
-RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
-
-# Download the latest installer
-ADD https://astral.sh/uv/install.sh /uv-installer.sh
-
-# Run the installer then remove it
-RUN sh /uv-installer.sh && rm /uv-installer.sh
-
-# Ensure the installed binary is on the `PATH`
-ENV PATH="/root/.local/bin/:$PATH"
-
-# Set the working directory
+FROM python:3.11-slim
+
 WORKDIR /app
 
-# Copy uv files
-COPY pyproject.toml pyproject.toml
-COPY uv.lock uv.lock
-COPY .python-version .python-version
-
-# Sync the venv
-RUN uv sync --frozen --no-cache --no-dev
-
-# Copy application files
+COPY Pipfile Pipfile.lock .
+RUN pip install pipenv
+RUN pipenv install --deploy --system
+
 COPY src src
 
 EXPOSE 8000
@@ -35,4 +17,4 @@ ENV MEMCACHED_HOST_PATH=none
 ENV LOKI_URL=none
 
 # explicitly use a string instead of an argument list to force a shell and variable expansion
-CMD uv run fastapi run src/main.py --port 8000 --workers $NUM_WORKERS
+CMD fastapi run src/main.py --port 8000 --workers $NUM_WORKERS

backend/Pipfile (new file, 29 lines)

@@ -0,0 +1,29 @@
+[[source]]
+url = "https://pypi.org/simple"
+verify_ssl = true
+name = "pypi"
+
+[dev-packages]
+pylint = "*"
+pytest = "*"
+tomli = "*"
+httpx = "*"
+exceptiongroup = "*"
+pytest-html = "*"
+typing-extensions = "*"
+dill = "*"
+
+[packages]
+numpy = "*"
+fastapi = "*"
+pydantic = "*"
+shapely = "*"
+pymemcache = "*"
+fastapi-cli = "*"
+scikit-learn = "*"
+loki-logger-handler = "*"
+pulp = "*"
+scipy = "*"
+requests = "*"
+supabase = "*"
+paypalrestsdk = "*"

backend/Pipfile.lock (generated, new file, 2001 lines)

File diff suppressed because it is too large.


@@ -6,31 +6,31 @@ This repository contains the backend code for the application. It utilizes **Fas
 ### Directory Structure
 
 - The code for the Python application is located in the `src` directory.
-- Package management is handled with **uv**, and the dependencies are listed in the `pyproject.toml` file.
+- Package management is handled with **pipenv**, and the dependencies are listed in the `Pipfile`.
 - Since the application is designed to be deployed in a container, the `Dockerfile` is provided to build the image.
 
 ### Setting Up the Development Environment
 
-To set up your development environment using **uv**, follow these steps:
+To set up your development environment using **pipenv**, follow these steps:
 
-1. Make sure you find yourself in the `backend` directory:
+1. Install `pipenv` by running:
    ```bash
-   cd backend
+   sudo apt install pipenv
    ```
-1. Install `uv` by running:
+2. Create and activate a virtual environment:
    ```bash
-   curl -LsSf https://astral.sh/uv/install.sh | sh
+   pipenv shell
    ```
-3. Install the dependencies listed in `pyproject.toml` and create the virtual environment at the same time:
+3. Install the dependencies listed in the `Pipfile`:
    ```bash
-   uv sync
+   pipenv install
    ```
 4. The virtual environment will be created under:
    ```bash
-   backend/.venv/...
+   ~/.local/share/virtualenvs/...
    ```
 
 ### Deployment


@@ -1,6 +0,0 @@
-def main():
-    print("Hello from backend!")
-
-
-if __name__ == "__main__":
-    main()


@@ -1,57 +0,0 @@
-[project]
-name = "backend"
-version = "0.1.0"
-description = "Add your description here"
-readme = "README.md"
-requires-python = ">=3.12"
-dependencies = [
-    "annotated-types==0.7.0 ; python_full_version >= '3.8'",
-    "anyio==4.8.0 ; python_full_version >= '3.9'",
-    "certifi==2024.12.14 ; python_full_version >= '3.6'",
-    "charset-normalizer==3.4.1 ; python_full_version >= '3.7'",
-    "click==8.1.8 ; python_full_version >= '3.7'",
-    "fastapi==0.115.7 ; python_full_version >= '3.8'",
-    "fastapi-cli==0.0.7 ; python_full_version >= '3.8'",
-    "h11==0.14.0 ; python_full_version >= '3.7'",
-    "httptools==0.6.4",
-    "idna==3.10 ; python_full_version >= '3.6'",
-    "joblib==1.4.2 ; python_full_version >= '3.8'",
-    "loki-logger-handler==1.1.0 ; python_full_version >= '2.7'",
-    "markdown-it-py==3.0.0 ; python_full_version >= '3.8'",
-    "mdurl==0.1.2 ; python_full_version >= '3.7'",
-    "numpy==2.2.2 ; python_full_version >= '3.10'",
-    "paypalrestsdk>=1.13.3",
-    "pulp==2.9.0 ; python_full_version >= '3.7'",
-    "pydantic==2.10.6 ; python_full_version >= '3.8'",
-    "pydantic-core==2.27.2 ; python_full_version >= '3.8'",
-    "pygments==2.19.1 ; python_full_version >= '3.8'",
-    "pymemcache==4.0.0 ; python_full_version >= '3.7'",
-    "python-dotenv==1.0.1",
-    "pyyaml==6.0.2",
-    "requests==2.32.3 ; python_full_version >= '3.8'",
-    "rich==13.9.4 ; python_full_version >= '3.8'",
-    "rich-toolkit==0.13.2 ; python_full_version >= '3.8'",
-    "scikit-learn==1.6.1 ; python_full_version >= '3.9'",
-    "scipy==1.15.1 ; python_full_version >= '3.10'",
-    "shapely==2.0.6 ; python_full_version >= '3.7'",
-    "shellingham==1.5.4 ; python_full_version >= '3.7'",
-    "sniffio==1.3.1 ; python_full_version >= '3.7'",
-    "starlette==0.45.3 ; python_full_version >= '3.9'",
-    "supabase>=2.16.0",
-    "threadpoolctl==3.5.0 ; python_full_version >= '3.8'",
-    "typer==0.15.1 ; python_full_version >= '3.7'",
-    "typing-extensions==4.12.2 ; python_full_version >= '3.8'",
-    "urllib3==2.3.0 ; python_full_version >= '3.9'",
-    "uvicorn[standard]==0.34.0 ; python_full_version >= '3.9'",
-    "uvloop==0.21.0",
-    "watchfiles==1.0.4",
-    "websockets==14.2",
-]
-
-[dependency-groups]
-dev = [
-    "httpx>=0.28.1",
-    "ipykernel>=6.30.0",
-    "pytest>=8.4.1",
-    "pytest-html>=4.1.1",
-]

backend/report.html (new file, 1094 lines)

File diff suppressed because it is too large.


@@ -28,11 +28,6 @@ This folder defines the commonly used data structures used within the project. T
 ### src/tests
 
 This folder contains unit tests and test cases for the application's various modules. It is used to ensure the correctness and stability of the code.
-
-Run the unit tests with the following command:
-```bash
-uv run pytest src --log-cli-level=DEBUG --html=report.html --self-contained-html
-```
 
 ### src/utils
 
 The `utils` folder contains utility classes and functions that provide core functionality for the application. The main component in this folder is the `LandmarkManager`, which is central to the process of fetching and organizing landmarks.

@@ -54,7 +49,7 @@ This file configures the logging system for the application. It defines how logs
 This file contains the main application logic and API endpoints for interacting with the system. The application is built using the FastAPI framework, which provides several endpoints for creating trips, fetching trips, and retrieving landmarks or nearby facilities. The key endpoints include:
 
 - **POST /trip/new**:
-  - This endpoint allows users to create a new trip by specifying preferences, start coordinates, and optionally end coordinates. The preferences guide the optimization process for selecting landmarks.
+  - This endpoint allows users to create a new trip by specifying user_id, preferences, start coordinates, and optionally end coordinates. The preferences guide the optimization process for selecting landmarks. The user id is needed to verify the user's credit balance.
   - Returns: A `Trip` object containing the optimized route, landmarks, and trip details.
 
 - **GET /trip/{trip_uuid}**:
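For concreteness, a minimal sketch of a call to the `/trip/new` endpoint described above, assuming a server running locally on port 8000 (the user id is a placeholder):

```python
# Hypothetical client call; field names follow the endpoint description above.
import requests

payload = {
    "user_id": "00000000-0000-0000-0000-000000000000",  # placeholder user id
    "preferences": {
        "sightseeing": {"type": "sightseeing", "score": 5},
        "shopping": {"type": "shopping", "score": 2},
        "nature": {"type": "nature", "score": 4},
        "max_time_minute": 180,
        "detour_tolerance_minute": 0,
    },
    "start": [48.8566, 2.3522],  # lat, lon of the starting point
}

resp = requests.post("http://localhost:8000/trip/new", json=payload)
resp.raise_for_status()
print(resp.json())  # the optimized Trip object
```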


@@ -12,6 +12,14 @@ LANDMARK_PARAMETERS_PATH = PARAMETERS_DIR / 'landmark_parameters.yaml'
 OPTIMIZER_PARAMETERS_PATH = PARAMETERS_DIR / 'optimizer_parameters.yaml'
 
+PAYPAL_CLIENT_ID = os.getenv("future-paypal-client-id", None)
+PAYPAL_SECRET = os.getenv("future-paypal-secret", None)
+PAYPAL_API_URL = "https://api-m.sandbox.paypal.com"
+
+SUPABASE_URL = os.getenv("SUPABASE_URL", None)
+SUPABASE_KEY = os.getenv("SUPABASE_API_KEY", None)
+
 cache_dir_string = os.getenv('OSM_CACHE_DIR', './cache')
 OSM_CACHE_DIR = Path(cache_dir_string)
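These settings fall back to `None` when the environment variables are unset, so consumers have to guard against missing credentials. A minimal, illustrative fail-fast check (the variable names mirror the diff above; the helper itself is an assumption, not part of the PR):

```python
import os

# Same pattern as the new constants: optional env-driven settings default to None.
SUPABASE_URL = os.getenv("SUPABASE_URL", None)
SUPABASE_KEY = os.getenv("SUPABASE_API_KEY", None)

def require_supabase_config() -> None:
    """Raise a clear error at startup instead of a late AttributeError."""
    missing = [name for name, value in
               [("SUPABASE_URL", SUPABASE_URL), ("SUPABASE_API_KEY", SUPABASE_KEY)]
               if value is None]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
```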


@@ -1,123 +0,0 @@
-"""Main app for backend api"""
-
-import logging
-import time
-import random
-
-from fastapi import HTTPException, APIRouter
-
-from ..structs.landmark import Landmark
-from ..structs.preferences import Preferences, Preference
-from .landmarks_manager import LandmarkManager
-
-
-# Setup the logger and the Landmarks Manager
-logger = logging.getLogger(__name__)
-manager = LandmarkManager()
-
-
-# Initialize the API router
-router = APIRouter()
-
-
-@router.post("/get/landmarks")
-def get_landmarks(
-    preferences: Preferences,
-    start: tuple[float, float],
-    ) -> list[Landmark]:
-    """
-    Function that returns all available landmarks given some preferences and a start position.
-
-    Args:
-        preferences : the preferences specified by the user as the post body
-        start       : the coordinates of the starting point
-
-    Returns:
-        list[Landmark] : The full list of fetched landmarks
-    """
-    if preferences is None:
-        raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
-    if (preferences.shopping.score == 0 and
-        preferences.sightseeing.score == 0 and
-        preferences.nature.score == 0) :
-        raise HTTPException(status_code=406, detail="All preferences are 0.")
-    if start is None:
-        raise HTTPException(status_code=406, detail="Start coordinates not provided")
-    if not (-90 <= start[0] <= 90 or -180 <= start[1] <= 180):
-        raise HTTPException(status_code=422, detail="Start coordinates not in range")
-
-    logger.info(f"Requested new trip generation. Details:\n\tCoordinates: {start}\n\tTime: {preferences.max_time_minute}\n\tSightseeing: {preferences.sightseeing.score}\n\tNature: {preferences.nature.score}\n\tShopping: {preferences.shopping.score}")
-
-    start_time = time.time()
-
-    # Generate the landmarks from the start location
-    landmarks = manager.generate_landmarks_list(
-        center_coordinates = start,
-        preferences = preferences
-    )
-
-    if len(landmarks) == 0 :
-        raise HTTPException(status_code=500, detail="No landmarks were found.")
-
-    t_generate_landmarks = time.time() - start_time
-    logger.info(f'Fetched {len(landmarks)} landmarks in \t: {round(t_generate_landmarks,3)} seconds')
-
-    return landmarks
-
-
-@router.post("/get-nearby/landmarks/{lat}/{lon}")
-def get_landmarks_nearby(
-    lat: float,
-    lon: float
-    ) -> list[Landmark] :
-    """
-    Suggests nearby landmarks based on a given latitude and longitude.
-
-    This endpoint returns a curated list of up to 5 landmarks around the given geographical coordinates. It uses fixed preferences for
-    sightseeing, shopping, and nature, with a maximum time constraint of 30 minutes to limit the number of landmarks returned.
-
-    Args:
-        lat (float): Latitude of the user's current location.
-        lon (float): Longitude of the user's current location.
-
-    Returns:
-        list[Landmark]: A list of selected nearby landmarks.
-    """
-    logger.info(f'Fetching landmarks nearby ({lat}, {lon}).')
-
-    # Define fixed preferences:
-    prefs = Preferences(
-        sightseeing = Preference(
-            type='sightseeing',
-            score=5
-        ),
-        shopping = Preference(
-            type='shopping',
-            score=2
-        ),
-        nature = Preference(
-            type='nature',
-            score=5
-        ),
-        max_time_minute=30,
-        detour_tolerance_minute=0,
-    )
-
-    # Find the landmarks around the location
-    landmarks_around = manager.generate_landmarks_list(
-        center_coordinates = (lat, lon),
-        preferences = prefs,
-        allow_clusters=False,
-    )
-    if len(landmarks_around) == 0 :
-        raise HTTPException(status_code=500, detail="No landmarks were found.")
-
-    # select 8 - 12 landmarks from there
-    if len(landmarks_around) > 8 :
-        n_imp = random.randint(2,5)
-        rest = random.randint(8 - n_imp, min(12, len(landmarks_around))-n_imp)
-        print(f'len = {len(landmarks_around)}\nn_imp = {n_imp}\nrest = {rest}')
-        landmarks_around = landmarks_around[:n_imp] + random.sample(landmarks_around[n_imp:], rest)
-
-    logger.info(f'Found {len(landmarks_around)} landmarks to suggest nearby ({lat}, {lon}).')
-    # logger.debug('Suggested landmarks :\n\t' + '\n\t'.join(f'{landmark}' for landmark in landmarks_around))
-    return landmarks_around


@@ -33,14 +33,14 @@ def configure_logging():
         # silence the chatty logs loki generates itself
         logging.getLogger('urllib3.connectionpool').setLevel(logging.WARNING)
 
         # no need for time since it's added by loki or can be shown in kube logs
-        logging_format = '%(name)-55s - %(levelname)-7s - %(message)s'
+        logging_format = '%(name)s - %(levelname)s - %(message)s'
 
     else:
         # if we are in a debug (local) session, set verbose and rich logging
         from rich.logging import RichHandler
         logging_handlers = [RichHandler()]
 
         logging_level = logging.DEBUG if is_debug else logging.INFO
-        logging_format = '%(asctime)s - %(name)-55s - %(levelname)-7s - %(message)s'
+        logging_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
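The only change here is dropping the fixed-width padding (`-55s`, `-7s`) from the format strings. A quick sketch of the visible difference:

```python
import logging

# Padded vs. unpadded formats, as in the two sides of the diff above.
padded = logging.Formatter('%(name)-55s - %(levelname)-7s - %(message)s')
plain = logging.Formatter('%(name)s - %(levelname)s - %(message)s')

record = logging.LogRecord(name='src.overpass.overpass', level=logging.INFO,
                           pathname='', lineno=0, msg='Cache set', args=(),
                           exc_info=None)
print(padded.format(record))  # name padded to 55 chars, level to 7: aligned columns
print(plain.format(record))   # compact output, no alignment
```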


@@ -1,20 +1,24 @@
 """Main app for backend api"""
 import logging
+import time
 from contextlib import asynccontextmanager
-from fastapi import FastAPI, HTTPException
+from fastapi import FastAPI, HTTPException, BackgroundTasks, Query, Body
 
 from .logging_config import configure_logging
-from .structs.landmark import Landmark
+from .structs.landmark import Landmark, Toilets
+from .structs.preferences import Preferences
 from .structs.linked_landmarks import LinkedLandmarks
 from .structs.trip import Trip
-from .landmarks.landmarks_manager import LandmarkManager
-from .toilets.toilets_router import router as toilets_router
-from .optimization.optimization_router import router as optimization_router
-from .landmarks.landmarks_router import router as landmarks_router
-from .payments.payment_router import router as payment_router
+from .utils.landmarks_manager import LandmarkManager
+from .utils.toilets_manager import ToiletsManager
 from .optimization.optimizer import Optimizer
 from .optimization.refiner import Refiner
+from .overpass.overpass import fill_cache
 from .cache import client as cache_client
+from .payments.supabase import Supabase
+from .payments.payment_routes import router as payment_router
+from .payments.supabase_routes import router as supabase_router
 
 logger = logging.getLogger(__name__)
@@ -22,6 +26,7 @@ logger = logging.getLogger(__name__)
 manager = LandmarkManager()
 optimizer = Optimizer()
 refiner = Refiner(optimizer=optimizer)
+supabase = Supabase()
 
 
 @asynccontextmanager
@@ -36,25 +41,130 @@ async def lifespan(app: FastAPI):
 app = FastAPI(lifespan=lifespan)
 
-# Fetches the global list of landmarks given preferences and start/end coordinates. Two routes
-# Call with "/get/landmarks/" for main entry point of the trip generation pipeline.
-# Call with "/get-nearby/landmarks/" for the NEARBY feature.
-app.include_router(landmarks_router)
-
-# Optimizes the trip given preferences. Second step in the main trip generation pipeline.
-# Call with "/optimize/trip"
-app.include_router(optimization_router)
-
-# Fetches toilets near given coordinates.
-# Call with "/get/toilets" for fetching toilets around coordinates.
-app.include_router(toilets_router)
-
-# Include the payment router for interacting with paypal sdk.
-# See src/payment/payment_router.py for more information on how to call.
+# Include the payment routes and supabase routes
 app.include_router(payment_router)
+app.include_router(supabase_router)
+
+
+@app.post("/trip/new")
+def new_trip(user_id: str = Body(...),
+             preferences: Preferences = Body(...),
+             start: tuple[float, float] = Body(...),
+             end: tuple[float, float] | None = Body(None),
+             background_tasks: BackgroundTasks = None) -> Trip:
+    """
+    Main function to call the optimizer.
+
+    Args:
+        preferences : the preferences specified by the user as the post body
+        start       : the coordinates of the starting point
+        end         : the coordinates of the finishing point
+
+    Returns:
+        (uuid) : The uuid of the first landmark in the optimized route
+    """
+    # Check for valid user balance.
+    try:
+        if not supabase.check_balance(user_id=user_id):
+            logger.warning('Insufficient credits to perform this action.')
+            return {"error": "Insufficient credits"}, 400  # Return a 400 Bad Request with an appropriate message
+    except SyntaxError as se :
+        raise HTTPException(status_code=400, detail=str(se)) from se
+    except ValueError as ve :
+        raise HTTPException(status_code=406, detail=str(ve)) from ve
+    except Exception as exc:
+        raise HTTPException(status_code=500, detail=f"Internal Server Error: {str(exc)}") from exc
+
+    # Check for invalid input.
+    if preferences is None:
+        raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
+    if (preferences.shopping.score == 0 and
+        preferences.sightseeing.score == 0 and
+        preferences.nature.score == 0) :
+        raise HTTPException(status_code=406, detail="All preferences are 0.")
+    if start is None:
+        raise HTTPException(status_code=406, detail="Start coordinates not provided")
+    if not (-90 <= start[0] <= 90 or -180 <= start[1] <= 180):
+        raise HTTPException(status_code=422, detail="Start coordinates not in range")
+    if end is None:
+        end = start
+        logger.info("No end coordinates provided. Using start=end.")
+
+    start_landmark = Landmark(name='start',
+                              type='start',
+                              location=(start[0], start[1]),
+                              osm_type='start',
+                              osm_id=0,
+                              attractiveness=0,
+                              duration=0,
+                              must_do=True,
+                              n_tags = 0)
+
+    end_landmark = Landmark(name='finish',
+                            type='finish',
+                            location=(end[0], end[1]),
+                            osm_type='end',
+                            osm_id=0,
+                            attractiveness=0,
+                            duration=0,
+                            must_do=True,
+                            n_tags=0)
+
+    start_time = time.time()
+
+    # Generate the landmarks from the start location
+    landmarks, landmarks_short = manager.generate_landmarks_list(
+        center_coordinates = start,
+        preferences = preferences
+    )
+
+    if len(landmarks) == 0 :
+        raise HTTPException(status_code=500, detail="No landmarks were found.")
+
+    # insert start and finish to the landmarks list
+    landmarks_short.insert(0, start_landmark)
+    landmarks_short.append(end_landmark)
+
+    t_generate_landmarks = time.time() - start_time
+    logger.info(f'Fetched {len(landmarks)} landmarks in \t: {round(t_generate_landmarks,3)} seconds')
+    start_time = time.time()
+
+    # First stage optimization
+    try:
+        base_tour = optimizer.solve_optimization(preferences.max_time_minute, landmarks_short)
+    except Exception as exc:
+        raise HTTPException(status_code=500, detail=f"Optimization failed: {str(exc)}") from exc
+
+    t_first_stage = time.time() - start_time
+    start_time = time.time()
+
+    # Second stage optimization
+    # TODO : only if necessary (not enough landmarks for ex.)
+    try :
+        refined_tour = refiner.refine_optimization(landmarks, base_tour,
+                                                   preferences.max_time_minute,
+                                                   preferences.detour_tolerance_minute)
+    except TimeoutError as te :
+        logger.error(f'Refiner failed : {str(te)} Using base tour.')
+        refined_tour = base_tour
+    except Exception as exc :
+        raise HTTPException(status_code=500, detail=f"An unexpected error occurred: {str(exc)}") from exc
+
+    t_second_stage = time.time() - start_time
+
+    logger.debug(f'First stage optimization\t: {round(t_first_stage,3)} seconds')
+    logger.debug(f'Second stage optimization\t: {round(t_second_stage,3)} seconds')
+    logger.info(f'Total computation time\t: {round(t_first_stage + t_second_stage,3)} seconds')
+
+    linked_tour = LinkedLandmarks(refined_tour)
+
+    # upon creation of the trip, persistence of both the trip and its landmarks is ensured.
+    trip = Trip.from_linked_landmarks(linked_tour, cache_client)
+    logger.info(f'Generated a trip of {trip.total_time} minutes with {len(refined_tour)} landmarks in {round(t_generate_landmarks + t_first_stage + t_second_stage,3)} seconds.')
+    logger.debug('Detailed trip :\n\t' + '\n\t'.join(f'{landmark}' for landmark in refined_tour))
+
+    background_tasks.add_task(fill_cache)
+    supabase.decrement_credit_balance(user_id=user_id)
+
+    return trip
 
 
 #### For already existing trips/landmarks
 @app.get("/trip/{trip_uuid}")
@@ -72,7 +182,6 @@ def get_trip(trip_uuid: str) -> Trip:
         trip = cache_client.get(f"trip_{trip_uuid}")
         return trip
     except KeyError as exc:
-        logger.error(f"Failed to fetch trip with UUID {trip_uuid}: {str(exc)}")
         raise HTTPException(status_code=404, detail="Trip not found") from exc
@@ -91,7 +200,6 @@ def get_landmark(landmark_uuid: str) -> Landmark:
         landmark = cache_client.get(f"landmark_{landmark_uuid}")
         return landmark
     except KeyError as exc:
-        logger.error(f"Failed to fetch landmark with UUID {landmark_uuid}: {str(exc)}")
        raise HTTPException(status_code=404, detail="Landmark not found") from exc
@@ -110,7 +218,6 @@ def update_trip_time(trip_uuid: str, removed_landmark_uuid: str) -> Trip:
     try:
         trip = cache_client.get(f'trip_{trip_uuid}')
     except KeyError as exc:
-        logger.error(f"Failed to update trip with UUID {trip_uuid} (trip not found): {str(exc)}")
         raise HTTPException(status_code=404, detail='Trip not found') from exc
 
     landmarks = []
@@ -125,7 +232,6 @@ def update_trip_time(trip_uuid: str, removed_landmark_uuid: str) -> Trip:
             landmarks.append(landmark)
             next_uuid = landmark.next_uuid  # Prepare for the next iteration
         except KeyError as exc:
-            logger.error(f"Failed to update trip with UUID {trip_uuid} : {str(exc)}")
             raise HTTPException(status_code=404, detail=f'landmark {next_uuid} not found') from exc
 
     # Re-link every thing and compute times again
@@ -134,3 +240,34 @@ def update_trip_time(trip_uuid: str, removed_landmark_uuid: str) -> Trip:
     return trip
+
+
+@app.post("/toilets/new")
+def get_toilets(location: tuple[float, float] = Query(...), radius: int = 500) -> list[Toilets] :
+    """
+    Endpoint to find toilets within a specified radius from a given location.
+
+    This endpoint expects the `location` and `radius` as **query parameters**, not in the request body.
+
+    Args:
+        location (tuple[float, float]): The latitude and longitude of the location to search from.
+        radius (int, optional): The radius (in meters) within which to search for toilets. Defaults to 500 meters.
+
+    Returns:
+        list[Toilets]: A list of Toilets objects that meet the criteria.
+    """
+    if location is None:
+        raise HTTPException(status_code=406, detail="Coordinates not provided or invalid")
+    if not (-90 <= location[0] <= 90 or -180 <= location[1] <= 180):
+        raise HTTPException(status_code=422, detail="Start coordinates not in range")
+
+    toilets_manager = ToiletsManager(location, radius)
+
+    try :
+        toilets_list = toilets_manager.generate_toilet_list()
+        return toilets_list
+    except KeyError as exc:
+        raise HTTPException(status_code=404, detail="No toilets found") from exc
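A sketch of exercising the two endpoints added above with FastAPI's `TestClient` (the module path and user id are assumptions; `/trip/new` reads its arguments from the JSON body, `/toilets/new` from query parameters):

```python
from fastapi.testclient import TestClient
from src.main import app  # assumed module path

client = TestClient(app)

# /trip/new: user_id, preferences and start go in the JSON body.
trip_resp = client.post("/trip/new", json={
    "user_id": "test-user-id",  # placeholder; must have a positive credit balance
    "preferences": {
        "sightseeing": {"type": "sightseeing", "score": 5},
        "shopping": {"type": "shopping", "score": 0},
        "nature": {"type": "nature", "score": 3},
        "max_time_minute": 120,
        "detour_tolerance_minute": 0,
    },
    "start": [48.8566, 2.3522],
})
print(trip_resp.status_code, trip_resp.json())

# /toilets/new: location and radius are query parameters, so the tuple
# is sent as repeated values.
toilets_resp = client.post("/toilets/new",
                           params={"location": [48.8566, 2.3522], "radius": 500})
print(toilets_resp.status_code, toilets_resp.json())
```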


@@ -1,164 +0,0 @@
-"""API entry point for the trip optimization."""
-
-import logging
-import time
-import yaml
-
-from fastapi import HTTPException, APIRouter, BackgroundTasks, Body
-
-from .optimizer import Optimizer
-from .refiner import Refiner
-from ..supabase.supabase import SupabaseClient
-from ..structs.landmark import Landmark
-from ..structs.preferences import Preferences
-from ..structs.linked_landmarks import LinkedLandmarks
-from ..structs.trip import Trip
-from ..overpass.overpass import fill_cache
-from ..cache import client as cache_client
-from ..constants import OPTIMIZER_PARAMETERS_PATH
-
-
-# Setup the Logger, Optimizer and Refiner
-logger = logging.getLogger(__name__)
-optimizer = Optimizer()
-refiner = Refiner(optimizer=optimizer)
-supabase = SupabaseClient()
-
-
-# Initialize the API router
-router = APIRouter()
-
-
-@router.post("/optimize/trip")
-def optimize_trip(
-    user_id: str = Body(...),
-    preferences: Preferences = Body(...),
-    landmarks: list[Landmark] = Body(...),
-    start: tuple[float, float] = Body(...),
-    end: tuple[float, float] | None = Body(None),
-    background_tasks: BackgroundTasks = None
-    ) -> Trip:
-    """
-    Main function to call the optimizer.
-
-    Args:
-        preferences (Preferences)         : the preferences specified by the user as the post body.
-        start (tuple[float, float])       : the coordinates of the starting point.
-        end tuple[float, float]           : the coordinates of the finishing point.
-        backgroud_tasks (BackgroundTasks) : necessary to fill the cache after the trip has been returned.
-
-    Returns:
-        (uuid) : The uuid of the first landmark in the optimized route
-    """
-    # Check for valid user balance
-    try:
-        if not supabase.check_balance(user_id=user_id):
-            logger.warning('Insufficient credits to perform this action.')
-            raise HTTPException(status_code=418, detail='Insufficient credits')
-    except SyntaxError as se :
-        logger.error(f'SyntaxError: {se}')
-        raise HTTPException(status_code=400, detail=str(se)) from se
-    except ValueError as ve :
-        logger.error(f'SyntaxError: {ve}')
-        raise HTTPException(status_code=406, detail=str(ve)) from ve
-    except Exception as exc:
-        logger.error(f'SyntaxError: {exc}')
-        raise HTTPException(status_code=500, detail=f"Internal Server Error: {str(exc)}") from exc
-
-    # Check for invalid input
-    if preferences is None:
-        raise HTTPException(status_code=406, detail="Preferences not provided or incomplete.")
-    if len(landmarks) == 0 :
-        raise HTTPException(status_code=406, detail="No landmarks provided for computing the trip.")
-    if (preferences.shopping.score == 0 and
-        preferences.sightseeing.score == 0 and
-        preferences.nature.score == 0) :
-        raise HTTPException(status_code=406, detail="All preferences are 0.")
-    if start is None:
-        raise HTTPException(status_code=406, detail="Start coordinates not provided")
-    if not (-90 <= start[0] <= 90 or -180 <= start[1] <= 180):
-        raise HTTPException(status_code=422, detail="Start coordinates not in range")
-    if end is None:
-        end = start
-        logger.info("No end coordinates provided. Using start=end.")
-
-    # Start the timer
-    start_time = time.time()
-
-    logger.info(f"Requested new trip generation. Details:\n\tCoordinates: {start}\n\tTime: {preferences.max_time_minute}\n\tSightseeing: {preferences.sightseeing.score}\n\tNature: {preferences.nature.score}\n\tShopping: {preferences.shopping.score}")
-
-    start_landmark = Landmark(
-        name='start',
-        type='start',
-        location=(start[0], start[1]),
-        osm_type='start',
-        osm_id=0,
-        attractiveness=0,
-        duration=0,
-        must_do=True,
-        n_tags = 0
-    )
-    end_landmark = Landmark(
-        name='finish',
-        type='finish',
-        location=(end[0], end[1]),
-        osm_type='end',
-        osm_id=0,
-        attractiveness=0,
-        duration=0,
-        must_do=True,
-        n_tags=0
-    )
-
-    # From the parameters load the length at which to truncate the landmarks list.
-    with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
-        parameters = yaml.safe_load(f)
-        n_important = parameters['N_important']
-
-    # Truncate to the most important landmarks for a shorter list
-    landmarks_short = landmarks[:n_important]
-
-    # insert start and finish to the shorter landmarks list
-    landmarks_short.insert(0, start_landmark)
-    landmarks_short.append(end_landmark)
-
-    # First stage optimization
-    try:
-        base_tour = optimizer.solve_optimization(preferences.max_time_minute, landmarks_short)
-    except Exception as exc:
-        logger.error(f"Trip generation failed: {str(exc)}")
-        raise HTTPException(status_code=500, detail=f"Optimization failed: {str(exc)}") from exc
-
-    t_first_stage = time.time() - start_time
-    start_time = time.time()
-
-    # Second stage optimization
-    try :
-        refined_tour = refiner.refine_optimization(
-            landmarks, base_tour,
-            preferences.max_time_minute,
-            preferences.detour_tolerance_minute
-        )
-    except Exception as exc :
-        logger.warning(f"Refiner failed. Proceeding with base trip {str(exc)}")
-        refined_tour = base_tour
-
-    t_second_stage = time.time() - start_time
-
-    logger.debug(f'First stage optimization\t: {round(t_first_stage,3)} seconds')
-    logger.debug(f'Second stage optimization\t: {round(t_second_stage,3)} seconds')
-    logger.info(f'Total computation time\t: {round(t_first_stage + t_second_stage,3)} seconds')
-
-    linked_tour = LinkedLandmarks(refined_tour)
-
-    # upon creation of the trip, persistence of both the trip and its landmarks is ensured.
-    trip = Trip.from_linked_landmarks(linked_tour, cache_client)
-    logger.info(f'Optimized a trip of {trip.total_time} minutes with {len(refined_tour)} landmarks in {round(t_first_stage + t_second_stage,3)} seconds.')
-    logger.info('Detailed trip :\n\t' + '\n\t'.join(f'{landmark}' for landmark in refined_tour))
-
-    # Add the cache fill as background task
-    background_tasks.add_task(fill_cache)
-
-    # Finally, decrement the user balance
-    supabase.decrement_credit_balance(user_id=user_id)
-
-    return trip


@@ -257,6 +257,7 @@ class Optimizer:
         Returns:
             None: This function modifies the `prob` object by adding L-2 equality constraints in-place.
         """
+        # FIXME: weird 0 artifact in the coefficients popping up
         # Loop through rows 1 to L-2 to prevent stacked ones
         for i in range(1, L-1):
             # Add the constraint that sums across each "row" or "block" in the decision variables
@@ -589,7 +590,7 @@ class Optimizer:
         try :
             prob.solve(pl.PULP_CBC_CMD(msg=False, timeLimit=self.time_limit+1, gapRel=self.gap_rel))
         except Exception as exc :
-            raise Exception(f"No solution found: {str(exc)}") from exc
+            raise Exception(f"No solution found: {exc}") from exc
 
         status = pl.LpStatus[prob.status]
         solution = [pl.value(var) for var in x]  # The values of the decision variables (will be 0 or 1)
@@ -597,7 +598,7 @@ class Optimizer:
         # Raise error if no solution is found. FIXME: for now this throws the internal server error
         if status != 'Optimal' :
-            self.logger.warning("The problem is overconstrained, no solution on first try.")
+            self.logger.error("The problem is overconstrained, no solution on first try.")
             raise ArithmeticError("No solution could be found. Please try again with more time or different preferences.")
 
         # If there is a solution, we're good to go, just check for connectiveness
@@ -607,7 +608,7 @@ class Optimizer:
         while circles is not None :
             i += 1
             if i == self.max_iter :
-                self.logger.warning(f'Timeout: No solution found after {self.max_iter} iterations.')
+                self.logger.error(f'Timeout: No solution found after {self.max_iter} iterations.')
                 raise TimeoutError(f"Optimization took too long. No solution found after {self.max_iter} iterations.")
 
             for circle in circles :
@@ -617,13 +618,12 @@ class Optimizer:
             try :
                 prob.solve(pl.PULP_CBC_CMD(msg=False, timeLimit=self.time_limit, gapRel=self.gap_rel))
             except Exception as exc :
-                self.logger.warning("No solution found: {str(exc)")
-                raise Exception(f"No solution found: {str(exc)}") from exc
+                raise Exception(f"No solution found: {exc}") from exc
 
             solution = [pl.value(var) for var in x]
             if pl.LpStatus[prob.status] != 'Optimal' :
-                self.logger.warning("The problem is overconstrained, no solution after {i} cycles.")
+                self.logger.error("The problem is overconstrained, no solution after {i} cycles.")
                 raise ArithmeticError("No solution could be found. Please try again with more time or different preferences.")
 
             circles = self.is_connected(solution)
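For context, the hunks above follow PuLP's usual solve-then-check idiom. A self-contained toy sketch with the same solver invocation style (`timeLimit` and `gapRel` stand in for the optimizer's configured parameters):

```python
import pulp as pl

# Toy knapsack: pick items to maximize value under a weight budget.
prob = pl.LpProblem("toy", pl.LpMaximize)
x = [pl.LpVariable(f"x{i}", cat="Binary") for i in range(3)]
values, weights = [3, 4, 2], [2, 3, 1]
prob += pl.lpSum(v * xi for v, xi in zip(values, x))
prob += pl.lpSum(w * xi for w, xi in zip(weights, x)) <= 4

# Silent CBC with a time limit and a relative MIP gap, as in the diff.
prob.solve(pl.PULP_CBC_CMD(msg=False, timeLimit=2, gapRel=0.025))

if pl.LpStatus[prob.status] != 'Optimal':
    raise ArithmeticError("No solution could be found.")
solution = [pl.value(var) for var in x]  # each entry is 0.0 or 1.0
print(solution)
```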


@@ -6,6 +6,7 @@ from shapely import buffer, LineString, Point, Polygon, MultiPoint, concave_hull
 
 from ..structs.landmark import Landmark
 from ..utils.get_time_distance import get_time
+from ..utils.take_most_important import take_most_important
 from .optimizer import Optimizer
 from ..constants import OPTIMIZER_PARAMETERS_PATH
 
@@ -237,7 +238,7 @@ class Refiner :
             if self.is_in_area(area, landmark.location) and landmark.name not in visited_names:
                 second_order_landmarks.append(landmark)
 
-        return sorted(second_order_landmarks, key=lambda x: x.attractiveness, reverse=True)[:int(self.max_landmarks_refiner*0.75)]
+        return take_most_important(second_order_landmarks, int(self.max_landmarks_refiner*0.75))
 
     # Try fix the shortest path using shapely
@@ -277,7 +278,7 @@ class Refiner :
         better_tour_poly = concave_hull(MultiPoint(coords))  # Create concave hull with "core" of tour leaving out start and finish
         xs, ys = better_tour_poly.exterior.xy
         """
-        FIXED : ERROR HERE :
+        ERROR HERE :
         Exception has occurred: AttributeError
         'LineString' object has no attribute 'exterior'
         """
@@ -355,7 +356,7 @@ class Refiner :
         # If unsuccessful optimization, use the base_tour.
         if new_tour is None:
-            self.logger.warning("Refiner failed: No solution found during second stage optimization.")
+            self.logger.warning("No solution found for the refined tour. Returning the initial tour.")
             new_tour = base_tour
 
         # If only one landmark, return it.
@@ -368,7 +369,6 @@ class Refiner :
         # Fix the tour using Polygons if the path looks weird.
         # Conditions : circular trip and invalid polygon.
         if base_tour[0].location == base_tour[-1].location and not better_poly.is_valid :
-            self.logger.debug("Tours might be funky, attempting to correct with polygons")
             better_tour = self.fix_using_polygon(better_tour)
 
         return better_tour


@@ -1,4 +1,3 @@
-"""Module defining the handling of cache data from Overpass requests."""
 import os
 import json
 import hashlib
@@ -62,7 +61,7 @@ class JSONCache(CachingStrategyBase):
             return None
 
     def set(self, key, value):
-        """Save the JSON data in the cache."""
+        """Save the JSON data as an ElementTree to the cache."""
         filename = self._filename(key)
         try:
             # Write the JSON data to the cache file
@@ -95,7 +94,7 @@ class JSONCache(CachingStrategyBase):
 
     def close(self):
         """Cleanup method, if needed."""
-
+        pass
 
 class CachingStrategy:
     """
@@ -108,7 +107,6 @@ class CachingStrategy:
     @classmethod
     def use(cls, strategy_name='JSON', **kwargs):
-        """Define the caching strategy to use."""
         if cls.__strategy:
             cls.__strategy.close()
 
@@ -121,12 +119,10 @@ class CachingStrategy:
     @classmethod
     def get(cls, key):
-        """Get the data from the cache."""
         return cls.__strategy.get(key)
 
     @classmethod
     def set(cls, key, value):
-        """Save the data in the cache."""
         cls.__strategy.set(key, value)
 
     @classmethod


@@ -1,6 +1,5 @@
 """Module allowing connexion to overpass api and fectch data from OSM."""
 import os
-import time
 import urllib
 import math
 import logging
@@ -60,17 +59,19 @@ class Overpass :
             return Overpass._filter_landmarks(cached_responses, bbox)
 
         # If there is no cached data, fetch all from Overpass.
-        if not cached_responses :
+        elif not cached_responses :
             query_str = Overpass.build_query(bbox, osm_types, selector, conditions, out)
             self.logger.debug(f'Query string: {query_str}')
             return self.fetch_data_from_api(query_str)
 
-        # Resize the bbox for smaller search area and build new query string.
-        non_cached_bbox = Overpass._get_non_cached_bbox(non_cached_cells, bbox)
-        query_str = Overpass.build_query(non_cached_bbox, osm_types, selector, conditions, out)
-        self.logger.debug(f'Query string: {query_str}')
-        non_cached_responses = self.fetch_data_from_api(query_str)
-        return Overpass._filter_landmarks(cached_responses, bbox) + non_cached_responses
+        # Hybrid cache: some data from Overpass, some data from cache.
+        else :
+            # Resize the bbox for smaller search area and build new query string.
+            non_cached_bbox = Overpass._get_non_cached_bbox(non_cached_cells, bbox)
+            query_str = Overpass.build_query(non_cached_bbox, osm_types, selector, conditions, out)
+            self.logger.debug(f'Query string: {query_str}')
+            non_cached_responses = self.fetch_data_from_api(query_str)
+            return Overpass._filter_landmarks(cached_responses, bbox) + non_cached_responses
 
 
     def fetch_data_from_api(self, query_str: str) -> List[dict]:
@@ -95,10 +96,9 @@ class Overpass :
             return elements
 
         except urllib.error.URLError as e:
-            self.logger.error(f"Error connecting to Overpass API: {str(e)}")
-            raise ConnectionError(f"Error connecting to Overpass API: {str(e)}") from e
+            self.logger.error(f"Error connecting to Overpass API: {e}")
+            raise ConnectionError(f"Error connecting to Overpass API: {e}") from e
         except Exception as exc :
-            self.logger.error(f"unexpected error while fetching data from Overpass: {str(exc)}")
             raise Exception(f'An unexpected error occured: {str(exc)}') from exc
@@ -114,7 +114,7 @@ class Overpass :
             with urllib.request.urlopen(request) as response:
                 # Convert the HTTPResponse to a string and load data
                 response_data = response.read().decode('utf-8')
                 data = json.loads(response_data)
 
                 # Get elements and set cache
@@ -122,7 +122,7 @@ class Overpass :
                 self.caching_strategy.set(cache_key, elements)
                 self.logger.debug(f'Cache set for {cache_key}')
         except urllib.error.URLError as e:
-            raise ConnectionError(f"Error connecting to Overpass API: {str(e)}") from e
+            raise ConnectionError(f"Error connecting to Overpass API: {e}") from e
         except Exception as exc :
             raise Exception(f'An unexpected error occured: {str(exc)}') from exc
@@ -153,7 +153,7 @@ class Overpass :
         - If no conditions are provided, the query will just use the `selector` to filter the OSM
           elements without additional constraints.
         """
-        query = '[out:json][timeout:20];('
+        query = '[out:json];('
 
         # convert the bbox to string.
         bbox_str = f"({','.join(map(str, bbox))})"
@@ -309,9 +309,9 @@ class Overpass :
         if min_lat == float('inf') or min_lon == float('inf'):
             return None
 
         return (max(min_lat, original_bbox[0]),
                 max(min_lon, original_bbox[1]),
                 min(max_lat, original_bbox[2]),
                 min(max_lon, original_bbox[3]))
@@ -388,8 +388,8 @@ def get_base_info(elem: dict, osm_type: OSM_TYPES, with_name=False) :
     if with_name :
         name = elem.get('tags', {}).get('name')
         return osm_id, coords, name
-
+    else :
         return osm_id, coords
@@ -399,27 +399,18 @@ def fill_cache():
     """
     overpass = Overpass()
-    n_files = 0
-    total = 0
-
-    overpass.logger.info('Trip successfully returned, starting to fill cache.')
 
     with os.scandir(OSM_CACHE_DIR) as it:
         for entry in it:
             if entry.is_file() and entry.name.startswith('hollow_'):
-                total += 1
                 try :
                     # Read the whole file content as a string
-                    with open(entry.path, 'r', encoding='utf-8') as f:
+                    with open(entry.path, 'r') as f:
                         # load data and fill the cache with the query and key
                         json_data = json.load(f)
                         overpass.fill_cache(json_data)
-                        n_files += 1
-                        time.sleep(1)
 
                     # Now delete the file as the cache is filled
                     os.remove(entry.path)
 
                 except Exception as exc :
-                    overpass.logger.error(f'An error occured while parsing file {entry.path} as .json file: {str(exc)}')
-
-    overpass.logger.info(f"Successfully filled {n_files}/{total} cache files.")
+                    overpass.logger.error(f'An error occured while parsing file {entry.path} as .json file')


@@ -72,7 +72,6 @@ sightseeing:
       # - castle
       # - museum
 
   museums:
-
     tourism:
       - museum


@@ -7,4 +7,5 @@ tag_exponent: 1.15
 image_bonus: 1.1
 viewpoint_bonus: 10
 wikipedia_bonus: 1.25
+N_important: 60
 pay_bonus: -1


@@ -6,5 +6,4 @@ max_landmarks_refiner: 20
 overshoot: 0.0016
 time_limit: 1
 gap_rel: 0.025
-max_iter: 80
-N_important: 60
+max_iter: 40


@@ -2,12 +2,12 @@ import logging
 import paypalrestsdk
 from fastapi import HTTPException, APIRouter
 
-from ..supabase.supabase import SupabaseClient
 from .payment_handler import PaymentRequest, PaymentHandler
+from .supabase import Supabase
 
 # Set up logging and supabase
 logger = logging.getLogger(__name__)
-supabase = SupabaseClient()
+supabase = Supabase()
 
 # Configure PayPal SDK
 paypalrestsdk.configure({


@@ -12,7 +12,7 @@ logging.getLogger("hpack").setLevel(logging.CRITICAL)
 logging.getLogger("httpcore").setLevel(logging.CRITICAL)
 
 
-class SupabaseClient:
+class Supabase:
     logger = logging.getLogger(__name__)
 
@@ -29,7 +29,7 @@ class SupabaseClient:
             self.SUPABASE_ADMIN_KEY,
             options=ClientOptions(schema='public')
         )
-        self.logger.info('Supabase client initialized.')
+        self.logger.debug('Supabase client initialized.')
 
 
     def check_balance(self, user_id: str) -> bool:
@@ -51,6 +51,7 @@ class SupabaseClient:
                 .single()
                 .execute()
             )
+            # self.logger.critical(response)
 
         except Exception as e:
             if e.code == '22P02' :


@@ -0,0 +1,52 @@
+"""Endpoints for supabase user handling."""
+
+import logging
+from fastapi import APIRouter, HTTPException
+
+from .supabase import Supabase
+
+
+# Set up logging and supabase.
+logger = logging.getLogger(__name__)
+supabase = Supabase()
+
+
+# Create fastapi router
+router = APIRouter()
+
+
+@router.post("/user/create/{email}/{password}")
+def register_user(email: str, password: str) -> str:
+    try:
+        response = supabase.supabase.auth.admin.create_user({
+            "email": email,
+            "password": password
+        })
+    except Exception as e:
+        if e.code == 'email_exists' :
+            logger.error(f"Failed to create user : {str(e.code)}")
+            raise HTTPException(status_code=422, detail=str(e)) from e
+        logger.error(f"Failed to create user : {str(e.code)}")
+        raise HTTPException(status_code=500, detail=str(e)) from e
+
+    # Extract the identity_id and user_id
+    user_id = response.user.id
+    logger.info(f"User created successfully, ID: {user_id}")
+    return user_id
+
+
+@router.post("/user/delete/{user_id}")
+def delete_user(user_id: str):
+    try:
+        response = supabase.supabase.auth.admin.delete_user(user_id)
+        logger.debug(response)
+    except Exception as e:
+        if e.code == 'user_not_found' :
+            logger.error(f"Failed to delete user : {str(e.code)}")
+            raise HTTPException(status_code=404, detail=str(e)) from e
+        logger.error(f"Failed to create user : {str(e.code)}")
+        raise HTTPException(status_code=500, detail=str(e)) from e
+
+    logger.info(f"User with ID {user_id} deleted successfully")
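A sketch of how these two routes could be driven from a test (assumed module path; the calls go through the real Supabase admin client, so credentials must be configured):

```python
from fastapi.testclient import TestClient
from src.main import app  # assumed module path

client = TestClient(app)

# Create a user; the route returns the new user's id as a plain string.
created = client.post("/user/create/test@example.com/some-password")
user_id = created.json()
print(created.status_code, user_id)

# Delete the same user again.
deleted = client.post(f"/user/delete/{user_id}")
print(deleted.status_code)
```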


@@ -1,7 +1,8 @@
 """Definition of the Landmark class to handle visitable objects across the world."""
 from typing import Optional, Literal
 from uuid import uuid4, UUID
-from pydantic import BaseModel, Field
+from pydantic import BaseModel, ConfigDict, Field
+
 
 # Output to frontend
@@ -49,8 +50,7 @@ class Landmark(BaseModel) :
     image_url : Optional[str] = None
     website_url : Optional[str] = None
     wiki_url : Optional[str] = None
-    # keywords: Optional[dict] = {}
-    # description : Optional[str] = None
+    description : Optional[str] = None    # TODO future
     duration : Optional[int] = 5
     name_en : Optional[str] = None
@@ -69,7 +69,6 @@ class Landmark(BaseModel) :
     is_viewpoint : Optional[bool] = False
     is_place_of_worship : Optional[bool] = False
 
-
     def __str__(self) -> str:
         """
         String representation of the Landmark object.
@@ -123,3 +122,26 @@ class Landmark(BaseModel) :
         return (self.uuid == value.uuid or
                 self.osm_id == value.osm_id or
                 (self.name == value.name and self.distance(value) < 0.001))
+
+
+class Toilets(BaseModel) :
+    """
+    Model for toilets. When false/empty the information is either false either not known.
+    """
+    location : tuple
+    wheelchair : Optional[bool] = False
+    changing_table : Optional[bool] = False
+    fee : Optional[bool] = False
+    opening_hours : Optional[str] = ""
+
+    def __str__(self) -> str:
+        """
+        String representation of the Toilets object.
+
+        Returns:
+            str: A formatted string with the toilets location.
+        """
+        return f'Toilets @{self.location}'
+
+    model_config = ConfigDict(from_attributes=True)
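The relocated `Toilets` model behaves like any other pydantic model; a quick standalone sketch (same field shapes as in the diff above):

```python
from typing import Optional
from pydantic import BaseModel, ConfigDict

# Same shape as the Toilets model added above.
class Toilets(BaseModel):
    location: tuple
    wheelchair: Optional[bool] = False
    changing_table: Optional[bool] = False
    fee: Optional[bool] = False
    opening_hours: Optional[str] = ""
    model_config = ConfigDict(from_attributes=True)

t = Toilets(location=(48.8566, 2.3522), wheelchair=True)
print(t.model_dump())  # plain dict, e.g. for a JSON response
```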


@@ -2,7 +2,6 @@
from .landmark import Landmark from .landmark import Landmark
from ..utils.get_time_distance import get_time from ..utils.get_time_distance import get_time
from ..utils.description import description_and_keywords
class LinkedLandmarks: class LinkedLandmarks:
""" """
@@ -36,23 +35,18 @@ class LinkedLandmarks:
Create the links between the landmarks in the list by setting their
.next_uuid and the .time_to_next attributes.
"""
# Mark secondary landmarks as such
self.update_secondary_landmarks()
for i, landmark in enumerate(self._landmarks[:-1]):
-# Set uuid of the next landmark
landmark.next_uuid = self._landmarks[i + 1].uuid
-# Adjust time to reach and total time
time_to_next = get_time(landmark.location, self._landmarks[i + 1].location)
landmark.time_to_reach_next = time_to_next
self.total_time += time_to_next
self.total_time += landmark.duration
-# Fill in the keywords and description. GOOD IDEA, BAD EXECUTION, tags aren't available anymore at this stage
-# landmark.description, landmark.keywords = description_and_keywords(tags)
self._landmarks[-1].next_uuid = None
self._landmarks[-1].time_to_reach_next = 0
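For orientation, a minimal sketch of how the chain built above could be walked, assuming a dict of landmarks keyed by uuid (a hypothetical helper, not part of this diff):

    def walk(landmarks_by_uuid, first_uuid):
        """Follow the .next_uuid links set by LinkedLandmarks."""
        uuid = first_uuid
        while uuid is not None:
            landmark = landmarks_by_uuid[uuid]
            yield landmark
            uuid = landmark.next_uuid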

View File

@@ -1,7 +1,7 @@
"""Defines the Preferences used as input for trip generation.""" """Defines the Preferences used as input for trip generation."""
from typing import Optional, Literal from typing import Optional, Literal
from pydantic import BaseModel, field_validator from pydantic import BaseModel
class Preference(BaseModel) : class Preference(BaseModel) :
@@ -15,13 +15,6 @@ class Preference(BaseModel) :
type: Literal['sightseeing', 'nature', 'shopping', 'start', 'finish']
score: int # score could be from 1 to 5
-@field_validator("type")
-@classmethod
-def validate_type(cls, v):
-if v not in {'sightseeing', 'nature', 'shopping', 'start', 'finish'}:
-raise ValueError(f"Invalid type: {v}")
-return v
# Input for optimization
class Preferences(BaseModel) :
@@ -39,16 +32,3 @@ class Preferences(BaseModel) :
max_time_minute: Optional[int] = 3*60
detour_tolerance_minute: Optional[int] = 0
-def model_post_init(self, __context):
-"""
-Method to validate proper initialization of individual Preferences.
-Raises ValueError if the Preference type does not match with the field name.
-"""
-if self.sightseeing.type != 'sightseeing':
-raise ValueError(f'The sightseeing preference cannot be {self.sightseeing.type}.')
-if self.nature.type != 'nature':
-raise ValueError(f'The nature preference cannot be {self.nature.type}.')
-if self.shopping.type != 'shopping':
-raise ValueError(f'The shopping preference cannot be {self.shopping.type}.')
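The dropped field_validator duplicated what the Literal annotation already enforces; a minimal sketch of that behaviour, assuming pydantic v2:

    from typing import Literal
    from pydantic import BaseModel, ValidationError

    class Preference(BaseModel):
        type: Literal['sightseeing', 'nature', 'shopping', 'start', 'finish']
        score: int

    try:
        Preference(type='museum', score=3)
    except ValidationError as err:
        print(err)  # Literal already rejects unknown values, no custom validator needed

Note that the removed model_post_init covered a different, cross-field check (each named preference carrying its matching type); the mixed-up-type test cases below were dropped along with it.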

View File

@@ -1,26 +0,0 @@
"""Definition of the Toilets class."""
from typing import Optional
from pydantic import BaseModel, ConfigDict
class Toilets(BaseModel) :
"""
Model for toilets. When false/empty the information is either false or not known.
"""
location : tuple
wheelchair : Optional[bool] = False
changing_table : Optional[bool] = False
fee : Optional[bool] = False
opening_hours : Optional[str] = ""
def __str__(self) -> str:
"""
String representation of the Toilets object.
Returns:
str: A formatted string with the toilets location.
"""
return f'Toilets @{self.location}'
model_config = ConfigDict(from_attributes=True)

View File

@@ -4,6 +4,7 @@ from fastapi.testclient import TestClient
import pytest
from ..main import app
+from ..constants import SUPABASE_TEST_USER_ID
@pytest.fixture(scope="module")
@@ -19,50 +20,30 @@ def invalid_client():
([48.8566, 2.3522], {}, 422),
# Invalid cases: incomplete preferences.
-([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 5}, # no shopping pref
+([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no shopping
"nature": {"type": "nature", "score": 5},
}, 422),
-([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 5}, # no nature pref
+([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 5}, # no nature
"shopping": {"type": "shopping", "score": 5},
}, 422),
-([48.084588, 7.280405], {"nature": {"type": "nature", "score": 5}, # no sightseeing pref
+([48.084588, 7.280405], {"nature": {"type": "nature", "score": 5}, # no sightseeing
"shopping": {"type": "shopping", "score": 5},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "nature", "score": 1}, # mixed up preferences types. TODO: i suggest reducing the complexity by remove the Preference object.
"nature": {"type": "shopping", "score": 1},
"shopping": {"type": "shopping", "score": 1},
}, 422),
([48.084588, 7.280405], {"doesnotexist": {"type": "sightseeing", "score": 2}, # non-existing preferences types
"nature": {"type": "nature", "score": 2},
"shopping": {"type": "shopping", "score": 2},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 3}, # non-existing preferences types
"nature": {"type": "doesntexisteither", "score": 3},
"shopping": {"type": "shopping", "score": 3},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": -1}, # negative preference value
"nature": {"type": "doesntexisteither", "score": 4},
"shopping": {"type": "shopping", "score": 4},
}, 422),
([48.084588, 7.280405], {"sightseeing": {"type": "sightseeing", "score": 10}, # too high preference value
"nature": {"type": "doesntexisteither", "score": 4},
"shopping": {"type": "shopping", "score": 4},
}, 422),
# Invalid cases: unexisting coords
-([91, 181], {"sightseeing": {"type": "sightseeing", "score": 5},
+([91, 181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
-([-91, 181], {"sightseeing": {"type": "sightseeing", "score": 5},
+([-91, 181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
-([91, -181], {"sightseeing": {"type": "sightseeing", "score": 5},
+([91, -181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
-([-91, -181], {"sightseeing": {"type": "sightseeing", "score": 5},
+([-91, -181], {"sightseeing": {"type": "nature", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
}, 422),
@@ -73,10 +54,40 @@ def test_input(invalid_client, start, preferences, status_code): # pylint: dis
Test new trip creation with different sets of preferences and locations.
"""
response = invalid_client.post(
-"/get/landmarks",
-json ={
+"/trip/new",
+json={
+"user_id": SUPABASE_TEST_USER_ID,
"preferences": preferences,
"start": start
}
)
assert response.status_code == status_code
+@pytest.mark.parametrize(
+"user_id,status_code",
+[
+# No user id :
+({}, 422),
+("invalid_user_id", 400),
+# ("12345678-1234-5678-1234-567812345678", 406)
+]
+)
+def test_invalid_user_id(invalid_client, user_id, status_code): # pylint: disable=redefined-outer-name
+"""
+Test new trip creation with an invalid user ID.
+"""
+response = invalid_client.post(
+"/trip/new",
+json={
+"user_id": user_id,
+"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
+"nature": {"type": "nature", "score": 0},
+"shopping": {"type": "shopping", "score": 0},
+"max_time_minute": 20,
+"detour_tolerance_minute": 0},
+"start": [48.084588, 7.280405]
+}
+)
+assert response.status_code == status_code

View File

@@ -0,0 +1,371 @@
"""Collection of tests to ensure correct implementation and track progress. """
import time
import logging
from fastapi.testclient import TestClient
import pytest
from .test_utils import load_trip_landmarks, log_trip_details
from ..main import app
from ..payments.supabase import Supabase
supabase = Supabase()
logger = logging.getLogger(__name__)
USER_ID = supabase.SUPABASE_TEST_USER_ID
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
def test_turckheim(client, request): # pylint: disable=redefined-outer-name
"""
Test n°1 : Custom test in Turckheim to ensure small villages are also supported.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 20
logger.debug('Running test in Turckheim')
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 0},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.084588, 7.280405]
# "start": [45.74445023349939, 4.8222687890538865]
# "start": [45.75156398104873, 4.827154464827647]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert isinstance(landmarks, list) # check that the return type is a list
assert len(landmarks) > 2 # check that there is something to visit
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
# assert 2!= 3
def test_bellecour(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°2 : Custom test in Lyon centre to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 120
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [45.7576485, 4.8330241]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_cologne(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°3 : Custom test in Cologne to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 240
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [50.942352665, 6.957777972392]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_strasbourg(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°4 : Custom test in Strasbourg to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 180
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.5846589226, 7.74078715721]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_zurich(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°5 : Custom test in Zurich to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 180
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [47.377884227, 8.5395114066]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_paris(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°6 : Custom test in Paris (les Halles) centre to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 200
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [48.85468881798671, 2.3423925755998374]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_new_york(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°7 : Custom test in New York to ensure proper decision making in crowded area.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 600
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 5},
"nature": {"type": "nature", "score": 5},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [40.72592726802, -73.9920434795]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
def test_shopping(client, request) : # pylint: disable=redefined-outer-name
"""
Test n°8 : Custom test in Lyon centre to ensure shopping clusters are found.
Args:
client:
request:
"""
start_time = time.time() # Start timer
duration_minutes = 240
response = client.post(
"/trip/new",
json={
"user_id": USER_ID,
"preferences": {"sightseeing": {"type": "sightseeing", "score": 0},
"nature": {"type": "nature", "score": 0},
"shopping": {"type": "shopping", "score": 5},
"max_time_minute": duration_minutes,
"detour_tolerance_minute": 0},
"start": [45.7576485, 4.8330241]
}
)
result = response.json()
supabase.increment_credit_balance(user_id=USER_ID)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], duration_minutes)
# for elem in landmarks :
# print(elem)
# checks :
assert response.status_code == 200 # check for successful planning
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert duration_minutes*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {duration_minutes}"
assert duration_minutes*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {duration_minutes}"
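Each of these tests burns one credit on /trip/new and immediately refunds it; a sketch of how that bookkeeping could be factored into an autouse fixture instead (a hypothetical refactor, not part of this diff):

    @pytest.fixture(autouse=True)
    def refund_credit():
        """Top the test user's balance back up after every test in this module."""
        yield
        supabase.increment_credit_balance(user_id=USER_ID)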

View File

@@ -1,46 +0,0 @@
"""Collection of tests to ensure correct implementation and track progress of the get_landmarks_nearby feature. """
from fastapi.testclient import TestClient
import pytest
from ..main import app
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"location,status_code",
[
([45.7576485, 4.8330241], 200), # Lyon, France
([41.4020572, 2.1818985], 200), # Barcelona, Spain
([59.3293, 18.0686], 200), # Stockholm, Sweden
([43.6532, -79.3832], 200), # Toronto, Canada
([38.7223, -9.1393], 200), # Lisbon, Portugal
([6.5244, 3.3792], 200), # Lagos, Nigeria
([17.3850, 78.4867], 200), # Hyderabad, India
([30.0444, 31.2357], 200), # Cairo, Egypt
([50.8503, 4.3517], 200), # Brussels, Belgium
([35.2271, -80.8431], 200), # Charlotte, USA
([10.4806, -66.9036], 200), # Caracas, Venezuela
([9.51074, -13.71118], 200), # Conakry, Guinea
]
)
def test_nearby(client, location, status_code): # pylint: disable=redefined-outer-name
"""
Test n°1 : Verify handling of invalid input.
Args:
client:
request:
"""
response = client.post(f"/get-nearby/landmarks/{location[0]}/{location[1]}")
suggestions = response.json()
# checks :
assert response.status_code == status_code # check for successful planning
assert isinstance(suggestions, list) # check that the return type is a list
assert len(suggestions) > 0

View File

@@ -3,7 +3,7 @@
from fastapi.testclient import TestClient
import pytest
-from ..structs.toilets import Toilets
+from ..structs.landmark import Toilets
from ..main import app
@@ -18,7 +18,7 @@ def client():
[
({}, None, 422), # Invalid case: no location at all.
([443], None, 422), # Invalid cases: invalid location.
([443, 433], None, 422), # Invalid cases: invalid location.
]
)
def test_invalid_input(client, location, radius, status_code): # pylint: disable=redefined-outer-name
@@ -30,7 +30,7 @@ def test_invalid_input(client, location, radius, status_code): # pylint: disa
request:
"""
response = client.post(
-"/get/toilets",
+"/toilets/new",
params={
"location": location,
"radius": radius
@@ -58,7 +58,7 @@ def test_no_toilets(client, location, status_code): # pylint: disable=redefin
request:
"""
response = client.post(
-"/get/toilets",
+"/toilets/new",
params={
"location": location
}
@@ -87,7 +87,7 @@ def test_toilets(client, location, status_code): # pylint: disable=redefined-
request:
"""
response = client.post(
-"/get/toilets",
+"/toilets/new",
params={
"location": location,
"radius" : 600

View File

@@ -1,101 +0,0 @@
"""Collection of tests to ensure correct implementation and track progress."""
import os
import time
import yaml
from fastapi.testclient import TestClient
import pytest
from .test_utils import load_trip_landmarks, log_trip_details
from ..supabase.supabase import SupabaseClient
from ..structs.preferences import Preferences, Preference
from ..constants import PARAMETERS_DIR
from ..main import app
# Create a supabase client
supabase = SupabaseClient()
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
@pytest.mark.parametrize(
"sightseeing, shopping, nature, max_time_minute, start_coords, end_coords",
[
# Edge cases
(0, 0, 5, 240, [45.7576485, 4.8330241], None), # Lyon, Bellecour - test shopping only
# Realistic
(5, 0, 0, 20, [48.0845881, 7.2804050], None), # Turckheim
(5, 5, 5, 120, [45.7576485, 4.8330241], None), # Lyon, Bellecour
(5, 2, 5, 240, [50.9423526, 6.9577780], None), # Cologne, centre
(3, 5, 0, 180, [48.5846589226, 7.74078715721], None), # Strasbourg, centre
(2, 4, 5, 180, [47.377884227, 8.5395114066], None), # Zurich, centre
(5, 0, 5, 200, [48.85468881798671, 2.3423925755998374], None), # Paris, centre
(5, 5, 5, 600, [40.72592726802, -73.9920434795], None), # New York, Lower Manhattan
]
)
def test_trip(client, request, sightseeing, shopping, nature, max_time_minute, start_coords, end_coords):
start_time = time.time() # Start timer
prefs = Preferences(
sightseeing=Preference(type='sightseeing', score=sightseeing),
shopping=Preference(type='shopping', score=shopping),
nature=Preference(type='nature', score=nature),
max_time_minute=max_time_minute,
detour_tolerance_minute=0,
)
start = start_coords
end = end_coords
# Step 1: request the list of landmarks in the vicinty of the starting point
response = client.post(
"/get/landmarks",
json={
"preferences": prefs.model_dump(),
"start": start_coords,
"end": end_coords,
}
)
assert response.status_code == 200
landmarks = response.json()
# Step 2: Feed the landmarks to the optimizer to compute the trip
response = client.post(
"/optimize/trip",
json={
"user_id": supabase.SUPABASE_TEST_USER_ID,
"preferences": prefs.model_dump(),
"landmarks": landmarks,
"start": start,
"end": end,
}
)
assert response.status_code == 200
# Increment the user balance again
supabase.increment_credit_balance(
supabase.SUPABASE_TEST_USER_ID,
amount=1
)
# Parse the response
result = response.json()
# print(result)
landmarks = load_trip_landmarks(client, result['first_landmark_uuid'])
# Get computation time
comp_time = time.time() - start_time
# Add details to report
log_trip_details(request, landmarks, result['total_time'], prefs.max_time_minute)
# checks :
assert comp_time < 30, f"Computation time exceeded 30 seconds: {comp_time:.2f} seconds"
assert prefs.max_time_minute*0.8 < result['total_time'], f"Trip too short: {result['total_time']} instead of {prefs.max_time_minute}"
assert prefs.max_time_minute*1.2 > result['total_time'], f"Trip too long: {result['total_time']} instead of {prefs.max_time_minute}"

View File

@@ -0,0 +1,48 @@
"""Collection of tests to ensure correct handling of user data."""
from fastapi.testclient import TestClient
import pytest
from ..main import app
TEST_EMAIL = "dummy@example.com"
TEST_PW = "DummyPassword123"
@pytest.fixture(scope="module")
def client():
"""Client used to call the app."""
return TestClient(app)
def test_user_handling(client) :
"""
Test the creation of a new user.
"""
# Create a new user
response = client.post(f"/user/create/{TEST_EMAIL}/{TEST_PW}")
# Verify user has been created
assert response.status_code == 200, "Failed to create dummy user"
user_id = response.json()
# Create same user again to raise an error
response = client.post(f"/user/create/{TEST_EMAIL}/{TEST_PW}")
# Verify user already exists
assert response.status_code == 422, "Failed to simulate dummy user already created."
# Delete the user.
response = client.post(f"/user/delete/{user_id}")
# Verify user has been deleted
assert response.status_code == 200, "Failed to delete dummy user."
# Delete the user again to raise an error
response = client.post(f"/user/delete/{user_id}")
# Verify user has been deleted
assert response.status_code == 404, "Failed to simulate dummy user already deleted."

View File

@@ -1,12 +1,10 @@
"""Helper methods for testing.""" """Helper methods for testing."""
import time
import logging import logging
from functools import wraps
from fastapi import HTTPException from fastapi import HTTPException
from pydantic import ValidationError
from ..cache import client as cache_client
from ..structs.landmark import Landmark from ..structs.landmark import Landmark
from ..structs.preferences import Preferences, Preference from ..cache import client as cache_client
def landmarks_to_osmid(landmarks: list[Landmark]) -> list[int] : def landmarks_to_osmid(landmarks: list[Landmark]) -> list[int] :
@@ -41,7 +39,7 @@ def fetch_landmark(landmark_uuid: str):
try:
landmark = cache_client.get(f'landmark_{landmark_uuid}')
if not landmark :
-logger.error(f'Cache miss for landmark UUID: {landmark_uuid}')
+logger.warning(f'Cache miss for landmark UUID: {landmark_uuid}')
raise HTTPException(status_code=404, detail=f'Landmark with UUID {landmark_uuid} not found in cache.')
# Validate that the fetched data is a dictionary
@@ -94,34 +92,3 @@ def log_trip_details(request, landmarks: list[Landmark], duration: int, target_d
request.node.trip_details = trip_string
request.node.trip_duration = str(duration) # result['total_time']
request.node.target_duration = str(target_duration)
-def trip_params(
-sightseeing: int,
-shopping: int,
-nature: int,
-max_time_minute: int,
-start_coords: tuple[float, float] = None,
-end_coords: tuple[float, float] = None,
-):
-def decorator(test_func):
-@wraps(test_func)
-def wrapper(client, request):
-prefs = Preferences(
-sightseeing=Preference(type='sightseeing', score=sightseeing),
-shopping=Preference(type='shopping', score=shopping),
-nature=Preference(type='nature', score=nature),
-max_time_minute=max_time_minute,
-detour_tolerance_minute=0,
-)
-start = start_coords
-end = end_coords
-# Inject into test function
-return test_func(client, request, prefs, start, end)
-return wrapper
-return decorator

View File

@@ -1,43 +0,0 @@
"""API entry point for fetching toilet locations."""
from fastapi import HTTPException, APIRouter, Query
from .toilets_manager import ToiletsManager
from ..structs.toilets import Toilets
# Initialize the API router
router = APIRouter()
@router.post("/get/toilets")
def get_toilets(
location: tuple[float, float] = Query(...),
radius: int = 500
) -> list[Toilets] :
"""
Endpoint to find toilets within a specified radius from a given location.
This endpoint expects the `location` and `radius` as **query parameters**, not in the request body.
Args:
location (tuple[float, float]): The latitude and longitude of the location to search from.
radius (int, optional): The radius (in meters) within which to search for toilets. Defaults to 500 meters.
Returns:
list[Toilets]: A list of Toilets objects that meet the criteria.
"""
if location is None:
raise HTTPException(status_code=406, detail="Coordinates not provided or invalid")
if not (-90 <= location[0] <= 90 or -180 <= location[1] <= 180):
raise HTTPException(status_code=422, detail="Start coordinates not in range")
toilets_manager = ToiletsManager(location, radius)
try :
toilets_list = toilets_manager.generate_toilet_list()
except KeyError as exc:
raise HTTPException(status_code=404, detail="No toilets found") from exc
return toilets_list

View File

@@ -8,8 +8,8 @@ from pydantic import BaseModel
from ..overpass.overpass import Overpass, get_base_info
from ..structs.landmark import Landmark
-from ..utils.get_time_distance import get_distance
-from ..utils.bbox import create_bbox
+from .get_time_distance import get_distance
+from .utils import create_bbox
@@ -103,7 +103,7 @@ class ClusterManager:
out = out
)
except Exception as e:
-self.logger.warning(f"Error fetching clusters: {e}")
+self.logger.error(f"Error fetching clusters: {e}")
if result is None :
self.logger.debug(f"Found no {cluster_type} clusters, overpass query returned no datapoints.")
@@ -113,7 +113,7 @@ class ClusterManager:
points = []
for elem in result:
osm_type = elem.get('type')
# Get coordinates and append them to the points list
_, coords = get_base_info(elem, osm_type)
if coords is not None :
@@ -134,7 +134,7 @@ class ClusterManager:
# Check that there is at least 1 cluster
if len(set(labels)) > 1 :
-self.logger.info(f"Found {len(set(labels))} different {cluster_type} clusters.")
+self.logger.info(f"Found {len(set(labels))} {cluster_type} clusters.")
# Separate clustered points and noise points
self.cluster_points = self.all_points[labels != -1]
self.cluster_labels = labels[labels != -1]
@@ -146,7 +146,7 @@ class ClusterManager:
self.valid = False
else :
-self.logger.debug(f"Found 0 {cluster_type} clusters.")
+self.logger.debug(f"Detected 0 {cluster_type} clusters.")
self.valid = False
@@ -217,7 +217,7 @@ class ClusterManager:
# Define the bounding box for a given radius around the coordinates
bbox = create_bbox(cluster.centroid, 300)
# Query neighborhoods and shopping malls
selectors = ['"place"~"^(suburb|neighborhood|neighbourhood|quarter|city_block)$"']
@@ -242,25 +242,27 @@ class ClusterManager:
out = 'ids center tags'
)
except Exception as e:
-self.logger.warning(f"Error fetching clusters: {e}")
+self.logger.error(f"Error fetching clusters: {e}")
continue
if result is None :
-self.logger.warning(f"Error fetching clusters: query result is None")
+self.logger.error("Error fetching clusters: query result is None")
continue
for elem in result:
-# Get basic info
-id, coords, name = get_base_info(elem, elem.get('type'), with_name=True)
+osm_type = elem.get('type')
+id, coords, name = get_base_info(elem, osm_type, with_name=True)
if name is None or coords is None :
continue
d = get_distance(cluster.centroid, coords)
if d < min_dist :
min_dist = d
new_name = name # add name
-osm_type = elem.get('type') # add type: 'way' or 'relation'
+osm_type = osm_type # add type: 'way' or 'relation'
osm_id = id # add OSM id
return Landmark(
name=new_name,

View File

@@ -1,123 +0,0 @@
"""Add more information about the landmarks by writing a short description and keywords. """
def description_and_keywords(tags: dict):
"""
Generates a description and a set of keywords for a given landmark based on its tags.
Params:
tags (dict): A dictionary containing metadata about the landmark, including its name,
importance, height, date of construction, and visitor information.
Returns:
description (str): A string description of the landmark.
keywords (dict): A dictionary of keywords with fields such as 'importance', 'height',
'place_type', and 'date'.
"""
# Extract relevant fields
name = tags.get('name')
importance = tags.get('importance', None)
n_visitors = tags.get('tourism:visitors', None)
height = tags.get('height')
place_type = get_place_type(tags)
date = get_date(tags)
if place_type is None :
return None, None
# Start the description.
if importance is None :
if len(tags.keys()) < 5 :
return None, None
if len(tags.keys()) < 10 :
description = f"{name} is a well known {place_type}."
elif len(tags.keys()) < 17 :
importance = 'national'
description = f"{name} is a {place_type} of national importance."
else :
importance = 'international'
description = f"{name} is an internationally famous {place_type}."
else :
description = f"{name} is a {place_type} of {importance} importance."
if height is not None and date is not None :
description += f" This {place_type} was constructed in {date} and is ca. {height} meters high."
elif height is not None :
description += f" This {place_type} stands ca. {height} meters tall."
elif date is not None:
description += f" It was constructed in {date}."
# Format the visitor number
if n_visitors is not None :
n_visitors = int(n_visitors)
if n_visitors < 1000000 :
description += f" It welcomes {int(n_visitors/1000)} thousand visitors every year."
else :
description += f" It welcomes {round(n_visitors/1000000, 1)} million visitors every year."
# Set the keywords.
keywords = {"importance": importance,
"height": height,
"place_type": place_type,
"date": date}
return description, keywords
def get_place_type(tags):
"""
Determines the type of the place based on available tags such as 'amenity', 'building',
'historic', and 'leisure'. The priority order is: 'historic' > 'building' (if not generic) >
'amenity' > 'leisure'.
Params:
tags (dict): A dictionary containing metadata about the place.
Returns:
place_type (str): The determined type of the place, or None if no relevant type is found.
"""
amenity = tags.get('amenity', None)
building = tags.get('building', None)
historic = tags.get('historic', None)
leisure = tags.get('leisure')
if historic and historic != "yes":
return historic
if building and building not in ["yes", "civic", "government", "apartments", "residential", "commericial", "industrial", "retail", "religious", "public", "service"]:
return building
if amenity:
return amenity
if leisure:
return leisure
return None
def get_date(tags):
"""
Extracts the most relevant date from the available tags, prioritizing 'construction_date',
'start_date', 'year_of_construction', and 'opening_date' in that order.
Params:
tags (dict): A dictionary containing metadata about the place.
Returns:
date (str): The most relevant date found, or None if no date is available.
"""
construction_date = tags.get('construction_date', None)
opening_date = tags.get('opening_date', None)
start_date = tags.get('start_date', None)
year_of_construction = tags.get('year_of_construction', None)
# Prioritize based on availability
if construction_date:
return construction_date
if start_date:
return start_date
if year_of_construction:
return year_of_construction
if opening_date:
return opening_date
return None
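For review context, a sketch of what the removed helper produced for a typical tag set (the tag values are made up for illustration):

    tags = {
        'name': 'Eiffel Tower',        # hypothetical tag set
        'importance': 'international',
        'height': '330',
        'start_date': '1889',
        'tourism:visitors': '6000000',
        'historic': 'tower',
    }
    description, keywords = description_and_keywords(tags)
    # description -> "Eiffel Tower is a tower of international importance.
    #                 This tower was constructed in 1889 and is ca. 330 meters high.
    #                 It welcomes 6.0 million visitors every year."
    # keywords    -> {'importance': 'international', 'height': '330',
    #                 'place_type': 'tower', 'date': '1889'}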

View File

@@ -4,9 +4,10 @@ import yaml
from ..structs.preferences import Preferences
from ..structs.landmark import Landmark
+from .take_most_important import take_most_important
from .cluster_manager import ClusterManager
from ..overpass.overpass import Overpass, get_base_info
-from ..utils.bbox import create_bbox
+from .utils import create_bbox
from ..constants import AMENITY_SELECTORS_PATH, LANDMARK_PARAMETERS_PATH, OPTIMIZER_PARAMETERS_PATH
@@ -22,7 +23,7 @@ class LandmarkManager:
church_coeff: float # coeff to adjust score of churches
nature_coeff: float # coeff to adjust score of parks
overall_coeff: float # coeff to adjust weight of tags
-# n_important: int # number of important landmarks to consider
+n_important: int # number of important landmarks to consider
def __init__(self) -> None:
@@ -41,7 +42,7 @@ class LandmarkManager:
self.wikipedia_bonus = parameters['wikipedia_bonus']
self.viewpoint_bonus = parameters['viewpoint_bonus']
self.pay_bonus = parameters['pay_bonus']
-# self.n_important = parameters['N_important']
+self.n_important = parameters['N_important']
with OPTIMIZER_PARAMETERS_PATH.open('r') as f:
parameters = yaml.safe_load(f)
@@ -54,12 +55,7 @@ class LandmarkManager:
self.logger.info('LandmarkManager successfully initialized.')
-def generate_landmarks_list(
-self,
-center_coordinates: tuple[float, float],
-preferences: Preferences,
-allow_clusters: bool = True
-) -> list[Landmark] :
+def generate_landmarks_list(self, center_coordinates: tuple[float, float], preferences: Preferences) -> tuple[list[Landmark], list[Landmark]]:
"""
Generate and prioritize a list of landmarks based on user preferences.
@@ -67,17 +63,16 @@ class LandmarkManager:
and current location. It scores and corrects these landmarks, removes duplicates, and then selects the most important
landmarks based on a predefined criterion.
-Parameters :
+Args:
center_coordinates (tuple[float, float]): The latitude and longitude of the center location around which to search.
preferences (Preferences): The user's preference settings that influence the landmark selection.
-allow_clusters (bool, optional) : If set to False, no clusters will be fetched. Mainly used for the option to fetch landmarks nearby.
Returns:
tuple[list[Landmark], list[Landmark]]:
- A list of all existing landmarks.
- A list of the most important landmarks based on the user's preferences.
"""
-self.logger.info(f'Starting to fetch landmarks around {center_coordinates}...')
+self.logger.debug('Starting to fetch landmarks...')
max_walk_dist = int((preferences.max_time_minute/2)/60*self.walking_speed*1000/self.detour_factor)
radius = min(max_walk_dist, int(self.max_bbox_side/2))
@@ -94,11 +89,10 @@ class LandmarkManager:
all_landmarks.update(current_landmarks)
self.logger.info(f'Found {len(current_landmarks)} sightseeing landmarks')
-if allow_clusters :
# special pipeline for historic neighborhoods
neighborhood_manager = ClusterManager(bbox, 'sightseeing')
historic_clusters = neighborhood_manager.generate_clusters()
all_landmarks.update(historic_clusters)
# list for nature
if preferences.nature.score != 0:
@@ -119,19 +113,16 @@ class LandmarkManager:
landmark.duration = 30
all_landmarks.update(current_landmarks)
-if allow_clusters :
-# special pipeline for shopping malls
-shopping_manager = ClusterManager(bbox, 'shopping')
-shopping_clusters = shopping_manager.generate_clusters()
-all_landmarks.update(shopping_clusters)
+# special pipeline for shopping malls
+shopping_manager = ClusterManager(bbox, 'shopping')
+shopping_clusters = shopping_manager.generate_clusters()
+all_landmarks.update(shopping_clusters)
-# DETAILS HERE
+landmarks_constrained = take_most_important(all_landmarks, self.n_important)
# self.logger.info(f'All landmarks generated : {len(all_landmarks)} landmarks around {center_coordinates}, and constrained to {len(landmarks_constrained)} most important ones.')
-self.logger.info(f'Found {len(all_landmarks)} landmarks in total.')
-return sorted(all_landmarks, key=lambda x: x.attractiveness, reverse=True)
+return all_landmarks, landmarks_constrained
def set_landmark_score(self, landmark: Landmark, landmarktype: str, preference_level: int) :
"""
@@ -206,7 +197,7 @@ class LandmarkManager:
out = 'ids center tags'
)
except Exception as e:
-self.logger.debug(f"Failed to fetch landmarks, proceeding without: {str(e)}")
+self.logger.error(f"Error fetching landmarks: {e}")
continue
return_list += self._to_landmarks(result, landmarktype, preference_level)
@@ -245,17 +236,6 @@ class LandmarkManager:
continue
tags = elem.get('tags')
-n_tags=len(tags)
-# Skip this landmark if not suitable
-if tags.get('building:part') is not None :
-continue
-if tags.get('disused') is not None :
-continue
-if tags.get('boundary') is not None :
-continue
-if tags.get('shop') is not None and landmarktype != 'shopping' :
-continue
# Convert this to Landmark object
landmark = Landmark(name=name,
@@ -264,36 +244,57 @@ class LandmarkManager:
osm_id=id,
osm_type=osm_type,
attractiveness=0,
-n_tags=n_tags)
+n_tags=len(tags))
-# Extract useful information for score calculation later down the road.
-landmark.image_url = tags.get('image')
-landmark.website_url = tags.get('website')
-landmark.wiki_url = tags.get('wikipedia')
-landmark.name_en = tags.get('name:en')
+# self.logger.debug('added landmark.')
-# Check for place of worship
-if tags.get('place_of_worship') is not None :
+# Browse through tags to add information to landmark.
+for key, value in tags.items():
+# Skip this landmark if not suitable.
+if key == 'building:part' and value == 'yes' :
+break
+if 'disused:' in key :
+break
+if 'boundary:' in key :
+break
+if 'shop' in key and landmarktype != 'shopping' :
+break
+# if value == 'apartments' :
+# break
+# Fill in the other attributes.
+if key == 'image' :
+landmark.image_url = value
+if key == 'website' :
+landmark.website_url = value
+if value == 'place_of_worship' :
landmark.is_place_of_worship = True
-landmark.name_en = tags.get('place_of_worship')
+if key == 'wikipedia' :
+landmark.wiki_url = value
+if key == 'name:en' :
+landmark.name_en = value
+if 'building:' in key or 'pay' in key :
+landmark.n_tags -= 1
-# Set the duration. Needed for the optimization.
-if tags.get('amenity') in ['aquarium', 'planetarium'] or tags.get('tourism') in ['aquarium', 'museum', 'zoo']:
+# Set the duration.
+if value in ['museum', 'aquarium', 'planetarium'] :
landmark.duration = 60
-elif tags.get('tourism') == 'viewpoint' :
+elif value == 'viewpoint' :
landmark.is_viewpoint = True
landmark.duration = 10
-elif tags.get('building') == 'cathedral' :
+elif value == 'cathedral' :
landmark.is_place_of_worship = False
landmark.duration = 10
-# Compute the score and add landmark to the list.
+else:
self.set_landmark_score(landmark, landmarktype, preference_level)
landmarks.append(landmark)
+continue
return landmarks
def dict_to_selector_list(d: dict) -> list:
"""
Convert a dictionary of key-value pairs to a list of Overpass query strings.

View File

@@ -0,0 +1,17 @@
"""Helper function to return only the major landmarks from a large list."""
from ..structs.landmark import Landmark
def take_most_important(landmarks: list[Landmark], n_important) -> list[Landmark]:
"""
Given a list of landmarks, return the n_important most important landmarks
Args:
landmarks: list[Landmark] - list of landmarks
n_important: int - number of most important landmarks to return
Returns:
list[Landmark] - list of the n_important most important landmarks
"""
# Sort landmarks by attractiveness (descending)
sorted_landmarks = sorted(landmarks, key=lambda x: x.attractiveness, reverse=True)
return sorted_landmarks[:n_important]
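The new helper is a plain sort-and-slice; a minimal sketch of its behaviour with a stand-in object (the real Landmark model has more required fields):

    from dataclasses import dataclass

    @dataclass
    class FakeLandmark:  # stand-in for structs.landmark.Landmark
        name: str
        attractiveness: int

    landmarks = [FakeLandmark('a', 3), FakeLandmark('b', 9), FakeLandmark('c', 5)]
    top_two = sorted(landmarks, key=lambda x: x.attractiveness, reverse=True)[:2]
    print([lm.name for lm in top_two])  # -> ['b', 'c']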

View File

@@ -2,8 +2,8 @@
import logging
from ..overpass.overpass import Overpass, get_base_info
-from ..structs.toilets import Toilets
-from ..utils.bbox import create_bbox
+from ..structs.landmark import Toilets
+from .utils import create_bbox
# silence the overpass logger
@@ -65,13 +65,11 @@ class ToiletsManager:
try:
result = self.overpass.fetch_data_from_api(query_str=query)
except Exception as e:
-self.logger.error(f"Error fetching toilets: {e}")
+self.logger.error(f"Error fetching landmarks: {e}")
return None
toilets_list = self.to_toilets(result)
-self.logger.debug(f'Found {len(toilets_list)} toilets around {self.location}')
return toilets_list

View File

@@ -24,4 +24,4 @@ def create_bbox(coords: tuple[float, float], radius: int):
lon_min = lon - d_lon * 180 / m.pi
lon_max = lon + d_lon * 180 / m.pi
return (lat_min, lon_min, lat_max, lon_max)
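Only the tail of create_bbox is visible in this hunk; a self-contained sketch of the small-angle construction it appears to implement (the earth-radius constant and the lines above the visible tail are assumptions):

    import math as m

    EARTH_RADIUS = 6371000  # meters, assumed constant

    def create_bbox(coords: tuple[float, float], radius: int):
        """Approximate a square bounding box of half-side `radius` around coords."""
        lat, lon = coords
        d_lat = radius / EARTH_RADIUS                            # angular offset in radians
        d_lon = radius / (EARTH_RADIUS * m.cos(m.radians(lat)))  # widen with latitude
        lat_min = lat - d_lat * 180 / m.pi
        lat_max = lat + d_lat * 180 / m.pi
        lon_min = lon - d_lon * 180 / m.pi
        lon_max = lon + d_lon * 180 / m.pi
        return (lat_min, lon_min, lat_max, lon_max)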

backend/uv.lock generated

File diff suppressed because it is too large

View File

@@ -33,6 +33,7 @@ fetchTrip(
UserPreferences preferences,
) async {
Map<String, dynamic> data = {
+// Add user ID here for API request
"preferences": preferences.toJson(),
"start": trip.landmarks!.first.location,
};

View File

@@ -328,7 +328,7 @@ div.media {
</head>
<body>
<h1 id="title">report.html</h1>
-<p>Report generated on 13-Jul-2025 at 17:24:02 by <a href="https://pypi.python.org/pypi/pytest-html">pytest-html</a>
+<p>Report generated on 13-Feb-2025 at 07:19:30 by <a href="https://pypi.python.org/pypi/pytest-html">pytest-html</a>
v4.1.1</p>
<div id="environment-header">
<h2>Environment</h2>
@@ -429,7 +429,7 @@ div.media {
</table>
</body>
<footer>
-<div id="data-container" data-jsonblob="{&#34;environment&#34;: {&#34;Python&#34;: &#34;3.12.3&#34;, &#34;Platform&#34;: &#34;Linux-6.8.0-63-generic-x86_64-with-glibc2.39&#34;, &#34;Packages&#34;: {&#34;pytest&#34;: &#34;8.3.4&#34;, &#34;pluggy&#34;: &#34;1.5.0&#34;}, &#34;Plugins&#34;: {&#34;html&#34;: &#34;4.1.1&#34;, &#34;anyio&#34;: &#34;4.8.0&#34;, &#34;metadata&#34;: &#34;3.1.1&#34;}}, &#34;tests&#34;: {}, &#34;renderCollapsed&#34;: [&#34;passed&#34;], &#34;initialSort&#34;: &#34;result&#34;, &#34;title&#34;: &#34;report.html&#34;}"></div>
+<div id="data-container" data-jsonblob="{&#34;environment&#34;: {&#34;Python&#34;: &#34;3.12.3&#34;, &#34;Platform&#34;: &#34;Linux-6.8.0-53-generic-x86_64-with-glibc2.39&#34;, &#34;Packages&#34;: {&#34;pytest&#34;: &#34;8.3.4&#34;, &#34;pluggy&#34;: &#34;1.5.0&#34;}, &#34;Plugins&#34;: {&#34;html&#34;: &#34;4.1.1&#34;, &#34;anyio&#34;: &#34;4.8.0&#34;, &#34;metadata&#34;: &#34;3.1.1&#34;}}, &#34;tests&#34;: {}, &#34;renderCollapsed&#34;: [&#34;passed&#34;], &#34;initialSort&#34;: &#34;result&#34;, &#34;title&#34;: &#34;report.html&#34;}"></div>
<script>
(function(){function r(e,n,t){function o(i,f){if(!n[i]){if(!e[i]){var c="function"==typeof require&&require;if(!f&&c)return c(i,!0);if(u)return u(i,!0);var a=new Error("Cannot find module '"+i+"'");throw a.code="MODULE_NOT_FOUND",a}var p=n[i]={exports:{}};e[i][0].call(p.exports,function(r){var n=e[i][1][r];return o(n||r)},p,p.exports,r,e,n,t)}return n[i].exports}for(var u="function"==typeof require&&require,i=0;i<t.length;i++)o(t[i]);return o}return r})()({1:[function(require,module,exports){
const { getCollapsedCategory, setCollapsedIds } = require('./storage.js')

status
View File

@@ -1,48 +0,0 @@
error: wrong number of arguments, should be from 1 to 2
usage: git config [<options>]
Config file location
--[no-]global use global config file
--[no-]system use system config file
--[no-]local use repository config file
--[no-]worktree use per-worktree config file
-f, --[no-]file <file>
use given config file
--[no-]blob <blob-id> read config from given blob object
Action
--[no-]get get value: name [value-pattern]
--[no-]get-all get all values: key [value-pattern]
--[no-]get-regexp get values for regexp: name-regex [value-pattern]
--[no-]get-urlmatch get value specific for the URL: section[.var] URL
--[no-]replace-all replace all matching variables: name value [value-pattern]
--[no-]add add a new variable: name value
--[no-]unset remove a variable: name [value-pattern]
--[no-]unset-all remove all matches: name [value-pattern]
--[no-]rename-section rename section: old-name new-name
--[no-]remove-section remove a section: name
-l, --[no-]list list all
--[no-]fixed-value use string equality when comparing values to 'value-pattern'
-e, --[no-]edit open an editor
--[no-]get-color find the color configured: slot [default]
--[no-]get-colorbool find the color setting: slot [stdout-is-tty]
Type
-t, --[no-]type <type>
value is given this type
--bool value is "true" or "false"
--int value is decimal number
--bool-or-int value is --bool or --int
--bool-or-str value is --bool or string
--path value is a path (file or directory name)
--expiry-date value is an expiry date
Other
-z, --[no-]null terminate values with NUL byte
--[no-]name-only show variable names only
--[no-]includes respect include directives on lookup
--[no-]show-origin show origin of config (file, standard input, blob, command line)
--[no-]show-scope show scope of config (worktree, local, global, system, command)
--[no-]default <value>
with --get, use default value when missing entry