= Implementation of changes <implementation>
This section describes the adaptations that were necessary to support the individual treatment of halo mass accretion histories in #beorn. We distinguish between necessary changes, which were required to reflect the underlying model, and secondary changes, which affect the quality of the simulation outputs only indirectly.

== Profile generation accounting for halo mass history
For each halo we require a flux profile that matches its properties, which now include the accretion rate in addition to the mass and the redshift. The profiles are generated in a preprocessing step, following the redshifts of the snapshots and the mass and accretion bins defined in the configuration. Since the dynamic range of accretion rates is large, the resulting parameter space expands rapidly. The computation of the profiles therefore relies on vectorized operations to achieve reasonable runtimes.

Note that this introduces another second-order inconsistency: the flux profile attributes to each halo a radiative behavior that is motivated by its history, and since this attribution is repeated for every snapshot, the assumed histories can contradict each other. For stable halo growth this is not a problem, but erratic growth (e.g. major mergers) can lead to unphysical behavior. A more consistent approach would be a more flexible mass growth model that distinguishes between different growth modes or regimes.
== Profile generation for the extended parameter space

- Vectorized computation of profiles: all mass and accretion bins of a snapshot are evaluated in a single vectorized pass instead of in nested Python loops.

- Caching of profiles: computed profile tables are stored on disk and reused when the configuration is unchanged.
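The two points above can be sketched as follows. This is a minimal illustration, not #beorn's actual code: the bin ranges, the toy profile shape, the cache location, and all function names are our assumptions.

```python
import hashlib
import json
import os

import numpy as np

CACHE_DIR = "profile_cache"  # assumed cache location


def compute_profile_table(cfg):
    """Vectorized toy profile table over all (mass, accretion, radius) bins.

    Broadcasting evaluates every parameter combination in one pass, avoiding
    a Python loop over the rapidly expanding parameter grid. The profile
    shape below is purely illustrative.
    """
    m = np.logspace(*cfg["m_range"], cfg["n_m"])[:, None, None]         # masses
    mdot = np.logspace(*cfg["acc_range"], cfg["n_acc"])[None, :, None]  # accretion rates
    r = np.linspace(*cfg["r_range"], cfg["n_r"])[None, None, :]         # radii
    return m * np.sqrt(mdot) * np.exp(-r / 10.0) / r**2


def load_or_compute(cfg):
    """Cache the table on disk, keyed by a stable hash of the configuration."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(json.dumps(cfg, sort_keys=True).encode()).hexdigest()[:16]
    path = os.path.join(CACHE_DIR, key + ".npz")
    if os.path.exists(path):
        return np.load(path)["table"]
    table = compute_profile_table(cfg)
    np.savez_compressed(path, table=table)
    return table
```

A second call with an unchanged configuration then loads the cached table instead of recomputing it.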
== Parallel binned painting

- Shared-memory multiprocessing: worker processes paint into a single grid held in shared memory.

- Excess handling from overlaps: overlapping ionized regions are corrected in a separate, largely sequential step.
Similarly to the computation of profiles, the painting step is affected by the increased parameter space. #beorn's fast simulation times rest on a crucial simplification of the halo model: halos with the same core properties are treated identically and can be mapped onto the grid in a single operation. Adding the accretion rate as a parameter splits halos of identical mass across different bins and thus reduces the number of halos that can be painted in one operation. To mitigate this effect we implement a parallelized version of the painting step that distributes the workload onto multiple processes
#footnote[
A rudimentary parallel implementation using `MPI` already exists. It leverages the fact that each snapshot can be processed independently and distributes the snapshots onto multiple processes.
].
Our implementation instead uses a shared-memory approach: the processes on a single node store the grid in a common memory space. This makes more efficient use of node resources, since the memory overhead of duplicating the grid in every process is avoided.
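The scheme can be sketched with Python's `multiprocessing.shared_memory` module. This is our minimal illustration, not #beorn's implementation: it assumes a Unix-like system (fork start method), reduces "painting" a halo to a single-cell deposit, and accumulates each worker's private buffer into the shared grid under a lock.

```python
import multiprocessing as mp
import numpy as np
from multiprocessing import shared_memory

N = 32  # illustrative grid size; production grids are much larger


def _paint_chunk(shm_name, lock, halos):
    """Worker: accumulate one chunk of halos into the shared grid."""
    shm = shared_memory.SharedMemory(name=shm_name)
    grid = np.ndarray((N, N, N), dtype=np.float64, buffer=shm.buf)
    local = np.zeros_like(grid)   # private buffer: no races while painting
    for x, y, z, flux in halos:
        local[x, y, z] += flux    # stand-in for stamping a full flux profile
    with lock:                    # single guarded reduction per process
        grid += local
    del grid                      # release the buffer view before closing
    shm.close()


def paint_parallel(halos, n_procs=2):
    """Distribute halos over processes that share one grid in memory."""
    ctx = mp.get_context("fork")  # assumes a Unix-like system
    shm = shared_memory.SharedMemory(create=True, size=8 * N**3)
    grid = np.ndarray((N, N, N), dtype=np.float64, buffer=shm.buf)
    grid[:] = 0.0
    lock = ctx.Lock()
    chunks = [halos[i::n_procs] for i in range(n_procs)]
    procs = [ctx.Process(target=_paint_chunk, args=(shm.name, lock, c))
             for c in chunks]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    result = grid.copy()
    del grid
    shm.close()
    shm.unlink()
    return result
```

Because every process only adds into the grid, the per-worker private buffer plus one locked reduction keeps the result independent of scheduling order.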
Part of the painting procedure remains inherently sequential: the final ionization map enforces conservation of the photon count by redistributing duplicate ionizations to neighboring cells, and a parallel approach cannot guarantee a consistent result here. We therefore keep the single-process computations to a minimum.
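A minimal sketch of such a photon-conserving correction (our illustration, not #beorn's exact scheme): cells pushed above full ionization hand their excess to the six face neighbors until no cell exceeds one, with periodic boundaries assumed.

```python
import numpy as np


def redistribute_excess(xion, max_iter=200):
    """Cap the ionized fraction at 1 while conserving its total.

    Any excess above 1 is split equally among the six face neighbors
    (periodic boundaries); this repeats until no cell overflows. Purely
    an illustrative stand-in for the sequential conservation step.
    """
    x = xion.copy()
    for _ in range(max_iter):
        excess = np.clip(x - 1.0, 0.0, None)
        if not excess.any():
            break
        x -= excess
        for axis in (0, 1, 2):
            for shift in (1, -1):
                x += np.roll(excess, shift, axis=axis) / 6.0
    return x
```

Note that the result depends on the order in which overflows are resolved, which is why a parallel version cannot guarantee bitwise-identical maps.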
== Merger tree processing
Fundamental changes include:

- Treatment of invalid trees, for example trees with negative growth.

- We checked that high-alpha halos are rare and that ignoring them is valid, since their erratic histories most likely stem from misidentified progenitors.
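The filtering can be sketched as follows. We assume the common exponential parametrization of the mass accretion history, $M(z) = M_0 e^(-alpha z)$, for illustration; the cutoff value and all names are hypothetical, not #beorn's actual choices.

```python
import numpy as np

ALPHA_MAX = 2.0  # illustrative cutoff; a real threshold comes from validation


def fit_alpha(z, m):
    """Fit alpha of the exponential growth model M(z) = M0 * exp(-alpha * z).

    This parametrization is a common choice for halo mass accretion
    histories and is assumed here purely for illustration.
    """
    slope, _ = np.polyfit(z, np.log(m), 1)
    return -slope


def tree_is_valid(z, m, alpha_max=ALPHA_MAX):
    """Reject trees with negative growth or an erratically steep history.

    `z` is sorted ascending, so the mass must be non-increasing with z:
    a rise toward higher z means the halo lost mass going forward in time.
    Very large alpha values typically indicate misidentified progenitors.
    """
    if np.any(np.diff(m) > 0):
        return False
    return fit_alpha(z, m) <= alpha_max
```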
== Secondary changes
#beorn was very opinionated in its assumptions and initial data. Since we intend it to produce fast and reusable realisations, we adapted the code to be more easily adjustable:

- Improved I/O: proper HDF5 handling and caching.

- More flexible data loading.

- Refactoring for modularity.

- Refined outputs for testing and validation.

- Reduction of runtime.
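As an illustration of this direction (not #beorn's actual interface, and with all names hypothetical): previously hardcoded assumptions become explicit, overridable parameters.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class SimConfig:
    """Hypothetical sketch: hardcoded assumptions turned into parameters."""
    box_size: float = 100.0                    # [cMpc/h], assumed default
    grid_n: int = 256
    snapshot_redshifts: tuple = (6.0, 7.0, 8.0, 9.0)
    n_mass_bins: int = 50
    n_acc_bins: int = 40
    cache_dir: str = "cache"

    def with_overrides(self, **kwargs):
        """Return a copy with selected parameters replaced."""
        return replace(self, **kwargs)
```

A frozen dataclass keeps a realisation's configuration immutable, so derived products remain traceable to the exact parameters that produced them.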
For grid operations we use the `Pylians` library, whose compiled routines provide a significant speedup @Pylians.