nearly fully fleshed out now

2025-09-17 18:11:07 +02:00
parent 4302cfc914
commit fdedbdbee2
17 changed files with 260 additions and 298 deletions


@@ -2,14 +2,37 @@
= Conclusion
== Summary
- We presented Beorn, a new code to simulate the 21cm signal from the EoR and Cosmic Dawn.
- Beorn is based on the halo model and uses a novel approach to compute the 21cm signal from 1D profiles around individual sources.
- Beorn is fast, flexible, and easy to use.
- We validated Beorn against numerical simulations and showed that it can reproduce their results with good accuracy.
- Beorn is publicly available and can be used for a variety of applications, including forecasting, parameter estimation, and data analysis.
- #beorn: a semi-numerical tool to simulate the 21-cm signal
- uses the _halo model of reionization_
- describes sources in terms of their host DM halo
- $=>$ central dependence on halo growth
// since it affects the SFR and thus the emissivity
- more accurate treatment of *individual* mass accretion (see the sketch after this list)
// change in profiles trivially
- leads to significant changes to reionization history
// which could in theory be absorbed by shifting other parameters
- map-level fluctuations
// which we can hope to observe (although many are subtle)
// unique position of 21-cm cosmology -> no observational constraints to discuss yet
- #beorn python package: #link("https://github.com/cosmic-reionization/beorn")
// invite you to check out
- simulation-agnostic
- easier to use
- fully parallelized
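
To make the halo-growth dependence concrete, here is a minimal Python sketch of turning an individual halo's mass accretion into a star-formation rate (and hence its emissivity), in the spirit of the halo model of reionization. The double power-law `f_star`, the parameter names, and the values are illustrative assumptions, not beorn's actual interface or defaults.

```python
import numpy as np

# Illustrative sketch only: parameter names, values, and the f_star form
# are assumptions, not beorn's actual defaults or API.

OMEGA_B, OMEGA_M = 0.049, 0.31  # assumed cosmological parameters

def f_star(M_h, f0=0.05, M_p=1e12, g_lo=0.6, g_hi=0.5):
    """Star-formation efficiency peaking near a pivot mass M_p (illustrative)."""
    x = M_h / M_p
    return 2.0 * f0 / (x**(-g_lo) + x**g_hi)

def sfr_from_accretion(M_h, dMh_dt):
    """SFR ~ f_star(M_h) * (Omega_b / Omega_m) * dM_h/dt, per halo."""
    return f_star(M_h) * (OMEGA_B / OMEGA_M) * dMh_dt

# Example: accretion rates taken e.g. from a merger tree or a fitted growth law
M_h = np.array([1e10, 1e11, 1e12])      # halo masses [M_sun]
dMh_dt = np.array([5.0, 60.0, 700.0])   # accretion rates [M_sun / yr]
print(sfr_from_accretion(M_h, dMh_dt))  # star-formation rates [M_sun / yr]
```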
== Outlook
- further validation
// finally ready for direct comparison with c2ray? now that parameters and loading have been properly implemented
== Upcoming improvements
- investigation + parameterization of stochasticity (a sketch follows this list)
// Assuming the other relations governing the production of photons are (hopefully by now well motivated) complex
// these cannot directly be inferred => expressed as a distribution as a function of another halo property
- application to larger volumes
// the scale-up -> large volumes with usable merger trees
// committing to reserving some 100s of node hours (which I would still quantify as fast)
- fully parallel runs
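
One way the stochasticity item above could be parameterized, as a rough Python sketch: draw the star-formation efficiency per halo from a distribution conditioned on another halo property. The log-normal form and the `scatter_dex(M_h)` scaling are illustrative assumptions, placeholders for whatever parameterization the investigation settles on.

```python
import numpy as np

# Illustrative sketch only: instead of a deterministic efficiency, draw f_star
# per halo from a distribution whose width depends on another halo property
# (here: halo mass). Functional forms and values are assumptions.

rng = np.random.default_rng(seed=42)

def scatter_dex(M_h, sigma0=0.3, M_ref=1e10, slope=-0.1):
    """Assumed log-normal scatter (in dex), slowly shrinking with halo mass."""
    return sigma0 * (M_h / M_ref) ** slope

def sample_f_star(M_h, f_star_mean):
    """Draw one stochastic star-formation efficiency per halo."""
    sigma = scatter_dex(M_h)
    return f_star_mean * 10.0 ** rng.normal(0.0, sigma)

M_h = np.logspace(9, 12, 5)        # halo masses [M_sun]
f_mean = np.full_like(M_h, 0.05)   # deterministic mean efficiency
print(sample_f_star(M_h, f_mean))  # scattered efficiencies, one per halo
```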