
#import "@preview/diatypst:0.2.0": *
#show: slides.with(
title: "N-Body project ",
subtitle: "Computational Astrophysics, HS24",
date: "04.02.2024",
authors: ("Rémy Moll",),
toc: false,
// layout: "large",
// ratio: 16/9,
)
#show footnote.entry: set text(size: 0.6em)
#set footnote.entry(gap: 3pt)
#set align(horizon)
#import "helpers.typ"
// Setup of code location
#let t1 = json("../task1.ipynb")
#let t2 = json("../task2-particle-mesh.ipynb")
// Finally - The real content
= N-body forces and analytical solutions
// == Objective
// Implement naive N-body force computation and get an intuition of the challenges:
// - accuracy
// - computation time
// - stability
// $==>$ still useful to compute basic quantities of the system, but too limited for large systems or the dynamical evolution of the system
== Overview - the system
Get a feel for the particles and their distribution
#columns(2)[
#helpers.image_cell(t1, "plot_particle_distribution")
// Note: for visibility the outer particles are not shown.
#colbreak()
The system at hand is characterized by:
- $N approx 10^4$ stars
- a _spherical_ distribution
$==>$ treat the system as a *globular cluster*
#footnote[Unit handling [#link(<task1:function_apply_units>)[code]]]
]
// It is a small globular cluster with
// - 5*10^4 stars => m in terms of msol
// - radius - 10 pc
// Densities are now expressed in M_sol / pc^3
// Forces are now expressed
== Density
Compare the computed density
#footnote[Density sampling [#link(<task1:function_density_distribution>)[code]]]
with the analytical _Hernquist_ profile:
#grid(
columns: (3fr, 4fr),
inset: 0.5em,
block[
$
rho(r) = M/(2 pi) a / (r dot (r + a)^3)
$
where we infer $a$ from the half-mass radius:
$
r_"hm" = (1 + sqrt(2)) dot a
$
],
block[
#helpers.image_cell(t1, "plot_density_distribution")
]
)
// Note that by construction, the first shell contains no particles
// => the numerical density is zero there
// Having more bins means to have shells that are nearly empty
// => the error is large, NBINS = 30 is a good compromise
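#pagebreak()
A minimal sketch of the two estimates being compared (plain NumPy, code units, and the function names are assumptions; the notebook cells linked above are the reference):

```python
import numpy as np

def hernquist_density(r, M, a):
    # analytical profile: rho(r) = M / (2 pi) * a / (r * (r + a)^3)
    return M / (2 * np.pi) * a / (r * (r + a) ** 3)

def sampled_density(pos, masses, n_bins=30):
    # numerical estimate: mass per spherical shell, divided by shell volume;
    # n_bins = 30 is the compromise between resolution and nearly empty shells
    r = np.linalg.norm(pos, axis=1)
    edges = np.linspace(0, r.max(), n_bins + 1)
    mass_in_shell, _ = np.histogram(r, bins=edges, weights=masses)
    shell_volume = 4 / 3 * np.pi * np.diff(edges ** 3)
    return edges[1:], mass_in_shell / shell_volume

# the scale length follows from the half-mass radius:
# r_hm = (1 + sqrt(2)) * a  =>  a = r_hm / (1 + sqrt(2))
```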
== Force computation
#grid(
columns: (3fr, 2fr),
inset: 0.5em,
block[
#helpers.image_cell(t1, "plot_force_radial")
],
block[
Discussion:
- the analytical
#footnote[Analytical force [#link(<task1:function_analytical_forces>)[code]]]
method reproduces the expected radial force profile accurately
- at small softenings the $N^2$
#footnote[$N^2$ force [#link(<task1:function_n2_forces>)[code]]]
method has noisy artifacts
- a $1 dot epsilon$
#footnote[$epsilon$ computation [#link(<task1:function_interparticle_distance>)[code]]]
softening is a good compromise between accuracy and stability (sketched on the next slide)
]
)
// basic $N^2$ matches analytical solution without dropoff. but: noisy data from "bad" samples
// $N^2$ with softening matches analytical solution but has a dropoff. No noisy data.
// => softening $\approx 1 \varepsilon$ is a sweet spot since the dropoff is "late"
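#pagebreak()
A minimal sketch of the softened $N^2$ sum (plain NumPy and $G = 1$ code units are assumptions; the notebook's vectorization may differ):

```python
import numpy as np

def n2_forces(pos, masses, softening):
    # direct O(N^2) pairwise sum with Plummer-style softening:
    # F_i = G m_i sum_j m_j (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
    forces = np.zeros_like(pos)
    for i in range(len(masses)):
        d = pos - pos[i]                      # separation vectors to all particles
        r2 = np.sum(d * d, axis=1) + softening**2
        r2[i] = np.inf                        # exclude self-interaction
        forces[i] = masses[i] * np.sum((masses / r2**1.5)[:, None] * d, axis=0)
    return forces                             # G = 1 in code units

# a natural softening scale is the mean interparticle separation,
# eps = (V / N)^(1/3) for N particles in a volume V
```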
== Relaxation
We express the relaxation time in terms of the crossing time of the system:
$
t_"relax" = overbrace(N / (8 log N), n_"relax") dot t_"crossing"
$
where the crossing time is estimated from the velocity at the half-mass radius: $t_"crossing" = r_"hm" / v(r_"hm")$.
We find a relaxation time of $approx 30 "Myr"$ ([#link(<task1:compute_relaxation_time>)[code]])
#grid(
columns: (1fr, 1fr),
inset: 0.5em,
block[
#image("relaxation.png")
],
block[
- Each star-star encounter contributes $delta v approx (2 G m)/(b v)$
- Softening with $epsilon$ *dampens* each contribution
- $=>$ relaxation time increases
]
)
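#pagebreak()
The $approx 30 "Myr"$ estimate above can be reproduced in a few lines (a sketch; the half-mass quantities are assumed to come from the notebook):

```python
import numpy as np

def relaxation_time(n_particles, r_hm, v_hm):
    # t_relax = N / (8 ln N) * t_crossing, with t_crossing = r_hm / v_hm
    t_crossing = r_hm / v_hm
    return n_particles / (8 * np.log(n_particles)) * t_crossing
```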
// The estimate for $n_{relax}$ comes from the contribution of each star-star encounter to the velocity dispersion. This depends on the perpendicular force
// $\implies$ a bigger softening length leads to a smaller $\delta v$.
// Using $n_{relax} = \frac{v^2}{\delta v^2}$, and knowing that the value of $v^2$ is derived from the Virial theorem (i.e. unaffected by the softening length), we can see that $n_{relax}$ should increase with $\varepsilon$.
// === Effect
// - The relaxation time **increases** with increasing softening length
// - From the integration over all impact parameters $b$ even $b_{min}$ is chosen to be larger than $\varepsilon$ $\implies$ expect only a small effect on the relaxation time
// **In other words:**
// The softening dampens the change of velocity => time to relax is longer
= Particle Mesh
== Overview - the system
#page(
columns: 2
)[
#helpers.image_cell(t2, "plot_particle_distribution")
$==>$ use $M_"sys" approx 10^4 M_"sol" + M_"BH"$
]
== Force computation
#helpers.code_reference_cell(t2, "function_mesh_force")
#helpers.image_cell(t2, "plot_force_radial")
#grid(
columns: (2fr, 1fr),
inset: 0.5em,
block[
#helpers.image_cell(t2, "plot_force_radial_single")
],
block[
- the baseline is the established $N^2$
#footnote[$N^2$ force [#link(<task1:function_n2_forces>)[code]]]
with $1 dot epsilon$
#footnote[$epsilon$ computation [#link(<task1:function_interparticle_distance>)[code]]]
softening
- small grids
#footnote[Mesh force [#link(<task2:function_mesh_force>)[code]]]
are stable but inaccurate at the center
- very large grids suffer from overdiscretization
$==> 75 times 75 times 75$ is a good compromise (sketched on the next slide)
]
)
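#pagebreak()
A minimal sketch of the mesh force (assumptions: nearest-grid-point deposit instead of a higher-order scheme, a cubic box with particles in $[0, L)$, periodic boundaries implied by the FFT, $G = 1$; `function_mesh_force` is the actual implementation):

```python
import numpy as np

def mesh_force(pos, masses, n_grid=75, box=1.0):
    # 1. deposit particle masses onto the grid (nearest grid point, for brevity)
    h = box / n_grid
    idx = np.clip((pos / h).astype(int), 0, n_grid - 1)
    rho = np.zeros((n_grid,) * 3)
    np.add.at(rho, tuple(idx.T), masses)
    rho /= h**3

    # 2. solve the Poisson equation in Fourier space: phi_k = -4 pi G rho_k / k^2
    k = 2 * np.pi * np.fft.fftfreq(n_grid, d=h)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                  # avoid 0/0; the mean mode carries no force
    phi_k = -4 * np.pi * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0
    phi = np.fft.ifftn(phi_k).real

    # 3. force = -grad(phi), differenced on the grid and read back at particles
    gx, gy, gz = np.gradient(-phi, h)
    g = np.stack([gx, gy, gz], axis=-1)
    return masses[:, None] * g[tuple(idx.T)]
```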
// Some other comments:
// - see the artifacts because of the even grid numbers (hence the switch to 75)
// overdiscretization for large grids -> vertical spread even though r is constant
// this becomes even more apparent when looking at the data without noise - the artifacts remain
//
// We can not rely on the interparticle distance computation for a disk!
// Given softening length 0.037 does not match the mean interparticle distance 0.0262396757880128
//
// Discussion of the discrepancies
// TODO
#helpers.image_cell(t2, "plot_force_computation_time")
// Computed for 10^4 particles => mesh will scale better for larger systems
== Time integration
*Integration step*
#helpers.code_reference_cell(t2, "function_runge_kutta")
*Timesteps*
Chosen such that the displacement per step stays small compared to the inter-particle distance [#link(<task2:integration_timestep>)[code]]:
$
op(d)t = 10^(-4) dot S / v_"part"
$
// too large timesteps lead to unstable systems <=> integration not accurate enough
*Full integration*
[#link(<task2:function_time_integration>)[code]]
#pagebreak()
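A sketch of the timestep criterion (assumptions: $S$ is the mean interparticle separation and the fastest particle sets the global step):

```python
import numpy as np

def timestep(separation, velocities, eta=1e-4):
    # dt = eta * S / v: the fastest particle moves only a small
    # fraction of the interparticle separation S per step
    v_max = np.max(np.linalg.norm(velocities, axis=1))
    return eta * separation / v_max
```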
== First results
#helpers.image_cell(t2, "plot_system_evolution")
== Varying the softening
#helpers.image_cell(t2, "plot_second_system_evolution")
== Stability [#link("../task2_nsquare_integration.gif")[1 epsilon]]
#page(
columns: 2
)[
#helpers.image_cell(t2, "plot_integration_stability")
]
== Particle mesh solver
#helpers.image_cell(t2, "plot_pm_solver_integration")
#helpers.image_cell(t2, "plot_pm_solver_stability")
= Appendix - Code <appendix>
== Code
#helpers.code_reference_cell(t1, "function_apply_units")
<task1:function_apply_units>
#pagebreak(weak: true)
#helpers.code_reference_cell(t1, "function_density_distribution")
<task1:function_density_distribution>
#pagebreak(weak: true)
#helpers.code_reference_cell(t1, "function_analytical_forces")
<task1:function_analytical_forces>
#pagebreak(weak: true)
#helpers.code_reference_cell(t1, "function_n2_forces")
<task1:function_n2_forces>
#pagebreak(weak: true)
#helpers.code_reference_cell(t1, "function_interparticle_distance")
<task1:function_interparticle_distance>
#pagebreak(weak: true)
#helpers.code_cell(t1, "compute_relaxation_time")
<task1:compute_relaxation_time>
#pagebreak(weak: true)
#helpers.code_reference_cell(t2, "function_mesh_force")
<task2:function_mesh_force>
#pagebreak(weak: true)
#helpers.code_cell(t2, "integration_timestep")
<task2:integration_timestep>
#pagebreak(weak: true)
#helpers.code_cell(t2, "function_time_integration")
<task2:function_time_integration>
#context {
counter(page).update(locate(<appendix>).page())
}