- CUAS-focused work was parked at commit `8946631f767ddda77365ebc32d45eef1e3d21936` (October 27, 2025).
- Since then, development emphasis shifted to the foliage/global-LMA paths; the CUAS branches may need restoration and re-validation (checking for bit rot) before testing on new data.
### Current CUAS target-detection flow (March 2026 LV data)
For the March 2026 LV runs, the current working input list is:
```
/home/elphel/lwir16-proc/LV/lists/lv_site_05.list
```
Per-sequence outputs are created under linked center directories. One of those outputs is the same merged sequence after unsharp masking (for the example above: sigma 2.0 px, amount 1.0).
The next CUAS stage is the moving-target stage:
- estimate a prevailing per-tile motion vector using 2D phase correlation on the merged/unsharp sequence,
- use those vectors as a "virtual moving camera",
- shift each contributing frame according to that motion vector,
- accumulate the shifted frames as a long exposure to improve SNR of dim moving targets,
- then locate/freeze targets on the accumulated data and render target overlays/video products.
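The "virtual moving camera / long exposure" step above can be sketched as follows. This is a simplified Python/numpy illustration, not the actual Java implementation (`CuasMotion.shiftAndRenderAccumulate()`); it uses whole-pixel `np.roll` shifts where the real code works per-tile with subpixel motion vectors, and the function name is hypothetical.

```python
import numpy as np

def accumulate_long_exposure(frames, motion, ref=0):
    """Shift each frame so a target moving with constant per-frame
    motion (dy, dx) stays aligned with its position in frame `ref`,
    then average the stack: a synthetic long exposure that boosts
    SNR of the dim moving target while smearing the background."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, f in enumerate(frames):
        dy = int(round((ref - k) * motion[0]))
        dx = int(round((ref - k) * motion[1]))
        acc += np.roll(f, shift=(dy, dx), axis=(0, 1))
    return acc / len(frames)
```

With noise-free synthetic frames containing a dot moving one pixel per frame, the accumulated peak stays at the dot's position in the reference frame at full amplitude, while any static background would be smeared along the motion direction.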
This stage is where the sky mask matters operationally: terrain-rich areas are suppressed before accepting local maxima, so target search is effectively constrained to the allowed sky region.
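The sky-mask gating can be illustrated with a minimal sketch: terrain tiles are forced to a sentinel value before the peak test, so clutter there can neither become a peak nor out-compete a nearby sky peak. The function name and the strict-maximum criterion are assumptions for illustration; the actual logic lives in `CuasMotion.getAccumulatedCoordinates()`.

```python
import numpy as np

def masked_local_maxima(score, sky_mask, min_val):
    """Return (row, col) positions that are strict local maxima of
    `score` inside the allowed sky region.  Tiles with
    sky_mask == False (terrain) are suppressed to -inf before the
    3x3 peak test, so target search is constrained to the sky."""
    s = np.where(sky_mask, score, -np.inf)
    h, w = s.shape
    peaks = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            v = s[r, c]
            win = s[r - 1:r + 2, c - 1:c + 2]
            if v >= min_val and v == win.max() and (win == v).sum() == 1:
                peaks.append((r, c))
    return peaks
```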
#### CUAS motion-scan details: keyframes, pair geometry, and FAST/SLOW split
The per-tile motion vectors are not estimated from a single frame pair. For each keyframe index, the code builds a short temporal block and accumulates multiple pairwise correlations that all correspond to nearly the same constant-velocity target motion.
The main scan geometry is controlled by:
- `cuas_corr_offset`: separation between the two halves of each pair set
- `cuas_corr_pairs`: number of correlation pairs accumulated inside one keyframe block
- `cuas_half_step`: if `true`, adjacent keyframes advance by `cuas_corr_offset/2`; otherwise by `cuas_corr_offset`
- `cuas_precorr_ra` and `cuas_corr_step`: optional temporal smoothing/decimation before correlation

For keyframe `n`:
- `frame0 = start_frame + n * corr_inc`
- `frame1 = frame0 + cuas_corr_offset`

Inside that keyframe block, the code correlates multiple temporally aligned pairs:
- `(frame0 + dframe)` against `(frame1 + dframe)`
- where `dframe` starts at `corr_ra_step/2` and advances by `corr_ra_step` while `dframe < cuas_corr_pairs`
So with the common mental model of `offset = 8` and `pairs = 8`, the effective pairs are approximately:
- `0 vs 8`
- `1 vs 9`
- ...
- `7 vs 15`
or the decimated equivalent if `cuas_corr_step > 1`.
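The pair geometry above can be made concrete with a small sketch. The parameter names follow the text; the helper function itself is hypothetical and only mirrors the indexing rules, not the actual Java scan loop.

```python
def correlation_pairs(n, start_frame=0, cuas_corr_offset=8,
                      cuas_corr_pairs=8, cuas_half_step=False,
                      corr_ra_step=1):
    """Enumerate the (frameA, frameB) index pairs correlated inside
    keyframe block `n`, following the scan geometry described above."""
    corr_inc = cuas_corr_offset // 2 if cuas_half_step else cuas_corr_offset
    frame0 = start_frame + n * corr_inc
    frame1 = frame0 + cuas_corr_offset
    pairs = []
    dframe = corr_ra_step // 2
    while dframe < cuas_corr_pairs:
        pairs.append((frame0 + dframe, frame1 + dframe))
        dframe += corr_ra_step
    return pairs
```

With the defaults (`offset = 8`, `pairs = 8`, step 1) this yields exactly the `0 vs 8` ... `7 vs 15` pattern listed above; with `corr_ra_step = 2` it yields the decimated equivalent.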
Each pair is correlated tile-by-tile in transform domain (`TDCorrTile`). The pair results are accumulated before conversion back to pixel-domain correlation. If `cuas_smooth` is enabled, the individual pair contributions are weighted by a sine window across the temporal block. After accumulation:
- TD correlations are normalized and converted to pixel-domain using `cuas_fat_zero`
The `*-CORR2D.tiff` files visualize these per-tile 15x15 correlation maps, one keyframe per slice.
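The accumulate-then-normalize order can be sketched in Python. This is not the `TDCorrTile` code: the real path works per 16x16 tile in the Java transform domain, and the exact `cuas_fat_zero` normalization formula is an assumption here; the sketch only shows the structure (sine-weighted accumulation of cross-spectra over pairs, then a single fat-zero-regularized conversion back to a pixel-domain correlation map).

```python
import numpy as np

def accumulated_phase_corr(pairs, fat_zero=0.05, smooth=True):
    """Accumulate transform-domain cross-spectra over several frame
    pairs (optionally sine-window weighted across the temporal block),
    then normalize with a 'fat zero' and invert to a centered
    pixel-domain correlation map."""
    acc, n = None, len(pairs)
    for i, (a, b) in enumerate(pairs):
        w = np.sin(np.pi * (i + 0.5) / n) if smooth else 1.0
        cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        acc = w * cross if acc is None else acc + w * cross
    # fat zero: regularized magnitude normalization of the summed spectra
    acc = acc / (np.abs(acc) + fat_zero * np.abs(acc).max())
    return np.fft.fftshift(np.real(np.fft.ifft2(acc)))
```

For a scene shifted by (0, +2) between the two frames of each pair, the correlation peak lands 2 px left of center, i.e. at the displacement of the first frame relative to the second.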
FAST and SLOW currently share this same motion-scan geometry. The difference is only in the temporal prefiltering of the input sequence:
- FAST: `temporalUnsharpMask()` using `cuas_temporal_um`
- SLOW: `runningGaussian()` using `cuas_slow_ra`
Both then call the same `prepareMotionBasedSequence()` path and therefore use the same `cuas_corr_offset`, `cuas_corr_pairs`, `cuas_half_step`, `cuas_precorr_ra`, and `cuas_corr_step`.
Practical consequence: if targets are too fast and the peaks in `*-FAST-CORR2D.tiff` hit the ±7 px borders of the 15x15 correlation maps, reducing `cuas_corr_offset` will reduce the apparent motion in both FAST and SLOW modes. There is currently no separate fast-only correlation offset parameter; adding one would require a code change.
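The two temporal prefilters can be sketched as follows. These are simplified Python stand-ins, not the actual `temporalUnsharpMask()` / `runningGaussian()` Java implementations (which may differ in windowing and edge handling); `sigma` here plays the role of `cuas_temporal_um` / `cuas_slow_ra`.

```python
import numpy as np

def running_gaussian(seq, sigma):
    """SLOW-branch sketch: each output frame is a Gaussian-weighted
    average of the sequence along the time axis."""
    seq = np.asarray(seq, dtype=float)
    t = np.arange(len(seq))
    out = np.empty_like(seq)
    for i in range(len(seq)):
        w = np.exp(-0.5 * ((t - i) / sigma) ** 2)
        w /= w.sum()
        out[i] = np.tensordot(w, seq, axes=(0, 0))
    return out

def temporal_unsharp_mask(seq, sigma):
    """FAST-branch sketch: subtract the temporal low-pass so that
    only fast temporal changes (fast-moving targets) remain."""
    return np.asarray(seq, dtype=float) - running_gaussian(seq, sigma)
```

A static scene comes through the SLOW filter unchanged and is zeroed by the FAST filter, which is why fast movers survive FAST prefiltering while slow/static content survives SLOW.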
### Where the current code does this
- `CuasRanging.processMovingTargetsMulti()` renders the per-sequence `...-CUAS...` stack and applies the unsharp mask before creating `fpixels`.
- `CuasMotion.processMovingTargetsMulti()` runs the fast/slow motion-target preparation and resolves non-conflicting motion candidates.
- `CuasMotion.generateExtractFilterMovingTargets()` expands the target field, performs the motion-compensated accumulation, and renders target/background outputs.
- `CuasMotion.shiftAndRenderAccumulate()` is the "virtual moving camera / long exposure" step.
- `CuasMotion.getAccumulatedCoordinates()` searches the accumulated tiles for isolated local maxima, applies the sky mask, and refines target coordinates with the CUAS LMA fit.
### Top-level trigger path
The March 2026 CUAS workflow order is:
1. Normal per-sequence CUAS scene processing (`cuas_proc_mode = 0`) produces the merged/unsharp stacks, motion vectors, accumulated target frames, and per-sequence CUAS outputs.
2. `CUAS Combine` runs later on the already generated linked centers (`CuasMultiSeries.processGlobals()`).
3. `CUAS Video` runs after that to combine/package already produced CUAS video outputs.
So the motion-vector and long-exposure stage belongs to the first, per-sequence CUAS processing pass, while `CUAS Combine` and `CUAS Video` are later-stage steps.
The direct gate for producing the accumulated target outputs is:
```
IntersceneMatchParameters.cuas_generate
```
shown in the UI as:
```
Generate and save detected targets
```
If that checkbox is off, the code will still do earlier target/ranging work but will skip the accumulated target TIFF/video generation stage.
## Latest Additions
### Segment freezing with `keep_segments`
Index scenes (`*-index`) contain `*-INTERFRAME.corr-xml` with keys like: