Commit 190cef42 authored by Andrey Filippov

Project details on the CUAS mode

parent 2716c529
@@ -180,6 +180,44 @@ This fails in some low-altitude forest cases. The correction should be per-seque
- IMS/camera orientation offset (`Interscene.getQuaternionCorrection()`) is disabled and should move to per-sequence. `QuaternionLma` should be restored to use linear motion plus rotations. We should weight angular samples (closer to reference are more accurate) and ensure time consistency. Angular data is in `*-egomotion.csv` columns 6–8.
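The weighting idea above (samples closer in time to the reference are more accurate) can be sketched as a simple temporal falloff. This is a minimal illustration, not the actual `QuaternionLma` API: the class name, the Gaussian falloff, and the `sigma` scale are all assumptions.

```java
import java.util.Arrays;

// Hypothetical sketch: per-sample weights for angular data (e.g. from
// *-egomotion.csv columns 6-8), falling off with temporal distance from
// the reference scene. Gaussian shape and sigma (seconds) are assumptions.
public class AngularWeights {
    public static double[] weights(double[] timestamps, double tRef, double sigma) {
        double[] w = new double[timestamps.length];
        for (int i = 0; i < timestamps.length; i++) {
            double dt = timestamps[i] - tRef;
            w[i] = Math.exp(-0.5 * dt * dt / (sigma * sigma)); // peak 1.0 at tRef
        }
        return w;
    }

    public static void main(String[] args) {
        // symmetric falloff around the reference timestamp
        System.out.println(Arrays.toString(weights(new double[]{-2, -1, 0, 1, 2}, 0.0, 1.0)));
    }
}
```

Such weights would multiply the angular residuals inside the LMA fit, so distant (less accurate) samples pull the solution less.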
## CUAS vs Moving-Camera Modes (Operational Difference Notes)
These notes describe the practical difference between:
- the moving-camera aerial pipeline (foliage/restoration), and
- the CUAS pipeline with gimbal rotation for target detection.
### Moving-camera aerial mode (current foliage/global-LMA development)
- Camera platform is moving.
- A scene sequence (typically ~500 scenes) is split into overlapping segments.
- Each segment is built around a captured center/reference scene.
- Scenes are transformed to be viewed from that segment reference pose.
- Matching/optimization is segment-based because far scenes may not overlap directly.
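The segment layout described above (overlapping segments, each built around a center/reference scene) can be sketched as index arithmetic. Segment length and overlap values are illustrative assumptions, not the pipeline's actual parameters:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of overlapping-segment splitting for the moving-camera mode.
// Returns {start, center, end} per segment; segLen must exceed overlap.
public class Segments {
    public static List<int[]> split(int numScenes, int segLen, int overlap) {
        List<int[]> segs = new ArrayList<>();
        int step = segLen - overlap; // consecutive segments share 'overlap' scenes
        for (int start = 0; start < numScenes; start += step) {
            int end = Math.min(start + segLen, numScenes);
            segs.add(new int[]{start, (start + end) / 2, end}); // center = reference scene
            if (end >= numScenes) break;
        }
        return segs;
    }

    public static void main(String[] args) {
        for (int[] s : split(500, 100, 20)) {
            System.out.println(s[0] + " .. " + s[1] + " .. " + s[2]);
        }
    }
}
```

The center index stands in for the captured reference scene toward which the other scenes in the segment are transformed.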
### CUAS mode (gimbal-rotation, mostly parked after 2025-10-27)
- Camera is fixed in ground frame while gimbal rotates view direction around a fixed axis.
- A virtual reference scene is used (`center_CLT = QuadCLT.restoreCenterClt(...)`) rather than a captured reference frame.
- Typical scan: about one full revolution per ~3 seconds (~0.33 Hz), with an angular scan radius of about 3°.
- Resulting inter-frame shift remains small (roughly <10% of image size), so all scenes in a long sequence (~500) can overlap a single virtual center.
- Full-sequence processing is therefore feasible without segment splitting logic used in moving-camera mode.
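The small inter-frame shift follows from the scan geometry above. A quick back-of-the-envelope check, assuming a 60 Hz frame rate (an assumption; the notes do not state the frame rate):

```java
// Angular distance traveled along the scan circle per frame, in degrees:
// path length per revolution is 2*pi*radius, covered at fScanHz revolutions/s.
public class CuasScan {
    public static double perFrameStepDeg(double radiusDeg, double fScanHz, double fpsHz) {
        double pathPerRevDeg = 2.0 * Math.PI * radiusDeg;
        return pathPerRevDeg * fScanHz / fpsHz;
    }

    public static void main(String[] args) {
        // radius ~3 deg, ~0.33 Hz scan, 60 Hz frames (frame rate assumed)
        System.out.println(perFrameStepDeg(3.0, 0.33, 60.0) + " deg/frame");
    }
}
```

With these numbers the view direction moves only ~0.1° per frame, consistent with every scene overlapping the single virtual center.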
### Why CUAS uses rotation
The main target class is difficult: simultaneously low-contrast, small, and slow-moving in pixel coordinates.
FPN (fixed-pattern noise) in microbolometer LWIR sensors can dominate this regime.
Rotation introduces faster target/background variation in image coordinates while FPN drifts more slowly, improving separability.
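The separability argument can be shown with a toy per-pixel model: each sample is a static FPN offset plus a scene term that varies quickly under rotation. Subtracting the temporal mean removes the static component exactly; this is an illustration of the principle only, not the actual background-extraction code:

```java
// Toy model: sample(t) = static FPN offset + time-varying scene term.
// Temporal-mean subtraction cancels the static offset; the scene term
// survives to the extent it averages to ~0 over the scan.
public class FpnSeparation {
    public static double[] removeTemporalMean(double[] series) {
        double mean = 0;
        for (double v : series) mean += v;
        mean /= series.length;
        double[] out = new double[series.length];
        for (int i = 0; i < series.length; i++) out[i] = series[i] - mean;
        return out;
    }

    public static void main(String[] args) {
        // FPN offset 5.0 plus a scene term of +1/-1: the offset cancels.
        double[] out = removeTemporalMean(new double[]{6.0, 4.0});
        System.out.println(out[0] + ", " + out[1]);
    }
}
```

In the real pipeline the averaging is rotation-compensated (next section), so the "scene term" is the target motion against a stabilized background, while slowly drifting FPN behaves approximately like the static offset here.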
### FPN behavior and mitigation in CUAS
- FPN is not perfectly static over long periods.
- Boson-class sensors use periodic shutter-based correction (multiple dark frames every configured interval), which reduces but does not eliminate FPN.
- CUAS background extraction uses rotation-compensated averaging and subtraction of the averaged background.
- Two FPN components are targeted:
- persistent sensor/lens-embedded patterns (defects, dirt, etc.),
- row/column-correlated noise (sensor-architecture correlated stripes).
- Additional row/column normalization (1D row/column averages) is used before/with subtraction.
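The 1D row/column normalization mentioned above can be sketched as subtracting per-row and per-column means while adding the global mean back once (so it is not removed twice). This is a minimal model of stripe-noise removal, not the pipeline's actual implementation:

```java
// Sketch: remove row/column-correlated stripes by subtracting 1-D row and
// column means; the global mean is added back to avoid double subtraction.
public class RowColNorm {
    public static double[][] normalize(double[][] img) {
        int h = img.length, w = img[0].length;
        double[] rowMean = new double[h];
        double[] colMean = new double[w];
        double total = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                rowMean[y] += img[y][x] / w;
                colMean[x] += img[y][x] / h;
                total += img[y][x] / (h * w);
            }
        }
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] = img[y][x] - rowMean[y] - colMean[x] + total;
        return out;
    }
}
```

A pure horizontal stripe pattern (each row constant) is removed completely by this step, which is the row/column-correlated FPN component it targets.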
### Code-state context
- CUAS-focused work was parked around commit `8946631f767ddda77365ebc32d45eef1e3d21936` (October 27, 2025).
- Since then, development emphasis shifted to foliage/global-LMA paths; CUAS branches may require restoration and re-validation for bit-rot before new-data testing.
## Latest Additions
### Segment freezing with `keep_segments`
Index scenes (`*-index`) contain `*-INTERFRAME.corr-xml` with keys like:
......