Commit 7bc39323 authored by Andrey Filippov

Removed some stale code, fixed centered mode

parent 04b52a7f
......@@ -218,6 +218,124 @@ Rotation introduces faster target/background variation in image coordinates whil
- CUAS-focused work was parked around commit `8946631f767ddda77365ebc32d45eef1e3d21936` (October 27, 2025).
- Since then, development emphasis shifted to foliage/global-LMA paths; CUAS branches may require restoration and re-validation for bit-rot before new-data testing.
### Current CUAS target-detection flow (March 2026 LV data)
For the March 2026 LV runs, the current working input list is:
```
/home/elphel/lwir16-proc/LV/lists/lv_site_05.list
```
Per-sequence outputs are created under linked center directories such as:
```
/home/elphel/lwir16-proc/LV/linked/centeres_1773389747-1773390382/1773389818_152542-CENTER/v07
```
The first restored CUAS stage currently produces merged LWIR stacks after FPN and row/column mitigation. A typical file is:
```
1773389818_152542-CENTER-CUAS-MERGED-CUAS.tiff
```
In this stack:
- the first slice is the sequence average,
- the remaining slices are individual scenes with that average subtracted.
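The stack layout above can be sketched as a small helper (hypothetical class name, not the actual CuasRanging code):

```java
// Hypothetical sketch: build a CUAS-style merged stack where slice 0 is the
// per-pixel sequence average and slices 1..N are the scenes minus that average.
public class MergedStackSketch {
    /** scenes[n][pix] -> stack[n+1][pix]; stack[0] holds the average. */
    public static float[][] buildMergedStack(float[][] scenes) {
        int num = scenes.length, len = scenes[0].length;
        float[][] stack = new float[num + 1][len];
        for (float[] scene : scenes) {
            for (int i = 0; i < len; i++) stack[0][i] += scene[i] / num;
        }
        for (int n = 0; n < num; n++) {
            for (int i = 0; i < len; i++) stack[n + 1][i] = scenes[n][i] - stack[0][i];
        }
        return stack;
    }
}
```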
The corresponding unsharp-masked version, for example:
```
1773389818_152542-CENTER-CUAS-MERGED-CUAS-UM2.0_1.000_250.tiff
```
is the same merged sequence after unsharp masking (for the example above: sigma 2.0 px, amount 1.0).
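As a reference for the sigma/amount parameters, here is the classic unsharp-mask formulation on a 1D signal (illustrative only; the actual code operates on 2D slices, and its exact normalization may differ):

```java
// Classic unsharp mask sketch: out = in + amount * (in - gaussianBlur(in, sigma)).
// 1D with edge clamping, for illustration of the sigma/amount roles only.
public class UnsharpSketch {
    public static double[] unsharp(double[] in, double sigma, double amount) {
        int r = (int) Math.ceil(3 * sigma);
        double[] k = new double[2 * r + 1];
        double sum = 0;
        for (int i = -r; i <= r; i++) sum += (k[i + r] = Math.exp(-i * i / (2 * sigma * sigma)));
        for (int i = 0; i < k.length; i++) k[i] /= sum; // normalize the kernel
        double[] out = new double[in.length];
        for (int x = 0; x < in.length; x++) {
            double blur = 0;
            for (int i = -r; i <= r; i++) {
                int xi = Math.min(in.length - 1, Math.max(0, x + i)); // clamp at edges
                blur += k[i + r] * in[xi];
            }
            out[x] = in[x] + amount * (in[x] - blur); // add back the high-pass part
        }
        return out;
    }
}
```

A flat signal passes through unchanged, since the high-pass term vanishes.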
The next CUAS stage is the moving-target stage:
- estimate a prevailing per-tile motion vector using 2D phase correlation on the merged/unsharp sequence,
- use those vectors as a "virtual moving camera",
- shift each contributing frame according to that motion vector,
- accumulate the shifted frames as a long exposure to improve SNR of dim moving targets,
- then locate/freeze targets on the accumulated data and render target overlays/video products.
This stage is where the sky mask matters operationally: terrain-rich areas are suppressed before accepting local maxima, so target search is effectively constrained to the allowed sky region.
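The "virtual moving camera / long exposure" idea in the list above can be sketched as follows (nearest-neighbor shifts and a single global velocity, purely for illustration; the real code works per tile):

```java
// Sketch of motion-compensated accumulation: sample each frame at the position
// the target occupies at time t (offset t*(vx,vy)) and average, so a
// constant-velocity target stays aligned while noise averages down ~sqrt(N).
public class LongExposureSketch {
    public static double[] accumulate(double[][] frames, int w, double vx, double vy) {
        int h = frames[0].length / w;
        double[] acc = new double[w * h];
        int[] cnt = new int[w * h];
        for (int t = 0; t < frames.length; t++) {
            int dx = (int) Math.round(t * vx), dy = (int) Math.round(t * vy);
            for (int y = 0; y < h; y++) for (int x = 0; x < w; x++) {
                int sx = x + dx, sy = y + dy; // target position at time t
                if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                    acc[y * w + x] += frames[t][sy * w + sx];
                    cnt[y * w + x]++;
                }
            }
        }
        for (int i = 0; i < acc.length; i++) if (cnt[i] > 0) acc[i] /= cnt[i];
        return acc;
    }
}
```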
#### CUAS motion-scan details: keyframes, pair geometry, and FAST/SLOW split
The per-tile motion vectors are not estimated from a single frame pair. For each keyframe index, the code builds a short temporal block and accumulates multiple pairwise correlations that all correspond to nearly the same constant-velocity target motion.
The main scan geometry is controlled by:
- `cuas_corr_offset`: separation between the two halves of each pair set
- `cuas_corr_pairs`: number of correlation pairs accumulated inside one keyframe block
- `cuas_half_step`: if `true`, adjacent keyframes advance by `cuas_corr_offset/2`; otherwise by `cuas_corr_offset`
- `cuas_precorr_ra` and `cuas_corr_step`: optional temporal smoothing/decimation before correlation
For one keyframe:
- `seq_length = cuas_corr_offset + cuas_corr_pairs`
- `corr_inc = cuas_half_step ? (cuas_corr_offset / 2) : cuas_corr_offset`
- for keyframe `n`, `frame0 = start_frame + n * corr_inc`
- `frame1 = frame0 + cuas_corr_offset`
Inside that keyframe, the code correlates multiple temporally aligned pairs:
- `(frame0 + dframe)` against `(frame1 + dframe)`
- where `dframe` runs from `corr_ra_step/2` up to (but not including) `cuas_corr_pairs`, in steps of `corr_ra_step`
So with the common mental model of `offset = 8` and `pairs = 8`, the effective pairs are approximately:
- `0 vs 8`
- `1 vs 9`
- ...
- `7 vs 15`
or the decimated equivalent if `cuas_corr_step > 1`.
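The keyframe/pair geometry above can be written out as a small sketch (parameter names mirror the config; `corr_ra_step = 1` is assumed, i.e. no decimation):

```java
// Sketch of the CUAS pair geometry: for keyframe n, enumerate the frame pairs
// (frame0 + dframe) vs (frame1 + dframe) that are accumulated together.
public class PairGeometrySketch {
    public static int[][] pairsForKeyframe(
            int startFrame, int n, int offset, int pairs, boolean halfStep) {
        int corrInc = halfStep ? offset / 2 : offset; // keyframe advance
        int frame0 = startFrame + n * corrInc;
        int frame1 = frame0 + offset;
        int[][] result = new int[pairs][2];
        for (int dframe = 0; dframe < pairs; dframe++) {
            result[dframe][0] = frame0 + dframe;
            result[dframe][1] = frame1 + dframe;
        }
        return result;
    }
}
```

With `offset = 8`, `pairs = 8`, keyframe 0 yields exactly the `0 vs 8` … `7 vs 15` list above; with `cuas_half_step` on, keyframe 1 starts at frame 4.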
Each pair is correlated tile-by-tile in transform domain (`TDCorrTile`). The pair results are accumulated before conversion back to pixel-domain correlation. If `cuas_smooth` is enabled, the individual pair contributions are weighted by a sine window across the temporal block. After accumulation:
- TD correlations are normalized and converted to pixel-domain using `cuas_fat_zero`
- per-tile local maxima are extracted
- centroid recentering (`cuas_cent_radius`, `cuas_n_recenter`) estimates `vx`, `vy`, peak strength, and in-center fraction
The `*-CORR2D.tiff` files visualize these per-tile 15x15 correlation maps, one keyframe per slice.
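A plausible sketch of the centroid-recentering step on one such 15x15 tile (hypothetical helper; the actual `cuas_cent_radius`/`cuas_n_recenter` handling and strength/in-center-fraction outputs live in the real code):

```java
// Sketch: start from the argmax of a correlation tile, then iterate a
// windowed centroid (radius ~ cuas_cent_radius, count ~ cuas_n_recenter)
// to refine the per-tile motion estimate (vx, vy) relative to tile center.
public class CentroidSketch {
    /** Returns {vx, vy} relative to the tile center ((size-1)/2, (size-1)/2). */
    public static double[] recenter(double[] corr, int size, double radius, int iterations) {
        int imax = 0;
        for (int i = 1; i < corr.length; i++) if (corr[i] > corr[imax]) imax = i;
        double cx = imax % size, cy = imax / size;
        for (int it = 0; it < iterations; it++) {
            double sw = 0, sx = 0, sy = 0;
            for (int y = 0; y < size; y++) for (int x = 0; x < size; x++) {
                double dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius) {
                    double w = Math.max(corr[y * size + x], 0); // ignore negative lobes
                    sw += w; sx += w * x; sy += w * y;
                }
            }
            if (sw <= 0) break;
            cx = sx / sw; cy = sy / sw;
        }
        return new double[]{cx - (size - 1) / 2.0, cy - (size - 1) / 2.0};
    }
}
```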
FAST and SLOW currently share this same motion-scan geometry. The difference is only in the temporal prefiltering of the input sequence:
- FAST: `temporalUnsharpMask()` using `cuas_temporal_um`
- SLOW: `runningGaussian()` using `cuas_slow_ra`
Both then call the same `prepareMotionBasedSequence()` path and therefore use the same `cuas_corr_offset`, `cuas_corr_pairs`, `cuas_half_step`, `cuas_precorr_ra`, and `cuas_corr_step`.
Practical consequence: if targets are too fast and the peaks in `*-FAST-CORR2D.tiff` hit the ±7 px borders of the correlation maps, reducing `cuas_corr_offset` will reduce the apparent motion in both FAST and SLOW modes. There is currently no separate fast-only correlation offset parameter; adding one would require a code change.
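One plausible reading of the FAST/SLOW prefilter split, with an exponential running average standing in for `runningGaussian()` (illustrative stand-ins, not the actual elphel code):

```java
// SLOW smooths each pixel's time series; FAST keeps only the fast component
// by subtracting that smoothed version (a temporal unsharp mask).
// The exponential running average here is a stand-in for runningGaussian().
public class TemporalPrefilterSketch {
    /** SLOW path stand-in: smoothed time series for one pixel. */
    public static double[] slow(double[] series, double alpha) {
        double[] out = new double[series.length];
        double avg = series[0];
        for (int t = 0; t < series.length; t++) {
            avg += alpha * (series[t] - avg); // exponential running average
            out[t] = avg;
        }
        return out;
    }
    /** FAST path stand-in: temporal unsharp mask = sample minus running average. */
    public static double[] fast(double[] series, double alpha) {
        double[] smoothed = slow(series, alpha);
        double[] out = new double[series.length];
        for (int t = 0; t < series.length; t++) out[t] = series[t] - smoothed[t];
        return out;
    }
}
```

A static pixel survives the SLOW path unchanged and vanishes in the FAST path, which is why the two modes separate slow and fast targets while sharing the same downstream correlation geometry.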
### Where the current code does this
- `CuasRanging.processMovingTargetsMulti()` renders the per-sequence `...-CUAS...` stack and applies the unsharp mask before creating `fpixels`.
- `CuasMotion.processMovingTargetsMulti()` runs the fast/slow motion-target preparation and resolves non-conflicting motion candidates.
- `CuasMotion.generateExtractFilterMovingTargets()` expands the target field, performs the motion-compensated accumulation, and renders target/background outputs.
- `CuasMotion.shiftAndRenderAccumulate()` is the "virtual moving camera / long exposure" step.
- `CuasMotion.getAccumulatedCoordinates()` searches the accumulated tiles for isolated local maxima, applies the sky mask, and refines target coordinates with the CUAS LMA fit.
### Top-level trigger path
The March 2026 CUAS workflow order is:
1. Normal per-sequence CUAS scene processing (`cuas_proc_mode = 0`) produces the merged/unsharp stacks, motion vectors, accumulated target frames, and per-sequence CUAS outputs.
2. `CUAS Combine` runs later on the already generated linked centers (`CuasMultiSeries.processGlobals()`).
3. `CUAS Video` runs after that to combine/package already produced CUAS video outputs.
So the motion-vector and long-exposure stage belongs to the first, per-sequence CUAS processing pass, while `CUAS Combine` and `CUAS Video` are later-stage steps.
The direct gate for producing the accumulated target outputs is:
```
IntersceneMatchParameters.cuas_generate
```
shown in the UI as:
```
Generate and save detected targets
```
If that checkbox is off, the code will still do earlier target/ranging work but will skip the accumulated target TIFF/video generation stage.
## Latest Additions
### Segment freezing with `keep_segments`
Index scenes (`*-index`) contain `*-INTERFRAME.corr-xml` with keys like:
......
......@@ -595,7 +595,8 @@ import ij.process.ImageProcessor;
not_empty = true;
fpixels=new float[pixels[i].length];
for (j=0;j<fpixels.length;j++) fpixels[j]=(float)pixels[i][j];
array_stack.addSlice(titles[i], fpixels);
String titlesi = (i < titles.length) ? titles[i] : ("st" + i); // when debugging, titles may not match pixels
array_stack.addSlice(titlesi, fpixels);
}
if (not_empty) {
ImagePlus imp_stack = new ImagePlus(title, array_stack);
......
......@@ -584,12 +584,18 @@ public class CuasMultiSeries {
public void processGlobals() {
int debugLevel = 0;
int setup_uas = setupUasTiles();
System.out.println("processGlobals(): setupUasTiles() -> "+setup_uas);
int assign_uas_target = assignUasTarget();
System.out.println("processGlobals(): assignUasTarget() -> "+assign_uas_target);
if (uasLogReader != null) {
int setup_uas = setupUasTiles();
System.out.println("processGlobals(): setupUasTiles() -> "+setup_uas);
int assign_uas_target = assignUasTarget();
System.out.println("processGlobals(): assignUasTarget() -> "+assign_uas_target);
} else {
System.out.println("processGlobals(): no UAS flight log, skipping UAS-specific assignment");
}
printAssignmentStats();
printUasStats();
if (uasLogReader != null) {
printUasStats();
}
printAssignments();
combineLocalTargets(
true, // boolean skip_assigned, // if global ID is assigned, do not mess with that pair
......@@ -608,7 +614,9 @@ public class CuasMultiSeries {
min_disparity_velocity); // double min_disparit_velocity);
linearRangeInterpolation();
printAverageRanges(avg_range_ts);
printAverageVsUASRanges(avg_range_ts);
if (uasLogReader != null) {
printAverageVsUASRanges(avg_range_ts);
}
saveUpdatedTargets();
//
ImagePlus imp_radar = testGenerateRadarImage(
......@@ -692,6 +700,10 @@ public class CuasMultiSeries {
}
public void printAverageVsUASRanges(double [][][] avg_range_ts) {
if (uasLogReader == null) {
System.out.println("printAverageVsUASRanges(): skipping, no UAS flight log");
return;
}
System.out.println("printAverageVsUASRanges(): Compare average UAS range with flight log");
int ngtarg = 0;
System.out.println("name, timestamp,scene,range,fl_range,axial_velocity,disparity");
......@@ -733,6 +745,9 @@ public class CuasMultiSeries {
* @return number of keyframes with missing UAS log (should be 0)
*/
public int setupUasTiles() {
if (uasLogReader == null) {
return 0;
}
final Thread[] threads = ImageDtt.newThreadArray();
final AtomicInteger ai = new AtomicInteger(0);
final AtomicInteger amiss = new AtomicInteger(0);
......@@ -771,6 +786,9 @@ public class CuasMultiSeries {
* @return number of assigned local targets
*/
public int assignUasTarget() {
if (uasLogReader == null) {
return 0;
}
final double tmtch_pix= clt_parameters.imp.cuas_tmtch_pix;
final double tmtch_frac = clt_parameters.imp.cuas_tmtch_frac;
final int tileSize = GPUTileProcessor.DTT_SIZE;
......
......@@ -211,8 +211,7 @@ public class CuasRanging {
return null;
}
if (uasLogReader == null) {
System.out.println("uasLogReader == null, it is needed");
return null;
System.out.println("processMovingTargetsMulti(): proceeding without UAS flight log");
}
this.cuasMotion = new CuasMotion (
clt_parameters, // CLTParameters clt_parameters,
......@@ -347,7 +346,9 @@ public class CuasRanging {
{ // always save after calculating ranges and adding UAS data. Later may be moved to after rangeTargets() above, only when calculated
getRangeFromDisparity(targets); // adds RSLT_INFINITY field
// generate results video (move from earlier)
addUasData(targets); // add flight log data to the nearest tiles, either existing or new
if (uasLogReader != null) {
addUasData(targets); // add flight log data to the nearest tiles, either existing or new
}
// re-saving data with flight log (ground truth) additions
ImagePlus imp_with_range = CuasMotion.showTargetSequence(
targets, // double [][][] vector_fields_sequence,
......@@ -357,7 +358,7 @@ public class CuasRanging {
cuasMotion.getTilesX()); // int tilesX) {
center_CLT.saveImagePlusInModelDirectory(imp_with_range); // ImagePlus imp)
}
if (generate_csv) {
if (generate_csv && (uasLogReader != null)) {
saveTargetStats(targets); // final double [][][] targets_single) {
}
}
......@@ -1017,8 +1018,19 @@ public class CuasRanging {
System.out.println("rangeTargets()): disparity_map==null on nrefine="+nrefine);
break;
}
/*
double disp_diff = disparity_map[ImageDtt.DISPARITY_INDEX_POLY][ref_tile]; // null
double str = disparity_map[ImageDtt.DISPARITY_INDEX_POLY+1][ref_tile];
*/
double disp_diff = Double.NaN;
double str = 0;
if ((disparity_map[ImageDtt.DISPARITY_INDEX_POLY] == null) || (disparity_map[ImageDtt.DISPARITY_INDEX_POLY+1] == null)) {
System.out.println("rangeTargets()): disparity_map[ImageDtt.DISPARITY_INDEX_POLY]==null, nrefine="+nrefine);
} else {
disp_diff = disparity_map[ImageDtt.DISPARITY_INDEX_POLY][ref_tile]; // null
str = disparity_map[ImageDtt.DISPARITY_INDEX_POLY+1][ref_tile];
}
if (Double.isNaN(disp_diff)) {
if (use_non_lma) {
disp_diff = disparity_map[ImageDtt.DISPARITY_INDEX_CM][ref_tile];
......@@ -2699,6 +2711,9 @@ public class CuasRanging {
final double [][][] targets_single) {
int num_seq = targets_single.length;
UasLogReader uasLogReader = cuasMotion.getUasLogReader();
if (uasLogReader == null) {
return;
}
String [] slice_titles = cuasMotion.getSliceTitles(); // timestamps
int tilesX = cuasMotion.getTilesX();
int tilesY = targets_single[0].length / tilesX;
......@@ -2731,6 +2746,11 @@ public class CuasRanging {
// relies on calcMatchingTargetsLengths(.., true,...) called from recalcOmegas() to set [RSLT_GLOBAL]
public void saveTargetStats(
final double [][][] targets_single) {
UasLogReader uasLogReader = cuasMotion.getUasLogReader();
if (uasLogReader == null) {
System.out.println("saveTargetStats(): skipping target-vs-flight-log CSV, no UAS flight log provided");
return;
}
final int tilesX = cuasMotion.getTilesX();
final GeometryCorrection gc = center_CLT.getGeometryCorrection();
final int tileSize = GPUTileProcessor.DTT_SIZE;
......@@ -2767,7 +2787,6 @@ public class CuasRanging {
}
sb.append("\n"); // there will be 1 extra blank column
String [] slice_titles = cuasMotion.getSliceTitles(); // timestamps
UasLogReader uasLogReader = cuasMotion.getUasLogReader();
ErsCorrection ersCorrection = center_CLT.getErsCorrection();
for (int nseq = 0; nseq < num_seq; nseq++) {
String timestamp = slice_titles[nseq];
......
......@@ -7234,7 +7234,9 @@ java.lang.NullPointerException
// boolean test_vegetation = true;
if (master_CLT.hasCenterClt()) { // cuas mode
if (debugLevel >-3) {
System.out.println("===== Running CUAS ranging. =====");
}
CuasRanging cuasRanging = new CuasRanging (
clt_parameters, // CLTParameters clt_parameters,
master_CLT, // QuadCLT center_CLT,
......