Not sure, but I have a hunch this might speed up the process of scanning large areas?
For example, if you scan a 1500m area in 150m squares, you would have a grid similar to the one shown.
As nearestObject has a radius of 50m, you only need to check objects that fall within that radius.
So the first square (White1) will contain all the objects within the 150m square. As each new object is detected, it is checked against the contents of the white square:
If Not (Bush3 In [Bush1,Bush2]) Then {add Bush3}
So you end up with:
White1=[Bush1,Bush2,Bush3]
Any object found within the red area will be added to another array called Edge, without any checks:
Edge1=[Bush3]
Now move on to White2, the next 150m square to the immediate right.
The first object in White2 will be Bush4. As it lies within 45 metres of the left-hand side of White2, it needs to be checked against Edge1:
If Not (Bush4 In [Bush3]) And Not (Bush4 In White2) Then {Add Bush4}
As Bush5 lies beyond the 45m limit, it only has to be checked against the contents of White2 and does not need to be added to Edge2.
Once you have processed one row, the contents of White1, White2, etc. are added to the main object list. The yellow and green squares work in a similar way, except they will be stored in an array.
The dimensions were picked for convenience (a call to nearestObject every 15m), but the principle remains the same for areas over 50m x 50m.
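The per-square bookkeeping described above can be sketched roughly as follows. Python is used purely for illustration; the square layout, the 45m edge width, and the `(name, near_left, near_right)` detection tuples are hypothetical stand-ins for the objects that nearestObject would return in the game:

```python
def scan_row(squares):
    """Sketch of scanning one row of 150m squares (illustrative only).

    squares: one detection list per square, left to right.
    Each detection is (name, near_left, near_right), where the flags mark
    whether the object lies within the hypothetical 45m edge band of that
    border.  Because the scan radius overlaps square borders, the same
    object can appear in two adjacent squares' detection lists.
    """
    main_list = []
    prev_edge = []                      # edge objects from the square to the left
    for detections in squares:
        white, edge = [], []
        for name, near_left, near_right in detections:
            # An object near the left border may already have been
            # picked up while scanning the previous square (Edge1).
            if near_left and name in prev_edge:
                continue
            if name not in white:       # dedup within this square
                white.append(name)
            if near_right:              # red area: store without checks
                edge.append(name)
        main_list.extend(white)         # flush the square into the main list
        prev_edge = edge
    return main_list
```

Running it on the example from the post (Bush3 sits in White1's red area and is seen again from White2; Bush4 is near White2's left border but new; Bush5 is past the 45m limit) yields each bush exactly once.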
The other thing was:
{
IF ("House" countType [nearestObject [_x,_y,3]] == 1)
THEN
{
_buildings = _buildings + [nearestObject [_x,_y,3]]
};
};
Here you're calling nearestObject [_x,_y,3] twice to return the same object. Why not just call it once and store the result?
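Something like this, in the same scripting style as the quoted snippet (untested sketch; `_obj` is just a name I've picked for the local variable):

```
_obj = nearestObject [_x, _y, 3];
IF ("House" countType [_obj] == 1)
THEN
{
    _buildings = _buildings + [_obj]
};
```

That halves the nearestObject calls, which matters when this runs at every point of a large grid.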