Sorry for the uninformative topic title, but I didn't know how to express what I want to ask in such a short sentence.
Suppose I'm writing a script which will run on a lot of units, and the units affected by the script change many times during the mission. I want to ask which is better when it comes to performance:
Starting a new script that monitors a specified condition for every unit:
[_unit, _cover] exec "checkForMovement.sqs"
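; checkForMovement.sqs (the script body started for each unit):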
_unit = _this select 0
_cover = _this select 1
@ (not(currentCommand _unit in acceptableCommands)) OR not(alive _unit)
_unit setUnitPos "auto"
_unit hideBehindScripted false
IgnoreList = IgnoreList - [_cover]
exit
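For reference, this is roughly how a unit would be registered before the watcher is started; the stance value and the exact place where the cover is added to IgnoreList are just assumptions for illustration, not part of the scripts above:
; hypothetical caller: put _unit behind _cover, reserve the cover, start the watcher
_unit setUnitPos "MIDDLE"
_unit hideBehindScripted true
IgnoreList = IgnoreList + [_cover]
[_unit, _cover] exec "checkForMovement.sqs"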
Or having one script that runs for the whole mission and monitors every unit from a global array to which I add the units:
UnitsToMonitor = UnitsToMonitor + [[_unit, _cover]]
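; the monitoring script itself, running once for the whole mission: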
acceptableCommands = ["STOP", "WAIT", "FIRE"]
#mainloop
; copy the global array reference into a local variable so additions made elsewhere during this pass don't interfere
_unitsToMonitor = UnitsToMonitor
_numOfUnits = count _unitsToMonitor
~0.01
? _numOfUnits == 0: goto "mainloop"
_i = 0
#subloop
_currentArray = _unitsToMonitor select _i
_currentUnit = _currentArray select 0
_currentCover = _currentArray select 1
? (not (currentCommand _currentUnit in acceptableCommands)) OR (not (alive _currentUnit)): _currentUnit setUnitPos "auto"; _currentUnit hideBehindScripted false; IgnoreList = IgnoreList - [_currentCover]; UnitsToMonitor = UnitsToMonitor - [_currentArray]
~0.01
_i = _i + 1
? _i < _numOfUnits: goto "subloop"
goto "mainloop"
The presented scripts may contain some bugs because I wrote them straight into the forum post, but I want to ask about the principle: is it better to have multiple instances of the first script, or one script that monitors a global array and runs for the whole mission?