Quote:
For missions where no front lines are marked, just assume that all territory is friendly, or that all territory that isn't within X meters of a hostile ground unit is friendly.

Even so, my original partial decision tree for bailout decisions shows the sort of work necessary to make aircraft behave "smartly" in just one small aspect of flight. Humans have plenty of experience with "don't do this, it's probably dangerous," so we understand that friendly territory is better than enemy territory, that landing is (usually) better than bailing out, and that it's (usually) better to crash-land or bail out over land than over water. We also have the ability to extrapolate from basic principles.

Programming computer AI is like teaching a baby. The computer doesn't automatically "know" anything and has to be "taught" that certain things or behaviors are bad. Even worse, it has no ability to extrapolate, and it's typically very poor at certain kinds of visual pattern recognition that humans take for granted.
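To make this concrete, here is a minimal sketch of what such a bailout decision tree might look like in Python. Everything in it is a hypothetical illustration: the hostile-unit positions, `SAFE_DISTANCE_M` (the "X meters" above), `MIN_BAIL_ALT_M`, and the `Aircraft` fields are all invented for the example and don't reflect any actual sim's code.

```python
import math
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    LAND = "land at a friendly airfield"
    CRASH_LAND = "attempt a crash landing"
    BAIL_OUT = "bail out"
    FLY_ON = "keep flying toward friendly territory"

@dataclass
class Aircraft:
    x: float            # map position in metres
    y: float
    altitude_m: float
    controllable: bool  # can the aircraft still be flown and landed?
    over_water: bool

# Hypothetical mission data: hostile ground-unit positions in metres.
HOSTILE_UNITS = [(12_000.0, 4_500.0), (15_250.0, 9_800.0)]
SAFE_DISTANCE_M = 5_000.0  # the "X meters" from the post; arbitrary here
MIN_BAIL_ALT_M = 150.0     # assumed minimum altitude for a safe bailout

def territory_is_friendly(x: float, y: float) -> bool:
    """No front lines marked: territory counts as friendly unless it
    lies within SAFE_DISTANCE_M of some hostile ground unit."""
    return all(math.hypot(x - hx, y - hy) >= SAFE_DISTANCE_M
               for hx, hy in HOSTILE_UNITS)

def decide(ac: Aircraft) -> Action:
    friendly = territory_is_friendly(ac.x, ac.y)
    if ac.controllable:
        # Landing is (usually) better than bailing; over enemy ground,
        # keep flying toward friendly territory first.
        return Action.LAND if friendly else Action.FLY_ON
    # Aircraft is crippled: pick the least-bad way down.
    if ac.altitude_m < MIN_BAIL_ALT_M:
        return Action.CRASH_LAND  # too low for a parachute to open
    if ac.over_water:
        return Action.BAIL_OUT    # land beats water; ditching a crippled plane is worse
    return Action.CRASH_LAND      # over land: ride it down

if __name__ == "__main__":
    crippled = Aircraft(x=13_000, y=5_000, altitude_m=800,
                        controllable=False, over_water=False)
    print(decide(crippled))       # Action.CRASH_LAND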