K_Freddie, yes, landmarks are important, and pre-flight planning should leave room for those things. So far, making the AI navigator aware of where we want to go (a single, current destination), as well as of our own present location, is easy: just tell him the grid letter and number, and then, using the system people use online (dividing a big 10 km grid square into 9 zones), call out which numpad key the zone corresponds to. The destination is set to the middle of that zone. But making him aware of everything else that is relevant is more difficult to solve in a smooth way.
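As a rough sketch of how that grid-plus-numpad call could be turned into a destination point: treat each numpad key as one cell of a 3x3 division of the 10 km square, and return the centre of that cell. Everything here (function name, letter = column / number = row convention, metre units) is my own assumption for illustration, not anything the game dictates.

```python
# Hypothetical sketch: convert a grid reference plus a numpad zone into a
# point in metres. Assumes 10 km grid squares, letters as columns (A = 0),
# numbers as rows, and the usual numpad layout where 1 is the lower-left
# subzone and 9 the upper-right.

GRID_SIZE = 10_000.0       # one full grid square, in metres
ZONE = GRID_SIZE / 3.0     # each numpad subzone of the 3x3 division

def grid_to_xy(letter, number, numpad_key):
    """Return the (x, y) centre of the numpad zone inside square letter+number."""
    if not 1 <= numpad_key <= 9:
        raise ValueError("numpad key must be 1-9")
    gx = (ord(letter.upper()) - ord('A')) * GRID_SIZE   # grid column origin
    gy = (number - 1) * GRID_SIZE                       # grid row origin
    sub_col = (numpad_key - 1) % 3                      # 0..2, left to right
    sub_row = (numpad_key - 1) // 3                     # 0..2, bottom to top
    return (gx + (sub_col + 0.5) * ZONE,
            gy + (sub_row + 0.5) * ZONE)
```

So a call like "Bravo two, keypad five" becomes `grid_to_xy('B', 2, 5)`, the exact centre of square B2.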
We can always input the home base, other friendly airfields, maybe a couple of waypoints, and a few landmarks when entering a new map, and he'll remember them. But I don't want to have to do it more than once per map. The program can check the logs to find out which map is being played, and check which plane we are flying to figure out which airfields are friendly and which are not, and then automatically remember what we taught him (by saving it to a file and loading that file automatically the next time the map is played).

Another possibility is more technically difficult but allows being very precise: entering the data oneself in an .ini text file. That seems like tedious work, though, so maybe mainly for landmarks rather than for essential info like airfield locations. It would be great to have landmarks like big cities be referable to by name somehow, but that wanders into the land of great complexity: too many different maps, cities and city names, which also poses extreme problems with the voice files needed, and with the limits on input. The program can only recognize keyboard keys, and since most are already bound to other things, the allowed keys will be limited to the numpad, which has to cover all communication from the user to the AI navigator.

No, the system should be simple, universally adaptable to any map, and effective. The teach-as-you-go-and-remember-it method seems the most practical. No one seems to know or share anything about how real navigation was done by a navigator, as in the examples I requested, but that's okay; I'll make some good guesses about how long the delay should be before the AI navigator answers.

Some of the more unusual features planned:

- (Definitely included) Using voice recognition as intended (such as "Shoot!", which sends a keypress or series of keypresses when it recognizes a phrase scripted to it) will allow letting the AI navigator know which key is our press-to-talk key for TeamSpeak.
When we are speaking to others, he will delay saying anything until we are done. If I could somehow tell when others were speaking on TeamSpeak, I would make him stay quiet until they are done too.

- (Probably included) Permit the AI navigator to function only when in an aircraft with at least one crew member besides the pilot on board and unharmed. By reading the event logs, and using DeviceLink as well, it is possible to know this.

- (Maybe included) Fill in some of the gap left by rear gunners not being able to speak at all online (an annoying, unrealistic and unimmersive limitation of MP currently). He can't communicate what he sees, but he can at least let the pilot know he's wounded or heavily wounded, and also comment on whether a target was shot down or blew up. This is visible in the HUD log and/or chat anyway, so it isn't cheating, but it will be more realistic and immersive.

- (Not very likely) Possibly make this crew member carry out functions that would belong to his responsibilities anyway.

- With the upcoming 4.10 patch for IL-2, we get interesting radio navigation. Triangulation is possible if the bearing to another nearby landmark (or radio beacon) is also known. It should be possible to add a function for the navigator to use these bearings (reported to him by the pilot) in a logical way.
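For that last item, a two-bearing fix reduces to intersecting two lines: from each known landmark, walk back along the reciprocal of the bearing the pilot reported, and where the two rays cross is the aircraft. A minimal sketch, assuming flat map coordinates (x = east, y = north) and compass bearings in degrees measured clockwise from north; the function and parameter names are made up:

```python
import math

def triangulate(p1, brg1_deg, p2, brg2_deg):
    """Estimate own (x, y) from compass bearings taken from the aircraft
    to two known landmarks p1 and p2."""
    def back_dir(brg):
        # Direction FROM the landmark BACK toward the aircraft
        # (the reciprocal of the reported bearing).
        r = math.radians(brg)
        return (-math.sin(r), -math.cos(r))   # x = east, y = north

    u1, u2 = back_dir(brg1_deg), back_dir(brg2_deg)
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    denom = u1[0] * u2[1] - u1[1] * u2[0]     # 2D cross product of the rays
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; these landmarks give no fix")
    t1 = (dx * u2[1] - dy * u2[0]) / denom    # distance along the first ray
    return (p1[0] + t1 * u1[0], p1[1] + t1 * u1[1])
```

For example, a beacon due north at bearing 0 and another due east at bearing 90 put the aircraft at the corner between them; the navigator would only need the two landmark positions from his taught list plus the two bearings called out by the pilot.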
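And the teach-as-you-go idea from earlier (save whatever was taught during a session, keyed by the map name read from the logs, and reload it automatically next time) could be as simple as one .ini file per map. A sketch using Python's standard configparser; the folder name, section name and file layout are all my own invention:

```python
# Hypothetical sketch: persist taught landmarks as one .ini file per map,
# so they only ever have to be entered once.

import configparser
from pathlib import Path

def save_landmarks(map_name, landmarks, folder="navigator_maps"):
    """Write each landmark as 'name = x,y' under a [landmarks] section."""
    cfg = configparser.ConfigParser()
    cfg["landmarks"] = {name: f"{x},{y}" for name, (x, y) in landmarks.items()}
    path = Path(folder) / f"{map_name}.ini"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w") as f:
        cfg.write(f)
    return path

def load_landmarks(map_name, folder="navigator_maps"):
    """Return previously taught landmarks, or an empty dict for a new map."""
    cfg = configparser.ConfigParser()
    cfg.read(Path(folder) / f"{map_name}.ini")   # a missing file is skipped
    if not cfg.has_section("landmarks"):
        return {}
    return {name: tuple(float(v) for v in raw.split(","))
            for name, raw in cfg["landmarks"].items()}
```

The resulting file is also hand-editable, which covers the "precise .ini entry" option above for anyone patient enough to type coordinates in by hand.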