Note that this is not always accurate - it relates to the largest transaction size for this name, not necessarily the latest transaction or the combined size of multiple transactions. This can be made accurate as soon as we have a "Resources" table to store this info; trying to do it before then would require too many queries to be efficient.
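As a rough illustration, here is a minimal sketch of what such a "Resources" table could look like, so the latest or combined size can be read directly instead of scanning every transaction for the name. All table and column names are assumptions, and exact SQL dialect details may vary:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class ResourcesTableSketch {
        // Hypothetical schema: one row per (name, service) resource, updated
        // whenever a new ARBITRARY transaction for that resource is confirmed.
        public static void create(Connection connection) throws SQLException {
            try (Statement stmt = connection.createStatement()) {
                stmt.execute("CREATE TABLE Resources ("
                        + "name VARCHAR(40) NOT NULL, "
                        + "service INTEGER NOT NULL, "
                        + "size BIGINT DEFAULT 0 NOT NULL, "   // combined size of all transactions
                        + "updated_when BIGINT NOT NULL, "     // timestamp of the latest transaction
                        + "PRIMARY KEY (name, service))");
            }
        }
    }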
A longer-term solution to this hypothetical problem is to store relays in RAM or a temp folder only, or perhaps to add an indicator file to instruct the cleanup manager to delete them. But this will require more development, so 10 deletion attempts (each 1 second apart) should be enough for now.
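A minimal sketch of the interim retry approach, in case another thread still holds the file open when the first deletion is attempted; the class and method names here are illustrative, not the actual code:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class RelayCleanupSketch {
        private static final int MAX_DELETION_ATTEMPTS = 10;

        // Retry deletion up to 10 times, one second apart.
        public static boolean deleteWithRetries(Path relayFile) throws InterruptedException {
            for (int attempt = 0; attempt < MAX_DELETION_ATTEMPTS; attempt++) {
                try {
                    Files.deleteIfExists(relayFile);
                    return true;
                } catch (IOException e) {
                    Thread.sleep(1000L); // wait a second before the next attempt
                }
            }
            return false; // give up; a cleanup manager could catch it later
        }
    }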
This encourages shorter relays, since longer ones will take more time to respond, and also prevents a peer from intentionally taking a long time to respond so that its entry overwrites an existing one.
Longer term, we could consider keeping track of all respondents for each hash, if there are still issues with data retrieval. I suspect this won't be needed, though, as the requesting peer has 16+ different peers connected and therefore potentially 16 different mappings already.
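An illustrative sketch of both ideas: "first responder wins" for the current mapping, plus the possible longer-term extension that keeps every respondent per hash. The field and type names are assumptions for illustration:

    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class RelayMapSketch {
        // hash -> first responding peer's address
        private final Map<String, String> firstResponder = new ConcurrentHashMap<>();

        // Slow responders can't overwrite: putIfAbsent keeps the existing entry.
        public void recordResponse(String hash58, String peerAddress) {
            firstResponder.putIfAbsent(hash58, peerAddress);
        }

        // Possible longer-term extension: track all respondents per hash.
        private final Map<String, Set<String>> allResponders = new ConcurrentHashMap<>();

        public void recordAllResponses(String hash58, String peerAddress) {
            allResponders.computeIfAbsent(hash58, k -> ConcurrentHashMap.newKeySet())
                         .add(peerAddress);
        }
    }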
Should fix an issue with v4 transactions, where these fields aren't used. This matches the NOT NULL DEFAULT 0 column definition, which automatically transitions existing v4 ARBITRARY transactions to use the same defaults.
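A sketch of the kind of migration implied here: adding the new column with a default of 0 so existing v4 ARBITRARY rows pick it up without a data backfill. The table and column names are assumptions, and the DEFAULT/NOT NULL ordering may differ by SQL dialect:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class V4DefaultsMigrationSketch {
        public static void apply(Connection connection) throws SQLException {
            try (Statement stmt = connection.createStatement()) {
                // Existing rows automatically receive the default value of 0.
                stmt.execute("ALTER TABLE ArbitraryTransactions "
                        + "ADD COLUMN nonce INTEGER DEFAULT 0 NOT NULL");
            }
        }
    }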
This allows node operators to return their authentication to the legacy rules (local requests allowed) without introducing JavaScript vulnerabilities. Websites, apps, etc. are simply prevented from loading, to avoid the risk of any API calls being made from JavaScript.
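A hedged sketch of the policy being described: under the legacy rules, local API requests are allowed, but renderable website/app content is refused outright so no JavaScript can run and issue API calls. The request-classification inputs here are hypothetical:

    public class LegacyAuthPolicySketch {
        public boolean isAllowed(boolean isLocalRequest, boolean isWebsiteOrAppContent) {
            if (isWebsiteOrAppContent) {
                return false; // never serve renderable content under legacy auth
            }
            return isLocalRequest; // legacy rule: local API requests are allowed
        }
    }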
The modifications made to these methods were causing issues with other transaction types that expected blank strings instead of null. To keep risk to a minimum, I have split them into two different sets of functions until there is more time to unify them.
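An illustration of the split, assuming string deserialization helpers; one variant preserves the old empty-string behaviour for existing transaction types, while the other returns null for the newer code paths. The names are illustrative:

    import java.nio.charset.StandardCharsets;

    public class StringDeserializationSketch {
        // Legacy variant: kept for transaction types that expect blank strings.
        public static String deserializeString(byte[] bytes) {
            return (bytes == null || bytes.length == 0) ? "" : new String(bytes, StandardCharsets.UTF_8);
        }

        // New variant: returns null when no value is present.
        public static String deserializeStringOrNull(byte[] bytes) {
            return (bytes == null || bytes.length == 0) ? null : new String(bytes, StandardCharsets.UTF_8);
        }
    }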
1) Each relay request expires after 5 seconds, after which nodes will stop relaying it, preventing any kind of infinite loop. This means it has to reach the destination peer within 5 seconds, which should be fine, because the original peer's request would time out anyway, so there's nothing to be gained by continuing to relay it.
2) Each relay request stops being forwarded after 3 "hops" - i.e. once it has been relayed through 3 different peers, it will no longer be transmitted any further. If we assume that each node has 16 connections, that allows a request to reach a theoretical maximum of 16^3 = 4096 peers in 3 hops. In practice it will be fewer, and may not reach everyone due to peer "islands". But the request is automatically retried a few times on a timer, so it should hopefully find what it needs eventually. Plus, the requesting node still has the ability to make a direct connection to anyone hosting the data, as long as they are port forwarded. Both safeguards are sketched below.
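A combined sketch of both safeguards above: the 5-second expiry and the 3-hop limit. The constant values come from the notes; the class shape is an assumption for illustration:

    public class RelayRequestSketch {
        private static final long RELAY_EXPIRY_MS = 5_000L; // 1) stop relaying after 5 seconds
        private static final int MAX_HOPS = 3;              // 2) stop forwarding after 3 hops

        private final long createdWhen = System.currentTimeMillis();
        private int hops = 0;

        // A node only forwards a request that is neither expired nor over the hop limit.
        public boolean shouldRelay() {
            boolean expired = System.currentTimeMillis() - createdWhen > RELAY_EXPIRY_MS;
            return !expired && hops < MAX_HOPS;
        }

        public void incrementHops() {
            this.hops++;
        }
    }

With 16 connections per node, the hop limit gives the 16 * 16 * 16 = 4096 theoretical maximum reach mentioned above.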