Default thread limits per message type can be specified in Settings.setAdditionalDefaults(), e.g.
maxThreadsPerMessageType.add(new ThreadLimit("GET_ARBITRARY_DATA_FILE", 5));
These can also be overridden on a per-node basis in settings.json, e.g.
"maxThreadsPerMessageType": [
{ "messageType": "GET_ARBITRARY_DATA_FILE", "limit": 3 },
{ "messageType": "GET_ARBITRARY_DATA_FILE_LIST", "limit": 3 }
]
settings.json values take priority, but any message types that aren't specified in settings.json will still be included from the Settings.java defaults. This allows single message types to be overridden in settings.json without removing the limits for all of the other message types.
Any messages that arrive are discarded if the node is already at the thread limit for that message type.
Warnings are now shown in the logs if the total number of active threads reaches 90% of the allocated thread pool size. Additionally, per-message-type warnings can be enabled by specifying a threshold in settings.json, e.g.
"threadCountPerMessageTypeWarningThreshold": 20
The above setting would warn in the logs if a single message type was consuming more than 20 threads at once, therefore making it a candidate to be limited in maxThreadsPerMessageType.
Initial values of maxThreadsPerMessageType are guesses and may need modifying based on real world results. Limiting threads may impact functionality, so this should be carefully tested.
Also be aware that the thread tracking may reduce network performance slightly, so be sure to test thoroughly on slower hardware.
There are 3 values that need to be specified for this feature trigger (see the example sketch after this list):
- blockRewardBatchSize: the number of blocks per batch (1000 recommended).
- blockRewardBatchStartHeight: the block height at which the system switches to batch reward distributions. Must be a multiple of blockRewardBatchSize. Ideally this would be set to a block height that will occur at least 1-2 weeks after the release / auto update goes live.
- blockRewardBatchAccountsBlockCount: the number of blocks to include online accounts in the lead up to the batch reward distribution block (25 recommended).
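As an illustrative sketch only (the containing config file isn't specified here, and the start height below is just a placeholder chosen to be a multiple of the batch size):
"blockRewardBatchSize": 1000,
"blockRewardBatchStartHeight": 1500000,
"blockRewardBatchAccountsBlockCount": 25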
Once active, rewards are no longer distributed every block and instead are distributed every 1000th block. This includes all fees from the distribution block and the previous 999 blocks.
The online accounts used for the distribution are taken from one of the previous 25 blocks (e.g. blocks xxxx975-xxxx999). The validation ensures that it is always the block with the highest number of online accounts in this range. If this number of online accounts is shared by multiple blocks, it will pick the one with the lowest height.
The idea behind 25 blocks is that it's low enough to reduce load in the other 975 blocks, but high enough for it to be extremely difficult for block signers to influence reward payouts.
Batch distribution blocks contain a copy of the online accounts from one of these 25 preceding blocks. However, online account signatures and the online accounts timestamp are excluded to save space. The core validates that the copied online accounts exactly match those of the earlier validated block, and are therefore themselves valid.
Fairly comprehensive unit tests have been written but this needs a lot more testing on testnets before it can be considered stable.
Future note: Once released, the online accounts mempow can optionally be scaled back so that it is no longer running 24/7, and instead is only running for 2-3 hours each day in the lead up to the batch distribution block. In each 1000 block cycle, the mempow would ideally start at least an hour before block xxxx975 is due to be minted, and continue for an hour after block xxxx000 (the reward distribution block). None of the ideas mentioned in this last paragraph are coded yet.
- Add async responder thread from @catbref which was previously only in place for LTC
- Log when computing mempow nonces
- Skip transaction import if signature is invalid
- Added checks to a message test to mimic trade bot transaction lookups
- PUBLICIZE transactions are no longer possible.
- ARBITRARY transactions are now only possible using a fee.
- MESSAGE transactions only confirm when they are being sent to an AT. Messages to regular addresses (or no recipient) will expire after 24 hours.
- Difficulty for confirmed MESSAGE transactions increases from 14 to 16.
- Difficulty for unconfirmed MESSAGE transactions decreases from 14 to 12.
This is a short term limit, is well above current usage levels, and can be increased substantially in future once the block minter code has been improved.
This allows Q-Apps and websites to be developed and tested in real time, by proxying an existing webserver such as localhost:5173 from React/Vite. The proxy adds all QDN functionality to the existing server in real time. Needs UI integration before all features can be used.
Using a separate database method for now to reduce risk of interfering with other parts of the code which use it. It can be combined later when there is more testing time.
It doesn't quite work as intended, so it's best that it doesn't interfere right away. 24 hours should be long enough for any issues to be manually resolved.
The dedicated cache manager thread is now used for metadata updates only. If metadata ever goes missing from the db, it would be straightforward to have a background thread that corrects any discrepancies between the filesystem and the db. Not adding that until it is needed.
Offers with more than 3 failures will be hidden from the API and websocket, to prevent unbuyable offers from staying in the order books and continuously failing. maxTradeOfferAttempts can be optionally increased on a node to show more trades that would otherwise be hidden.
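For example, a node operator who wants to see offers with more failures could raise the limit in settings.json (the value below is purely illustrative):
"maxTradeOfferAttempts": 10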
By default, only the latest resource is returned for a name/service combination. All identifiers can be optionally returned by setting `mode` to "ALL".
More search modes can be added in the future, for instance "RELEVANT" or "POPULAR" (these are just ideas, and are not currently supported).
"query" searches name, identifier, title and description fields
"title" searches title only
"description" searches description only
All support "&prefix=true", to indicate searching by prefix only.
These can ultimately be used to help inform the cleanup manager on the best order to delete files when the node runs out of space. Public data should be given priority over private data (unless the node is part of a data market contract for that data - this isn't developed yet).
Allows for video seeking when using URL playback, even though the Range header isn't implemented yet. This could be heavily optimized by adding full support of the Range/Content-Range headers, however this is still a big step forward as it allows for (inefficient) seeking.
Should fix problems on systems unable to use IPv6 wildcard (::) for listening, and avoids having to manually specify "bindAddress": "0.0.0.0" in settings.json.
Any list with the following prefix will be used in block/follow logic:
blockedNames
blockedAddresses
followedNames
For instance, any names in a list named "blockedNames_CustomBlockList" would also be blocked, along with those in the standard "blockedNames" list.
This will ultimately allow apps to offer custom block/follow lists to users (once list functionality is added to the Q-Apps API).
Unhandled requests (where no file exists) are now forwarded to the index file, to allow for custom routing in the app. This applies to the APP service only. For the WEBSITE and other services, unhandled requests will return a 404. In future, we may be able to allow websites to opt in to URL routing too, and maybe even allow both services to specify custom routing rules in a file.
This causes the data to be stored with the requested filename, instead of generating a random one. Also, randomly generated filenames now use a timestamp instead of a random number.
- "identifier" is an alternative to "query" that will search identifiers only.
- "name" is an alternative to "query" that will search names only.
- "query" remains the same as before - it searches both name and identifier fields.
- "prefix" is a boolean, and when true it will only match the beginning of each field. Works with "identifier", "name", and "query" params.
Usage:
Add the `encoding=BASE64` query string parameter to opt in to base64 encoding of returned chat data. Defaults to BASE58 for backwards compatibility.
Compatible endpoints:
GET /chat/messages
GET /chat/message/{signature}
GET /chat/active/{address}
GET /websockets/chat/active/*
GET /websockets/chat/messages
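Example (the txGroupId filter is included only to make the request complete and is an assumption; the new part is the encoding parameter):
curl "http://localhost:12391/chat/messages?txGroupId=0&encoding=BASE64"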
This is required as it's the only place that holds the original order of each block's transactions. We cannot sort them, because the comparator function for transactions has some dependencies on the existing order for AT transactions. As a result, topOnly nodes cannot perform this reshape, and will be unable to run this version.
All /arbitrary endpoints responsible for publishing data now support an optional "preview" query string parameter. If true, these endpoints will return a URL path to open the preview, rather than returning transaction bytes.
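A hypothetical publish call using the new parameter (the exact publish path and request body are assumptions; only the preview parameter comes from this change):
curl -X POST "http://localhost:12391/arbitrary/WEBSITE/QortalDemo?preview=true" -d "/path/to/local/data"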
ArbitraryDataFile now has a fileCount() method which returns the total number of files associated with that piece of data - i.e. chunks, metadata, and the complete file in cases where it isn't chunked.
When "archiveVersion" is set to 2 in settings, this should allow the archive size to reduce by over 90%. Some nodes might want to maintain an older/larger version, for the purposes of development/debugging, so this is currently opt-in.
Default is version "1". If version "2" is specified, the API will return the full transaction JSON on success, rather than just "true".
Example usage:
curl -X POST "http://localhost:12391/transactions/process" -H "X-API-VERSION: 2" -d "signedTransactionBytesHere"
The former is available for all blocks, whereas the latter is only available for unpruned blocks.
Also removed joins with the Blocks table - as the Blocks table is also pruned - and instead retrieved the height from the Transactions table.
This should fix an edge case where AT states data was pruned/trimmed but it was then later required in consensus. The older state was deleted because it was replaced by a new "latest" state in a brand new block. But once the new "latest" state was orphaned from the block, the old "latest" state was then required again.
This works around the problem by excluding very recent blocks in the latest AT states data, so that it is unaffected by real-time sync activity.
The trade off is that we could end up retaining more AT states than needed, so a secondary cleanup process may need to run at some time in the future to remove these. But it should only be a minimal amount of data, and can be cleaned up with a single query. This would have been happening to a certain degree already.
Examples:
### Get URL to load a QDN resource
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "THUMBNAIL",
name: "QortalDemo",
identifier: "qortal_avatar"
// path: "filename.jpg" // optional - not needed if resource contains only one file
});
```
### Get URL to load a QDN website
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Get URL to load a specific file from a QDN website
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "AlphaX",
path: "/assets/img/logo.png"
});
```
The URL used to access the gateway is now interpreted, and the most appropriate resource is served. This means it can be used in different ways to retrieve any type of content from QDN. For example:
/QortalDemo
/QortalDemo/minting-leveling/index.html
/WEBSITE/QortalDemo
/WEBSITE/QortalDemo/minting-leveling/index.html
/APP/QortalDemo
/THUMBNAIL/QortalDemo/qortal_avatar
/QCHAT_IMAGE/birtydasterd/qchat_BfBeCz
/ARBITRARY_DATA/PirateChainWallet/LiteWalletJNI/coinparams.json
This allows any QDN resource (e.g. an IMAGE) to be linked to from a website/app and then rendered on screen. It isn't yet supported in gateway or domain map mode, as these need some more thought.
The simplest way to link to another QDN website is to include a link with the format:
<a href="qortal://WEBSITE/QortalDemo">link text</a>
This can be expanded to link to a specific path, e.g:
<a href="qortal://WEBSITE/QortalDemo/minting-leveling/index.html">link text</a>
Or it can be initiated programmatically, via qortalRequest():
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
path: "/minting-leveling/index.html" // Optional
});
Note that qortal:// links don't yet support identifiers, so the above format is not confirmed.
This prevents foreign coins from leaving the local wallet when there is a high probability that the trade will fail, and therefore should reduce the chances of losing transaction fees due to refunds.
Whenever this occurs, the UI will show "Trade has an existing buy request or is pending cancellation." after clicking Buy.
- All APIs are now served over the gateway and domain map, with the exception of /admin/*
- AdminResource moved to a "restricted" folder, so that it isn't served over the gateway/domainMap ports.
- This opens the door to websites/apps calling core APIs directly for certain read-only functions, as an alternative to using qortalRequest().
Requests are now internally routed to the existing API handlers. This should allow new Q-Apps API endpoints to be added much more quickly, as well as removing the need to maintain their code separately from the regular API endpoints.
isTooDivergent will be true or false if a definitive decision has been made, or missing from the response if not yet known. Therefore it should be safe to treat `"isTooDivergent": false` as a peer that is on the same chain.
Currently enabled for topOnly nodes only. This will detect if the node is on a divergent chain, and will force a bootstrap or resync (depending on settings) in order to rejoin the main chain.
This avoids the wasted time and consensus confusion caused by syncing and then failing validation. This is significant after the algo has run, as many signers will become invalid.
The validation is currently set to a feature trigger of height 0, although this will likely be set to a future block, in case there are any cases in the chain's history where this validation may fail (e.g. transfer privs?)
When users buy QORT ("Alice"-side), most of the API time is spent computing mempow for the MESSAGE sent to Bob's AT.
This is the final stage of startResponse(), and it runs after Alice's P2SH has already been broadcast.
To speed this up, the MESSAGE part is moved into its own thread, allowing startResponse() to return sooner and improving the user experience.
Caveats:
If MESSAGE importAsUnconfirmed() somehow fails then the buy won't complete and Alice will have to wait for the P2SH refund.
If Alice shuts down her node while MESSAGE mempow is being computed then it's possible the shutdown will be blocked until mempow is complete.
Currently only implemented in LitecoinACCTv3TradeBot as this is only a proof of concept.
Tested with multiple buys in the same block.
This is a hopeful fix for the extra memory usage seen since mempow activated, which has been adding a lot of load to the garbage collector. It only applies to accounts verified from the import queue; the optimization hasn't been applied to block processing. But verifying online accounts when processing blocks is rare and generally only lasts a short amount of time.
This allows testnets to more easily coexist on the same machines that are running a mainnet instance, and still tests the mempow computation and verification in a non-resource-intensive way.
We no longer need all the code complexity, now that 24 hours have passed since activation. We don't validate online accounts beyond 12 hours, and the data is trimmed after 24 hours.
- Added "singleNodeTestnet" setting, allowing for fast and consecutive block minting, and no requirement for a minimum number of peers.
- Added "recoveryModeTimeout" setting (previously hardcoded in Synchronizer).
- Updated testnets documentation to include new settings and a quick start guide.
- Added "generic" minting account that can be used in testnets (not functional on mainnet), to simplify the process for new devs.
We expect a "Couldn't build a to-be-minted block" log on every startup due to trying to mint before having any accounts. This one has moved from error to info level because error logs can be quite intrusive when using an IDE.
This allows one message to reference another, e.g. for replies, edits, and reactions. We can't use the existing reference field as this is used for encryption and generally points to the user's lastReference at the time of signing.
"chatReference" is based on the "nameReference" field used in various name transactions, for similar purposes.
This needs a feature trigger timestamp to activate, and that same timestamp will need to be used in the UI since that is responsible for building the chat transactions.
The BLOCK_SUMMARIES message type would differentiate between an empty response and a missing/invalid response. However, in V2, a response with empty summaries would throw a BufferUnderflowException and be treated by the caller as a null message.
This caused problems when trying to find a common block with peers that have diverged by more than 8 blocks. With V1 the caller would know to search back further (e.g. 16 blocks) but in V2 it was treated as "no response" and so the caller would give up instead of increasing the look-back threshold.
This fix will identify BLOCK_SUMMARIES_V2 messages with no content, and return an empty array of block summaries instead of a null message.
Should be enough to recover any stuck nodes, as long as they haven't diverged more than 240 blocks from the main chain.
This should hopefully fix a potential issue where peer's chain tip data becomes contaminated with other summary data, causing incorrect sync decisions.
The `start.sh` & `stop.sh` scripts have already been marked as executable in the source folder... But since we have only piped their contents, we need to set the correct file permissions again.
This allows the first pass to always be served from the peer's cache of 8 summaries. A maximum of 7 can be returned, because the 8th spot is needed for the parent block's signature.
Loading from the cache should speed up sync decisions, particularly when choosing which peer to sync from. The greater the number of connected peers, the more significant this optimization will be. It should also reduce wasted network requests and data usage.
Adding this check prior to making a network request is a simple way to introduce the new cached summaries from BLOCK_SUMMARIES_V2 without having to rewrite a lot of the complex sync / peer comparison logic. Longer term we may want to rewrite that logic to read from the cache directly, but it doesn't make sense to introduce that level of risk at this point in time, especially as the Synchronizer may be rewritten soon to prefer longer chains.
Even so, this is still quite a high risk commit so lots of testing will be needed.
The main difference here is that we now remove items from the onlineAccountsImportQueue in a batch, _after_ they have been imported. This prevents duplicates from being added to the queue in the previous time gap between them being removed and imported.
This caused large gaps with no presence data. Presence entries are removed when they expire, causing the local count to drop to zero, and the node would then only start requesting them again once a peer had proactively pushed one or more entries.
Touches quite a few files because:
* Deprecate HEIGHT_V2 because it doesn't contain enough info to be fully useful during sync.
Newer peers will re-use BLOCK_SUMMARIES_V2.
* For newer peers, instead of sending / broadcasting HEIGHT_V2,
send top N block summaries instead, to avoid requests for minor reorgs.
* When responding to GET_BLOCK, and we don't actually have the requested block,
we currently send an empty BLOCK_SUMMARIES message instead of not responding,
which would cause a slow timeout in Synchronizer.
This pattern has spread to other network message response code,
so now we introduce a generic 'unknown' message type for all these cases.
* Remove PeerChainTipData class entirely and re-use BlockSummaryData instead.
* Each Peer instance used to hold PeerChainTipData - essentially single latest block summary - but now holds a List of latest block summaries.
* PeerChainTipData getter/setter methods modified for compatibility at this point in time.
* Repository methods that return BlockSummaryData (or lists of) now try to fully populate them,
including newly added block reference field.
* Re-worked Peer.canUseCommonBlockData() to be more readable
* Cherry-picked patch to Message.fromByteBuffer() to pass an empty, read-only ByteBuffer to subclass fromByteBuffer() methods, instead of null.
This allows natural use of BufferUnderflowException if a subclass tries to use read(), or hasRemaining(), etc. from an empty data-payload message.
Previously this could have caused an NPE.
It will now request online accounts every 1 minute instead of every 5 seconds, except for the first 5 minutes following a new online accounts timestamp, in which it will request every 5 seconds (referred to as the "burst" interval). It will also use the burst interval for the first 5 minutes after the node starts.
This is based on the idea that most online accounts arrive soon after a new timestamp begins, and so there is no need to request accounts so frequently after that. This should reduce data usage by a significant amount.
Once mempow is fully rolled out, the "burst" feature can be reduced or removed, since online accounts will be sent ahead of time, generally 15-30 mins prior to the new online accounts timestamp becoming active.
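A rough sketch of the interval selection described above (names and structure are illustrative, not the actual implementation):
```
// Illustrative only: pick the online accounts request interval as described above.
private static long requestIntervalMs(long now, long onlineTimestampStart, long nodeStartTime) {
    final long fiveMinutes = 5 * 60 * 1000L;
    boolean inBurstWindow = (now - onlineTimestampStart < fiveMinutes)
            || (now - nodeStartTime < fiveMinutes);
    return inBurstWindow ? 5_000L : 60_000L; // 5 second "burst" interval vs 1 minute normal interval
}
```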
This closes a gap where accounts would be moved from onlineAccountsImportQueue to onlineAccountsToAdd, but not yet imported. During this time, there was nothing to stop them from being added to the import queue again, causing duplicate validations.
For regular groups, we require that the owner adds/removes the admins, so group membership is adequately checked. However for null-owned groups this check is skipped. So we need an additional condition to prevent non-group members from issuing a transaction for approval by the group admins.
* The dev group (ID 1) is owned by the null account with public key 11111111111111111111111111111111
* To regain access to otherwise blocked owner-based rules, different validation logic
* applies to groups with this same null owner.
*
* The main difference is that approval is required for certain transaction types relating to
* null-owned groups. This allows existing admins to approve updates to the group (using group's
* approval threshold) instead of these actions being performed by the owner.
*
* Since these apply to all null-owned groups, this allows anyone to update their group to
* the null owner if they want to take advantage of this decentralized approval system.
*
* Currently, the affected transaction types are:
* - AddGroupAdminTransaction
* - RemoveGroupAdminTransaction
*
* This same approach could ultimately be applied to other group transactions too.
Using the feature trigger timestamp here should be much less error prone than a whole new block message version. Once mempow has been live for at least 24 hours, the feature trigger can be removed and the code cleaned up, as all online accounts signatures will use the new format from that time onwards (legacy signatures are trimmed after 24 hours).
Due to the large amount of data, it can take some time for the request to be processed and data to be transferred. It may make more sense to load the initial state from the standard API, and just use the websockets for updates.
This should fix a 500 error which could prevent forced refund attempts for LTC and other chains. It wouldn't have affected any normal trade-bot refund functionality.
This allows for a different share bin distribution starting at an undecided future block height. This height will correspond with the QORA reduction. The new values were decided in a recent community vote.
This allows the QORA share percentage to be modified at different heights, based on community votes. Added unit test to simulate a reduction.
This rewrite may have been causing problems with connections in the network, due to peers being forgotten too easily. Reverting for now to see if it solves the problem.
This reverts commit d81071f2544ec72807f70892388ae2ddec8dbc8c.
If this proves to not have any significant bad effects on re-orgs, we could consider setting these even higher or even disabling the auto disconnect by default.
This allows users to increase their default birthday if they know that no wallets were created before a certain block, to reduce sync time. It also fixes some failing unit tests that relied on transactions between blocks 1900000 and 2000000.
Using a hardcoded signature ensures that the libraries cannot be swapped out without a core auto update, which requires the standard dev team approval process.
This replaces the previously hardcoded "numberOfAdditionalBatchesToSearch" variable, and specifies the minimum number of empty consecutive addresses required before a set of wallet transactions is considered complete. Used for foreign transaction lists and balances.
This is a simple way to discard the 5-minute online account timestamps (from out of date nodes) once the switch to 30-minute online account timestamps has taken place.
Online account nonces are appended to the onlineAccountsSignatures to avoid the need for a new field, but it probably makes more sense to separate them.
Although BlockMinter could reattach a repository session to its cache of potential blocks,
and these blocks would in turn reattach that repository session to their transactions,
further transaction-specific fields (e.g. creator PublicKeyAccount) were not being updated.
This would lead to NPEs like the following:
Exception in thread "BlockMinter" java.lang.NullPointerException
at org.qortal.repository.hsqldb.HSQLDBRepository.cachePreparedStatement(HSQLDBRepository.java:587)
at org.qortal.repository.hsqldb.HSQLDBRepository.prepareStatement(HSQLDBRepository.java:569)
at org.qortal.repository.hsqldb.HSQLDBRepository.checkedExecute(HSQLDBRepository.java:609)
at org.qortal.repository.hsqldb.HSQLDBAccountRepository.getBalance(HSQLDBAccountRepository.java:327)
at org.qortal.account.Account.getConfirmedBalance(Account.java:72)
at org.qortal.transaction.MessageTransaction.isValid(MessageTransaction.java:200)
at org.qortal.block.Block.areTransactionsValid(Block.java:1190)
at org.qortal.block.Block.isValid(Block.java:1137)
at org.qortal.controller.BlockMinter.run(BlockMinter.java:301)
where the Account has an associated repository session which is now obsolete.
This commit reverts BlockMinter back to obtaining a repository session before entering main loop.
To prevent a single or very small number of minters receiving the rewards for an entire tier, share bins can now require "activation". This adds the requirement that a minimum number of accounts must be present in a share bin before it is considered active. When inactive, the rewards and minters are added to the previous tier.
Summary of new functionality:
- If a share bin has more than one but fewer than 30 accounts present, the rewards and accounts are shifted to the previous share bin.
- This process is iterative, so the accounts can shift through multiple tiers until the minimum number of accounts is met, OR the share bin's starting level is less than shareBinActivationMinLevel.
- Applies to level 7+, so that no backwards support is needed. It will only take effect once the first account reaches level 7.
This requires hot swapping the sharesByLevel data to combine tiers where needed, so is a considerable shift away from the immutable array that was in place previously.
All existing and new unit tests are now passing, however a lot more testing will be needed.
- This adds mempow requirements to online account importing (after activation timestamp), however doesn't yet add any requirements to block validation.
- It also causes the 'next' online accounts timestamp to be computed in addition to the 'current', so that the computed nonce value is ready when the next online accounts timestamp window begins.
Online account credit is a more useful definition of "minting" than block signing, from the user's perspective. Should bring UI minting/syncing status in line with the core's systray status.
- Reduce concurrent reward share limit from 6 to 3 (or from 5 to 2 when including self share) - as per community vote.
- Founders remain at 6 (5 when including self share) - also decided in community vote.
- When all slots are being filled, require that at least one is a self share, so that not all can be used for sponsorship.
- Activates at future undecided timestamp.
We already mark peers as misbehaved if they returned invalid signatures, but this wasn't sufficient when multiple copies of the same invalid block exist on the network (e.g. after a hard fork). In these cases, we need to be more proactive to avoid syncing with these peers, to increase the chances of preserving other candidate blocks.
Previously, a peer would be continuously considered not 'old' if it had a connection attempt in the past day. This prevented some peers from being removed, causing nodes to hold a large repository of peers. On slower systems, this large number of known peers resulted in low numbers of outbound connections being made, presumably because of the time taken to iterate through dataset, using up a lot of allKnownPeers lock time.
On devices that experienced the problem, it could be solved by deleting all known peers. This adds confidence that the old peers were the problem.
- Show "Minting" as long as online accounts are submitted to the network (previously it related to block signing).
- Fixed bug causing it to regularly show "Synchronizing 100%".
- Only show "Synchronizing" if the chain falls more than 2 hours behind - anything less is unnecessary noise.
Symptoms are:
* AutoUpdate trying to run new ApplyUpdate process, but nothing appears in log-apply-update.?.txt
* Main qortal.jar process continues to run without updating
* Last AutoUpdate line in log.txt.? is:
2022-06-18 15:42:46 INFO AutoUpdate:258 - Applying update with: /usr/local/openjdk11/bin/java -Djava.net.preferIPv4Stack=false -Xss256k -Xmx1024m -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=127.0.0.1:5005 -cp new-qortal.jar org.qortal.ApplyUpdate
Changes are:
* child process now inherits parent's stdout / stderr (was piped from parent)
* child process is given a fresh stdin, which is immediately closed
* AutoUpdate now converts -agentlib JVM arg to -DQORTAL_agentlib
* ApplyUpdate converts -DQORTAL_agentlib to -agentlib
The latter two changes are to prevent a conflict where two processes try to reuse the same JVM debugging port number.
Reduced AT state info from per-AT address + state hash + fees to AT count + total AT fees + hash of all AT states.
Modified Block and Controller to support above. Controller needs more work regarding CachedBlockMessages.
Note that blocks fetched from archive are in old V1 format.
Changed Triple<BlockData, List<TransactionData>, List<ATStateData>> to BlockTransformation to support both V1 and V2 forms.
Set min peer version to 3.3.203 in BlockV2Message class.
Bump v3 min peer version from 3.2.203 to 3.3.203
No need for toOnlineAccountTimestamp(long) as we only ever use getCurrentOnlineAccountTimestamp().
Latter now returns Long and does the call to NTP.getTime() on behalf of caller, removing duplicated NTP.getTime() calls and null checks in multiple callers.
Add aggregate-signature feature-trigger timestamp threshold checks where needed, near sign() and verify() calls.
Improve logging - but some logging will need to be removed / reduced before merging.
Aggregated signature should reduce block payload significantly,
as well as associated network, memory & CPU loads.
org.qortal.crypto.BouncyCastle25519 renamed to Qortal25519Extras.
Our class provides additional features such as DH-based shared secret,
aggregating public keys & signatures and sign/verify for aggregate use.
BouncyCastle's Ed25519 class copied in as BouncyCastleEd25519,
but with 'private' modifiers changed to 'protected',
to allow extension by our Qortal25519Extras class,
and to avoid lots of messy reflection-based calls.
Slight optimization to BlockMinter by adding OnlineAccountsManager.hasOnlineAccounts():boolean instead of returning actual data, only to call isEmpty()!
Move online account cache code from Block into OnlineAccountsManager, simplifying Block code and removing duplicated caches from Block also.
This tidies up those remaining set-based getters in OnlineAccountsManager.
No need for currentOnlineAccountsHashes's inner Map to be sorted, so addAccounts() creates a new ConcurrentHashMap instead of ConcurrentSkipListMap.
Changed GetOnlineAccountsV3Message to use a single byte for count of hashes as it can only be 1 to 256.
256 is represented by 0.
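A minimal sketch of that one-byte count encoding (method names are illustrative, not the actual message class):
```
// Illustrative only: counts are limited to 1..256, so they fit in one unsigned byte with 256 encoded as 0.
static byte encodeCount(int hashCount) {
    return (byte) (hashCount == 256 ? 0 : hashCount);
}

static int decodeCount(byte encoded) {
    return encoded == 0 ? 256 : Byte.toUnsignedInt(encoded);
}
```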
Comments tidy-up.
Change v3 broadcast interval from 10s to 15s.
Adding support for GET_ONLINE_ACCOUNTS_V3 to Controller, which calls OnlineAccountsManager.
With OnlineAccountsV3, instead of nodes sending their list of known online accounts (public keys),
nodes now send a summary which contains hashes of known online accounts, one per timestamp + leading-byte combo.
Thus outgoing messages are much smaller and scale better with more users.
Remote peers compare the hashes and send back lists of online accounts (for that timestamp + leading-byte combo) where hashes do not match.
Massive rewrite of OnlineAccountsManager to maintain online accounts.
Now there are three caches:
1. all online accounts, but split into sets by timestamp
2. 'hashes' of all online accounts, one hash per timestamp+leading-byte combination
Mainly for efficient use by GetOnlineAccountsV3 message constructor.
3. online accounts for the highest blocks on our chain to speed up block processing
Note that highest blocks might be way older than 'current' blocks if we're somewhat behind in syncing.
Other OnlineAccountsManager changes:
* Use scheduling executor service to manage subtasks
* Switch from 'synchronized' to 'concurrent' collections
* Generally switch from Lists to Sets - requires improved OnlineAccountData.hashCode() - further work needed
* Only send V3 messages to peers with version >= 3.2.203 (for testing)
* More info on which online accounts lists are returned depending on use-cases
To test, change your peer's version (in pom.xml?) to v3.2.203.
A full sync is unavoidable for P2SH redeem/refund, so we need to be able to save our progress. Creating a new null seed wallet each time isn't an option because it relies on having a recent checkpoint to avoid having to sync large amounts of blocks every time (sync is per wallet, not per node).
This allows for compatibility with TRANSFER_PRIVS validation in commit 8950bb7, which treats any account with a non-null reference as "existing". It also avoids possible unknown side effects from trying to process and store transactions with a null reference - something that wouldn't have been possible until the validation was removed.
This should prevent the failed transactions that are encountered when issuing two or more in a short space of time. Using a feature trigger (hard fork) to release this, to avoid potential consensus confusion around the time of the update (older versions could consider the main chain invalid until updating).
This will hopefully reduce the number of failed tradeoffer listings that result in a nonfunctional tradebot (and subsequent PENDING status shown in the UI)
This is needed to allow redeem/refund of P2SH without having an actively synced and initialized wallet. It also ultimately avoids us having to retain the wallet entropy in the trade bot states. Various safety checks have been introduced to make sure that a disposable wallet is never used for anything other than P2SH redeem/refund.
This has been modified to a) use full public keys instead of PKH, and b) hand off all transaction building, signing, and broadcasting to the (heavily customized) Pirate light wallet library.
Currently, new transactions take a very long time to be included in each block (or reach the intended recipient), because each node has to obtain a repository lock and import the transaction before it notifies its peers. This can take a long time due to the lock being held by the block minter or synchronizer, and this compounds with every peer that the transaction is routed through.
Validating signatures doesn't require a lock, and so can take place very soon after receipt of a new transaction. This change causes each node to broadcast a new transaction to its peers as soon as its signature is validated, rather than waiting until after the import.
When a notified peer then makes a request for the transaction data itself, this can now be loaded from the sig-valid import queue as an alternative to the repository (since they won't be in the repository until after the import, which likely won't have happened yet).
One small downside to this approach is that each unconfirmed transaction is now notified twice - once after the signature is deemed valid, and again in Controller.onNewTransaction(), but this should be an acceptable trade off given the speed improvements it should achieve. Another downside is that it could cause invalid transactions (with valid signatures) to propagate, but these would quickly be added to each peer's invalidUnconfirmedTransactions list after the import failure, and therefore be ignored.
Importing has to be single threaded since it requires the database lock, but there's nothing to stop us from validating signatures on multiple threads, as no lock is required. So it makes sense to separate these two functions to allow for possible multi threaded signature validation in the future, to speed up the process.
Everything remains single threaded in this commit. It should be functionally the same as before, to reduce risk.
Note: this relies on (a modified version of) liblitewallet-jni which is not included, but will ultimately be compiled for each supported architecture and hosted on QDN.
LiteWalletJni code is based on https://github.com/PirateNetwork/cordova-plugin-litewallet - thanks to @CryptoForge for the help in getting this up and running.
Note: it's important that this timestamp is set on a 1-hour boundary (such as 16:00:00) to ensure a clean switchover.
Also removed CrossChainDigibyteACCTv1Resource, since this is unused, and it seems excessive to maintain support of this for every coin (and potentially every ACCT version).
Direct connections for arbitrary data are currently unlikely to succeed, because those allowing incoming connections generally have their slots maxed out and have reached maxPeers. The idea here is that some connections remain reserved for dedicated arbitrary data transfers, therefore temporarily circumventing the limit (up to a defined maximum number of reserved connections).
Arbitrary data connections will auto disconnect after 2 minutes (we might be able to reduce this at a later date), and it also probably makes sense for the requesting node to disconnect as soon as it has all the chunks that it needs (this part isn't implemented yet).
One downside of this feature is that the listen socket is now going to be accepting connections most of the time, since it is unlikely that we will regularly have 4 data peers connected. This could be improved by modifying the OP_ACCEPT behaviour based on whether we are expecting any data peers to connect. In most cases, this would allow it to remain closed. But for the sake of simplicity I will leave that optimization for a future commit.
This is used to force a quick disconnect for peers that are only connecting for the purposes of requesting data for a specific arbitrary transaction signature.
BlockMessage was broken because the repository 'connection' associated with the message's Block object was closed between message queuing and message sending.
The fix was to serialize Message subclasses on construction, thus freeing reliance on objects passed into constructor.
The serialized byte[] is held by the message between queuing and sending.
This forces messages into one of two 'modes': outgoing or incoming.
Outgoing messages contain serialized byte[] whereas incoming messages unpack a ByteBuffer into Message subclass fields.
As a result, all network message types have been refactored in this way.
More details in Message's class comment.
A knock-on effect is that incoming messages cannot then be sent out - a new message needs to be constructed.
Some changes needed to Arbitrary controller package classes in this respect.
Bonus: Network no longer needs broadcast threads because 'broadcasting' is now simply the act of queuing a message for many peers.
Instead of synchronizing/blocking in Peer.sendMessage(),
we queue messages in a concurrent blocking TransferQueue, with timeout.
In EPC, ChannelWriteTasks consume from TransferQueue, unblocking callers to Peer.sendMessage().
If a new message is to be sent, or socket output buffer is full,
then OP_WRITE is used to wait for socket to become writable again.
Only one ChannelWriteTask per peer can be active/pending at a time.
Each ChannelWriteTask tries to send as much as it can in one go.
Other minor tidy-ups.
As per work done by szisti in PR#45:
Extracted network 'Tasks' to their own classes.
Network.NetworkProcessor reduced to only producing Tasks.
Improved usage of SocketChannel interest-ops.
Eventually this might lead to reducing task-producing synchronization lock into more granular locks.
Work still needed to convert sending messages to a queue and to make use of OP_WRITE instead of sleeping to wait for socket buffer to empty.
Disabled PeerConnectTask producer from checking against connected peers via DNS as it's too slow.
Swapped Peer's replyQueues from SynchronizedMap(wrapped HashMap) to ConcurrentHashMap.
Other minor changes within networking.
As per work done by szisti in PR#45:
Extracted MessageException from inside Message into its own class.
Extracted MessageType from inside Message into its own class.
Converted reflection-based Message.fromByteBuffer method call to non-reflection, functional interface, method-reference.
This should have minor performance improvement but stronger method signature and type enforcement, as well as better IDE integration.
Message.fromByteBuffer method 'contract' tightened up to:
1. throw BufferUnderflowException if there are not enough bytes to deserialize message
2. throw MessageException if bytes contain invalid data
3. should not return null
Message.toData method 'contract' tightened up to:
1. return null if the message has no payload to serialize
2. throw IOException directly - no need to try-catch in each subclass
Several Message-subclass fields now marked 'final' as per IDE suggestion.
Several Message-subclass fromByteBuffer() method signatures have changed 'throws' list.
Several bytes.remaining() != some-value changed to bytes.remaining() < some-value as per new contract.
Some bytes.remaining() checks removed for fixed-length messages because we can rely on ByteBuffer throwing BufferUnderflowException.
Some bytes.remaining() checks retained for variable-length messages, or messages that read a large amount of data, to prevent wasted memory allocations.
Other minor tidying up
Temporarily increase sleep from 1ms to 100ms when waiting for outgoing socket buffer to empty.
Real fix is to rewrite using an outgoing message queue and OP_WRITE interest op.
De-register a peer's socket channel OP_READ interest op when producing a ChannelTask for that peer.
This should prevent duplicate ChannelTasks for the same peer.
Re-register OP_READ once node has read from peer's channel.
When node has reached max connections, Network will ignore pending incoming connections by:
1. not calling accept()
2. de-registering OP_ACCEPT 'interest op' on the listen socket's channel
When a peer disconnects, Network might re-register OP_ACCEPT interest op on listen socket.
Slight reworking of EPC to simplify when producer can block
and generally make some of the conditional code more readable.
Improved logging with task class names and logging level editable during runtime!
Use /peer/enginestats?newLoggingLevel=DEBUG (or TRACE or back to INFO) to change.
Note that it is currently not easy to distinguish between MESSAGE-type and PAYMENT-type AT transactions, so PAYMENT-type is currently the only one supported (and used). A hard fork will likely be needed in order to specify the type within each message.
This is a more standardized alternative to using GET /transactions/search?address=xyz. This avoids the need to build full transaction search ability into the lite node protocols right away.
This should bring in enough data for very basic chat and wallet functionality (using addresses rather than registered names).
Data currently comes from a single random peer, however this can be expanded to request from multiple peers to gain confidence in the accuracy of the data. If bad data is returned from a peer, it's not the end of the world since the transaction would just be considered invalid by full nodes and would be thrown out. But this should be mostly avoidable by taking data from multiple sources to improve confidence in its accuracy.
Lite nodes can't sync or mint blocks, and they also have a very limited ability to verify unconfirmed transactions due to a lack of contextual information (i.e. the blockchain). For now, most validation is skipped and they simply act as relays to help get transactions around the network. Full and topOnly nodes will disregard any invalid transactions upon receipt as usual, and since the lite nodes aren't signing any blocks, there is little risk to the reduced validation, other than the experience of the lite node itself. This can be tightened up considerably as the lite nodes become more powerful, but the current approach works as a PoC.
This is currently for name registration transactions only, but can be adapted (or duplicated) for other transaction types when needed.
Note: this switches from a greater-than (>) to a greater-than-or-equal (>=) timestamp comparison, as it makes more sense this way. It shouldn't affect the previous transition since there were no REGISTER_NAME transactions at that exact timestamp.
Adapted from code originally written by catbref from before genesis, and essentially prevents syncing backwards. This needs significant testing on testnet.
It is quite likely that existing resources with both metadata and an empty chunks array will need to be republished, because this bug may have led to incorrect file deletions.
The command used was:
./protoc --plugin=protoc-gen-grpc-java=/Users/user/Downloads/protoc-gen-grpc-java-1.45.1-osx-x86_64.exe -I=src/main/resources/proto/zcash/ --java_out=src/main/java/ --grpc-java_out=src/main/java/ src/main/resources/proto/zcash/service.proto
Then repeat, replacing service.proto with compact_formats.proto and darkside.proto
Darkside isn't needed for mainnet functionality, but included for completeness, and might be useful for testing.
This feature is disabled by default so it can be tidied up later. For now, the unhandled scenario is logged and checking continues.
One name's transactions are too complex for the current integrity check code to verify (MangoSalsa), but it has been verified manually. All other names pass the automated test.
If an account is renamed and then at some point renamed back to one of the original names, it confused the names rebuilding code. The current solution is to track the linked names that have already been rebuilt, and then break out of the loop once a name is encountered a second time.
This is the likely cause of inconsistent name entries across different nodes, as we can't guarantee that every environment will return the same transaction order from the SQL queries.
Some users are seeing 500 errors deriving from this code. This should hopefully allow more info to be obtained, as well as causing it to omit the status for resources that encounter problems.
Peers without a recent block are removed at the start of the sync process, however, due to the time lag involved in fetching block summaries and comparing the list of peers, some of these could subsequently drop back to a non-recent block and still be chosen as the next peer to sync with. The end result being that nodes could unnecessarily orphan as many as 20 blocks due to syncing with a peer that doesn't have a recent block (but has a couple of high weight blocks after the common block).
This commit adds some additional filtering to avoid this situation.
1) Peers without a recent block are removed as candidates in comparePeers(), allowing for alternate peers to be chosen.
2) After comparePeers() completes, the list is filtered a second time to make sure that all are still recent.
3) Finally, the peer's state is checked one last time in syncToPeerChain(), just before any orphaning takes place.
Whilst just one of the above would probably have been sufficient, the consequences of this bug are so severe that it makes sense to be very thorough.
The only exception to the above is when the node is in "recovery mode", in which case peers without recent blocks are allowed to be included. Items 1 and 3 above do not apply in recovery mode. Item 2 does apply, since the entire comparePeers() functionality is already skipped in a recovery situation due to our chain being out of date.
Fix UPDATE_NAME not processing empty 'newName' transactions correctly.
Fix some emoji code-points not being processed correctly.
Updated tests.
Now included ICU4J v70.1 - WARNING: this could add around 10MB to JAR size!
Bumped homoglyph to v1.2.1.
This should hopefully reduce confusion due to APIs reporting 99% synced even though up to date. The systray should never show this since it already treats blocks in the last 30 mins as synced.
This could very slightly reduce load due to skipping the internal filtering inside log4j. Given that this method is causing major problems with CPU at times, I'm trying to make it as optimized as possible.
This can ultimately be used to notify the serving peer to expect a direct connection from the requesting peer (to allow it to temporarily bypass maxConnections for long enough for the files to be retrieved). Or it could even possibly be used to trigger a reverse connection (from the serving peer to the requesting peer).
- Slow down loops that query the db
- Check for new metadata every 5 minutes instead of constantly
- Check for new data every 1 minute instead of constantly
This could be further improved in the future by having block.process() notify the ArbitraryDataManager that there is new data to process. This would avoid the need for the frequent checks/loops, and only a single complete sweep would be needed on node startup (as long as failures are then retried). But I will avoid this additional complexity for now.
Load sorted list of reward share public keys into memory, so that the indexes can be obtained. This is around 100x faster than querying each index separately (and the savings will increase as more keys are added).
For 4150 reward share keys, it was taking around 5000ms to query individually, vs 50ms using this approach.
The main trade off is that these 4150 keys require around 130kB of additional memory when minting (and this will increase proportionally with more minters). However, this one query was often accounting for 50% of the entire core's CPU usage, so the additional memory usage seems insignificant by comparison.
To gain confidence, I ran both old and new approaches side by side, and confirmed that the indexes matched exactly.
There are no logic changes here other than moving performOnlineAccountsTasks() onto its own thread, so that it's not subject to anything that might be slowing down the main controller thread.
- Removed synchronization from connectedPeers, and replaced it with an unmodifiableList.
- Added additional immutable caches: handshakedPeers and outboundHandshakedPeers
This should greatly reduce the amount of time spent waiting around for access to the connectedPeers array, since it is now immediately accessible without needing to obtain a lock. It also removes calls to stream() which were consuming large amounts of CPU to constantly filter the connected peers down to a list of handshaked peers.
Thanks to @catbref for these great suggestions.
- Signature validation is now able to run concurrently with synchronization, to reduce the chances of the queue building up, and to speed up the propagation of new transactions. There's no need to break out of the loop - or avoid looping in the first place - since signatures can be validated without holding the blockchain lock.
- A blockchain lock isn't even attempted if a sync request is pending.
Main changes are:
* Check transaction signature validity in initial round, without blockchain lock
* Convert List of incoming transactions to Map so we can record whether we have validated transaction signature before to save rechecking effort
* Add invalid signature transactions to invalidUnconfirmedTransactions map with INVALID_TRANSACTION_RECHECK_INTERVAL expiry (~60min)
* Other minor changes related to List->Map change and Java object synchronization
This is very inefficient and will soon be replaced with dedicated ArbitraryResources / ArbitraryMetadata tables. But this is acceptable in the short term, especially if limit and offset are used.
- Rate limiter is disabled when using the API
- fetchArbitraryMetadata() returns the actual metadata content rather than a boolean
- Exceptions are thrown on certain errors, rather than returning null
This involved a slight rewrite to remove the "includeMetadataOnly" boolean. Metadata is now always excluded, otherwise it complicates the caching too much.
Peers that were thought to be missing output address data may actually have just been using a different key - "address" instead of "addresses". Now reading the addresses from both keys, which may remove the need for the previously added checks.
For future reference, the command used was:
mvn install:install-file -Dfile=/Users/user/Downloads/waifupnp-1.1/WaifUPnP.jar -DgroupId=com.dosse -DartifactId=WaifUPnP -Dversion=1.1 -Dpackaging=jar -DlocalRepositoryPath=lib
This is the equivalent of increasing the max address gap from 15 to 21. The electrum standalone wallet uses 20, so this should be the most we will ever need.
/render APIs use priority 10, whereas /arbitrary use priority 0, to prevent thumbnail downloads from holding up website loading. The priorities can be adjusted later, with maybe some service types being given higher priority than others.
This should fix an issue where network threads could be blocked when new transactions arrived, due to waiting for the incomingTransactions lock to free up.
An alternate option would be to avoid force disconnecting while relays are in progress, but some nodes could have active relays 100% of the time and therefore would never recycle their peers. So it is simpler to just increase the average peer connection time for everyone.
Previously, only one peer's response for a hash would be remembered, even if multiple others reported back too. This would cause useful mapping to be lost.
This is likely a short term solution (to allow existing code to be repurposed) until replaced with a task-based approach, as this will allow for a much greater number of threads.
async = fail immediately with 404 if missing, and request in the background
attempts = the number of times to request the data (synchronous mode only for now)
This allows TIMESTAMP_TOO_OLD transactions to be tracked for a shorter time (10 minutes) than the other invalid transactions (60 minutes). Should reduce network traffic and db load around the time that transactions are expiring, as there is a lag before they are noticed and removed from each node. Due to the variance, it could cause other peers to request them again after deleting. They are now ignored for 10 minutes to avoid request spam.
An incoming invalid unconfirmed transaction will be added to this map if its timestamp is more than 30 minutes old. This should allow enough time and opportunities for it to be imported and included in a block (allowing for re-orgs which could switch its status from invalid to valid).
Once added, it will be removed after an hour to allow for another chance to be requested from any peers that still have it. If invalid again, it's added back to the map for another hour.
This fixes a 24 hour long loop, where invalid transactions are requested over and over from peers that have already imported them. It could be improved further by periodically removing invalid unconfirmed transactions from the database, but this will be a higher risk.
The results of this feature should be less network traffic, and fewer blockchain locks (which should ultimately increase the responsiveness of the synchronizer).
This ensures that only a single round of requests (per coin) is used for the wallet transactions and balance APIs. It also speeds up loading on subsequent requests. The 2 minute cache isn't much longer than the foreign block times, so shouldn't cause values to be too out of date.
Hopeful fix for incorrect balances in wallets with large numbers of transactions. At the very least, this gives us control of the code that calculates the balance.
Previously we would only try the first response and then discard the others due to being duplicates. They are now added to a queue and retried by the dedicated thread (up to the 60 second timeout).
This should fix conflicts caused by the synchronizer and controller now being on separate threads. It may also reduce the chances of the database corrupting on shutdown, but this remains to be seen.
This solves a problem where incoming transactions could rarely obtain a blockchain lock (due to multiple transactions arriving at once) and therefore most messages were thrown away. It was also causing constant blockchain locks to be acquired, which would often prevent the synchronizer from running.
Additional params:
- timestamp: to allow for hard forks. Default: the current time
- level: the account level, to allow for the future possibility of different fees per level. Not currently used.
This is to hopefully improve network stability whilst a more advanced solution is being worked on. It also allows us to collect some data on how well the network behaves when there are fewer block candidates. It should have no effect on minting rewards (other than any side effects as a result of improved network stability).
This allows the GET /arbitrary/{service}/{name} and GET /{service}/{name}/{identifier} endpoints to operate without any authentication. Useful for those who are running public QDN nodes and need to serve data over http(s).
Increased GetOnlineAccountsMessage.MAX_ACCOUNT_COUNT from 1000 to 5000.
The V2 versions are more efficiently encoded and also cache the payload bytes
which reduces CPU when sending to multiple peers.
Serialization / deserialization unit tests included.
Tentative V2 message activation set at core version 3.1.2
see Controller.ONLINE_ACCOUNTS_V2_PEER_VERSION
Note that this is not always accurate - it relates to the largest transaction size for this name, not necessarily the latest or the combined size of multiple transactions. This can be made accurate as soon as we have a "Resources" table to store this info. Trying to do it before then will be too inefficient in terms of queries.
A longer term solution to this hypothetical problem is to store relays in RAM or a temp folder only. Or maybe add an indicator file to instruct the cleanup manager to delete it. But this will require more development. 10 deletion attempts (each 1 second apart) should be enough for now.
This encourages shorter relays, since longer ones will take more time to respond, and also prevents a peer from intentionally taking a long time to respond so that it overwrites an existing entry.
Longer term we could consider keeping track of all respondents for each hash, if there are still issues with data retrieval. I suspect this won't be needed though, as the requesting peer has 16+ different peers connected, and therefore potentially 16 different mappings already.
This brings the behaviour closer to the old version so should hopefully reduce the amount of newly introduced issues. If an API key is unavailable, it will fall back to using `kill -15 $pid` (i.e. a SIGTERM).
Should fix issue with v4 transactions where these aren't used. Matches with the NOT NULL DEFAULT 0 which automatically transitions existing v4 ARBITRARY transactions to use the same defaults.
This allows node operators to return their authentication to the legacy rules (local requests allowed), without introducing javascript vulnerabilities. The websites, apps, etc are just prevented from loading, to avoid the risk of any API calls from javascript.
The modifications made to these methods were causing issues with other transaction types that were expecting blank strings instead of null. To keep risk to a minimum, I have split into two different sets of functions until there is more time to unify them.
1) Each relay request expires after 5 seconds, after which nodes will stop relaying it, preventing any kind of infinite loop. So it has to reach the destination peer within 5 seconds. This should be fine, because the original peer's request would timeout anyway, so there's nothing to be gained by continuing to relay it.
2) Each relay request stops being forwarded after 3 "hops" - i.e. once it has been relayed through 3 different peers, it will no longer be transmitted any further. If we assume that each node has 16 connections, that allows it to reach a theoretical maximum of 4096 peers in 3 hops. In practice it will be less, and may not reach everyone due to peer "islands". But it will automatically retry a few times on a timer, so should hopefully find what it needs eventually. Plus, it still has the ability to make a direct connection to anyone hosting the data, as long as they are port forwarded.
This is likely longer than needed, but it's best to allow extra for now and then optimize the timeouts once we've had some experience with real world data.
This involves modifying the log4j2.properties file on node startup to fix an incompatibility with ${dirname:-}. Thanks to AlphaX Projects for tracking down this incompatibility.
This allows an entire registered name to be preauthorized, therefore allowing for instance a website to automatically request other resources from the same author, such as videos.
This delegates the task to the browser rather than doing it in java. It should also catch a few remaining types of links that we had missed - e.g. ones that originate from within js files.
This avoids duplicate entries from the same host/ip with differing ports. This can occur due to some requests using ephemeral port numbers. Ideally we would filter these out altogether, but this at least acts as a safety net to prevent a very cluttered db and associated "broadcast storm". The main tradeoff here is that multiple nodes on the same IP address will be recorded as a single entry. This doesn't seem like it will be a major limitation, because one of them will remain available.
Since some files won't have any mirrors, this prevents the cleanup manager from deleting the only copy in existence when freeing up space. This feature can be disabled by setting "originalCopyIndicatorFileEnabled": false in settings.json (or by deleting the ".original" files). The trade off is that the only copy in existence could be deleted if space gets low.
This will also allow for better reporting of own vs third party files in the local UI (not yet implemented).
This allows for consistent messaging about each status to be shown in different parts of the system. Previously these strings were hardcoded in the loading screen html so were inaccessible elsewhere.
Logs can be reinstated by adding these lines to log4j2.properties:
logger.arbitrary.name = org.qortal.arbitrary
logger.arbitrary.level = debug
logger.arbitrarycontroller.name = org.qortal.controller.arbitrary
logger.arbitrarycontroller.level = debug
This is the start of a refactor to use ArbitraryDataResource objects rather than passing around separate resourceId, service and identifier, or other duplicate objects. Most of this will need to be done after the initial release due to time constraints.
The simplest solution was to only include a newline at the end of the patch file if the source file ended with a newline. This is used to inform the merge code as to whether to add the newline to the end of the resulting file. Without this, the checksums do not match (and therefore previously the complete file would have been included as a result).
Several parts of the code request resources to be loaded/built, and these separate threads were tripping over each other and causing build failures. This has been avoided by making sure the resource isn't already building before requesting it.
Requesting a resource with the identifier "default" now maps to a blank string. This allows the /arbitrary/{service}/{name}/{identifier} endpoints to be used for default resources too, as they previously didn't support a blank string as the third parameter.
The main differences are:
- Compute nonce instead of specifying a transaction fee
- Add blank/empty values for all the additional fields, as they are unused for auto updates
Originally I had hoped to do this by using an intermediate iframe, to keep the message handlers separate from the user content, but this created CORS issues of its own. This approach is far simpler.
Now that we require API key authentication - and therefore security is greatly improved - many users will want to bypass the whitelist in order for the UI to communicate with their remote node. This gives an easy way to do this, without having to override the default whitelist. This boolean can now optionally be added to the default settings.json that is published with new releases, without removing the code's ability to update default whitelist values.
The procedure outlined in commit f4b06fb is now incorrect. Updated procedure:
- A node can opt into relay mode via the "relayModeEnabled":true setting.
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves.
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded. Currently, only the first response is forwarded, but we may later decide to forward all responses.
- As well as forwarding, the relay peer keeps track of the peers that report to be holding hashes (these mappings are held for 30 seconds).
- The originally requesting peer can then make a request to the relay peer for the data file(s).
- The relay peer uses the mapping to forward the request on to another peer, and then forwards the response (i.e. the data file) back to the peer that originally requested the file.
It's best that the source peer's address isn't exposed to the requesting peer. The relay peer can keep track of this mapping itself.
The only real issue with this approach is that we can't use data from ArbitraryDataFileListMessage to update our ArbitraryPeers data, because we can't distinguish between relay peers and hosting peers. But this isn't something we currently do anyway, as we have the ARBITRARY_SIGNATURES message type to take care of updating ArbitraryPeers mappings.
Setting this to false prevents new connections being made to peers that report to have the data that is needed. This is likely only useful for testing, as disabling it in production would reduce the success rate of data retrieval.
This reuses most of the code already in place in the core related to forwarding.
- A node can opt into relay mode via the "relayModeEnabled": true setting
- From this time onwards, they will ask their peers if they ever receive a file list request that they cannot serve by themselves
- Whenever a peer responds with a file list, it is forwarded on to the originally requesting peer, complete with the peer address of the node that responded
- The original peer can then make a request for the data file(s) themselves using a similar approach, specifying the IP address of the ultimate peer so that the relay node knows who to ask. This part is not implemented yet.
This makes them extremely generic, improves filenames, and makes it easier to create custom lists. It doesn't have backwards support, but the lists feature isn't working properly in core 2.1+ anyway.
Also modified the directory structure of single file resources to make them consistent with multi file resources.
For multi file resources, the original folder is renamed to "data", resulting in a layout such as:
data/file1.txt
data/file2.txt
data/dir1/file3.txt
For single file resources, the file is now moved into a "data" folder, like so:
data/file.txt
This is slightly unconventional, but is appropriate within the context of QDN to keep everything consistent.
This adds support for "unconfirmable" data uploads, which will be useful for Q-Chat. It also handles cases where a transaction is orphaned and then subsequently becomes invalid.
A website must contain one of the following files in its root directory to be considered valid:
index.html
index.htm
default.html
default.htm
home.html
home.htm
This is the first page that is loaded when loading a Qortal-hosted website.
This would happen if a name fills their limit, and then additional names are followed. Alternatively it could happen if the total storage capacity reduces due to disk space being used by other apps. Chunks are deleted at random to reduce the chance of the same chunk being deleted everywhere. Data loss is possible here for transactions that don't have many peers. We'll have to see in practice how much of a problem this is, but it's better than the scenario where one content creator consumes all space on their followers' nodes, leaving no space for other names that are subsequently followed.
This is calculated by the total capacity divided by the number of names the node follows. The idea here is that a single content creator can't upload terabytes of data and consume all the space on their followers' nodes. They can only use a proportion, with equal space given to each followed name. And since the limit is dynamic, following more names reduces the allocation to existing names.
This discourages an incorrect file size being included with a transaction, as the system will reject it and won't even serve it to other peers.
FUTURE: we could introduce some kind of blacklist to track invalid files like this, and avoid repeated attempts to retrieve them. It is okay for now as the system will backoff after a few attempts.
This API call could get quite heavy when large amounts of files are hosted, but it's preferable to maintaining a list in the database. Ideally we need to keep the database generic so that it can be bootstrapped without interfering with the state. We can always add caching and rate limiting if needed.
Chunk hashes are now stored off chain in a metadata file. The metadata file's hash is then included in the transaction.
The main benefits of this approach are:
1. We no longer need to limit the total file size, because adding more chunks doesn't increase the transaction size.
2. This increases the chain capacity by a huge amount - a 512MB file would have previously increased the transaction size by 16kB, whereas it now requires only an additional 32 bytes.
3. We no longer need to use variable difficulty; every transaction is the same size and so the difficulty can be constant no matter how large the files are.
4. Additional metadata (such as title, description, and tags) can ultimately be stored in the metadata file, as opposed to using a separate transaction & resource.
5. There is also scope for adding hashes of individual files into the metadata file, if we ever wanted to allow single files to be requested without having to download and build the entire resource. Although this is unlikely to be available in the short term.
The only real negative is that we now have to fetch the metadata file before we know anything about the chunks for a transaction. This seems to be quite a small trade off by comparison.
Since we're not live yet, there is no backwards support for on-chain hashes, so a new data testchain will be required. This hasn't been tested outside of unit tests yet, so there will likely be several fixes needed before it is stable.
It's not good to be moving files around in a method that should really be read only. This also adds an intentional checkAndRelocateMiscFiles() call rather than relying on a call to isDataLocal() which may be removed at any time.
This would have been caught by the max differences check anyway, but it's a good check to have in place in case we recalibrate or remove the differences check in the future.
Could be improved in the future to return different codes depending on its status (e.g. doesn't exist = 404, 102 for loading, 500 for error, etc), but 404 makes the most sense until that has been developed
- If an identifier parameter is missing or empty, it will return an unfiltered list of all possible identifiers.
- If an identifier is specified, only resources with a matching identifier will be returned.
- If default is set to true, only resources without identifiers will be returned.
Files are now keyed by signature, in the format:
data/si/gn/signature/hash
For times when there is no signature available (i.e. at the time of initial upload), files are keyed by hash, in the format:
data/_misc/ha/sh/hash
Files in the _misc folder are subsequently relocated to a path that is keyed by the resulting signature.
The end result is that chunks are now grouped on the filesystem by signature. This allows more transparency as to what is being hosted, and will also help simplify the reporting and management of local files.
This should allow for a relatively even distribution of chunks, but there is a (currently unavoidable) risk of files with very few mirrors being deleted altogether.
Longer term this could be improved by checking that one of our peers has a file, before it's deleted locally
Nodes will stop proactively storing new data when they reach 90% capacity.
A new "maxStorageCapacity" setting has been added to allow the user to optionally limit the allocated space for this node. Limits are approximate only, not exact.
publicDataEnabled - whether to store decryptable data (default true)
privateDataEnabled - whether to store data without a decryption key (default false)
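For example, a hypothetical settings.json snippet combining these options (the values are illustrative only, and the unit for maxStorageCapacity is assumed here to be bytes):
"maxStorageCapacity": 100000000000,
"publicDataEnabled": true,
"privateDataEnabled": false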
The missing data check was triggering decryptions, extractions, etc. Replaced with some code which checks for the presence of chunks on the local machine, without getting involved with any build process overhead.
- "NOT_STARTED" is now "DOWNLOADED"
- "DOWNLOADING" is now "MISSING_DATA"
- Removed "DOWNLOAD_FAILED"
Some of these could be reintroduced once the system is able to support them.
These can be used to check the current status of a resource. The different statuses are:
NOT_STARTED,
DOWNLOADING
DOWNLOADED
BUILDING
READY
DOWNLOAD_FAILED
BUILD_FAILED
UNSUPPORTED
Not all statuses are returned yet. The build process needs more functionality to be able to support DOWNLOADED and DOWNLOAD_FAILED. Also, BUILDING and BUILD_FAILED are currently unable to distinguish between different resources with the same registered name, so need some attention.
Each service supports basic validation params, plus has the option for an entirely custom validation function.
Initial validation settings:
- IMAGE must be less than 10MiB
- THUMBNAIL must be less than 500KiB
- METADATA must be less than 10KiB and must contain JSON keys "title", "description", and "tags"
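For illustration, a minimal METADATA payload satisfying the validation above might look like the following (the exact schema, including whether "tags" is an array, is an assumption here):
{
  "title": "My resource",
  "description": "A short description of this resource",
  "tags": ["example", "qdn"]
}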
This is needed to avoid triggering a CORS preflight (which occurs when using an X-API-KEY header). The core isn't currently capable of responding to a preflight and the UI therefore blocks the entire request. See: https://stackoverflow.com/a/43881141
This allows users to set only their data path, and for the temp folder to automatically follow it. The temp folder can be moved to a custom location by setting the "tempDataPath" setting.
An API key is now _required_ for sensitive API calls that would previously have allowed local loopback authentication.
Previously, a request would have been considered authenticated if it originated from the same machine, however this creates a security issue when running third party code (particularly javascript) via the data network.
The solution is to now require an API key to authenticate sensitive API calls no matter where the request originates from.
It works as follows:
- When the core is first installed, it has no API key generated and will block sensitive calls until generated.
- A new POST /admin/apikey/generate API endpoint has been added, which can be used to generate an API key for a newly installed node. The UI will ultimately call this automatically.
- This API returns the generated key so that it can be stored by the requesting app (most likely the UI).
- From then on, the generate API requires authentication via the existing API key in order to regenerate a key. It can be used as a security measure if the existing key is compromised.
- The API key must be passed to all sensitive API endpoints from then on, even when calling it from the same local machine.
- If the core already has a legacy API key specified via the 'apiKey' setting, this will be automatically copied to the new format so that a new one doesn't need to be generated.
- The API key itself is stored in a flat file in the qortal directory (the path can be customized using the `apiKeyPath` setting). Deleting this file and restarting the core will allow a new one to be regenerated.
The process of serving resources to a browser will likely be needed for more than just websites (e.g. it will be needed for apps too) so it makes sense to abstract it to its own class.
This should help keep the peer lookup table size down, as there is no need to locate files for transactions that existed before the most recent PUT transaction.
Built resources are deleted when either:
- The resource reaches the expiry interval specified in the builtDataExpiryInterval setting (default 30 days)
- The resource is published by a name that is in the local blacklist
Resources only exist in the reader cache once they have been viewed, to remove the loading time on subsequent views. But some may prefer to reduce this expiry time (at the expense of longer load times and more CPU), as data is held unencrypted in the cache.
We still allow it to be fetched even if it's outside of the storage policy, as the cleanup manager will delete the files very soon after, and they won't be allowed to be served to other peers due to other checks already in place.
This allows other peers to find out where they can obtain these files if we were to stop hosting them later. Or even if we continue hosting copies, it still informs the network on other locations, for better decentralization.
We don't want the network being spammed when a file isn't available by any reachable peers. This feature ensures retries are spaced out over longer timeframes. Basic logic:
- Wait 5 minutes in between failed attempts
- After 5 failed attempts (i.e. 25 mins) only try once per day from then on
- A core restart resets the counters
The stats gathered here can also be used to inform the core of when it should attempt a direct connection with a peer to obtain the data. That part isn't implemented yet.
This allows for custom list creation without the need for creating API endpoints to go along with it. This should save time now that we are using lists more.
- "APP" will allow for user-created apps and the Qortal app store
- "METADATA" will be used to supply info about apps/websites/resources, such as title, description, tags, etc
When using POST /arbitrary/{service}/{name}... it will now automatically decide which method to use (PUT/PATCH) based on a few factors:
- If there are already 10 or more layers, use PUT to reset back to a single layer
- If the next layer's patch is more than 20% of the total resource file size, use PUT
- If the next layer modifies more than 50% of the total file count, use PUT
- Otherwise, use PATCH
The PUT method causes a new base layer to be created and all previous update history for that resource becomes obsolete. The PATCH method adds a small delta layer on top of the existing layer(s).
The idea is to wipe the slate clean with a new base layer once the patches start to get demanding for the network to apply. Nodes which view the content will ultimately have build timeouts to prevent someone from deploying a resource with hundreds of complex layers for example, so this approach is there to maximize the chances of the resource being buildable.
The constants above (10 layers, 20% total size, 50% file count) will most likely need tweaking once we have some real world data.
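As an illustrative sketch of this decision logic (javascript pseudocode only; the names and thresholds mirror the description above rather than the actual Java implementation):
```
// Illustrative only - mirrors the heuristic described above
function chooseMethod(layerCount, patchSize, totalResourceSize, modifiedFileCount, totalFileCount) {
    if (layerCount >= 10) return "PUT";                         // too many layers: reset to a single base layer
    if (patchSize > 0.2 * totalResourceSize) return "PUT";      // patch exceeds 20% of the total resource size
    if (modifiedFileCount > 0.5 * totalFileCount) return "PUT"; // patch modifies more than 50% of the files
    return "PATCH";                                             // otherwise, publish a small delta layer
}
```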
We may choose to save on CPU by not compressing individual files, so this allows the network to support that. However it is still using compression by default, to reduce file sizes.
This process could potentially be simplified if we were to modify the structure of the actual zipped data (on the writer side), but this approach is more of a "catch-all" (on the reader side) to support multiple different zip structures, giving us more flexibility. We can still choose to modify the written zip structure if we choose to, which would then cause most of this new code to be skipped.
Note: the filename of a single file is not currently retained; it is renamed to "data" as part of the packaging process. Need to decide if this is okay before we go live.
Thumbnails will be used in order to show logos/screenshots in the list of websites or other resources. Playlists will allow for media apps to group videos/audio/images into collections, e.g. albums.
Until now we have been limited to one data resource per name/service combination. This meant that each name could only have a single website, git repo, image, video, etc, and adding another would overwrite the previous data. The identifier property now allows an optional string to be supplied with each resource, therefore allowing an unlimited amount of resources per name/service combination.
Some examples of what this will allow us to do:
- Create a video library app which holds multiple videos per name
- Same as above but for photos
- Store multiple images against each name, such as an avatar, website thumbnails, video thumbnails, etc. This will be necessary for many "system level" features.
- Attach multiple websites to each name. The default website (with blank/null identifier) would remain the entry point, but other websites could be hosted essentially as subdomains, and then linked from the default site. This also provides a means to go beyond the 500MB website size limit.
Not all of these features will exist initially, but having this identifier included in the protocol layer allows them to be added at any time.
This is generated whenever a data resource cannot be built because it is missing data for at least one layer. Using a custom exception type here enables a few new features:
1. A single build process is now able to request missing data from all the layers that need it. Previously it would only request from the first missing layer and would then give up. This resulted in the user/application having to issue the build command multiple times rather than just once, until all layers had been requested.
2. GET /arbitrary/{service}/{name} will now block the response and retry in the background until the data arrives. This allows it to be used synchronously. Note: we'll need to add a timeout.
3. Loading a website via GET /site/{name} will avoid adding to the failed builds queue when a MissingDataException is thrown, which allows it to be quickly retried. The interface already auto refreshes, allowing the site to load as soon as it's available.
We may need to temporarily hold files for the purpose of viewing, but restrictions need to be in place to avoid these being served to peers or stored for longer than they are needed.
- If storage policy includes "FOLLOWING", only process transactions relating to the followed names.
- If storage policy is "ALL", process all transactions.
- If storage policy is "NONE" or "VIEWED", don't process or prefetch any data.
This will be used to coordinate all build processes and threads, keeping them separate from the ArbitraryDataManager class, which was getting a bit cluttered.
This causes the build to fail on the first pass due to missing chunks, however it now fails with a message indicating that it should be retried shortly. The website loader is already set up in such a way that it will be automatically retried, during which time the loading screen is shown.
Also added code to remove the resource from the "failed builds list" once the chunks arrive, so that it is able to be rebuilt sooner than the FAILURE_TIMEOUT (currently 5 minutes).
- Don't attempt to fetch data for transactions which fall outside of the storage policy
- Delete files relating to transactions that are no longer within the scope of the storage policy
Note: some additional work needs to be done to ensure that viewed files are deleted when using a storage policy that excludes "VIEWED" content.
This means that no additional structural code is required to add new lists. The only non-generic aspect are the API endpoints - it's best to keep these specific until we have a need for user-created lists.
There's no real need to maintain support for signature mapping anymore. Using this new method means that the latest version of a site is always served via the traditional domain name, whereas using transaction signatures caused older versions to be shown.
Example settings.json configuration:
"domainMapServiceEnabled": true,
"domainMapServicePort": 80,
"domainMap": [
{
"domain": "webdemo.qortal.uk",
"name": "QortalDemo"
},
{
"domain": "www.reqorder.org",
"name": "ReQorder"
}
]
This maps ARBITRARY transactions to peer addresses, but also includes additional metadata/stats to track the success rate and reachability.
Once a node receives files for a transaction, it broadcasts this info to its peers so they can update their records.
TLDR: this allows us to locate peers that are hosting a copy of the file we need.
This ensures that only the owner of a name is able to update data associated with that name.
Note that this doesn't take into account the ability for group members to update a resource, so this will need modifying when that feature is ultimately introduced (likely after v3.0)
We may not need to validate this at all now that we have the ability to validate the current layer, but I'll leave it as it could be useful for debugging. It is disabled by default so not an issue.
- The "diff type" is now specified per file, allowing for different diff methods in each modified file.
- Patches will only be created when both the before and after files are less than 100kiB in size.
- Patches are validated after creation, and if invalid it will fall back to including the entire file.
This has identified a bug where patching fails for files without trailing newline characters, which still needs to be fixed. Until then, it will fall back to including the entire file in these cases.
A PUT creates a new base layer meaning anything before that point is no longer needed. These files are now deleted automatically by the cleanup manager. This involved relocating a lot of the cleanup manager methods into a shared utility, so that they could be used by the arbitrary data manager. Without this, they would be fetched from the network again as soon as they were deleted.
This deletes redundant copies of data, and also converts complete files to chunks where needed. The idea being that nodes only hold chunks, since they currently are much more likely to serve a chunk to another peer than they are to serve a complete file.
It doesn't yet cleanup files that are unassociated with transactions, nor does it delete anything from the _temp folder.
This improves scalability but isn't sufficient for a long term solution. TODO: It probably makes sense to add an additional query for recent transactions only, so that they are fetched quickly.
This is needed because we want to allow brand new accounts to publish data without a fee. A similar approach to CrossChainResource.buildAtMessage(). We already require PoW on all arbitrary transactions, so no additional logic beyond this should be needed.
This adds the loadAsynchronously() method to ArbitraryDataReader, in addition to the existing loadSynchronously() method.
When requesting a website in a browser, previously the building of the resource's layers would be done synchronously in the API handler. This understandably caused many issues, so the building is now done asynchronously by a dedicated thread. A loading screen is shown in its place which auto refreshes every second until the build has completed.
It's possible that this concept will struggle in the real world if operating systems, virus scanners, etc start interfering with our file structure. Right now it is using a zero tolerance approach when checking the validity of each layer. We may choose to loosen this slightly if we encounter problems, e.g. by excluding hidden files. But for now it is best to be as strict as possible.
This decides whether to build a new state or use an existing cached state when serving a data resource. It will cache a built resource until a new transaction (i.e. layer) arrives. This drastically reduces load, and still allows for almost instant propagation of new layers.
This is used to store the transaction signature and build timestamp for each built data resource. It involved a refactor of the ArbitraryDataMetadata class to introduce a subclass for each file ("patch" and "cache"). This allows more files to be easily added later.
This defends against a missing or out-of-order transaction. If this ever fails validation, we may need to rethink the way we are requesting transactions. But in theory this shouldn't happen, given that the "last reference" field of a transaction ensures that out-of-order transactions are invalid already.
This bug was introduced now that the temp directory is contained within the data directory. Without this, it would leave it in the temp folder and then fail at a later stage.
This ensures that the temporary files are being kept with the rest of the data, rather than somewhere inappropriate such as on flash storage. It also allows the user to locate them somewhere else, such as on a dedicated drive.
This adds support for the PATCH method in addition to the existing PUT method.
Currently, a patch includes only files that have been added or modified, as well as placeholder files to indicate those that have been removed.
This is not production ready, as I am hoping to create patches on a more granular level - i.e. just the modified bytes of each file. It would also make sense to track deletions using a metadata/manifest file in a hidden folder.
It also adds early support of accessing files using a name rather than a signature or hash.
This ensures that nodes are storing unreadable files, outside of the context of Qortal. For public data, the decryption keys themselves are on-chain, included in the "secret" field of arbitrary transactions. When we introduce the concept of private data, we can simply exclude the secret key from the transaction so that only the owner can decrypt it.
When encrypting the file, I have added the 16 byte initialization vector as a prefix to the cyphertext, and it is then automatically extracted back out when decrypting. This gives us the option to encrypt more than one file with the same key, if we ever need it. Right now, we are using a unique key per file, so it's not actually needed, but it's good to have support.
Adds "name", "method", "secret", and "compression" properties. These are the foundations needed in order to handle updates, encryption, and name registration. Compression has been added so that we have the option of switching to different algorithms whilst maintaining support for existing transactions.
These combine some Qora services (SERVICE_NAME_STORAGE, SERVICE_BLOG_POST, and SERVICE_BLOG_COMMENT) with existing Qortal services (SERVICE_AUTO_UPDATE), and some new additions (SERVICE_ARBITRARY_DATA, SERVICE_WEBSITE, and SERVICE_GIT_REPOSITORY)
Previously we would ask all connected peers for the file itself, but this caused the network to be swamped when multiple peers responded with the same file.
This new approach instead asks all connected peers to send back a list of hashes for all files they have relating to a transaction signature. The requesting node then uses these lists to make separate requests for each missing file.
This is a quick solution to rebuild directory structures with missing files. This whole area of the code needs some reworking, as serving the site from a temporary folder is not a robust long term solution.
Domain names can be mapped to arbitrary transaction signatures via the node's settings, and then served over port 80 or 443. This allows Qortal hosted sites to be accessible via a traditional domain name.
Example configuration to map two domains:
"domainMapServiceEnabled": true,
"domainMapServicePort": 80,
"domainMap": [
{
"domain": "example.com",
"signature": "tEsw4kUn4ZJfPha7CotUL6BHkFPs79BwKXdY6yrf28YTpDn4KSY6ZKX3nwZCkqDF9RyXbgaVnB1rTEExY3h9CQA"
},
{
"domain": "demo.qortal.org",
"signature": "ZdBWWPMhR7AZwSu5xZm89mQEacekqkNfAimSCqFP6rQGKaGnXR9G4SWYpY5awFGfhmNBWzvRnXkWZKCsj6EMgc8"
}
]
Each domain needs to be pointed to the Qortal data node via an A record or CNAME. You can add redundant nodes by adding multiple A records for the same domain (this is known as DNS Failover).
Note that running a webserver on port 80 (or anything less than 1024) requires running the data node as root. There are workarounds to this, such as disabling privileged ports, or using a reverse proxy. I will investigate this more as time goes on, but this is okay for a proof of concept.
It's now capable of syncing chunks as well as complete files. This isn't production ready as it currently requests/receives the same file from multiple peers at once, which slows down the sync and wastes lots of bandwidth. Ideally we would find an appropriate peer first and then sync the file from them.
This introduces the hash58 property, which stores the base58 hash of the file passed in at initialization. It leaves digest() and digest58() for when we need to compute a new hash from the file itself.
Until now it wasn't possible to set up a chain with zero transaction fees due to a hardcoded zero check in Payment.isValid(), and a divide by zero error in Transaction.hasMinimumFeePerByte()
- Adds support for files up to 500MiB per transaction (at 2MiB chunk sizes). Previously, the max data size was 4000 bytes.
- Adds a nonce, giving us the option to remove the transaction fees altogether on the data chain.
These features become enabled in version 5 of arbitrary transactions.
This is probably the most efficient way to process the data on the fly, but it's still not very scalable. A better approach would be to pre-process the HTML when building the file structure, and then serve them completely statically (i.e. using a standard webserver rather than via application memory). But it makes sense to keep it this way for development and maybe early beta testing.
This can be used to preview a site before signing a transaction and announcing it to the network. The response will need reworking to return JSON (along with most of the other new APIs)
This fixes an NPE when trying to send a file that doesn't exist. It also removes the caching, which we can add again later if it turns out to be needed.
This deletes a file referenced by a user supplied SHA256 digest string (which we will use as the file's "ID" in the Qortal data system). In the future this could be extended to delete all associated chunks, but first we need to build out the data chain so we have a way to look up chunks associated with a file hash.
Block.calcKeyDistance() cannot be called on some trimmed blocks, because the minter level is unable to be inferred in some cases. This generally hasn't been an issue, but the new Block.logDebugInfo() method is invoking it for all blocks. For now I am adding defensiveness to the debug method, but longer term we might want to add defensiveness to Block.calcKeyDistance() itself, if we ever encounter this issue again. I will leave it alone for now, to reduce risk.
# Conflicts:
# pom.xml
# src/main/java/org/qortal/controller/Synchronizer.java
Removed all fast sync code from Controller.syncToPeerChain(), so it is now the same as `master`.
Currently set to 1, as serialization of the BlocksMessage data on mainnet is too slow to use this for any significant number of blocks right now. Hopefully we can find a way to optimise this process, which will allow us to use this for multiple block syncing.
Until then, sticking with single blocks should still be enough to help solve the network congestion and re-orgs we are seeing, because it gives us the ability to request the next block based on the previous block's signature, which was unavailable using GET_BLOCK. This removes the requirement to fetch all block signatures upfront, and therefore it shouldn't matter if the peer does a partial re-org whilst a node is syncing to it.
If fast syncing is enabled in the settings (by default it's disabled) AND the peer is running at least v1.5.0, then it will route through to a new method which fetches multiple blocks at a time, in a very similar way to Synchronizer.syncToPeerChain().
If fast syncing is disabled in the settings, or we are communicating with a peer on an older version, it will continue to sync blocks individually.
When communicating with a peer that is running at least version 1.5.0, it will now sync multiple blocks at once in Synchronizer.syncToPeerChain(). This allows us to bypass the single block syncing (and retry mechanism), which has proven to be unviable when there are multiple active forks with several blocks in each chain.
For peers below v1.5.0, the logic should remain unaffected and it will continue to sync blocks individually.
Q-Apps are static web apps written in javascript, HTML, CSS, and other static assets. The key difference between a Q-App and a fully static site is its ability to interact with both the logged-in user and on-chain data. This is achieved using the API described in this document.
# Section 0: Basic QDN concepts
## Introduction to QDN resources
Each published item on QDN (Qortal Data Network) is referred to as a "resource". A resource could contain anything from a few characters of text, to a multi-layered directory structure containing thousands of files.
Resources are registered on-chain, however the data payload is generally stored off-chain, and verified using an on-chain SHA-256 hash.
To publish a resource, a user must first register a name, and then include that name when publishing the data. Accounts without a registered name are unable to publish to QDN from a Q-App at this time.
Owning the name grants update privileges to the data. If that name is later sold or transferred, the permission to update that resource is moved to the new owner.
## Name, service & identifier
Each QDN resource has 3 important fields:
- `name` - the registered name of the account that is publishing the data (which will hold update/edit privileges going forwards).<br/><br/>
- `service` - the type of content (e.g. IMAGE or JSON). Different services have different validation rules. See [list of available services](#services).<br/><br/>
- `identifier` - an optional string to allow more than one resource to exist for a given name/service combination. For example, the name `QortalDemo` may wish to publish multiple images. This can be achieved by using a different identifier string for each. The identifier is only unique to the name in question, and so it doesn't matter if another name is using the same service and identifier string.
## Shared identifiers
Since an identifier can be used by multiple names, this can be used to the advantage of Q-App developers as it allows for data to be stored in a deterministic location.
An example of this is the user's avatar. This will always be published with service `THUMBNAIL` and identifier `qortal_avatar`, along with the user's name. So, an app can display the avatar of a user just by specifying their name when requesting the data. The same applies when publishing data.
## "Default" resources
A "default" resource refers to one without an identifier. For example, when a website is published via the UI, it will use the user's name and the service `WEBSITE`. These do not have an identifier, and are therefore the "default" website for this name. When requesting or publishing data without an identifier, apps can either omit the `identifier` key entirely, or include `"identifier": "default"` to indicate that the resource(s) being queried or published do not have an identifier.
<a name="services"></a>
## Available service types
Here is a list of currently available services that can be used in Q-Apps:
### Public services ###
The services below are intended to be used for publicly accessible data.
IMAGE,
THUMBNAIL,
VIDEO,
AUDIO,
PODCAST,
VOICE,
ARBITRARY_DATA,
JSON,
DOCUMENT,
LIST,
PLAYLIST,
METADATA,
BLOG,
BLOG_POST,
BLOG_COMMENT,
GIF_REPOSITORY,
ATTACHMENT,
FILE,
FILES,
CHAIN_DATA,
STORE,
PRODUCT,
OFFER,
COUPON,
CODE,
PLUGIN,
EXTENSION,
GAME,
ITEM,
NFT,
DATABASE,
SNAPSHOT,
COMMENT,
CHAIN_COMMENT,
WEBSITE,
APP,
QCHAT_ATTACHMENT,
QCHAT_IMAGE,
QCHAT_AUDIO,
QCHAT_VOICE
### Private services ###
For the services below, data is encrypted for a single recipient, and can only be decrypted using the private key of the recipient's wallet.
QCHAT_ATTACHMENT_PRIVATE,
ATTACHMENT_PRIVATE,
FILE_PRIVATE,
IMAGE_PRIVATE,
VIDEO_PRIVATE,
AUDIO_PRIVATE,
VOICE_PRIVATE,
DOCUMENT_PRIVATE,
MAIL_PRIVATE,
MESSAGE_PRIVATE
## Single vs multi-file resources
Some resources, such as those published with the `IMAGE` or `JSON` service, consist of a single file or piece of data (the image or the JSON string). This is the most common type of QDN resource, especially in the context of Q-Apps. These can be published by supplying a base64-encoded string containing the data.
Other resources, such as those published with the `WEBSITE`, `APP`, or `GIF_REPOSITORY` service, can contain multiple files and directories. Publishing these kinds of files is not yet available for Q-Apps, however it is possible to retrieve multi-file resources that are already published. When retrieving this data (via FETCH_QDN_RESOURCE), a `filepath` must be included to indicate the file that you would like to retrieve. There is no need to specify a filepath for single file resources, as these will automatically return the contents of the single file.
## App-specific data
Some apps may want to make all QDN data for a particular service available. However, others may prefer to only deal with data that has been published by their app (if a specific format/schema is being used for instance).
Identifiers can be used to allow app developers to locate data that has been published by their app. The recommended approach for this is to use the app name as a prefix on all identifiers when publishing data.
For instance, an app called `MyApp` could allow users to publish JSON data. The app could choose to prefix all identifiers with the string `myapp_`, and then use a random string for each published resource (resulting in identifiers such as `myapp_qR5ndZ8v`). Then, to locate data that has potentially been published by users of MyApp, it can later search the QDN database for items with `"service": "JSON"` and `"identifier": "myapp_"`. The SEARCH_QDN_RESOURCES action has a `prefix` option in order to match identifiers beginning with the supplied string.
Note that QDN is a permissionless system, and therefore it's not possible to verify that a resource was actually published by the app. It is recommended that apps validate the contents of the resource to ensure it is formatted correctly, instead of making assumptions.
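As a sketch, a lookup for the hypothetical `MyApp` example above could look like this (the field names follow the SEARCH_QDN_RESOURCES examples later in this document):
```
let res = await qortalRequest({
    action: "SEARCH_QDN_RESOURCES",
    service: "JSON",
    identifier: "myapp_", // the app's identifier prefix
    prefix: true, // match identifiers beginning with "myapp_"
    limit: 100,
    offset: 0
});
```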
## Updating a resource
To update a resource, it can be overwritten by publishing with the same `name`, `service`, and `identifier` combination. Note that the authenticated account must currently own the name in order to publish an update.
## Routing
If a non-existent `filepath` is accessed, the default behaviour of QDN is to return a `404: File not found` error. This includes anything published using the `WEBSITE` service.
However, routing is handled differently for anything published using the `APP` service.
For apps, QDN automatically sends all unhandled requests to the index file (generally index.html). This allows the app to use custom routing, as it is able to listen on any path. If a file exists at a path, the file itself will be served, and so the request won't be sent to the index file.
It's recommended that all apps return a 404 page if a request isn't able to be routed.
# Section 1: Simple links and image loading via HTML
## Section 1a: Linking to other QDN websites / resources
The `qortal://` protocol can be used to access QDN data from within Qortal websites and apps. The basic format is as follows:
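For example, to link to the root of the `QortalDemo` website used elsewhere in this document, or to a page within it (these mirror the `default` examples below, with the identifier simply omitted):
```
<a href="qortal://WEBSITE/QortalDemo">link to root of website</a>
<a href="qortal://WEBSITE/QortalDemo/minting-leveling/index.html">link to subpage of website</a>
```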
For cases where you would prefer to explicitly include an identifier (to remove ambiguity) you can use the keyword `default` to access a resource that doesn't have an identifier. For instance:
```
<a href="qortal://WEBSITE/QortalDemo/default">link to root of website</a>
<a href="qortal://WEBSITE/QortalDemo/default/minting-leveling/index.html">link to subpage of website</a>
```
## Section 1b: Linking to other QDN images
The same applies for images, such as displaying an avatar:
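The example below assumes the `qortal_avatar` convention from the "Shared identifiers" section, with the identifier used as the path segment after the name:
```
<img src="qortal://THUMBNAIL/QortalDemo/qortal_avatar" />
```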
# Section 2: Integrating a Javascript app
Javascript apps allow for much more complex integrations with Qortal's blockchain data.
## Section 2a: Direct API calls
The standard [Qortal Core API](http://localhost:12391/api-documentation) is available to websites and apps, and can be called directly using a standard AJAX request, such as:
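A minimal illustration, assuming the page is served by the local core so that a relative URL resolves to the core API (the address is the same one used in the GET_ACCOUNT_DATA example later in this document):
```
fetch("/addresses/QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2")
    .then(response => response.json())
    .then(accountData => console.log(accountData));
```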
However, this only works for read-only data, such as looking up transactions, names, balances, etc. Also, since the address of the logged in account can't be retrieved from the core, apps can't show personalized data with this approach.
## Section 2b: User interaction via qortalRequest()
To take things a step further, the qortalRequest() function can be used to interact with the user, in order to:
- Request address and public key of the logged in account
- Publish data to QDN
- Send chat messages
- Join groups
- Deploy ATs (smart contracts)
- Send QORT or any supported foreign coin
- Add/remove items from lists
In addition to the above, qortalRequest() also supports many read-only functions that are also available via direct core API calls. Using qortalRequest() helps with futureproofing, as the core APIs can be modified without breaking functionality of existing Q-Apps.
### Making a request
Qortal core will automatically inject the `qortalRequest()` javascript function to all websites/apps, which returns a Promise. This can be used to fetch data or publish data to the Qortal blockchain. This functionality supports async/await, as well as try/catch error handling.
```
async function myfunction() {
try {
let res = await qortalRequest({
action: "GET_ACCOUNT_DATA",
address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
});
console.log(JSON.stringify(res)); // Log the response to the console
} catch(e) {
console.log("Error: " + JSON.stringify(e));
}
}
myfunction();
```
### Timeouts
Request timeouts are handled automatically when using qortalRequest(). The timeout value will differ based on the action being used - see `getDefaultTimeout()` in [q-apps.js](src/main/resources/q-apps/q-apps.js) for the current values.
If a request times out it will throw an error - `The request timed out` - which can be handled by the Q-App.
It is also possible to specify a custom timeout using `qortalRequestWithTimeout(request, timeout)`, however this is discouraged. It's more reliable and futureproof to let the core handle the timeout values.
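For example (assuming the timeout is given in milliseconds):
```
let res = await qortalRequestWithTimeout({
    action: "GET_USER_ACCOUNT"
}, 10000); // wait up to 10 seconds for a response
```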
# Section 3: qortalRequest Documentation
## Supported actions
Here is a list of currently supported actions:
- GET_USER_ACCOUNT
- GET_ACCOUNT_DATA
- GET_ACCOUNT_NAMES
- SEARCH_NAMES
- GET_NAME_DATA
- LIST_QDN_RESOURCES
- SEARCH_QDN_RESOURCES
- GET_QDN_RESOURCE_STATUS
- GET_QDN_RESOURCE_PROPERTIES
- GET_QDN_RESOURCE_METADATA
- GET_QDN_RESOURCE_URL
- LINK_TO_QDN_RESOURCE
- FETCH_QDN_RESOURCE
- PUBLISH_QDN_RESOURCE
- PUBLISH_MULTIPLE_QDN_RESOURCES
- DECRYPT_DATA
- SAVE_FILE
- GET_WALLET_BALANCE
- GET_BALANCE
- SEND_COIN
- SEARCH_CHAT_MESSAGES
- SEND_CHAT_MESSAGE
- LIST_GROUPS
- JOIN_GROUP
- DEPLOY_AT
- GET_AT
- GET_AT_DATA
- LIST_ATS
- FETCH_BLOCK
- FETCH_BLOCK_RANGE
- SEARCH_TRANSACTIONS
- GET_PRICE
- GET_LIST_ITEMS
- ADD_LIST_ITEMS
- DELETE_LIST_ITEM
More functionality will be added in the future.
## Example Requests
Here are some example requests for each of the above:
### Get address of logged in account
_Will likely require user approval_
```
let account = await qortalRequest({
action: "GET_USER_ACCOUNT"
});
let address = account.address;
```
### Get public key of logged in account
_Will likely require user approval_
```
let account = await qortalRequest({
action: "GET_USER_ACCOUNT"
});
let publicKey = account.publicKey;
```
### Get account data
```
let res = await qortalRequest({
action: "GET_ACCOUNT_DATA",
address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
});
```
### Get names owned by account
```
let res = await qortalRequest({
action: "GET_ACCOUNT_NAMES",
address: "QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2"
});
```
### Search names
```
let res = await qortalRequest({
action: "SEARCH_NAMES",
query: "search query goes here",
prefix: false, // Optional - if true, only the beginning of the name is matched
query: "search query goes here", // Optional - searches both "identifier" and "name" fields
identifier: "search query goes here", // Optional - searches only the "identifier" field
name: "search query goes here", // Optional - searches only the "name" field
prefix: false, // Optional - if true, only the beginning of fields are matched in all of the above filters
exactMatchNames: true, // Optional - if true, partial name matches are excluded
default: false, // Optional - if true, only resources without identifiers are returned
mode: "LATEST", // Optional - whether to return all resources or just the latest for a name/service combination. Possible values: ALL,LATEST. Default: LATEST
minLevel: 1, // Optional - whether to filter results by minimum account level
includeStatus: false, // Optional - will take time to respond, so only request if necessary
includeMetadata: false, // Optional - will take time to respond, so only request if necessary
nameListFilter: "QApp1234Subscriptions", // Optional - will only return results if they are from a name included in supplied list
followedOnly: false, // Optional - include followed names only
// before: 1683546000000, // Optional - limit to resources created before timestamp
// after: 1683546000000, // Optional - limit to resources created after timestamp
limit: 100,
offset: 0,
reverse: true
});
```
### Search QDN resources (multiple names)
```
let res = await qortalRequest({
action: "SEARCH_QDN_RESOURCES",
service: "THUMBNAIL",
query: "search query goes here", // Optional - searches both "identifier" and "name" fields
identifier: "search query goes here", // Optional - searches only the "identifier" field
names: ["QortalDemo", "crowetic", "AlphaX"], // Optional - searches only the "name" field for any of the supplied names
prefix: false, // Optional - if true, only the beginning of fields are matched in all of the above filters
exactMatchNames: true, // Optional - if true, partial name matches are excluded
default: false, // Optional - if true, only resources without identifiers are returned
mode: "LATEST", // Optional - whether to return all resources or just the latest for a name/service combination. Possible values: ALL,LATEST. Default: LATEST
includeStatus: false, // Optional - will take time to respond, so only request if necessary
includeMetadata: false, // Optional - will take time to respond, so only request if necessary
nameListFilter: "QApp1234Subscriptions", // Optional - will only return results if they are from a name included in supplied list
followedOnly: false, // Optional - include followed names only
// before: 1683546000000, // Optional - limit to resources created before timestamp
// after: 1683546000000, // Optional - limit to resources created after timestamp
limit: 100,
offset: 0,
reverse: true
});
```
### Fetch QDN single file resource
```
let res = await qortalRequest({
action: "FETCH_QDN_RESOURCE",
name: "QortalDemo",
service: "THUMBNAIL",
identifier: "qortal_avatar", // Optional. If omitted, the default resource is returned, or you can alternatively use the keyword "default"
encoding: "base64", // Optional. If omitted, data is returned in raw form
rebuild: false
});
```
### Fetch file from multi file QDN resource
Data is returned in the base64 format
```
let res = await qortalRequest({
action: "FETCH_QDN_RESOURCE",
name: "QortalDemo",
service: "WEBSITE",
identifier: "default", // Optional. If omitted, the default resource is returned, or you can alternatively request that using the keyword "default", as shown here
filepath: "index.html", // Required only for resources containing more than one file
rebuild: false
});
```
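Because the data comes back base64-encoded, the app needs to decode it before use. A minimal sketch, assuming the request above successfully returned the contents of `index.html`:
```
// Decode the base64 response from FETCH_QDN_RESOURCE into a string
let html = atob(res); // atob() is available in the browser context that Q-Apps run in
console.log(html.substring(0, 100)); // Log the first 100 characters of the decoded file
```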
### Get QDN resource status
```
let res = await qortalRequest({
action: "GET_QDN_RESOURCE_STATUS",
name: "QortalDemo",
service: "THUMBNAIL",
identifier: "qortal_avatar", // Optional
build: true // Optional - request that the resource is fetched & built in the background
});
```
### Publish a single file to QDN
_Requires user approval_.<br/>
Note: this publishes a single, base64-encoded file. Multi-file resource publishing (such as a WEBSITE or GIF_REPOSITORY) is not yet supported via a Q-App. It will be added in a future update.
```
let res = await qortalRequest({
action: "PUBLISH_QDN_RESOURCE",
name: "Demo", // Publisher must own the registered name - use GET_ACCOUNT_NAMES for a list
service: "IMAGE",
identifier: "myapp-image1234" // Optional
data64: "base64_encoded_data",
// filename: "image.jpg", // Optional - to help apps determine the file's type
// title: "Title", // Optional
// description: "Description", // Optional
// category: "TECHNOLOGY", // Optional
// tag1: "any", // Optional
// tag2: "strings", // Optional
// tag3: "can", // Optional
// tag4: "go", // Optional
// tag5: "here", // Optional
// encrypt: true, // Optional - to be used with a private service
// recipientPublicKey: "publickeygoeshere" // Only required if `encrypt` is set to true
});
```
### Publish multiple resources at once to QDN
_Requires user approval_.<br/>
Note: each resource being published consists of a single, base64-encoded file, and each resource is published in its own transaction. This is useful for publishing two or more related things, such as a video and a video thumbnail.
```
let res = await qortalRequest({
action: "PUBLISH_MULTIPLE_QDN_RESOURCES",
resources: [
{
name: "Demo", // Publisher must own the registered name - use GET_ACCOUNT_NAMES for a list
service: "IMAGE",
identifier: "myapp-image1234", // Optional
data64: "base64_encoded_data",
// filename: "image.jpg", // Optional - to help apps determine the file's type
// title: "Title", // Optional
// description: "Description", // Optional
// category: "TECHNOLOGY", // Optional
// tag1: "any", // Optional
// tag2: "strings", // Optional
// tag3: "can", // Optional
// tag4: "go", // Optional
// tag5: "here", // Optional
// encrypt: true, // Optional - to be used with a private service
// recipientPublicKey: "publickeygoeshere" // Only required if `encrypt` is set to true
}
// Add more resource objects here to publish additional files in the same request
],
fee: 0.00000020 // Optional fee per byte (default fee used if omitted, recommended) - not used for QORT or ARRR
});
```
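For instance, an app publishing a video together with its thumbnail might build the request as sketched below. This is a hypothetical example; the name, identifiers and data values are placeholders, and it assumes the VIDEO and THUMBNAIL services are appropriate for the content being published:
```
let res = await qortalRequest({
    action: "PUBLISH_MULTIPLE_QDN_RESOURCES",
    resources: [
        {
            name: "Demo", // Placeholder - must be a name owned by the publisher
            service: "VIDEO",
            identifier: "myapp-video1234", // Hypothetical identifier
            data64: "base64_encoded_video_data"
        },
        {
            name: "Demo",
            service: "THUMBNAIL",
            identifier: "myapp-video1234-thumbnail", // Hypothetical identifier
            data64: "base64_encoded_thumbnail_data"
        }
    ]
});
```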
### Search or list chat messages
```
let res = await qortalRequest({
action: "SEARCH_CHAT_MESSAGES",
before: 999999999999999,
after: 0,
txGroupId: 0, // Optional (must specify either txGroupId or two involving addresses)
// involving: ["QZLJV7wbaFyxaoZQsjm6rb9MWMiDzWsqM2", "QSefrppsDCsZebcwrqiM1gNbWq7YMDXtG2"], // Optional (must specify either txGroupId or two involving addresses)
// reference: "reference", // Optional
// chatReference: "chatreference", // Optional
// hasChatReference: true, // Optional
encoding: "BASE64", // Optional (defaults to BASE58 if omitted)
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "THUMBNAIL",
name: "QortalDemo",
identifier: "qortal_avatar"
// path: "filename.jpg" // optional - not needed if resource contains only one file
});
```
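The returned URL can then be used by the app directly. For example, a minimal sketch (assuming the page contains a hypothetical `<img id="avatar">` element) would set it as the image source:
```
// Use the URL returned by GET_QDN_RESOURCE_URL as the source of an image element
document.getElementById("avatar").src = url; // "avatar" is a hypothetical element id
```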
### Get URL to load a QDN website
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Get URL to load a specific file from a QDN website
Note: this returns a "Resource does not exist" error if a non-existent resource is requested.
```
let url = await qortalRequest({
action: "GET_QDN_RESOURCE_URL",
service: "WEBSITE",
name: "AlphaX",
path: "/assets/img/logo.png"
});
```
### Link/redirect to another QDN website
Note: an alternate method is to include `<a href="qortal://WEBSITE/QortalDemo">link text</a>` within your HTML code.
```
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
});
```
### Link/redirect to a specific path of another QDN website
Note: an alternate method is to include `<a href="qortal://WEBSITE/QortalDemo/minting-leveling/index.html">link text</a>` within your HTML code.
```
let res = await qortalRequest({
action: "LINK_TO_QDN_RESOURCE",
service: "WEBSITE",
name: "QortalDemo",
path: "/minting-leveling/index.html"
});
```
### Get the contents of a list
_Requires user approval_
```
let res = await qortalRequest({
action: "GET_LIST_ITEMS",
list_name: "followedNames"
});
```
### Add one or more items to a list
_Requires user approval_
```
let res = await qortalRequest({
action: "ADD_LIST_ITEMS",
list_name: "blockedNames",
items: ["QortalDemo"]
});
```
### Delete a single item from a list
_Requires user approval_.
Items must be deleted one at a time.
```
let res = await qortalRequest({
action: "DELETE_LIST_ITEM",
list_name: "blockedNames",
item: "QortalDemo"
});
```
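Since items can only be deleted one at a time, an app that needs to remove several entries can simply loop over them and issue one request per item. A minimal sketch, with placeholder list contents:
```
// Hypothetical helper: remove several items from a list, one DELETE_LIST_ITEM request per item
async function deleteListItems(listName, items) {
    for (const item of items) {
        await qortalRequest({
            action: "DELETE_LIST_ITEM",
            list_name: listName,
            item: item
        });
    }
}

deleteListItems("blockedNames", ["QortalDemo", "AnotherBlockedName"]); // "AnotherBlockedName" is a placeholder
```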
# Section 4: Examples
Some example projects can be found [here](https://github.com/Qortal/Q-Apps). These can be cloned and modified, or used as a reference when creating a new app.
## Sample App
Here is a sample application to display the logged-in user's avatar:
```
<html>
<head>
<script>
async function showAvatar() {
try {
// Get QORT address of logged in account
let account = await qortalRequest({
action: "GET_USER_ACCOUNT"
});
let address = account.address;
console.log("address: " + address);
// Get names owned by this account
let names = await qortalRequest({
action: "GET_ACCOUNT_NAMES",
address: address
});
console.log("names: " + JSON.stringify(names));
if (names.length == 0) {
console.log("User has no registered names");
return;
}
// Download base64-encoded avatar of the first registered name
let avatar = await qortalRequest({
action: "FETCH_QDN_RESOURCE",
name: names[0].name,
service: "THUMBNAIL",
identifier: "qortal_avatar",
encoding: "base64"
});
console.log("Avatar size: " + avatar.length + " bytes");
// Display the avatar image on the screen
document.getElementsByTagName("img")[0].src = "data:image/png;base64," + avatar;
} catch(e) {
console.log("Error: " + JSON.stringify(e));
}
}
</script>
</head>
<body onload="showAvatar()">
<img width="500" />
</body>
</html>
```
# Section 5: Testing and Development
Publishing an in-development app to mainnet isn't recommended. There are several options for developing and testing a Q-app before publishing to mainnet:
### Preview mode
Select "Preview" in the UI after choosing the zip. This allows for full Q-App testing without the need to publish any data.
### Testnets
For an end-to-end test of Q-App publishing, you can use the official testnet, or set up a single node testnet of your own (often referred to as devnet) on your local machine. See [Single Node Testnet Quick Start Guide](testnet/README.md#quick-start).
### Debugging
It is recommended that you develop and test in a web browser, to allow access to the JavaScript console. To do this:
1. Open the UI app, then minimise it.
2. In a Chromium-based web browser, visit: http://localhost:12388/
3. Log in to your account and then preview your app/website.
4. Go to `View > Developer > JavaScript Console`. Here you can monitor console logs, errors, and network requests from your app, in the same way as any other web-app.