Compare commits

...

132 Commits

Author SHA1 Message Date
aeab1acbbc Bump version to 4.7.1 2025-04-08 20:13:43 -07:00
crowetic
0b37666d2b
Merge pull request #250 from kennycud/master
Merging current 'test release' from kennycud repo after extensive testing by the community.
2025-04-08 08:34:07 -07:00
kennycud
bcf3538d18 add cache enabled to true by default 2025-04-07 12:19:56 -07:00
kennycud
b2d9d0539e removed cache orphaning, crowetic and I agree it should have never been added to begin with 2025-04-05 11:42:21 -07:00
kennycud
1bd6076e33 forgot IndexCache.java in the last commit
replaced index service attribute with a category attribute and reduced index attribute names to single characters to reduce memory footprint: t is for term, n is for name, c is for category, l is for link

changed default indexing frequency from 1 minute to 10 minutes to reduce memory use

added arbitrary resource endpoint for index search by issuer name and index prefix

added some additional error handling concerning unrecognized properties in the indices
2025-04-03 10:23:58 -07:00
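A minimal sketch of what one index entry could look like under this scheme (a hypothetical illustration; only the single-character field names are taken from the commit message):

// Hypothetical sketch only; field meanings from the commit message above:
// t = term, n = name, c = category, l = link.
public class IndexEntrySketch {
    public String t; // term the entry is indexed under
    public String n; // registered name the entry points at
    public String c; // category of the indexed resource
    public String l; // link to the indexed resource
}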
kennycud
a6309e925b replaced index service attribute with a category attribute and reduced index attribute names to single characters to reduce memory footprint: t is for term, n is for name, c is for category, l is for link
changed default indexing frequency from 1 minute to 10 minutes to reduce memory use

added arbitrary resource endpoint for index search by issuer name and index prefix

added some additional error handling concerning unrecognized properties in the indices
2025-04-03 10:18:45 -07:00
kennycud
23de8a98bc removed logging 2025-03-21 18:59:32 -07:00
kennycud
d0a85d4717 QDN bug resolution 2025-03-21 18:44:41 -07:00
kennycud
a893888a2e reduced logging level for invalid formatting 2025-03-21 18:43:06 -07:00
kennycud
bd4472c2c0
Merge pull request #5 from Philreact/feature/search-keywords
added keywords to qortalRequest
2025-03-19 17:37:14 -07:00
kennycud
10dda255e2 added arbitrary resource rebuild timer task 2025-03-16 18:49:28 -07:00
kennycud
934c23402a added logging, so we can better understand the exception thrown 2025-03-16 18:47:59 -07:00
kennycud
4188f18a9a added error handling 2025-03-16 18:46:38 -07:00
kennycud
e48fd96c1e nullified impossible time constraints 2025-03-14 14:15:18 -07:00
kennycud
e76694e214 implemented before and after filtering 2025-03-13 13:46:17 -07:00
kennycud
dbf49309ec added some critical exception handling for arbitrary data indexing support 2025-03-12 14:24:23 -07:00
kennycud
ab4730cef0 initial implementation of arbitrary data indexing support 2025-03-12 11:21:57 -07:00
kennycud
7f3c1d553f removed name based arbitrary resource storage capacity limits and added arbitrary resource cache rebuild logging verbosity 2025-03-10 15:17:04 -07:00
ab0ef85458 added keywords to SEARCH_QDN_RESOURCES 2025-03-08 20:43:34 +02:00
b64674783a Merge remote-tracking branch 'kenny/master' into feature/search-keywords 2025-03-08 20:19:56 +02:00
kennycud
92fb52220a
Merge pull request #4 from Philreact/feature/search-keywords
Feature/search keywords
2025-03-06 07:10:26 -08:00
kennycud
2d0bdca8dc
Merge pull request #3 from Philreact/bugfix/get-qdn-resource-metadata
fix var bug for GET_QDN_RESOURCE_PROPERTIES
2025-03-06 07:06:43 -08:00
2e9f358d0b changed to list and added to cache 2025-03-06 16:10:30 +02:00
6a6380e9e7 Merge remote-tracking branch 'kenny/master' into feature/search-keywords 2025-03-06 14:02:36 +02:00
kennycud
11c2d1bd14 a solution for the metadata and status members getting nullified in the cache 2025-03-05 18:47:01 -08:00
1d79df078e Merge remote-tracking branch 'kenny/master' into feature/search-keywords 2025-03-05 21:14:47 +02:00
kennycud
4baafd1305 more arbitrary data optimizations, including the arbitrary resources cache rebuild and a setting to support it; added and removed notifications; added a method to the arbitrary repository; also removed an unnecessary setting that was added in the last commit 2025-03-03 10:37:39 -08:00
f8cee2e0b7 Merge remote-tracking branch 'kenny/master' into feature/search-keywords 2025-02-27 18:36:00 +02:00
kennycud
676885ea2d optimized arbitrary metadata fetching, added arbitrary data cache manager notifications, removed redundant notifications, added method to arbitrary repository and a setting to support the optimization 2025-02-24 16:36:13 -08:00
kennycud
1f4ca6263f data monitor initial implementation 2025-02-19 17:18:05 -08:00
kennycud
df37372180 trade ledger export implementation, completed trades bug fix 2025-02-11 18:45:57 -08:00
086b0809d6 remove log 2025-02-08 22:20:03 +02:00
33650cc432 when the path is render/hash do not save path for nav history 2025-02-05 15:11:48 +02:00
c22abc440b change label 2025-02-04 18:02:56 +02:00
258eb3a0f9 added keywords query for arbitrary resource search 2025-02-04 15:42:25 +02:00
kennycud
91ceafe0e3 supporting multiple minting groups instead of supporting one and only one minting group 2025-02-03 18:19:56 -08:00
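Both helpers involved appear verbatim in the Account.java diff further down; a minimal sketch of the membership check once multiple minting groups are supported:

// From the Account.java changes below: membership in any one of the
// configured minting groups now satisfies the minter-group check.
List<Integer> groupIdsToMint = Groups.getGroupIdsToMint(BlockChain.getInstance(), blockchainHeight);
boolean isMember = Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress);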
kennycud
9017db725e Merge remote-tracking branch 'origin/master' 2025-02-01 18:44:17 -08:00
kennycud
a42f214358 invite orphan vulnerability patch, detailed test case coming in a commit soon 2025-02-01 18:43:48 -08:00
ecd4233dd0 fix fetch block qortalRequest 2025-01-26 00:13:58 +02:00
e5b6e893cd GET_AT missing a slash 2025-01-24 21:30:55 +02:00
9e45d640bc fix var bug 2025-01-23 23:52:39 +02:00
crowetic
faee7c8f6a
Merge pull request #247 from crowetic/master
push featureTrigger blocks back a bit to give more time to prepare+sign auto-update
2025-01-21 19:26:47 -08:00
ca238c995e push featureTrigger blocks back a bit to give more time for auto-update. 2025-01-21 19:12:17 -08:00
e434a28d00 Merge remote-tracking branch 'origin/master' 2025-01-21 19:11:19 -08:00
996d5e0e86 push featureTrigger blocks back a bit to give more time for auto-update. 2025-01-21 19:10:06 -08:00
8b797b5bd5 push featureTrigger blocks back a bit to give more time for auto-update. 2025-01-21 19:05:57 -08:00
crowetic
999cfafe00
Merge pull request #246 from crowetic/master
updates/fixes to publish-auto-update.pl
2025-01-21 18:24:13 -08:00
4991618f19 updates/fixes to publish-auto-update.pl 2025-01-21 18:22:25 -08:00
crowetic
4c35239bb1
Merge pull request #245 from crowetic/master
bump version to 4.7.0 and set featureTrigger block heights
2025-01-21 18:10:13 -08:00
d6cf45b311 bump version to 4.7.0 and set featureTrigger block heights 2025-01-21 18:07:25 -08:00
crowetic
ea9a24dca2
Merge pull request #244 from kennycud/master
Balance Recorder & Hard Forks
2025-01-21 17:35:26 -08:00
kennycud
72f0194487 get admin query fix and hardfork 2025-01-17 19:31:13 -08:00
kennycud
b2dbcbb603 made adjustments to support the ignore level feature trigger and removed the fail-safe feature trigger since the ignore level feature trigger now satisfies it implicitly 2025-01-13 13:52:17 -08:00
kennycud
69cba78d94 exclude blocked implementation completion 2025-01-11 19:01:13 -08:00
kennycud
70f4ff4fb3 ignore level for reward share feature hard fork 2025-01-11 18:20:28 -08:00
kennycud
a8a8904ebf removed the NULL account from the dev admin reward distribution and added some fail safes in case the admin groups are empty 2025-01-08 16:19:38 -08:00
kennycud
2805bb8364 corrected an arithmetic error 2025-01-07 13:20:39 -08:00
kennycud
d9a7648d36 access to decoded online accounts by block 2025-01-05 15:59:09 -08:00
kennycud
2392b7b155 system info and database connection status access 2025-01-05 13:49:31 -08:00
kennycud
f5d338435a Since the Groups table is now quoted with backticks, its name is case-sensitive, so it needs to be in all caps: when other SQL statements reference Groups without backticks or quotes, the unquoted name is automatically converted to capital letters and still matches. 2025-01-02 18:10:25 -08:00
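A minimal sketch of the identifier folding described here (table and column names are illustrative, not the Qortal schema):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class GroupsFoldingSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:sketch");
             Statement stmt = conn.createStatement()) {
            // A quoted identifier is stored exactly as written and is case-sensitive,
            // so the table is created in all caps to stay reachable from unquoted SQL:
            stmt.execute("CREATE TABLE \"GROUPS\" (group_id INT)");
            // HSQLDB folds unquoted identifiers to upper case before lookup; on
            // releases where GROUPS is a reserved word, the quoted form is required.
            stmt.execute("INSERT INTO \"GROUPS\" VALUES (1)");
        }
    }
}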
kennycud
8f6b55a98b roll back the Groups table backticks, because this only works in my testing environment and causes problems in production 2024-12-31 13:57:39 -08:00
kennycud
278243f01c roll back the negation of founder effective minting level; I made it under the assumption that it was used for reward distributions, when it is used for block signatures only 2024-12-31 13:54:20 -08:00
kennycud
756f3a243d negate founder effective minting level for admins replace founders hardfork 2024-12-30 18:36:44 -08:00
kennycud
950c4a5b35 Merge remote-tracking branch 'origin/master' 2024-12-30 16:06:24 -08:00
kennycud
ebc58c5c5c qualified the Groups table name so it will be compatible with the updated HSQLDB release, which uses Groups as a reserved word 2024-12-30 16:01:53 -08:00
kennycud
8bbb994876
Merge pull request #2 from Philreact/master
added seller/buyer to filter completed trades
2024-12-30 12:19:01 -08:00
kennycud
c2ba9d142c crowetic's logging suggestions for the new reward distribution update 2024-12-30 12:15:27 -08:00
kennycud
a300ac2393 added capabilities for groups with null ownership including banning and kicking members and member ban cancellations; enforcing group approval thresholds to invites and invite cancellations; the established add and remove admin capabilities were used as guidance for this implementation; this was added as a hardfork to preserve group transactions from previous blocks 2024-12-29 18:08:04 -08:00
kennycud
bdbbd0152f updated the hard fork heights for the test chain 2024-12-28 14:01:01 -08:00
kennycud
45d88c1bac Admin share typo fix and new test case submission. 2024-12-26 14:40:44 -08:00
kennycud
3952705edd Admin replace founders hardfork and online validation fail-safe hardfork. 2024-12-26 13:53:00 -08:00
kennycud
4f0aabfb36 For Balance Recorder, reward recordings only, that is the default. 2024-12-25 13:24:24 -08:00
5ac0027b5a fix css for qdn resource loading 2024-12-25 09:16:35 +02:00
e9b75b051b added seller/buyer to filter completed trades 2024-12-24 14:39:31 +02:00
kennycud
c71f5fa8bf added another logging line to troubleshoot QDN problem 2024-12-13 15:21:51 -08:00
kennycud
5e145de52b Balance Recorder initial implementation. 2024-12-12 13:46:18 -08:00
kennycud
543d0a7d22 Merge remote-tracking branch 'origin/master' 2024-12-10 14:07:32 -08:00
kennycud
5346c97922 added logging to help solve the updated-field problem: the updated field is not getting updated 2024-12-10 14:07:11 -08:00
crowetic
c2bfa26376
Merge pull request #242 from crowetic/master
Bump version to 4.6.6 and other changes
2024-12-06 10:48:37 -08:00
crowetic
386387fa16 Added modifications to current Windows Installer build in preparation for 4.6.6 release
modified AdvancedInstaller settings, created new installer visual settings, and included the logo used for them. Modified the Readme file to include additional instructions.
2024-12-05 20:18:52 -08:00
071325cf6d Bump version to 4.6.6 to prepare for update, modified auto-update repos settings to plan for removal of reliance upon GitHub, increased maxPeerConnectionTime to 6 hours instead of 4, and set default minPeerVersion to 4.6.5. 2024-12-05 20:13:32 -08:00
a23eb02182 Revert "modified autoUpdateRepos further to plan ahead."
This reverts commit 04203e7c31c943d48c96097615d32bc67e318d47.
2024-12-05 20:08:24 -08:00
04203e7c31 modified autoUpdateRepos further to plan ahead. 2024-12-05 20:00:31 -08:00
crowetic
749143e98e
Merge pull request #241 from crowetic/master
Selective acceptance of recent PRs to Qortal master branch, and updated start.sh script. See description for details.
2024-12-04 14:12:04 -08:00
9b20192b30 Changes need to be reverted prior to the PR from crowetic repo being merged. All of these changes, aside from those in the 'network' folder, will be re-applied with crowetic's PR.
Revert "Various changes"

This reverts commit adbba0f94767cda6251668c5206015dfccb44941.
2024-12-04 14:08:25 -08:00
8d6830135c Changes need to be reverted prior to new PR from crowetic repo.
Revert "Update dependencies"

This reverts commit e3a85786e7ebd7fa78f8bf711ee5493e136fc149.
2024-12-04 14:07:43 -08:00
448b536238 Modified start script to work with the optimized Garbage Collection made available in version 4.6.6 and beyond. For machines with 6GB of RAM or less, the suggestion is to increase the percentage from 50 to 75. Qortal Core will only utilize the RAM it needs, up to the percentage set as the maximum. 2024-12-03 09:09:42 -08:00
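The start.sh contents themselves are not shown in this compare view; a sketch of the kind of JVM flags implied, assuming the 'percentage' refers to the standard HotSpot MaxRAMPercentage option:

// Sketch only -- the actual start.sh is not included in this diff.
// G1GC with a percentage-of-RAM cap (standard HotSpot flags):
//   java -XX:+UseG1GC -XX:MaxRAMPercentage=50 -jar qortal.jar
// On machines with 6GB of RAM or less, raise the cap as suggested above:
//   java -XX:+UseG1GC -XX:MaxRAMPercentage=75 -jar qortal.jar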
2e989aaa57 A merge of just alpha's validation changes, phil and quick's commits, and kenny's changes to test. 2024-12-03 08:29:53 -08:00
crowetic
8bd293ccd5
Merge pull request #217 from QuickMythril/4.6.2-unit-test-fix
Fix for more unit tests fails
2024-12-02 14:44:16 -08:00
crowetic
a8d73926b3
Merge pull request #238 from AlphaX-Qortal/master
Added real address to API results - Currently, the address shown in the API results when querying blocks is formed from the 'reward share public key'. This address is not useful for viewing, as it is not the address used for QORT. This change displays the 'real' Qortal address instead.

Added group member check to validations - validation fixes.

Network changes - Moved unnecessary 'we already have connection' messages from info logging to debug. Updated minPeerVersion default to the current release version (4.6.5). Updated default peer list. Updated syntax. Updated formatting.

Updated dependencies

Thanks @AlphaX-Qortal
2024-12-02 14:42:34 -08:00
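The lookup this PR introduces is visible later in this diff as Account.getRewardShareMintingAddress; in essence:

// Resolve the reward-share public key to the underlying minter's real address
// (types and calls as in the Account.java diff below):
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(rewardSharePublicKey);
String minterAddress = (rewardShareData == null) ? "Unknown" : rewardShareData.getMinter();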
crowetic
bd214a0fd3
Merge pull request #220 from Qortal/master2
adjust timeouts for qortalrequests
2024-12-02 14:28:53 -08:00
crowetic
2347118e59
Merge pull request #239 from kennycud/master
Restructuring database connections for better garbage collection - resolves a long-standing memory leak in multiple places, discovered after crashed threads were made to restart. Thanks so much to @kennycud for this improvement!
2024-12-02 14:26:43 -08:00
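The connection-handling idiom visible throughout this diff is try-with-resources, which returns the pooled connection even when an exception is thrown; a minimal sketch:

try (final Repository repository = RepositoryManager.getRepository()) {
    // use the repository; the underlying pooled connection is released
    // when this block exits, even if a DataException is thrown
} catch (DataException e) {
    // handle or rethrow
}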
crowetic
7fb093e98a
Merge pull request #237 from Philreact/active-chat-haschatreference
add haschatreference query to activechats endpoint
2024-12-02 14:24:45 -08:00
AlphaX-Qortal
e3a85786e7 Update dependencies 2024-12-02 15:06:46 +01:00
AlphaX-Qortal
adbba0f947 Various changes
- Added real address to API results
- Added group member check to validations
- Network changes
2024-12-02 14:22:05 +01:00
61dec0e4b7 add haschatreference query to activechats endpoint 2024-12-01 12:38:38 +02:00
kennycud
08a2284ce4 deleting file that interferes with building the last commit 2024-11-27 18:06:32 -08:00
kennycud
2e3f97b51f Merge remote-tracking branch 'origin/master' 2024-11-27 17:43:51 -08:00
kennycud
84b973773a restructuring database connections for better garbage collection, adding in the initial implementation of the balance recorder 2024-11-27 17:43:18 -08:00
AlphaX
8ffb0625a1
Bump version to 4.6.5 2024-11-26 23:27:35 +01:00
AlphaX
2ce02faa07
Bump version to 4.6.4 2024-11-26 19:42:13 +01:00
AlphaX
89999e6b33
Set feature trigger 2024-11-26 19:41:15 +01:00
AlphaX
4d28ba692d
Update minimum peer version 2024-11-26 19:34:45 +01:00
AlphaX
cd6d7a3a98
Merge pull request #223 from AlphaX-Qortal/master
Set peer connect to a dedicated thread pool for non-blocking I/O (Thanks to RAZ)
2024-11-26 12:34:06 +01:00
AlphaX-Qortal
0a44928e93 Set peer connect to a dedicated thread pool for non-blocking I/O (Thanks to RAZ) 2024-11-26 11:05:46 +01:00
AlphaX
4b037ad13f
Merge pull request #222 from AlphaX-Qortal/master
Fix batch reward
2024-11-26 07:51:06 +01:00
crowetic
1f9a2edca4
Merge pull request #221 from kennycud/master
Minter Group Check Optimizations - These have been tested by 50+ nodes for multiple days. The only thing we have to verify prior to merging the upcoming changes from Alpha is to validate the additional boolean passed into canMint on line 1521 of the current Block.java (isMinterValid).
2024-11-25 18:01:13 -08:00
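A sketch of the call in question (the isMinterValid flag and the line reference come from the PR note above; the group helper appears in the Account.java diff below, and the surrounding variable names are assumptions):

// Pre-computed group validation, as described in the PR note:
boolean isMinterValid = Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, minterAddress);
// Passing true lets canMint skip repeating the minter-group lookup:
if (mintingAccount.canMint(isMinterValid)) {
    // account is eligible to mint
}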
AlphaX-Qortal
c010ab47db Fix batch reward 2024-11-26 00:03:04 +01:00
7803d6c8f5 adjust timeouts for qortalrequests 2024-11-25 09:36:11 +02:00
kennycud
b0d43a1890 minter group check optimizations 2024-11-20 19:12:21 -08:00
kennycud
f277611d31 Merge branch 'master' of https://github.com/kennycud/qortal
 Conflicts:
	src/main/java/org/qortal/account/Account.java
2024-11-20 15:40:11 -08:00
AlphaX
d89f7ad41d
Bump version to 4.6.3 2024-11-20 19:50:14 +01:00
AlphaX
39cc56c4d8
Update minimum peer version 2024-11-20 19:49:17 +01:00
AlphaX
fccd5a7c97
Merge pull request #219 from AlphaX-Qortal/master
Update canMint and HSQLDB
2024-11-20 19:45:18 +01:00
AlphaX-Qortal
46395bf4dc Update canMint and HSQLDB 2024-11-20 19:35:47 +01:00
AlphaX
0eb551acc1
Merge pull request #214 from Philreact/master2
add connect-src to csp
2024-11-20 01:22:00 +01:00
kennycud
f55efe38c5 Removed logging statements to demonstrate order of operations to others. Added optimizations for the canMint() method. This is a quick fix and a more comprehensive fix will be done in the future. 2024-11-18 15:09:43 -08:00
kennycud
130bb6cf50 Added logging statements to demonstrate order of operations. This will be removed ASAP and should not be included in a PR. 2024-11-17 17:17:00 -08:00
QuickMythril
652c902607 Add missing feature triggers to unit tests 2024-11-17 16:45:39 -05:00
QuickMythril
915bb1ded3
Merge pull request #74 from QuickMythril/4.6.1-unit-test-fix
4.6.1 unit test fix
2024-11-17 13:50:06 -05:00
AlphaX
8319193453
Bump version to 4.6.2 2024-11-17 18:48:32 +01:00
AlphaX
831ed72e56
Update minimum peer version 2024-11-17 18:47:06 +01:00
AlphaX
885133195e
Set timestamps 2024-11-17 18:44:01 +01:00
crowetic
c45d59b389
Merge pull request #216 from AlphaX-Qortal/master
Removed name check and decreased difficulty for online signature calculation
2024-11-17 09:40:35 -08:00
AlphaX-Qortal
30a289baab Update dependencies 2024-11-16 21:22:00 +01:00
AlphaX-Qortal
d79d64f6b0 Removed name check and decreased difficulty for online signature 2024-11-16 21:14:42 +01:00
QuickMythril
3d83a79014 Fix whitespace only 2024-11-13 06:28:23 -05:00
QuickMythril
82d5d25c59 Add logging to block archive unit tests 2024-11-13 06:16:39 -05:00
QuickMythril
1676098abe Add missing feature triggers to unit tests 2024-11-13 03:06:58 -05:00
0a47ca1462 add font-src csp 2024-11-11 16:07:51 +02:00
0cf9b23142 remove log 2024-11-10 18:57:45 +02:00
0850654519 add connect-src to csp 2024-11-10 18:55:32 +02:00
126 changed files with 8264 additions and 1997 deletions

View File

@@ -1,3 +1,4 @@
{
"apiDocumentationEnabled": true
"apiDocumentationEnabled": true,
"apiWhitelistEnabled": false
}

Binary file added (image, 1.7 MiB; contents not shown).

Binary file added (image, 160 KiB; contents not shown).

File diff suppressed because it is too large.

View File

@@ -2,7 +2,9 @@
## Prerequisites
* AdvancedInstaller v19.4 or better, and enterprise licence if translations are required
* AdvancedInstaller v19.4 or better, and enterprise licence.
* Qortal has an open source license; however, it currently (as of December 2024) only supports up to version 19. (We may need to reach out to Advanced Installer again to obtain a new license at some point.)
* Reach out to @crowetic for links to the installer install files, and license.
* Installed AdoptOpenJDK v17 64bit, full JDK *not* JRE
## General build instructions
@@ -10,6 +12,12 @@
If this is your first time opening the `qortal.aip` file then you might need to adjust
configured paths, or create a dummy `D:` drive with the expected layout.
Opening the aip file from within a clone of the qortal repo also works, if you have a separate Windows machine set up to do the build.
You may need to change the location of the 'jre64' files inside Advanced Installer if it is set to a path that your build machine doesn't have.
The Java Memory Arguments can be set manually, but as of December 2024 they have been reset back to system defaults. This should include the G1GC Garbage Collector.
Typical build procedure:
* Place the `qortal.jar` file in `Install-Files\`

pom.xml (12 changed lines)
View File

@@ -3,7 +3,7 @@
<modelVersion>4.0.0</modelVersion>
<groupId>org.qortal</groupId>
<artifactId>qortal</artifactId>
<version>4.6.1</version>
<version>4.7.1</version>
<packaging>jar</packaging>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
@@ -16,7 +16,7 @@
<ciyam-at.version>1.4.2</ciyam-at.version>
<commons-net.version>3.8.0</commons-net.version>
<commons-text.version>1.12.0</commons-text.version>
<commons-io.version>2.17.0</commons-io.version>
<commons-io.version>2.18.0</commons-io.version>
<commons-compress.version>1.27.1</commons-compress.version>
<commons-lang3.version>3.17.0</commons-lang3.version>
<dagger.version>1.2.2</dagger.version>
@@ -26,9 +26,9 @@
<guava.version>33.3.1-jre</guava.version>
<hamcrest-library.version>2.2</hamcrest-library.version>
<homoglyph.version>1.2.1</homoglyph.version>
<hsqldb.version>2.5.1</hsqldb.version>
<hsqldb.version>2.7.4</hsqldb.version>
<icu4j.version>76.1</icu4j.version>
<java-diff-utils.version>4.12</java-diff-utils.version>
<java-diff-utils.version>4.15</java-diff-utils.version>
<javax.servlet-api.version>4.0.1</javax.servlet-api.version>
<jaxb-runtime.version>2.3.9</jaxb-runtime.version>
<jersey.version>2.42</jersey.version>
@@ -45,7 +45,7 @@
<maven-dependency-plugin.version>3.6.1</maven-dependency-plugin.version>
<maven-jar-plugin.version>3.4.2</maven-jar-plugin.version>
<maven-package-info-plugin.version>1.1.0</maven-package-info-plugin.version>
<maven-plugin.version>2.17.1</maven-plugin.version>
<maven-plugin.version>2.18.0</maven-plugin.version>
<maven-reproducible-build-plugin.version>0.17</maven-reproducible-build-plugin.version>
<maven-resources-plugin.version>3.3.1</maven-resources-plugin.version>
<maven-shade-plugin.version>3.6.0</maven-shade-plugin.version>
@@ -55,7 +55,7 @@
<simplemagic.version>1.17</simplemagic.version>
<slf4j.version>1.7.36</slf4j.version>
<swagger-api.version>2.0.10</swagger-api.version>
<swagger-ui.version>5.17.14</swagger-ui.version>
<swagger-ui.version>5.18.2</swagger-ui.version>
<upnp.version>1.2</upnp.version>
<xz.version>1.10</xz.version>
</properties>

View File

@@ -0,0 +1,173 @@
package org.hsqldb.jdbc;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.hsqldb.jdbc.pool.JDBCPooledConnection;
import org.qortal.data.system.DbConnectionInfo;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import javax.sql.ConnectionEvent;
import javax.sql.PooledConnection;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
/**
* Class HSQLDBPoolMonitored
*
* This class uses the same logic as HSQLDBPool. The only difference is that it monitors the state of every connection
* to the database. This is used for debugging purposes only.
*/
public class HSQLDBPoolMonitored extends HSQLDBPool {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBRepositoryFactory.class);
private static final String EMPTY = "Empty";
private static final String AVAILABLE = "Available";
private static final String ALLOCATED = "Allocated";
private ConcurrentHashMap<Integer, DbConnectionInfo> infoByIndex;
public HSQLDBPoolMonitored(int poolSize) {
super(poolSize);
this.infoByIndex = new ConcurrentHashMap<>(poolSize);
}
/**
* Tries to retrieve a new connection using the properties that have already been
* set.
*
* @return a connection to the data source, or null if no spare connections in pool
* @exception SQLException if a database access error occurs
*/
public Connection tryConnection() throws SQLException {
for (int i = 0; i < states.length(); i++) {
if (states.compareAndSet(i, RefState.available, RefState.allocated)) {
JDBCPooledConnection pooledConnection = connections[i];
if (pooledConnection == null)
// Probably shutdown situation
return null;
infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
return pooledConnection.getConnection();
}
if (states.compareAndSet(i, RefState.empty, RefState.allocated)) {
try {
JDBCPooledConnection pooledConnection = (JDBCPooledConnection) source.getPooledConnection();
if (pooledConnection == null)
// Probably shutdown situation
return null;
pooledConnection.addConnectionEventListener(this);
pooledConnection.addStatementEventListener(this);
connections[i] = pooledConnection;
infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
return pooledConnection.getConnection();
} catch (SQLException e) {
states.set(i, RefState.empty);
infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
}
}
}
return null;
}
public Connection getConnection() throws SQLException {
int var1 = 300;
if (this.source.loginTimeout != 0) {
var1 = this.source.loginTimeout * 10;
}
if (this.closed) {
throw new SQLException("connection pool is closed");
} else {
for(int var2 = 0; var2 < var1; ++var2) {
for(int var3 = 0; var3 < this.states.length(); ++var3) {
if (this.states.compareAndSet(var3, 1, 2)) {
infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
return this.connections[var3].getConnection();
}
if (this.states.compareAndSet(var3, 0, 2)) {
try {
JDBCPooledConnection var4 = (JDBCPooledConnection)this.source.getPooledConnection();
var4.addConnectionEventListener(this);
var4.addStatementEventListener(this);
this.connections[var3] = var4;
infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
return this.connections[var3].getConnection();
} catch (SQLException var6) {
this.states.set(var3, 0);
infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
}
}
}
try {
Thread.sleep(100L);
} catch (InterruptedException var5) {
}
}
throw JDBCUtil.invalidArgument();
}
}
public void connectionClosed(ConnectionEvent event) {
PooledConnection connection = (PooledConnection) event.getSource();
for (int i = 0; i < connections.length; i++) {
if (connections[i] == connection) {
states.set(i, RefState.available);
infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), AVAILABLE));
break;
}
}
}
public void connectionErrorOccurred(ConnectionEvent event) {
PooledConnection connection = (PooledConnection) event.getSource();
for (int i = 0; i < connections.length; i++) {
if (connections[i] == connection) {
states.set(i, RefState.allocated);
connections[i] = null;
states.set(i, RefState.empty);
infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
break;
}
}
}
public List<DbConnectionInfo> getDbConnectionsStates() {
return infoByIndex.values().stream()
.sorted(Comparator.comparingLong(DbConnectionInfo::getUpdated))
.collect(Collectors.toList());
}
private int findConnectionIndex(ConnectionEvent connectionEvent) {
PooledConnection pooledConnection = (PooledConnection) connectionEvent.getSource();
for(int i = 0; i < this.connections.length; ++i) {
if (this.connections[i] == pooledConnection) {
return i;
}
}
return -1;
}
}

View File

@@ -14,6 +14,7 @@ import org.qortal.repository.NameRepository;
import org.qortal.repository.Repository;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
import org.qortal.utils.Groups;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@@ -198,66 +199,85 @@ public class Account {
/** Returns whether account can be considered a "minting account".
* <p>
* To be considered a "minting account", the account needs to pass all of these tests:<br>
* To be considered a "minting account", the account needs to pass some of these tests:<br>
* <ul>
* <li>account's level is at least <tt>minAccountLevelToMint</tt> from blockchain config</li>
* <li>account's address have registered a name</li>
* <li>account's address is member of minter group</li>
* <li>account's address has registered a name</li>
* <li>account's address is a member of the minter group</li>
* </ul>
*
* @param isGroupValidated true if this account has already been validated for MINTER Group membership
* @return true if account can be considered "minting account"
* @throws DataException
*/
public boolean canMint() throws DataException {
public boolean canMint(boolean isGroupValidated) throws DataException {
AccountData accountData = this.repository.getAccountRepository().getAccount(this.address);
NameRepository nameRepository = this.repository.getNameRepository();
GroupRepository groupRepository = this.repository.getGroupRepository();
String myAddress = accountData.getAddress();
int blockchainHeight = this.repository.getBlockRepository().getBlockchainHeight();
int nameCheckHeight = BlockChain.getInstance().getOnlyMintWithNameHeight();
int levelToMint = BlockChain.getInstance().getMinAccountLevelToMint();
int levelToMint;
if( blockchainHeight >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() ) {
levelToMint = 0;
}
else {
levelToMint = BlockChain.getInstance().getMinAccountLevelToMint();
}
int level = accountData.getLevel();
int groupIdToMint = BlockChain.getInstance().getMintingGroupId();
List<Integer> groupIdsToMint = Groups.getGroupIdsToMint( BlockChain.getInstance(), blockchainHeight );
int nameCheckHeight = BlockChain.getInstance().getOnlyMintWithNameHeight();
int groupCheckHeight = BlockChain.getInstance().getGroupMemberCheckHeight();
int removeNameCheckHeight = BlockChain.getInstance().getRemoveOnlyMintWithNameHeight();
String myAddress = accountData.getAddress();
List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
boolean isMember = groupRepository.memberExists(groupIdToMint, myAddress);
// Can only mint if:
// Account's level is at least minAccountLevelToMint from blockchain config
if (blockchainHeight < nameCheckHeight) {
if (Account.isFounder(accountData.getFlags())) {
return accountData.getBlocksMintedPenalty() == 0;
} else {
return level >= levelToMint;
}
}
if (accountData == null)
return false;
// Can only mint on onlyMintWithNameHeight from blockchain config if:
// Account's level is at least minAccountLevelToMint from blockchain config
// Account's address has registered a name
if (blockchainHeight >= nameCheckHeight && blockchainHeight < groupCheckHeight) {
List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
if (Account.isFounder(accountData.getFlags())) {
return accountData.getBlocksMintedPenalty() == 0 && !myName.isEmpty();
} else {
return level >= levelToMint && !myName.isEmpty();
}
}
// Can only mint if level is at least minAccountLevelToMint< from blockchain config
if (blockchainHeight < nameCheckHeight && level >= levelToMint)
return true;
// Can only mint on groupMemberCheckHeight from blockchain config if:
// Account's level is at least minAccountLevelToMint from blockchain config
// Account's address has registered a name
// Account's address is a member of the minter group
if (blockchainHeight >= groupCheckHeight && blockchainHeight < removeNameCheckHeight) {
List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
if (Account.isFounder(accountData.getFlags())) {
return accountData.getBlocksMintedPenalty() == 0 && !myName.isEmpty() && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
} else {
return level >= levelToMint && !myName.isEmpty() && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
}
}
// Can only mint if have registered a name
if (blockchainHeight >= nameCheckHeight && blockchainHeight < groupCheckHeight && level >= levelToMint && !myName.isEmpty())
return true;
// Can only mint if have registered a name and is member of minter group id
if (blockchainHeight >= groupCheckHeight && level >= levelToMint && !myName.isEmpty() && isMember)
return true;
// Founders needs to pass same tests like minters
if (blockchainHeight < nameCheckHeight &&
Account.isFounder(accountData.getFlags()) &&
accountData.getBlocksMintedPenalty() == 0)
return true;
if (blockchainHeight >= nameCheckHeight &&
blockchainHeight < groupCheckHeight &&
Account.isFounder(accountData.getFlags()) &&
accountData.getBlocksMintedPenalty() == 0 &&
!myName.isEmpty())
return true;
if (blockchainHeight >= groupCheckHeight &&
Account.isFounder(accountData.getFlags()) &&
accountData.getBlocksMintedPenalty() == 0 &&
!myName.isEmpty() &&
isMember)
return true;
// Can only mint on removeOnlyMintWithNameHeight from blockchain config if:
// Account's level is at least minAccountLevelToMint from blockchain config
// Account's address is a member of the minter group
if (blockchainHeight >= removeNameCheckHeight) {
if (Account.isFounder(accountData.getFlags())) {
return accountData.getBlocksMintedPenalty() == 0 && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
} else {
return level >= levelToMint && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
}
}
return false;
}
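// Editor's summary sketch of the height windows implemented above:
//   height <  nameCheckHeight                          -> level/founder check only
//   nameCheckHeight  <= height < groupCheckHeight      -> plus a registered name
//   groupCheckHeight <= height < removeNameCheckHeight -> plus minter-group membership
//                                                         (unless isGroupValidated is true)
//   height >= removeNameCheckHeight                    -> name no longer required;
//                                                         group membership still is
// From ignoreLevelForRewardShareHeight onward, levelToMint is treated as 0.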
@@ -272,7 +292,6 @@ public class Account {
return this.repository.getAccountRepository().getBlocksMintedPenaltyCount(this.address);
}
/** Returns whether account can build reward-shares.
* <p>
* To be able to create reward-shares, the account needs to pass at least one of these tests:<br>
@@ -286,6 +305,7 @@
*/
public boolean canRewardShare() throws DataException {
AccountData accountData = this.repository.getAccountRepository().getAccount(this.address);
if (accountData == null)
return false;
@@ -296,6 +316,9 @@
if (Account.isFounder(accountData.getFlags()) && accountData.getBlocksMintedPenalty() == 0)
return true;
if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() )
return true;
return false;
}
@@ -339,10 +362,28 @@
}
/**
* Returns 'effective' minting level, or zero if reward-share does not exist.
* Returns reward-share minting address, or unknown if reward-share does not exist.
*
* @param repository
* @param rewardSharePublicKey
* @return address or unknown
* @throws DataException
*/
public static String getRewardShareMintingAddress(Repository repository, byte[] rewardSharePublicKey) throws DataException {
// Find actual minter address
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(rewardSharePublicKey);
if (rewardShareData == null)
return "Unknown";
return rewardShareData.getMinter();
}
/**
* Returns 'effective' minting level, or zero if reward-share does not exist.
*
* @param repository
* @param rewardSharePublicKey
* @return 0+
* @throws DataException
*/
@@ -355,6 +396,7 @@
Account rewardShareMinter = new Account(repository, rewardShareData.getMinter());
return rewardShareMinter.getEffectiveMintingLevel();
}
/**
* Returns 'effective' minting level, with a fix for the zero level.
* <p>

View File

@@ -194,6 +194,7 @@ public class ApiService {
context.addServlet(AdminStatusWebSocket.class, "/websockets/admin/status");
context.addServlet(BlocksWebSocket.class, "/websockets/blocks");
context.addServlet(DataMonitorSocket.class, "/websockets/datamonitor");
context.addServlet(ActiveChatsWebSocket.class, "/websockets/chat/active/*");
context.addServlet(ChatMessagesWebSocket.class, "/websockets/chat/messages");
context.addServlet(TradeOffersWebSocket.class, "/websockets/crosschain/tradeoffers");

View File

@@ -1,7 +1,13 @@
package org.qortal.api.model;
import org.qortal.account.Account;
import org.qortal.repository.DataException;
import org.qortal.repository.RepositoryManager;
import org.qortal.repository.Repository;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
@@ -47,4 +53,31 @@ public class ApiOnlineAccount {
return this.recipientAddress;
}
public int getMinterLevelFromPublicKey() {
try (final Repository repository = RepositoryManager.getRepository()) {
return Account.getRewardShareEffectiveMintingLevel(repository, this.rewardSharePublicKey);
} catch (DataException e) {
return 0;
}
}
public boolean getIsMember() {
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getGroupRepository().memberExists(694, getMinterAddress());
} catch (DataException e) {
return false;
}
}
// JAXB special
@XmlElement(name = "minterLevel")
protected int getMinterLevel() {
return getMinterLevelFromPublicKey();
}
@XmlElement(name = "isMinterMember")
protected boolean getMinterMember() {
return getIsMember();
}
}

View File

@@ -9,6 +9,7 @@ import java.math.BigInteger;
public class BlockMintingInfo {
public byte[] minterPublicKey;
public String minterAddress;
public int minterLevel;
public int onlineAccountsCount;
public BigDecimal maxDistance;
@@ -19,5 +20,4 @@
public BlockMintingInfo() {
}
}

View File

@@ -0,0 +1,72 @@
package org.qortal.api.model;
import io.swagger.v3.oas.annotations.media.Schema;
import org.qortal.data.crosschain.CrossChainTradeData;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class CrossChainTradeLedgerEntry {
private String market;
private String currency;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long quantity;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long feeAmount;
private String feeCurrency;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long totalPrice;
private long tradeTimestamp;
protected CrossChainTradeLedgerEntry() {
/* For JAXB */
}
public CrossChainTradeLedgerEntry(String market, String currency, long quantity, long feeAmount, String feeCurrency, long totalPrice, long tradeTimestamp) {
this.market = market;
this.currency = currency;
this.quantity = quantity;
this.feeAmount = feeAmount;
this.feeCurrency = feeCurrency;
this.totalPrice = totalPrice;
this.tradeTimestamp = tradeTimestamp;
}
public String getMarket() {
return market;
}
public String getCurrency() {
return currency;
}
public long getQuantity() {
return quantity;
}
public long getFeeAmount() {
return feeAmount;
}
public String getFeeCurrency() {
return feeCurrency;
}
public long getTotalPrice() {
return totalPrice;
}
public long getTradeTimestamp() {
return tradeTimestamp;
}
}
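// Illustrative construction of a ledger entry -- every value below is hypothetical;
// only the constructor signature comes from the class above:
CrossChainTradeLedgerEntry entry = new CrossChainTradeLedgerEntry(
        "QORT",                      // market (hypothetical value)
        "LTC",                       // currency (hypothetical value)
        1000000L,                    // quantity, atomic units (hypothetical)
        1000L,                       // feeAmount (hypothetical)
        "LTC",                       // feeCurrency (hypothetical)
        5000000L,                    // totalPrice (hypothetical)
        System.currentTimeMillis()); // tradeTimestamp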

View File

@@ -0,0 +1,50 @@
package org.qortal.api.model;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.Objects;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class DatasetStatus {
private String name;
private long count;
public DatasetStatus() {}
public DatasetStatus(String name, long count) {
this.name = name;
this.count = count;
}
public String getName() {
return name;
}
public long getCount() {
return count;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
DatasetStatus that = (DatasetStatus) o;
return count == that.count && Objects.equals(name, that.name);
}
@Override
public int hashCode() {
return Objects.hash(name, count);
}
@Override
public String toString() {
return "DatasetStatus{" +
"name='" + name + '\'' +
", count=" + count +
'}';
}
}

View File

@@ -33,9 +33,13 @@ import org.qortal.controller.arbitrary.ArbitraryDataStorageManager;
import org.qortal.controller.arbitrary.ArbitraryMetadataManager;
import org.qortal.data.account.AccountData;
import org.qortal.data.arbitrary.ArbitraryCategoryInfo;
import org.qortal.data.arbitrary.ArbitraryDataIndexDetail;
import org.qortal.data.arbitrary.ArbitraryDataIndexScoreKey;
import org.qortal.data.arbitrary.ArbitraryDataIndexScorecard;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
import org.qortal.data.arbitrary.ArbitraryResourceStatus;
import org.qortal.data.arbitrary.IndexCache;
import org.qortal.data.naming.NameData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -69,8 +73,11 @@ import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;
@Path("/arbitrary")
@Tag(name = "Arbitrary")
@@ -172,6 +179,7 @@ public class ArbitraryResource {
@Parameter(description = "Name (searches name field only)") @QueryParam("name") List<String> names,
@Parameter(description = "Title (searches title metadata field only)") @QueryParam("title") String title,
@Parameter(description = "Description (searches description metadata field only)") @QueryParam("description") String description,
@Parameter(description = "Keyword (searches description metadata field by keywords)") @QueryParam("keywords") List<String> keywords,
@Parameter(description = "Prefix only (if true, only the beginning of fields are matched)") @QueryParam("prefix") Boolean prefixOnly,
@Parameter(description = "Exact match names only (if true, partial name matches are excluded)") @QueryParam("exactmatchnames") Boolean exactMatchNamesOnly,
@Parameter(description = "Default resources (without identifiers) only") @QueryParam("default") Boolean defaultResource,
@@ -212,7 +220,7 @@
}
List<ArbitraryResourceData> resources = repository.getArbitraryRepository()
.searchArbitraryResources(service, query, identifier, names, title, description, usePrefixOnly,
.searchArbitraryResources(service, query, identifier, names, title, description, keywords, usePrefixOnly,
exactMatchNames, defaultRes, mode, minLevel, followedOnly, excludeBlocked, includeMetadata, includeStatus,
before, after, limit, offset, reverse);
@@ -1185,6 +1193,90 @@
}
}
@GET
@Path("/indices")
@Operation(
summary = "Find matching arbitrary resource indices",
description = "",
responses = {
@ApiResponse(
description = "indices",
content = @Content(
array = @ArraySchema(
schema = @Schema(
implementation = ArbitraryDataIndexScorecard.class
)
)
)
)
}
)
public List<ArbitraryDataIndexScorecard> searchIndices(@QueryParam("terms") String[] terms) {
List<ArbitraryDataIndexDetail> indices = new ArrayList<>();
// get index details for each term
for( String term : terms ) {
List<ArbitraryDataIndexDetail> details = IndexCache.getInstance().getIndicesByTerm().get(term);
if( details != null ) {
indices.addAll(details);
}
}
// sum up the scores for each index with identical attributes
Map<ArbitraryDataIndexScoreKey, Double> scoreForKey
= indices.stream()
.collect(
Collectors.groupingBy(
index -> new ArbitraryDataIndexScoreKey(index.name, index.category, index.link),
Collectors.summingDouble(detail -> 1.0 / detail.rank)
)
);
// create scorecards for each index group and put them in descending order by score
List<ArbitraryDataIndexScorecard> scorecards
= scoreForKey.entrySet().stream().map(
entry
->
new ArbitraryDataIndexScorecard(
entry.getValue(),
entry.getKey().name,
entry.getKey().category,
entry.getKey().link)
)
.sorted(Comparator.comparingDouble(ArbitraryDataIndexScorecard::getScore).reversed())
.collect(Collectors.toList());
return scorecards;
}
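// Worked sketch of the scoring above (hypothetical ranks): a term matching the
// same (name, category, link) key at rank 1 in one index and at rank 2 in
// another accumulates:
double score = (1.0 / 1) + (1.0 / 2); // = 1.5
// Lower rank numbers (better placements) therefore dominate the scorecard order.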
@GET
@Path("/indices/{name}/{idPrefix}")
@Operation(
summary = "Find matching arbitrary resource indices for a registered name and identifier prefix",
description = "",
responses = {
@ApiResponse(
description = "indices",
content = @Content(
array = @ArraySchema(
schema = @Schema(
implementation = ArbitraryDataIndexDetail.class
)
)
)
)
}
)
public List<ArbitraryDataIndexDetail> searchIndicesByName(@PathParam("name") String name, @PathParam("idPrefix") String idPrefix) {
return
IndexCache.getInstance().getIndicesByIssuer()
.getOrDefault(name, new ArrayList<>(0)).stream()
.filter( indexDetail -> indexDetail.indexIdentifer.startsWith(idPrefix))
.collect(Collectors.toList());
}
// Shared methods

View File

@@ -16,9 +16,13 @@ import org.qortal.api.model.AggregatedOrder;
import org.qortal.api.model.TradeWithOrderInfo;
import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
import org.qortal.asset.Asset;
import org.qortal.controller.hsqldb.HSQLDBBalanceRecorder;
import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AccountData;
import org.qortal.data.account.AddressAmountData;
import org.qortal.data.account.BlockHeightRange;
import org.qortal.data.account.BlockHeightRangeAddressAmounts;
import org.qortal.data.asset.AssetData;
import org.qortal.data.asset.OrderData;
import org.qortal.data.asset.RecentTradeData;
@@ -33,6 +37,7 @@ import org.qortal.transaction.Transaction;
import org.qortal.transaction.Transaction.ValidationResult;
import org.qortal.transform.TransformationException;
import org.qortal.transform.transaction.*;
import org.qortal.utils.BalanceRecorderUtils;
import org.qortal.utils.Base58;
import javax.servlet.http.HttpServletRequest;
@@ -42,6 +47,7 @@ import javax.ws.rs.core.MediaType;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
@Path("/assets")
@@ -179,6 +185,122 @@
}
}
@GET
@Path("/balancedynamicranges")
@Operation(
summary = "Get balance dynamic ranges listed.",
description = ".",
responses = {
@ApiResponse(
content = @Content(
array = @ArraySchema(
schema = @Schema(
implementation = BlockHeightRange.class
)
)
)
)
}
)
public List<BlockHeightRange> getBalanceDynamicRanges(
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "limit") @QueryParam("limit") Integer limit,
@Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse) {
Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
if( recorder.isPresent()) {
return recorder.get().getRanges(offset, limit, reverse);
}
else {
return new ArrayList<>(0);
}
}
@GET
@Path("/balancedynamicrange/{height}")
@Operation(
summary = "Get balance dynamic range for a given height.",
description = ".",
responses = {
@ApiResponse(
content = @Content(
schema = @Schema(
implementation = BlockHeightRange.class
)
)
)
}
)
@ApiErrors({
ApiError.INVALID_CRITERIA, ApiError.INVALID_DATA
})
public BlockHeightRange getBalanceDynamicRange(@PathParam("height") int height) {
Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
if( recorder.isPresent()) {
Optional<BlockHeightRange> range = recorder.get().getRange(height);
if( range.isPresent() ) {
return range.get();
}
else {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
}
else {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
}
}
@GET
@Path("/balancedynamicamounts/{begin}/{end}")
@Operation(
summary = "Get balance dynamic ranges address amounts listed.",
description = ".",
responses = {
@ApiResponse(
content = @Content(
array = @ArraySchema(
schema = @Schema(
implementation = AddressAmountData.class
)
)
)
)
}
)
@ApiErrors({
ApiError.INVALID_CRITERIA, ApiError.INVALID_DATA
})
public List<AddressAmountData> getBalanceDynamicAddressAmounts(
@PathParam("begin") int begin,
@PathParam("end") int end,
@Parameter(ref = "offset") @QueryParam("offset") Integer offset,
@Parameter(ref = "limit") @QueryParam("limit") Integer limit) {
Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
if( recorder.isPresent()) {
Optional<BlockHeightRangeAddressAmounts> addressAmounts = recorder.get().getAddressAmounts(new BlockHeightRange(begin, end, false));
if( addressAmounts.isPresent() ) {
return addressAmounts.get().getAmounts().stream()
.sorted(BalanceRecorderUtils.ADDRESS_AMOUNT_DATA_COMPARATOR.reversed())
.skip(offset)
.limit(limit)
.collect(Collectors.toList());
}
else {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
}
}
else {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
}
}
@GET
@Path("/openorders/{assetid}/{otherassetid}")
@Operation(

View File

@@ -19,6 +19,8 @@ import org.qortal.crypto.Crypto;
import org.qortal.data.account.AccountData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.DecodedOnlineAccountData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.BlockArchiveReader;
import org.qortal.repository.DataException;
@@ -27,6 +29,7 @@ import org.qortal.repository.RepositoryManager;
import org.qortal.transform.TransformationException;
import org.qortal.transform.block.BlockTransformer;
import org.qortal.utils.Base58;
import org.qortal.utils.Blocks;
import org.qortal.utils.Triple;
import javax.servlet.http.HttpServletRequest;
@@ -45,6 +48,7 @@ import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Set;
@Path("/blocks")
@Tag(name = "Blocks")
@@ -542,6 +546,7 @@ public class BlocksResource {
}
}
String minterAddress = Account.getRewardShareMintingAddress(repository, blockData.getMinterPublicKey());
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
if (minterLevel == 0)
// This may be unavailable when requesting a trimmed block
@@ -554,6 +559,7 @@
BlockMintingInfo blockMintingInfo = new BlockMintingInfo();
blockMintingInfo.minterPublicKey = blockData.getMinterPublicKey();
blockMintingInfo.minterAddress = minterAddress;
blockMintingInfo.minterLevel = minterLevel;
blockMintingInfo.onlineAccountsCount = blockData.getOnlineAccountsCount();
blockMintingInfo.maxDistance = new BigDecimal(block.MAX_DISTANCE);
@@ -888,4 +894,49 @@
}
}
}
@GET
@Path("/onlineaccounts/{height}")
@Operation(
summary = "Get online accounts for block",
description = "Returns the online accounts who submitted signatures for this block",
responses = {
@ApiResponse(
description = "online accounts",
content = @Content(
array = @ArraySchema(
schema = @Schema(
implementation = DecodedOnlineAccountData.class
)
)
)
)
}
)
@ApiErrors({
ApiError.BLOCK_UNKNOWN, ApiError.REPOSITORY_ISSUE
})
public Set<DecodedOnlineAccountData> getOnlineAccounts(@PathParam("height") int height) {
try (final Repository repository = RepositoryManager.getRepository()) {
// get block from database
BlockData blockData = repository.getBlockRepository().fromHeight(height);
// if block data is not in the database, then try the archive
if (blockData == null) {
blockData = repository.getBlockArchiveRepository().fromHeight(height);
// if the block is not in the database or the archive, then the block is unknown
if( blockData == null ) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
}
}
Set<DecodedOnlineAccountData> onlineAccounts = Blocks.getDecodedOnlineAccountsForBlock(repository, blockData);
return onlineAccounts;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE);
}
}
}

View File

@@ -234,17 +234,21 @@ public class ChatResource {
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.REPOSITORY_ISSUE})
public ActiveChats getActiveChats(@PathParam("address") String address, @QueryParam("encoding") Encoding encoding) {
public ActiveChats getActiveChats(
@PathParam("address") String address,
@QueryParam("encoding") Encoding encoding,
@QueryParam("haschatreference") Boolean hasChatReference
) {
if (address == null || !Crypto.isValidAddress(address))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
try (final Repository repository = RepositoryManager.getRepository()) {
return repository.getChatRepository().getActiveChats(address, encoding);
return repository.getChatRepository().getActiveChats(address, encoding, hasChatReference);
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
}
}
@POST
@Operation(
summary = "Build raw, unsigned, CHAT transaction",

View File

@@ -10,11 +10,13 @@ import io.swagger.v3.oas.annotations.parameters.RequestBody;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.security.SecurityRequirement;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.glassfish.jersey.media.multipart.ContentDisposition;
import org.qortal.api.ApiError;
import org.qortal.api.ApiErrors;
import org.qortal.api.ApiExceptionFactory;
import org.qortal.api.Security;
import org.qortal.api.model.CrossChainCancelRequest;
import org.qortal.api.model.CrossChainTradeLedgerEntry;
import org.qortal.api.model.CrossChainTradeSummary;
import org.qortal.controller.tradebot.TradeBot;
import org.qortal.crosschain.ACCT;
@@ -44,14 +46,20 @@ import org.qortal.utils.Base58;
import org.qortal.utils.ByteArray;
import org.qortal.utils.NTP;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.ws.rs.*;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import java.io.IOException;
import java.util.*;
import java.util.function.Supplier;
import java.util.stream.Collectors;
@Path("/crosschain")
@Tag(name = "Cross-Chain")
public class CrossChainResource {
@@ -59,6 +67,13 @@ public class CrossChainResource {
@Context
HttpServletRequest request;
@Context
HttpServletResponse response;
@Context
ServletContext context;
@GET
@Path("/tradeoffers")
@Operation(
@@ -255,6 +270,12 @@
description = "Only return trades that completed on/after this timestamp (milliseconds since epoch)",
example = "1597310000000"
) @QueryParam("minimumTimestamp") Long minimumTimestamp,
@Parameter(
description = "Optionally filter by buyer Qortal public key"
) @QueryParam("buyerPublicKey") String buyerPublicKey58,
@Parameter(
description = "Optionally filter by seller Qortal public key"
) @QueryParam("sellerPublicKey") String sellerPublicKey58,
@Parameter( ref = "limit") @QueryParam("limit") Integer limit,
@Parameter( ref = "offset" ) @QueryParam("offset") Integer offset,
@Parameter( ref = "reverse" ) @QueryParam("reverse") Boolean reverse) {
@@ -266,6 +287,10 @@
if (minimumTimestamp != null && minimumTimestamp <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
// Decode public keys
byte[] buyerPublicKey = decodePublicKey(buyerPublicKey58);
byte[] sellerPublicKey = decodePublicKey(sellerPublicKey58);
final Boolean isFinished = Boolean.TRUE;
try (final Repository repository = RepositoryManager.getRepository()) {
@@ -296,7 +321,7 @@
byte[] codeHash = acctInfo.getKey().value;
ACCT acct = acctInfo.getValue().get();
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash, buyerPublicKey, sellerPublicKey,
isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumFinalHeight,
limit, offset, reverse);
@@ -335,6 +360,120 @@
}
}
/**
* Decode Public Key
*
* @param publicKey58 the public key in a string
*
* @return the public key in bytes
*/
private byte[] decodePublicKey(String publicKey58) {
if( publicKey58 == null ) return null;
if( publicKey58.isEmpty() ) return new byte[0];
byte[] publicKey;
try {
publicKey = Base58.decode(publicKey58);
} catch (NumberFormatException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY, e);
}
// Correct size for public key?
if (publicKey.length != Transformer.PUBLIC_KEY_LENGTH)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
return publicKey;
}
@GET
@Path("/ledger/{publicKey}")
@Operation(
summary = "Accounting entries for all trades.",
description = "Returns accounting entries for all completed cross-chain trades",
responses = {
@ApiResponse(
content = @Content(
schema = @Schema(
type = "string",
format = "byte"
)
)
)
}
)
@ApiErrors({ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
public HttpServletResponse getLedgerEntries(
@PathParam("publicKey") String publicKey58,
@Parameter(
description = "Only return trades that completed on/after this timestamp (milliseconds since epoch)",
example = "1597310000000"
) @QueryParam("minimumTimestamp") Long minimumTimestamp) {
byte[] publicKey = decodePublicKey(publicKey58);
// minimumTimestamp (if given) needs to be positive
if (minimumTimestamp != null && minimumTimestamp <= 0)
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
try (final Repository repository = RepositoryManager.getRepository()) {
Integer minimumFinalHeight = null;
if (minimumTimestamp != null) {
minimumFinalHeight = repository.getBlockRepository().getHeightFromTimestamp(minimumTimestamp);
// If not found in the block repository it will return either 0 or 1
if (minimumFinalHeight == 0 || minimumFinalHeight == 1) {
// Try the archive
minimumFinalHeight = repository.getBlockArchiveRepository().getHeightFromTimestamp(minimumTimestamp);
}
if (minimumFinalHeight == 0)
// We don't have any blocks since minimumTimestamp, let alone trades, so nothing to return
return response;
// height returned from repository is for block BEFORE timestamp
// but we want trades AFTER timestamp so bump height accordingly
minimumFinalHeight++;
}
List<CrossChainTradeLedgerEntry> crossChainTradeLedgerEntries = new ArrayList<>();
Map<ByteArray, Supplier<ACCT>> acctsByCodeHash = SupportedBlockchain.getAcctMap();
// collect ledger entries for each ACCT
for (Map.Entry<ByteArray, Supplier<ACCT>> acctInfo : acctsByCodeHash.entrySet()) {
byte[] codeHash = acctInfo.getKey().value;
ACCT acct = acctInfo.getValue().get();
// collect buys and sells
CrossChainUtils.collectLedgerEntries(publicKey, repository, minimumFinalHeight, crossChainTradeLedgerEntries, codeHash, acct, true);
CrossChainUtils.collectLedgerEntries(publicKey, repository, minimumFinalHeight, crossChainTradeLedgerEntries, codeHash, acct, false);
}
crossChainTradeLedgerEntries.sort((a, b) -> Longs.compare(a.getTradeTimestamp(), b.getTradeTimestamp()));
response.setStatus(HttpServletResponse.SC_OK);
response.setContentType("text/csv");
response.setHeader(
HttpHeaders.CONTENT_DISPOSITION,
ContentDisposition
.type("attachment")
.fileName(CrossChainUtils.createLedgerFileName(Crypto.toAddress(publicKey)))
.build()
.toString()
);
CrossChainUtils.writeToLedger( response.getWriter(), crossChainTradeLedgerEntries);
return response;
} catch (DataException e) {
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
} catch (IOException e) {
response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
return response;
}
}
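As a usage sketch, the new endpoint can be exercised with Java 11's built-in HttpClient. The localhost port 12391 (the usual core API port) and the key argument are assumptions; the path and minimumTimestamp parameter come straight from the annotations above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LedgerDownloadDemo {
    public static void main(String[] args) throws Exception {
        String publicKey58 = args[0]; // Base58 public key of the trade participant
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12391/crosschain/ledger/" + publicKey58
                        + "?minimumTimestamp=1597310000000"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Body is the CSV produced by CrossChainUtils.writeToLedger(), served as an
        // attachment named by CrossChainUtils.createLedgerFileName()
        System.out.println(response.body());
    }
}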
@GET
@Path("/price/{blockchain}")
@Operation(

View File

@ -10,21 +10,36 @@ import org.bitcoinj.script.ScriptBuilder;
import org.bouncycastle.util.Strings;
import org.json.simple.JSONObject;
import org.qortal.api.model.CrossChainTradeLedgerEntry;
import org.qortal.api.model.crosschain.BitcoinyTBDRequest;
import org.qortal.crosschain.*;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.data.crosschain.*;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.utils.Amounts;
import org.qortal.utils.BitTwiddling;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.io.Writer;
import java.text.DateFormat;
import java.text.SimpleDateFormat;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.*;
import java.util.stream.Collectors;
public class CrossChainUtils {
public static final String QORT_CURRENCY_CODE = "QORT";
private static final Logger LOGGER = LogManager.getLogger(CrossChainUtils.class);
public static final String CORE_API_CALL = "Core API Call";
public static final String QORTAL_EXCHANGE_LABEL = "Qortal";
public static ServerConfigurationInfo buildServerConfigurationInfo(Bitcoiny blockchain) {
@ -632,4 +647,128 @@ public class CrossChainUtils {
byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
return Bytes.concat(partnerBitcoinPKH, hashOfSecretA, lockTimeABytes);
}
/**
* Write To Ledger
*
* @param writer the writer to the ledger
* @param entries the entries to write to the ledger
*
* @throws IOException
*/
public static void writeToLedger(Writer writer, List<CrossChainTradeLedgerEntry> entries) throws IOException {
BufferedWriter bufferedWriter = new BufferedWriter(writer);
StringJoiner header = new StringJoiner(",");
header.add("Market");
header.add("Currency");
header.add("Quantity");
header.add("Commission Paid");
header.add("Commission Currency");
header.add("Total Price");
header.add("Date Time");
header.add("Exchange");
bufferedWriter.append(header.toString());
DateFormat dateFormatter = new SimpleDateFormat("yyyyMMdd HH:mm");
dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
for( CrossChainTradeLedgerEntry entry : entries ) {
StringJoiner joiner = new StringJoiner(",");
joiner.add(entry.getMarket());
joiner.add(entry.getCurrency());
joiner.add(String.valueOf(Amounts.prettyAmount(entry.getQuantity())));
joiner.add(String.valueOf(Amounts.prettyAmount(entry.getFeeAmount())));
joiner.add(entry.getFeeCurrency());
joiner.add(String.valueOf(Amounts.prettyAmount(entry.getTotalPrice())));
joiner.add(dateFormatter.format(new Date(entry.getTradeTimestamp())));
joiner.add(QORTAL_EXCHANGE_LABEL);
bufferedWriter.newLine();
bufferedWriter.append(joiner.toString());
}
bufferedWriter.newLine();
bufferedWriter.flush();
}
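A small sketch of the CSV this produces. The seven constructor arguments are an assumption inferred from the collectLedgerEntries() call further down, and the demo is presumed to sit in the same package as CrossChainUtils:

import java.io.StringWriter;
import java.util.List;
import org.qortal.api.model.CrossChainTradeLedgerEntry;

public class LedgerCsvDemo {
    public static void main(String[] args) throws Exception {
        // (market, currency, quantity, fee amount, fee currency, total price, timestamp)
        CrossChainTradeLedgerEntry entry = new CrossChainTradeLedgerEntry(
                "QORT", "LTC", 5_00000000L, 0L, "LTC", 1_00000000L, 1597310000000L);

        StringWriter writer = new StringWriter();
        CrossChainUtils.writeToLedger(writer, List.of(entry));
        // Prints the header row followed by one comma-separated entry,
        // with the timestamp rendered as "yyyyMMdd HH:mm" UTC
        System.out.print(writer);
    }
}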
/**
* Create Ledger File Name
*
* Create a file name that includes a timestamp and address.
*
* @param address the address
*
* @return the file name created
*/
public static String createLedgerFileName(String address) {
DateFormat dateFormatter = new SimpleDateFormat("yyyyMMddHHmmss");
String fileName = "ledger-" + address + "-" + dateFormatter.format(new Date());
return fileName;
}
/**
* Collect Ledger Entries
*
* @param publicKey the public key to collect ledger entries for, covering both buys and sells
* @param repository the data repository
* @param minimumFinalHeight the minimum block height for entries to be collected
* @param entries the ledger entries to add to
* @param codeHash code hash for the entry blockchain
* @param acct the ACCT for the entry blockchain
* @param isBuy true to collect buy entries, false to collect sell entries
*
* @throws DataException
*/
public static void collectLedgerEntries(
byte[] publicKey,
Repository repository,
Integer minimumFinalHeight,
List<CrossChainTradeLedgerEntry> entries,
byte[] codeHash,
ACCT acct,
boolean isBuy) throws DataException {
// get all the final AT states for the code hash (foreign coin)
List<ATStateData> atStates
= repository.getATRepository().getMatchingFinalATStates(
codeHash,
isBuy ? publicKey : null,
!isBuy ? publicKey : null,
Boolean.TRUE, acct.getModeByteOffset(),
(long) AcctMode.REDEEMED.value,
minimumFinalHeight,
null, null, false
);
String foreignBlockchainCurrencyCode = acct.getBlockchain().getCurrencyCode();
// for each trade, build ledger entry, collect ledger entry
for (ATStateData atState : atStates) {
CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atState);
// We also need block timestamp for use as trade timestamp
long localTimestamp = repository.getBlockRepository().getTimestampFromHeight(atState.getHeight());
if (localTimestamp == 0) {
// Try the archive
localTimestamp = repository.getBlockArchiveRepository().getTimestampFromHeight(atState.getHeight());
}
CrossChainTradeLedgerEntry ledgerEntry
= new CrossChainTradeLedgerEntry(
isBuy ? QORT_CURRENCY_CODE : foreignBlockchainCurrencyCode,
isBuy ? foreignBlockchainCurrencyCode : QORT_CURRENCY_CODE,
isBuy ? crossChainTradeData.qortAmount : crossChainTradeData.expectedForeignAmount,
0,
foreignBlockchainCurrencyCode,
isBuy ? crossChainTradeData.expectedForeignAmount : crossChainTradeData.qortAmount,
localTimestamp);
entries.add(ledgerEntry);
}
}
}

View File

@ -32,6 +32,7 @@ import org.qortal.controller.Synchronizer.SynchronizationResult;
import org.qortal.controller.repository.BlockArchiveRebuilder;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.system.DbConnectionInfo;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.network.PeerAddress;
@ -40,6 +41,7 @@ import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.data.system.SystemInfo;
import org.qortal.utils.Base58;
import org.qortal.utils.NTP;
@ -52,6 +54,7 @@ import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
@ -459,7 +462,7 @@ public class AdminResource {
// Qortal: check reward-share's minting account is still allowed to mint
Account rewardShareMintingAccount = new Account(repository, rewardShareData.getMinter());
if (!rewardShareMintingAccount.canMint())
if (!rewardShareMintingAccount.canMint(false))
throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.CANNOT_MINT);
MintingAccountData mintingAccountData = new MintingAccountData(mintingAccount.getPrivateKey(), mintingAccount.getPublicKey());
@ -1064,4 +1067,50 @@ public class AdminResource {
return "true";
}
}
@GET
@Path("/systeminfo")
@Operation(
summary = "System Information",
description = "System memory usage and available processors.",
responses = {
@ApiResponse(
description = "memory usage and available processors",
content = @Content(mediaType = MediaType.APPLICATION_JSON, schema = @Schema(implementation = SystemInfo.class))
)
}
)
@ApiErrors({ApiError.REPOSITORY_ISSUE})
public SystemInfo getSystemInformation() {
SystemInfo info
= new SystemInfo(
Runtime.getRuntime().freeMemory(),
Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory(),
Runtime.getRuntime().totalMemory(),
Runtime.getRuntime().maxMemory(),
Runtime.getRuntime().availableProcessors());
return info;
}
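The endpoint is a thin wrapper over Runtime; the snippet below reproduces the same five values locally (the printed labels are illustrative, not the JSON field names):

public class SystemInfoDemo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long free = rt.freeMemory();
        long used = rt.totalMemory() - rt.freeMemory();
        long total = rt.totalMemory();
        long max = rt.maxMemory();
        int processors = rt.availableProcessors();
        System.out.printf("free=%d used=%d total=%d max=%d processors=%d%n",
                free, used, total, max, processors);
    }
}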
@GET
@Path("/dbstates")
@Operation(
summary = "Get DB States",
description = "Get DB States",
responses = {
@ApiResponse(
content = @Content(mediaType = MediaType.APPLICATION_JSON, array = @ArraySchema(schema = @Schema(implementation = DbConnectionInfo.class)))
)
}
)
public List<DbConnectionInfo> getDbConnectionsStates() {
try {
return Controller.REPOSITORY_FACTORY.getDbConnectionsStates();
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
return new ArrayList<>(0);
}
}
}

View File

@ -77,7 +77,9 @@ public class ActiveChatsWebSocket extends ApiWebSocket {
}
try (final Repository repository = RepositoryManager.getRepository()) {
ActiveChats activeChats = repository.getChatRepository().getActiveChats(ourAddress, getTargetEncoding(session));
Boolean hasChatReference = getHasChatReference(session);
ActiveChats activeChats = repository.getChatRepository().getActiveChats(ourAddress, getTargetEncoding(session), hasChatReference);
StringWriter stringWriter = new StringWriter();
@ -103,4 +105,20 @@ public class ActiveChatsWebSocket extends ApiWebSocket {
return Encoding.valueOf(encoding);
}
private Boolean getHasChatReference(Session session) {
Map<String, List<String>> queryParams = session.getUpgradeRequest().getParameterMap();
List<String> hasChatReferenceList = queryParams.get("haschatreference");
// Return null if not specified
if (hasChatReferenceList != null && hasChatReferenceList.size() == 1) {
String value = hasChatReferenceList.get(0).toLowerCase();
if (value.equals("true")) {
return true;
} else if (value.equals("false")) {
return false;
}
}
return null; // Ignored if not present
}
}
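A hedged client-side sketch using Java 11's WebSocket API: the host, port and exact websocket path are assumptions, but the haschatreference query parameter (true, false, or omitted for no filter) is exactly what getHasChatReference() parses above:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class ActiveChatsClientDemo implements WebSocket.Listener {
    @Override
    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
        System.out.println(data); // active-chats JSON pushed by the node
        return WebSocket.Listener.super.onText(webSocket, data, last);
    }

    public static void main(String[] args) throws Exception {
        String address = args[0]; // Qortal address whose chats we follow
        HttpClient.newHttpClient().newWebSocketBuilder()
                .buildAsync(URI.create("ws://localhost:12391/websockets/chat/active/"
                        + address + "?haschatreference=true"), new ActiveChatsClientDemo())
                .join();
        Thread.sleep(60_000); // keep the JVM alive while messages arrive
    }
}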

View File

@ -0,0 +1,102 @@
package org.qortal.api.websocket;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.WebSocketException;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketError;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;
import org.qortal.api.ApiError;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.DataMonitorInfo;
import org.qortal.event.DataMonitorEvent;
import org.qortal.event.Event;
import org.qortal.event.EventBus;
import org.qortal.event.Listener;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.utils.Base58;
import java.io.IOException;
import java.io.StringWriter;
import java.util.List;
@WebSocket
@SuppressWarnings("serial")
public class DataMonitorSocket extends ApiWebSocket implements Listener {
private static final Logger LOGGER = LogManager.getLogger(DataMonitorSocket.class);
@Override
public void configure(WebSocketServletFactory factory) {
LOGGER.info("configure");
factory.register(DataMonitorSocket.class);
EventBus.INSTANCE.addListener(this);
}
@Override
public void listen(Event event) {
if (!(event instanceof DataMonitorEvent))
return;
DataMonitorEvent dataMonitorEvent = (DataMonitorEvent) event;
for (Session session : getSessions())
sendDataEventSummary(session, buildInfo(dataMonitorEvent));
}
private DataMonitorInfo buildInfo(DataMonitorEvent dataMonitorEvent) {
return new DataMonitorInfo(
dataMonitorEvent.getTimestamp(),
dataMonitorEvent.getIdentifier(),
dataMonitorEvent.getName(),
dataMonitorEvent.getService(),
dataMonitorEvent.getDescription(),
dataMonitorEvent.getTransactionTimestamp(),
dataMonitorEvent.getLatestPutTimestamp()
);
}
@OnWebSocketConnect
@Override
public void onWebSocketConnect(Session session) {
super.onWebSocketConnect(session);
}
@OnWebSocketClose
@Override
public void onWebSocketClose(Session session, int statusCode, String reason) {
super.onWebSocketClose(session, statusCode, reason);
}
@OnWebSocketError
public void onWebSocketError(Session session, Throwable throwable) {
/* We ignore errors for now, but method here to silence log spam */
}
@OnWebSocketMessage
public void onWebSocketMessage(Session session, String message) {
LOGGER.info("onWebSocketMessage: message = " + message);
}
private void sendDataEventSummary(Session session, DataMonitorInfo dataMonitorInfo) {
StringWriter stringWriter = new StringWriter();
try {
marshall(stringWriter, dataMonitorInfo);
session.getRemote().sendStringByFuture(stringWriter.toString());
} catch (IOException | WebSocketException e) {
// No output this time
}
}
}
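The same EventBus pattern works for any in-process consumer, not just websocket sessions. A minimal sketch (the class name and log output are illustrative; the Listener/EventBus calls and getters mirror those above):

import org.qortal.event.DataMonitorEvent;
import org.qortal.event.Event;
import org.qortal.event.EventBus;
import org.qortal.event.Listener;

public class DataMonitorLogger implements Listener {
    public static void register() {
        EventBus.INSTANCE.addListener(new DataMonitorLogger());
    }

    @Override
    public void listen(Event event) {
        if (!(event instanceof DataMonitorEvent))
            return; // only interested in QDN data-monitor events
        DataMonitorEvent dataEvent = (DataMonitorEvent) event;
        System.out.printf("data event: name=%s service=%s timestamp=%s%n",
                dataEvent.getName(), dataEvent.getService(), dataEvent.getTimestamp());
    }
}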

View File

@ -98,7 +98,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
byte[] codeHash = acctInfo.getKey().value;
ACCT acct = acctInfo.getValue().get();
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
null, null, null);
@ -259,7 +259,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
ACCT acct = acctInfo.getValue().get();
Integer dataByteOffset = acct.getModeByteOffset();
List<ATStateData> initialAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
List<ATStateData> initialAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
null, null, null);
@ -298,7 +298,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
byte[] codeHash = acctInfo.getKey().value;
ACCT acct = acctInfo.getValue().get();
List<ATStateData> historicAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
List<ATStateData> historicAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
null, null, null);

View File

@ -439,7 +439,15 @@ public class ArbitraryDataReader {
// Ensure the complete hash matches the joined chunks
if (!Arrays.equals(arbitraryDataFile.digest(), transactionData.getData())) {
// Delete the invalid file
arbitraryDataFile.delete();
LOGGER.info("Deleting invalid file: path = " + arbitraryDataFile.getFilePath());
if( arbitraryDataFile.delete() ) {
LOGGER.info("Deleted invalid file successfully: path = " + arbitraryDataFile.getFilePath());
}
else {
LOGGER.warn("Could not delete invalid file: path = " + arbitraryDataFile.getFilePath());
}
throw new DataException("Unable to validate complete file hash");
}
}
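Conceptually the check is "hash the joined file, compare against the hash recorded in the transaction, delete on mismatch". A generic sketch using SHA-256 (Qortal's actual digest routine and its DataException are substituted with stand-ins here):

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Arrays;

public class CompleteFileCheckDemo {
    static void verifyOrDelete(Path joinedFile, byte[] expectedDigest) throws Exception {
        byte[] actualDigest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(joinedFile));
        if (!Arrays.equals(actualDigest, expectedDigest)) {
            // Mirror the logging pattern above: report success or failure of the delete
            if (Files.deleteIfExists(joinedFile))
                System.out.println("Deleted invalid file: " + joinedFile);
            else
                System.out.println("Could not delete invalid file: " + joinedFile);
            throw new IllegalStateException("Unable to validate complete file hash");
        }
    }
}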

View File

@ -168,7 +168,7 @@ public class ArbitraryDataRenderer {
byte[] data = Files.readAllBytes(filePath); // TODO: limit file size that can be read into memory
HTMLParser htmlParser = new HTMLParser(resourceId, inPath, prefix, includeResourceIdInPrefix, data, qdnContext, service, identifier, theme, usingCustomRouting);
htmlParser.addAdditionalHeaderTags();
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' data: blob:; img-src 'self' data: blob:;");
response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; font-src 'self' data:; media-src 'self' data: blob:; img-src 'self' data: blob:; connect-src 'self' wss:;");
response.setContentType(context.getMimeType(filename));
response.setContentLength(htmlParser.getData().length);
response.getOutputStream().write(htmlParser.getData());

View File

@ -23,12 +23,11 @@ import org.qortal.data.at.ATStateData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.BlockSummaryData;
import org.qortal.data.block.BlockTransactionData;
import org.qortal.data.group.GroupAdminData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.ATRepository;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.TransactionRepository;
import org.qortal.group.Group;
import org.qortal.repository.*;
import org.qortal.settings.Settings;
import org.qortal.transaction.AtTransaction;
import org.qortal.transaction.Transaction;
@ -40,6 +39,7 @@ import org.qortal.transform.block.BlockTransformer;
import org.qortal.transform.transaction.TransactionTransformer;
import org.qortal.utils.Amounts;
import org.qortal.utils.Base58;
import org.qortal.utils.Groups;
import org.qortal.utils.NTP;
import java.io.ByteArrayOutputStream;
@ -144,11 +144,14 @@ public class Block {
private final Account mintingAccount;
private final AccountData mintingAccountData;
private final boolean isMinterFounder;
private final boolean isMinterMember;
private final Account recipientAccount;
private final AccountData recipientAccountData;
ExpandedAccount(Repository repository, RewardShareData rewardShareData) throws DataException {
final BlockChain blockChain = BlockChain.getInstance();
ExpandedAccount(Repository repository, RewardShareData rewardShareData, int blockHeight) throws DataException {
this.rewardShareData = rewardShareData;
this.sharePercent = this.rewardShareData.getSharePercent();
@ -157,6 +160,12 @@ public class Block {
this.isMinterFounder = Account.isFounder(mintingAccountData.getFlags());
this.isRecipientAlsoMinter = this.rewardShareData.getRecipient().equals(this.mintingAccount.getAddress());
this.isMinterMember
= Groups.memberExistsInAnyGroup(
repository.getGroupRepository(),
Groups.getGroupIdsToMint(BlockChain.getInstance(), blockHeight),
this.mintingAccount.getAddress()
);
if (this.isRecipientAlsoMinter) {
// Self-share: minter is also recipient
@ -169,6 +178,19 @@ public class Block {
}
}
/**
* Get Effective Minting Level
*
* @return the effective minting level, or zero if a DataException is thrown
*/
public int getEffectiveMintingLevel() {
try {
return this.mintingAccount.getEffectiveMintingLevel();
} catch (DataException e) {
return 0;
}
}
public Account getMintingAccount() {
return this.mintingAccount;
}
@ -181,19 +203,23 @@ public class Block {
* <p>
* This is a method, not a final variable, because account's level can change between construction and call,
* e.g. during Block.process() where account levels are bumped right before Block.distributeBlockReward().
*
* @return account-level share "bin" from blockchain config, or null if founder / none found
*/
public AccountLevelShareBin getShareBin(int blockHeight) {
if (this.isMinterFounder)
if (this.isMinterFounder && blockHeight < BlockChain.getInstance().getAdminsReplaceFoundersHeight())
return null;
final int accountLevel = this.mintingAccountData.getLevel();
if (accountLevel <= 0)
return null; // level 0 isn't included in any share bins
if (blockHeight >= blockChain.getFixBatchRewardHeight()) {
if (!this.isMinterMember)
return null; // non-members of the minter group aren't included in any share bins
}
// Select the correct set of share bins based on block height
final BlockChain blockChain = BlockChain.getInstance();
final AccountLevelShareBin[] shareBinsByLevel = (blockHeight >= blockChain.getSharesByLevelV2Height()) ?
blockChain.getShareBinsByAccountLevelV2() : blockChain.getShareBinsByAccountLevelV1();
@ -262,7 +288,7 @@ public class Block {
* Constructs new Block without loading transactions and AT states.
* <p>
* Transactions and AT states are loaded on first call to getTransactions() or getATStates() respectively.
*
* @param repository
* @param blockData
*/
@ -333,7 +359,7 @@ public class Block {
/**
* Constructs new Block with empty transaction list, using passed minter account.
*
* @param repository
* @param blockData
* @param minter
@ -351,7 +377,7 @@ public class Block {
* This constructor typically used when minting a new block.
* <p>
* Note that CIYAM ATs will be executed and AT-Transactions prepended to this block, along with AT state data and fees.
*
* @param repository
* @param parentBlockData
* @param minter
@ -377,7 +403,7 @@ public class Block {
byte[] encodedOnlineAccounts = new byte[0];
int onlineAccountsCount = 0;
byte[] onlineAccountsSignatures = null;
if (isBatchRewardDistributionBlock(height)) {
// Batch reward distribution block - copy online accounts from recent block with highest online accounts count
@ -398,7 +424,9 @@ public class Block {
onlineAccounts.removeIf(a -> a.getNonce() == null || a.getNonce() < 0);
// After feature trigger, remove any online accounts that are level 0
if (height >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
// but only before the ignore-level feature trigger
if (height < BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() &&
height >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
onlineAccounts.removeIf(a -> {
try {
return Account.getRewardShareEffectiveMintingLevel(repository, a.getPublicKey()) == 0;
@ -409,6 +437,21 @@ public class Block {
});
}
// After feature trigger, remove any online accounts that are not minter group member
if (height >= BlockChain.getInstance().getGroupMemberCheckHeight()) {
onlineAccounts.removeIf(a -> {
try {
List<Integer> groupIdsToMint = Groups.getGroupIdsToMint(BlockChain.getInstance(), height);
String address = Account.getRewardShareMintingAddress(repository, a.getPublicKey());
boolean isMinterGroupMember = Groups.memberExistsInAnyGroup(repository.getGroupRepository(), groupIdsToMint, address);
return !isMinterGroupMember;
} catch (DataException e) {
// Something went wrong, so remove the account
return true;
}
});
}
if (onlineAccounts.isEmpty()) {
LOGGER.debug("No online accounts - not even our own?");
return null;
@ -512,7 +555,7 @@ public class Block {
* Mints new block using this block as template, but with different minting account.
* <p>
* NOTE: uses the same transactions list, AT states, etc.
*
* @param minter
* @return
* @throws DataException
@ -598,7 +641,7 @@ public class Block {
/**
* Return composite block signature (minterSignature + transactionsSignature).
*
* @return byte[], or null if either component signature is null.
*/
public byte[] getSignature() {
@ -613,7 +656,7 @@ public class Block {
* <p>
* We're starting with version 4 as a nod to being newer than successor Qora,
* whose latest block version was 3.
*
* @return 1, 2, 3 or 4
*/
public int getNextBlockVersion() {
@ -627,7 +670,7 @@ public class Block {
* Return block's transactions.
* <p>
* If the block was loaded from repository then it's possible this method will call the repository to fetch the transactions if not done already.
*
* @return
* @throws DataException
*/
@ -661,7 +704,7 @@ public class Block {
* If the block was loaded from repository then it's possible this method will call the repository to fetch the AT states if not done already.
* <p>
* <b>Note:</b> AT states fetched from repository only contain summary info, not actual data like serialized state data or AT creation timestamps!
*
* @return
* @throws DataException
*/
@ -697,7 +740,7 @@ public class Block {
* <p>
* Typically called as part of Block.process() or Block.orphan()
* so ideally after any calls to Block.isValid().
*
* @throws DataException
*/
public List<ExpandedAccount> getExpandedAccounts() throws DataException {
@ -715,10 +758,12 @@ public class Block {
List<ExpandedAccount> expandedAccounts = new ArrayList<>();
for (RewardShareData rewardShare : this.cachedOnlineRewardShares)
expandedAccounts.add(new ExpandedAccount(repository, rewardShare));
for (RewardShareData rewardShare : this.cachedOnlineRewardShares) {
expandedAccounts.add(new ExpandedAccount(repository, rewardShare, this.blockData.getHeight()));
}
this.cachedExpandedAccounts = expandedAccounts;
LOGGER.trace(() -> String.format("Online reward-shares after expanded accounts %s", this.cachedOnlineRewardShares));
return this.cachedExpandedAccounts;
}
@ -727,7 +772,7 @@ public class Block {
/**
* Load parent block's data from repository via this block's reference.
*
* @return parent's BlockData, or null if no parent found
* @throws DataException
*/
@ -741,7 +786,7 @@ public class Block {
/**
* Load child block's data from repository via this block's signature.
*
* @return child's BlockData, or null if no child found
* @throws DataException
*/
@ -761,7 +806,7 @@ public class Block {
* Used when constructing a new block during minting.
* <p>
* Requires block's {@code minter} being a {@code PrivateKeyAccount} so block's transactions signature can be recalculated.
*
* @param transactionData
* @return true if transaction successfully added to block, false otherwise
* @throws IllegalStateException
@ -814,7 +859,7 @@ public class Block {
* Used when constructing a new block during minting.
* <p>
* Requires block's {@code minter} being a {@code PrivateKeyAccount} so block's transactions signature can be recalculated.
*
* @param transactionData
* @throws IllegalStateException
* if block's {@code minter} is not a {@code PrivateKeyAccount}.
@ -859,7 +904,7 @@ public class Block {
* previous block's minter signature + minter's public key + (encoded) online-accounts data
* <p>
* (Previous block's minter signature is extracted from this block's reference).
*
* @throws IllegalStateException
* if block's {@code minter} is not a {@code PrivateKeyAccount}.
* @throws RuntimeException
@ -876,7 +921,7 @@ public class Block {
* Recalculate block's transactions signature.
* <p>
* Requires block's {@code minter} being a {@code PrivateKeyAccount}.
*
* @throws IllegalStateException
* if block's {@code minter} is not a {@code PrivateKeyAccount}.
* @throws RuntimeException
@ -998,7 +1043,7 @@ public class Block {
* Recalculate block's minter and transactions signatures, thus giving block full signature.
* <p>
* Note: Block instance must have been constructed with a <tt>PrivateKeyAccount</tt> minter or this call will throw an <tt>IllegalStateException</tt>.
*
* @throws IllegalStateException
* if block's {@code minter} is not a {@code PrivateKeyAccount}.
*/
@ -1011,7 +1056,7 @@ public class Block {
/**
* Returns whether this block's signatures are valid.
*
* @return true if both minter and transaction signatures are valid, false otherwise
*/
public boolean isSignatureValid() {
@ -1035,7 +1080,7 @@ public class Block {
* <p>
* Used by BlockMinter to check whether it's time to mint a new block,
* and also used by Block.isValid for checks (if not a testchain).
*
* @return ValidationResult.OK if timestamp valid, or some other ValidationResult otherwise.
* @throws DataException
*/
@ -1124,14 +1169,32 @@ public class Block {
if (onlineRewardShares == null)
return ValidationResult.ONLINE_ACCOUNT_UNKNOWN;
// After feature trigger, require all online account minters to be greater than level 0
if (this.getBlockData().getHeight() >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
// After feature trigger, require all online account minters to be greater than level 0,
// but only before the feature trigger where level is ignored again
if (this.blockData.getHeight() < BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() &&
this.getBlockData().getHeight() >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
List<ExpandedAccount> expandedAccounts
= this.getExpandedAccounts().stream()
.filter(expandedAccount -> expandedAccount.isMinterMember)
.collect(Collectors.toList());
for (ExpandedAccount account : expandedAccounts) {
if (account.getMintingAccount().getEffectiveMintingLevel() == 0)
return ValidationResult.ONLINE_ACCOUNTS_INVALID;
if (this.getBlockData().getHeight() >= BlockChain.getInstance().getFixBatchRewardHeight()) {
if (!account.isMinterMember)
return ValidationResult.ONLINE_ACCOUNTS_INVALID;
}
}
}
else if (this.blockData.getHeight() >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight()){
Optional<ExpandedAccount> anyInvalidAccount
= this.getExpandedAccounts().stream()
.filter(account -> !account.isMinterMember)
.findAny();
if( anyInvalidAccount.isPresent() ) return ValidationResult.ONLINE_ACCOUNTS_INVALID;
}
// If block is past a certain age then we simply assume the signatures were correct
long signatureRequirementThreshold = NTP.getTime() - BlockChain.getInstance().getOnlineAccountSignaturesMinLifetime();
@ -1215,7 +1278,7 @@ public class Block {
* <p>
* Checks block's transactions by testing their validity then processing them.<br>
*
*
* @return ValidationResult.OK if block is valid, or some other ValidationResult otherwise.
* @throws DataException
*/
@ -1258,6 +1321,7 @@ public class Block {
// Online Accounts
ValidationResult onlineAccountsResult = this.areOnlineAccountsValid();
LOGGER.trace("Accounts valid = {}", onlineAccountsResult);
if (onlineAccountsResult != ValidationResult.OK)
return onlineAccountsResult;
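Condensed as a per-account predicate, the height gating introduced above reads as follows. This is a sketch: the three trigger parameters stand in for the getOnlineAccountMinterLevelValidationHeight(), getFixBatchRewardHeight() and getIgnoreLevelForRewardShareHeight() feature triggers, and the method name is hypothetical:

public class OnlineAccountGate {
    /** Returns true if one online account passes the height-gated checks. */
    static boolean accountValid(int height, int effectiveMintingLevel, boolean isMinterMember,
                                int levelCheckHeight, int fixBatchRewardHeight, int ignoreLevelHeight) {
        if (height >= ignoreLevelHeight)
            return isMinterMember;                   // levels ignored, membership mandatory
        if (height >= levelCheckHeight) {
            if (isMinterMember && effectiveMintingLevel == 0)
                return false;                        // members must be above level 0
            if (height >= fixBatchRewardHeight && !isMinterMember)
                return false;                        // membership enforced from fixBatchReward on
        }
        return true;                                 // before all triggers: no extra checks
    }
}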
@ -1346,7 +1410,7 @@ public class Block {
// Check transaction can even be processed
validationResult = transaction.isProcessable();
if (validationResult != Transaction.ValidationResult.OK) {
LOGGER.info(String.format("Error during transaction validation, tx %s: %s", Base58.encode(transactionData.getSignature()), validationResult.name()));
LOGGER.debug(String.format("Error during transaction validation, tx %s: %s", Base58.encode(transactionData.getSignature()), validationResult.name()));
return ValidationResult.TRANSACTION_INVALID;
}
@ -1386,7 +1450,7 @@ public class Block {
* <p>
* NOTE: will execute ATs locally if not already done.<br>
* This is so we have locally-generated AT states for comparison.
*
* @return OK, or some AT-related validation result
* @throws DataException
*/
@ -1462,11 +1526,11 @@ public class Block {
* Note: this method does not store new AT state data into repository - that is handled by <tt>process()</tt>.
* <p>
* This method is not needed if fetching an existing block from the repository as AT state data will be loaded from repository as well.
*
* @see #isValid()
*
* @throws DataException
*
*/
private void executeATs() throws DataException {
// We're expecting a lack of AT state data at this point.
@ -1518,7 +1582,7 @@ public class Block {
return false;
Account mintingAccount = new PublicKeyAccount(this.repository, rewardShareData.getMinterPublicKey());
return mintingAccount.canMint();
return mintingAccount.canMint(false);
}
/**
@ -1538,7 +1602,7 @@ public class Block {
/**
* Process block, and its transactions, adding them to the blockchain.
*
* @throws DataException
*/
public void process() throws DataException {
@ -1547,6 +1611,7 @@ public class Block {
this.blockData.setHeight(blockchainHeight + 1);
LOGGER.trace(() -> String.format("Processing block %d", this.blockData.getHeight()));
LOGGER.trace(() -> String.format("Online Reward Shares in process %s", this.cachedOnlineRewardShares));
if (this.blockData.getHeight() > 1) {
@ -1618,7 +1683,17 @@ public class Block {
final List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
final int maximumLevel = cumulativeBlocksByLevel.size() - 1;
final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
final List<ExpandedAccount> expandedAccounts;
if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
}
else {
expandedAccounts
= this.getExpandedAccounts().stream()
.filter(expandedAccount -> expandedAccount.isMinterMember)
.collect(Collectors.toList());
}
Set<AccountData> allUniqueExpandedAccounts = new HashSet<>();
for (ExpandedAccount expandedAccount : expandedAccounts) {
@ -1839,7 +1914,7 @@ public class Block {
/**
* Removes block from blockchain undoing transactions and adding them to unconfirmed pile.
*
* @throws DataException
*/
public void orphan() throws DataException {
@ -1879,7 +1954,7 @@ public class Block {
SelfSponsorshipAlgoV3Block.orphanAccountPenalties(this);
}
}
// Account levels and block rewards are only processed/orphaned on block reward distribution blocks
if (this.isRewardDistributionBlock()) {
// Block rewards, including transaction fees, removed after transactions undone
@ -2018,7 +2093,17 @@ public class Block {
final List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
final int maximumLevel = cumulativeBlocksByLevel.size() - 1;
final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
final List<ExpandedAccount> expandedAccounts;
if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
}
else {
expandedAccounts
= this.getExpandedAccounts().stream()
.filter(expandedAccount -> expandedAccount.isMinterMember)
.collect(Collectors.toList());
}
Set<AccountData> allUniqueExpandedAccounts = new HashSet<>();
for (ExpandedAccount expandedAccount : expandedAccounts) {
@ -2213,6 +2298,7 @@ public class Block {
List<AccountBalanceData> accountBalanceDeltas = balanceChanges.entrySet().stream()
.map(entry -> new AccountBalanceData(entry.getKey(), Asset.QORT, entry.getValue()))
.collect(Collectors.toList());
LOGGER.trace("Account Balance Deltas: {}", accountBalanceDeltas);
this.repository.getAccountRepository().modifyAssetBalances(accountBalanceDeltas);
}
@ -2221,34 +2307,44 @@ public class Block {
List<BlockRewardCandidate> rewardCandidates = new ArrayList<>();
// All online accounts
final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
final List<ExpandedAccount> expandedAccounts;
if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
}
else {
expandedAccounts
= this.getExpandedAccounts().stream()
.filter(expandedAccount -> expandedAccount.isMinterMember)
.collect(Collectors.toList());
}
/*
* Distribution rules:
*
* Distribution is based on the minting account of 'online' reward-shares.
*
* If ANY founders are online, then they receive the leftover non-distributed reward.
* If NO founders are online, then account-level-based rewards are scaled up so 100% of reward is allocated.
*
* If ANY non-maxxed legacy QORA holders exist then they are always allocated their fixed share (e.g. 20%).
*
* There has to be either at least one 'online' account for blocks to be minted
* so there is always either one account-level-based or founder reward candidate.
*
* Examples:
*
* With at least one founder online:
* Level 1/2 accounts: 5%
* Legacy QORA holders: 20%
* Founders: ~75%
*
* No online founders:
* Level 1/2 accounts: 5%
* Level 5/6 accounts: 15%
* Legacy QORA holders: 20%
* Total: 40%
*
* After scaling account-level-based shares to fill 100%:
* Level 1/2 accounts: 20%
* Level 5/6 accounts: 60%
@ -2264,7 +2360,6 @@ public class Block {
// Select the correct set of share bins based on block height
List<AccountLevelShareBin> accountLevelShareBinsForBlock = (this.blockData.getHeight() >= BlockChain.getInstance().getSharesByLevelV2Height()) ?
BlockChain.getInstance().getAccountLevelShareBinsV2() : BlockChain.getInstance().getAccountLevelShareBinsV1();
// Determine reward candidates based on account level
// This needs a deep copy, so the shares can be modified when tiers aren't activated yet
List<AccountLevelShareBin> accountLevelShareBins = new ArrayList<>();
@ -2347,7 +2442,7 @@ public class Block {
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(this.blockData.getHeight());
// Perform account-level-based reward scaling if appropriate
if (!haveFounders) {
if (!haveFounders && this.blockData.getHeight() < BlockChain.getInstance().getAdminsReplaceFoundersHeight() ) {
// Recalculate distribution ratios based on candidates
// Nothing shared? This shouldn't happen
@ -2383,18 +2478,103 @@ public class Block {
}
// Add founders as reward candidate if appropriate
if (haveFounders) {
if (haveFounders && this.blockData.getHeight() < BlockChain.getInstance().getAdminsReplaceFoundersHeight()) {
// Yes: add to reward candidates list
BlockRewardDistributor founderDistributor = (distributionAmount, balanceChanges) -> distributeBlockRewardShare(distributionAmount, onlineFounderAccounts, balanceChanges);
final long foundersShare = 1_00000000 - totalShares;
BlockRewardCandidate rewardCandidate = new BlockRewardCandidate("Founders", foundersShare, founderDistributor);
rewardCandidates.add(rewardCandidate);
LOGGER.info("logging foundersShare prior to reward modifications {}",foundersShare);
}
else if (this.blockData.getHeight() >= BlockChain.getInstance().getAdminsReplaceFoundersHeight()) {
try (final Repository repository = RepositoryManager.getRepository()) {
GroupRepository groupRepository = repository.getGroupRepository();
List<Integer> mintingGroupIds = Groups.getGroupIdsToMint(BlockChain.getInstance(), this.blockData.getHeight());
// all minter admins
List<String> minterAdmins = Groups.getAllAdmins(groupRepository, mintingGroupIds);
// all minter admins that are online
List<ExpandedAccount> onlineMinterAdminAccounts
= expandedAccounts.stream()
.filter(expandedAccount -> minterAdmins.contains(expandedAccount.getMintingAccount().getAddress()))
.collect(Collectors.toList());
long minterAdminShare;
if( onlineMinterAdminAccounts.isEmpty() ) {
minterAdminShare = 0;
}
else {
BlockRewardDistributor minterAdminDistributor
= (distributionAmount, balanceChanges)
->
distributeBlockRewardShare(distributionAmount, onlineMinterAdminAccounts, balanceChanges);
long adminShare = 1_00000000 - totalShares;
LOGGER.info("initial total Shares: {}", totalShares);
LOGGER.info("logging adminShare after hardfork, this is the primary reward that will be split {}", adminShare);
minterAdminShare = adminShare / 2;
BlockRewardCandidate minterAdminRewardCandidate
= new BlockRewardCandidate("Minter Admins", minterAdminShare, minterAdminDistributor);
rewardCandidates.add(minterAdminRewardCandidate);
totalShares += minterAdminShare;
}
LOGGER.info("MINTER ADMIN SHARE: {}",minterAdminShare);
// all dev admins
List<String> devAdminAddresses
= groupRepository.getGroupAdmins(1).stream()
.map(GroupAdminData::getAdmin)
.collect(Collectors.toList());
LOGGER.info("Removing NULL Account Address, Dev Admin Count = {}", devAdminAddresses.size());
devAdminAddresses.removeIf( address -> Group.NULL_OWNER_ADDRESS.equals(address) );
LOGGER.info("Removed NULL Account Address, Dev Admin Count = {}", devAdminAddresses.size());
BlockRewardDistributor devAdminDistributor
= (distributionAmount, balanceChanges) -> distributeToAccounts(distributionAmount, devAdminAddresses, balanceChanges);
long devAdminShare = 1_00000000 - totalShares;
LOGGER.info("DEV ADMIN SHARE: {}",devAdminShare);
BlockRewardCandidate devAdminRewardCandidate
= new BlockRewardCandidate("Dev Admins", devAdminShare,devAdminDistributor);
rewardCandidates.add(devAdminRewardCandidate);
}
}
return rewardCandidates;
}
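To make the post-hardfork split concrete, a worked example in Qortal's 1_00000000 = 100% fixed-point convention (the 40% starting allocation is made up; the arithmetic follows the code above):

public class AdminShareMathDemo {
    public static void main(String[] args) {
        long totalShares = 40_000000L;                   // bins + QORA holders already claim 40%
        long adminShare = 1_00000000L - totalShares;     // 60% that founders used to receive

        long minterAdminShare = adminShare / 2;          // 30% to online minter-group admins
        totalShares += minterAdminShare;

        long devAdminShare = 1_00000000L - totalShares;  // remaining 30% to dev-group admins
        // If no minter admins are online, minterAdminShare stays 0 and
        // the dev admins receive the full 60%
        System.out.println(minterAdminShare + " + " + devAdminShare + " = " + adminShare);
    }
}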
/**
* Distribute To Accounts
*
* Merges per-account distribution shares into a map of balance changes.
*
* @param distributionAmount the amount to distribute
* @param accountAddresses the addresses to distribute to
* @param balanceChanges the map of distribution shares, which this method appends to
*
* @return the total amount mapped to addresses for distribution
*/
public static long distributeToAccounts(long distributionAmount, List<String> accountAddresses, Map<String, Long> balanceChanges) {
if( accountAddresses.isEmpty() ) return 0;
long distributionShare = distributionAmount / accountAddresses.size();
for(String accountAddress : accountAddresses ) {
balanceChanges.merge(accountAddress, distributionShare, Long::sum);
}
return distributionShare * accountAddresses.size();
}
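Note the integer division: whatever does not divide evenly stays undistributed, which is why the method returns the exact total it mapped rather than distributionAmount. A quick check (addresses are placeholders; assumes Block is on the classpath):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DistributionDustDemo {
    public static void main(String[] args) {
        Map<String, Long> balanceChanges = new HashMap<>();
        long distributed = Block.distributeToAccounts(
                100L, List.of("Qaaa", "Qbbb", "Qccc"), balanceChanges);
        System.out.println(distributed);     // 99 -> each address gets 33, 1 unit is dust
        System.out.println(balanceChanges);  // {Qaaa=33, Qbbb=33, Qccc=33} (order may vary)
    }
}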
private static long distributeBlockRewardShare(long distributionAmount, List<ExpandedAccount> accounts, Map<String, Long> balanceChanges) {
// Collate all expanded accounts by minting account
Map<String, List<ExpandedAccount>> accountsByMinter = new HashMap<>();
@ -2554,9 +2734,11 @@ public class Block {
return;
int minterLevel = Account.getRewardShareEffectiveMintingLevel(this.repository, this.getMinter().getPublicKey());
String minterAddress = Account.getRewardShareMintingAddress(this.repository, this.getMinter().getPublicKey());
LOGGER.debug(String.format("======= BLOCK %d (%.8s) =======", this.getBlockData().getHeight(), Base58.encode(this.getSignature())));
LOGGER.debug(String.format("Timestamp: %d", this.getBlockData().getTimestamp()));
LOGGER.debug(String.format("Minter address: %s", minterAddress));
LOGGER.debug(String.format("Minter level: %d", minterLevel));
LOGGER.debug(String.format("Online accounts: %d", this.getBlockData().getOnlineAccountsCount()));
LOGGER.debug(String.format("AT count: %d", this.getBlockData().getATCount()));

View File

@ -71,6 +71,7 @@ public class BlockChain {
transactionV6Timestamp,
disableReferenceTimestamp,
increaseOnlineAccountsDifficultyTimestamp,
decreaseOnlineAccountsDifficultyTimestamp,
onlineAccountMinterLevelValidationHeight,
selfSponsorshipAlgoV1Height,
selfSponsorshipAlgoV2Height,
@ -85,7 +86,13 @@ public class BlockChain {
disableRewardshareHeight,
enableRewardshareHeight,
onlyMintWithNameHeight,
groupMemberCheckHeight
removeOnlyMintWithNameHeight,
groupMemberCheckHeight,
fixBatchRewardHeight,
adminsReplaceFoundersHeight,
nullGroupMembershipHeight,
ignoreLevelForRewardShareHeight,
adminQueryFixHeight
}
// Custom transaction fees
@ -205,7 +212,13 @@ public class BlockChain {
private int minAccountLevelToRewardShare;
private int maxRewardSharesPerFounderMintingAccount;
private int founderEffectiveMintingLevel;
private int mintingGroupId;
public static class IdsForHeight {
public int height;
public List<Integer> ids;
}
private List<IdsForHeight> mintingGroupIds;
/** Minimum time to retain online account signatures (ms) for block validity checks. */
private long onlineAccountSignaturesMinLifetime;
@ -217,6 +230,10 @@ public class BlockChain {
* featureTriggers because unit tests need to set this value via Reflection. */
private long onlineAccountsModulusV2Timestamp;
/** Feature trigger timestamp for ONLINE_ACCOUNTS_MODULUS time interval decrease. Can't use
* featureTriggers because unit tests need to set this value via Reflection. */
private long onlineAccountsModulusV3Timestamp;
/** Snapshot timestamp for self sponsorship algo V1 */
private long selfSponsorshipAlgoV1SnapshotTimestamp;
@ -403,6 +420,10 @@ public class BlockChain {
return this.onlineAccountsModulusV2Timestamp;
}
public long getOnlineAccountsModulusV3Timestamp() {
return this.onlineAccountsModulusV3Timestamp;
}
/* Block reward batching */
public long getBlockRewardBatchStartHeight() {
return this.blockRewardBatchStartHeight;
@ -529,8 +550,8 @@ public class BlockChain {
return this.onlineAccountSignaturesMaxLifetime;
}
public int getMintingGroupId() {
return this.mintingGroupId;
public List<IdsForHeight> getMintingGroupIds() {
return mintingGroupIds;
}
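A hedged sketch of how a height-keyed list like mintingGroupIds can be resolved for a given block. Groups.getGroupIdsToMint() presumably does something similar; this helper is illustrative and assumes entries are sorted by ascending activation height:

import java.util.Collections;
import java.util.List;

public class MintingGroupLookupDemo {
    public static class IdsForHeight {
        public int height;        // activation height
        public List<Integer> ids; // group ids in force from that height onward
    }

    static List<Integer> idsAtHeight(List<IdsForHeight> entries, int blockHeight) {
        List<Integer> current = Collections.emptyList();
        for (IdsForHeight entry : entries) {
            if (entry.height > blockHeight)
                break;            // later activations don't apply yet
            current = entry.ids;  // keep the latest entry at or below blockHeight
        }
        return current;
    }
}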
public CiyamAtSettings getCiyamAtSettings() {
@ -579,6 +600,10 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.increaseOnlineAccountsDifficultyTimestamp.name()).longValue();
}
public long getDecreaseOnlineAccountsDifficultyTimestamp() {
return this.featureTriggers.get(FeatureTrigger.decreaseOnlineAccountsDifficultyTimestamp.name()).longValue();
}
public int getSelfSponsorshipAlgoV1Height() {
return this.featureTriggers.get(FeatureTrigger.selfSponsorshipAlgoV1Height.name()).intValue();
}
@ -635,10 +660,34 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.onlyMintWithNameHeight.name()).intValue();
}
public int getRemoveOnlyMintWithNameHeight() {
return this.featureTriggers.get(FeatureTrigger.removeOnlyMintWithNameHeight.name()).intValue();
}
public int getGroupMemberCheckHeight() {
return this.featureTriggers.get(FeatureTrigger.groupMemberCheckHeight.name()).intValue();
}
public int getFixBatchRewardHeight() {
return this.featureTriggers.get(FeatureTrigger.fixBatchRewardHeight.name()).intValue();
}
public int getAdminsReplaceFoundersHeight() {
return this.featureTriggers.get(FeatureTrigger.adminsReplaceFoundersHeight.name()).intValue();
}
public int getNullGroupMembershipHeight() {
return this.featureTriggers.get(FeatureTrigger.nullGroupMembershipHeight.name()).intValue();
}
public int getIgnoreLevelForRewardShareHeight() {
return this.featureTriggers.get(FeatureTrigger.ignoreLevelForRewardShareHeight.name()).intValue();
}
public int getAdminQueryFixHeight() {
return this.featureTriggers.get(FeatureTrigger.adminQueryFixHeight.name()).intValue();
}
// More complex getters for aspects that change by height or timestamp
public long getRewardAtHeight(int ourHeight) {

View File

@ -97,364 +97,375 @@ public class BlockMinter extends Thread {
final boolean isSingleNodeTestnet = Settings.getInstance().isSingleNodeTestnet();
try (final Repository repository = RepositoryManager.getRepository()) {
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
// Flags for tracking change in whether minting is possible,
// so we can notify Controller, and further update SysTray, etc.
boolean isMintingPossible = false;
boolean wasMintingPossible = isMintingPossible;
try {
while (running) {
if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);
// recreate repository for new loop iteration
try (final Repository repository = RepositoryManager.getRepository()) {
wasMintingPossible = isMintingPossible;
// Going to need this a lot...
BlockRepository blockRepository = repository.getBlockRepository();
try {
// Free up any repository locks
repository.discardChanges();
if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);
// Sleep for a while.
// It's faster on single node testnets, to allow lots of blocks to be minted quickly.
Thread.sleep(isSingleNodeTestnet ? 50 : 1000);
isMintingPossible = false;
final Long now = NTP.getTime();
if (now == null)
continue;
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
continue;
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
continue;
// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
MintingAccountData mintingAccountData = madi.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
madi.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
}
// Optional (non-validated) prevention of block submissions below a defined level.
// This is an unvalidated version of Blockchain.minAccountLevelToMint
// and exists only to reduce block candidates by default.
int level = mintingAccount.getEffectiveMintingLevel();
if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
madi.remove();
}
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (!Synchronizer.getInstance().getRecoveryMode())
peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
// If we are stuck on an invalid block, we should allow an alternative to be minted
boolean recoverInvalidBlock = false;
if (Synchronizer.getInstance().timeInvalidBlockLastReceived != null) {
// We've had at least one invalid block
long timeSinceLastValidBlock = NTP.getTime() - Synchronizer.getInstance().timeValidBlockLastReceived;
long timeSinceLastInvalidBlock = NTP.getTime() - Synchronizer.getInstance().timeInvalidBlockLastReceived;
if (timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT) {
if (timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT) {
// Last valid block was more than 10 mins ago, but we've had an invalid block since then
// Assume that the chain has stalled because there is no alternative valid candidate
// Enter recovery mode to allow alternative, valid candidates to be minted
recoverInvalidBlock = true;
}
}
}
// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
if (!Synchronizer.getInstance().getRecoveryMode() && !recoverInvalidBlock)
continue;
// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
isMintingPossible = true;
// Check blockchain hasn't changed
if (previousBlockData == null || !Arrays.equals(previousBlockData.getSignature(), lastBlockData.getSignature())) {
previousBlockData = lastBlockData;
newBlocks.clear();
// Reduce log timeout
logTimeout = 10 * 1000L;
// Last low weight block is no longer valid
parentSignatureForLastLowWeightBlock = null;
}
// Discard accounts we have already built blocks with
mintingAccountsData.removeIf(mintingAccountData -> newBlocks.stream().anyMatch(newBlock -> Arrays.equals(newBlock.getBlockData().getMinterPublicKey(), mintingAccountData.getPublicKey())));
// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
// Skip this check for single node testnets, since they definitely need to mint every block
byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock && !isSingleNodeTestnet) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
}
if (parentSignatureForLastLowWeightBlock != null) {
// The last iteration found a higher weight block in the network, so sleep for a while
// to allow us to sync the higher weight chain. We are sleeping here rather than when
// detected as we don't want to hold the blockchain lock open.
LOGGER.info("Sleeping for 10 seconds...");
Thread.sleep(10 * 1000L);
}
for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
// First block does the AT heavy-lifting
if (newBlocks.isEmpty()) {
Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.info("Couldn't build a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
} else {
// The blocks for other minters require less effort...
Block newBlock = newBlocks.get(0).remint(mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't rebuild a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
}
}
// No potential block candidates?
if (newBlocks.isEmpty())
continue;
// Make sure we're the only thread modifying the blockchain
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(30, TimeUnit.SECONDS)) {
LOGGER.debug("Couldn't acquire blockchain lock even after waiting 30 seconds");
continue;
}
boolean newBlockMinted = false;
Block newBlock = null;
wasMintingPossible = isMintingPossible;
try {
// Clear repository session state so we have latest view of data
// reset the repository, to the repository recreated for this loop iteration
for( Block newBlock : newBlocks ) newBlock.setRepository(repository);
// Free up any repository locks
repository.discardChanges();
// Now that we have blockchain lock, do final check that chain hasn't changed
BlockData latestBlockData = blockRepository.getLastBlock();
if (!Arrays.equals(lastBlockData.getSignature(), latestBlockData.getSignature()))
// Sleep for a while.
// It's faster on single node testnets, to allow lots of blocks to be minted quickly.
Thread.sleep(isSingleNodeTestnet ? 50 : 1000);
isMintingPossible = false;
final Long now = NTP.getTime();
if (now == null)
continue;
List<Block> goodBlocks = new ArrayList<>();
boolean wasInvalidBlockDiscarded = false;
Iterator<Block> newBlocksIterator = newBlocks.iterator();
final Long minLatestBlockTimestamp = Controller.getMinimumLatestBlockTimestamp();
if (minLatestBlockTimestamp == null)
continue;
while (newBlocksIterator.hasNext()) {
Block testBlock = newBlocksIterator.next();
List<MintingAccountData> mintingAccountsData = repository.getAccountRepository().getMintingAccounts();
// No minting accounts?
if (mintingAccountsData.isEmpty())
continue;
// Is new block's timestamp valid yet?
// We do a separate check as some timestamp checks are skipped for testchains
if (testBlock.isTimestampValid() != ValidationResult.OK)
// Disregard minting accounts that are no longer valid, e.g. by transfer/loss of founder flag or account level
// Note that minting accounts are actually reward-shares in Qortal
Iterator<MintingAccountData> madi = mintingAccountsData.iterator();
while (madi.hasNext()) {
MintingAccountData mintingAccountData = madi.next();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(mintingAccountData.getPublicKey());
if (rewardShareData == null) {
// Reward-share doesn't exist - probably cancelled but not yet removed from node's list of minting accounts
madi.remove();
continue;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint(true)) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
}
// Optional (non-validated) prevention of block submissions below a defined level.
// This is an unvalidated version of Blockchain.minAccountLevelToMint
// and exists only to reduce block candidates by default.
int level = mintingAccount.getEffectiveMintingLevel();
if (level < BlockChain.getInstance().getMinAccountLevelForBlockSubmissions()) {
madi.remove();
}
}
// Needs a mutable copy of the unmodifiableList
List<Peer> peers = new ArrayList<>(Network.getInstance().getImmutableHandshakedPeers());
BlockData lastBlockData = blockRepository.getLastBlock();
// Disregard peers that have "misbehaved" recently
peers.removeIf(Controller.hasMisbehaved);
// Disregard peers that don't have a recent block, but only if we're not in recovery mode.
// In that mode, we want to allow minting on top of older blocks, to recover stalled networks.
if (!Synchronizer.getInstance().getRecoveryMode())
peers.removeIf(Controller.hasNoRecentBlock);
// Don't mint if we don't have enough up-to-date peers as where would the transactions/consensus come from?
if (peers.size() < Settings.getInstance().getMinBlockchainPeers())
continue;
// If we are stuck on an invalid block, we should allow an alternative to be minted
boolean recoverInvalidBlock = false;
if (Synchronizer.getInstance().timeInvalidBlockLastReceived != null) {
// We've had at least one invalid block
long timeSinceLastValidBlock = NTP.getTime() - Synchronizer.getInstance().timeValidBlockLastReceived;
long timeSinceLastInvalidBlock = NTP.getTime() - Synchronizer.getInstance().timeInvalidBlockLastReceived;
if (timeSinceLastValidBlock > INVALID_BLOCK_RECOVERY_TIMEOUT) {
if (timeSinceLastInvalidBlock < INVALID_BLOCK_RECOVERY_TIMEOUT) {
// Last valid block was more than 10 mins ago, but we've had an invalid block since then
// Assume that the chain has stalled because there is no alternative valid candidate
// Enter recovery mode to allow alternative, valid candidates to be minted
recoverInvalidBlock = true;
}
}
}
// If our latest block isn't recent then we need to synchronize instead of minting, unless we're in recovery mode.
if (!peers.isEmpty() && lastBlockData.getTimestamp() < minLatestBlockTimestamp)
if (!Synchronizer.getInstance().getRecoveryMode() && !recoverInvalidBlock)
continue;
testBlock.preProcess();
// There are enough peers with a recent block and our latest block is recent
// so go ahead and mint a block if possible.
isMintingPossible = true;
// Is new block valid yet? (Before adding unconfirmed transactions)
ValidationResult result = testBlock.isValid();
if (result != ValidationResult.OK) {
moderatedLog(() -> LOGGER.error(String.format("To-be-minted block invalid '%s' before adding transactions?", result.name())));
// Check blockchain hasn't changed
if (previousBlockData == null || !Arrays.equals(previousBlockData.getSignature(), lastBlockData.getSignature())) {
previousBlockData = lastBlockData;
newBlocks.clear();
newBlocksIterator.remove();
wasInvalidBlockDiscarded = true;
/*
* Bail out fast so that we loop around from the top again.
* This gives BlockMinter the possibility to remint this candidate block using another block from newBlocks,
* via the Blocks.remint() method, which avoids having to re-process Block ATs all over again.
* Particularly useful if some aspect of Blocks changes due a timestamp-based feature-trigger (see BlockChain class).
*/
break;
}
// Reduce log timeout
logTimeout = 10 * 1000L;
goodBlocks.add(testBlock);
// Last low weight block is no longer valid
parentSignatureForLastLowWeightBlock = null;
}
if (wasInvalidBlockDiscarded || goodBlocks.isEmpty())
// Discard accounts we have already built blocks with
mintingAccountsData.removeIf(mintingAccountData -> newBlocks.stream().anyMatch(newBlock -> Arrays.equals(newBlock.getBlockData().getMinterPublicKey(), mintingAccountData.getPublicKey())));
// Do we need to build any potential new blocks?
List<PrivateKeyAccount> newBlocksMintingAccounts = mintingAccountsData.stream().map(accountData -> new PrivateKeyAccount(repository, accountData.getPrivateKey())).collect(Collectors.toList());
// We might need to sit the next block out, if one of our minting accounts signed the previous one
// Skip this check for single node testnets, since they definitely need to mint every block
byte[] previousBlockMinter = previousBlockData.getMinterPublicKey();
boolean mintedLastBlock = mintingAccountsData.stream().anyMatch(mintingAccount -> Arrays.equals(mintingAccount.getPublicKey(), previousBlockMinter));
if (mintedLastBlock && !isSingleNodeTestnet) {
LOGGER.trace(String.format("One of our keys signed the last block, so we won't sign the next one"));
continue;
// Pick best block
final int parentHeight = previousBlockData.getHeight();
final byte[] parentBlockSignature = previousBlockData.getSignature();
BigInteger bestWeight = null;
for (int bi = 0; bi < goodBlocks.size(); ++bi) {
BlockData blockData = goodBlocks.get(bi).getBlockData();
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = Block.calcBlockWeight(parentHeight, parentBlockSignature, blockSummaryData);
if (bestWeight == null || blockWeight.compareTo(bestWeight) < 0) {
newBlock = goodBlocks.get(bi);
bestWeight = blockWeight;
}
}
try {
if (this.higherWeightChainExists(repository, bestWeight)) {
if (parentSignatureForLastLowWeightBlock != null) {
// The last iteration found a higher weight block in the network, so sleep for a while
// to allow is to sync the higher weight chain. We are sleeping here rather than when
// detected as we don't want to hold the blockchain lock open.
LOGGER.info("Sleeping for 10 seconds...");
Thread.sleep(10 * 1000L);
}
// Check if the base block has updated since the last time we were here
if (parentSignatureForLastLowWeightBlock == null || timeOfLastLowWeightBlock == null ||
!Arrays.equals(parentSignatureForLastLowWeightBlock, previousBlockData.getSignature())) {
// We've switched to a different chain, so reset the timer
timeOfLastLowWeightBlock = NTP.getTime();
}
parentSignatureForLastLowWeightBlock = previousBlockData.getSignature();
// If less than 30 seconds has passed since first detection the higher weight chain,
// we should skip our block submission to give us the opportunity to sync to the better chain
if (NTP.getTime() - timeOfLastLowWeightBlock < 30 * 1000L) {
LOGGER.info("Higher weight chain found in peers, so not signing a block this round");
LOGGER.info("Time since detected: {}", NTP.getTime() - timeOfLastLowWeightBlock);
for (PrivateKeyAccount mintingAccount : newBlocksMintingAccounts) {
// First block does the AT heavy-lifting
if (newBlocks.isEmpty()) {
Block newBlock = Block.mint(repository, previousBlockData, mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.info("Couldn't build a to-be-minted block"));
continue;
} else {
// More than 30 seconds have passed, so we should submit our block candidate anyway.
LOGGER.info("More than 30 seconds passed, so proceeding to submit block candidate...");
}
newBlocks.add(newBlock);
} else {
LOGGER.debug("No higher weight chain found in peers");
// The blocks for other minters require less effort...
Block newBlock = newBlocks.get(0).remint(mintingAccount);
if (newBlock == null) {
// For some reason we can't mint right now
moderatedLog(() -> LOGGER.error("Couldn't rebuild a to-be-minted block"));
continue;
}
newBlocks.add(newBlock);
}
} catch (DataException e) {
LOGGER.debug("Unable to check for a higher weight chain. Proceeding anyway...");
}
// Discard any uncommitted changes as a result of the higher weight chain detection
repository.discardChanges();
// No potential block candidates?
if (newBlocks.isEmpty())
continue;
// Clear variables that track low weight blocks
parentSignatureForLastLowWeightBlock = null;
timeOfLastLowWeightBlock = null;
Long unconfirmedStartTime = NTP.getTime();
// Add unconfirmed transactions
addUnconfirmedTransactions(repository, newBlock);
LOGGER.info(String.format("Adding %d unconfirmed transactions took %d ms", newBlock.getTransactions().size(), (NTP.getTime()-unconfirmedStartTime)));
// Sign to create block's signature
newBlock.sign();
// Is newBlock still valid?
ValidationResult validationResult = newBlock.isValid();
if (validationResult != ValidationResult.OK) {
// No longer valid? Report and discard
LOGGER.error(String.format("To-be-minted block now invalid '%s' after adding unconfirmed transactions?", validationResult.name()));
// Rebuild block candidates, just to be sure
newBlocks.clear();
// Make sure we're the only thread modifying the blockchain
ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
if (!blockchainLock.tryLock(30, TimeUnit.SECONDS)) {
LOGGER.debug("Couldn't acquire blockchain lock even after waiting 30 seconds");
continue;
}
// Add to blockchain - something else will notice and broadcast new block to network
boolean newBlockMinted = false;
Block newBlock = null;
try {
newBlock.process();
// Clear repository session state so we have latest view of data
repository.discardChanges();
repository.saveChanges();
// Now that we have blockchain lock, do final check that chain hasn't changed
BlockData latestBlockData = blockRepository.getLastBlock();
if (!Arrays.equals(lastBlockData.getSignature(), latestBlockData.getSignature()))
continue;
LOGGER.info(String.format("Minted new block: %d", newBlock.getBlockData().getHeight()));
List<Block> goodBlocks = new ArrayList<>();
boolean wasInvalidBlockDiscarded = false;
Iterator<Block> newBlocksIterator = newBlocks.iterator();
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());
while (newBlocksIterator.hasNext()) {
Block testBlock = newBlocksIterator.next();
if (rewardShareData != null) {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
rewardShareData.getMinter(),
rewardShareData.getRecipient()));
} else {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
newBlock.getMinter().getAddress()));
// Is new block's timestamp valid yet?
// We do a separate check as some timestamp checks are skipped for testchains
if (testBlock.isTimestampValid() != ValidationResult.OK)
continue;
testBlock.preProcess();
// Is new block valid yet? (Before adding unconfirmed transactions)
ValidationResult result = testBlock.isValid();
if (result != ValidationResult.OK) {
moderatedLog(() -> LOGGER.error(String.format("To-be-minted block invalid '%s' before adding transactions?", result.name())));
newBlocksIterator.remove();
wasInvalidBlockDiscarded = true;
/*
* Bail out fast so that we loop around from the top again.
* This gives BlockMinter the possibility to remint this candidate block using another block from newBlocks,
* via the Blocks.remint() method, which avoids having to re-process Block ATs all over again.
* Particularly useful if some aspect of Blocks changes due a timestamp-based feature-trigger (see BlockChain class).
*/
break;
}
goodBlocks.add(testBlock);
}
// Notify network after we're released blockchain lock
newBlockMinted = true;
if (wasInvalidBlockDiscarded || goodBlocks.isEmpty())
continue;
// Notify Controller
repository.discardChanges(); // clear transaction status to prevent deadlocks
Controller.getInstance().onNewBlock(newBlock.getBlockData());
} catch (DataException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
} catch (ArithmeticException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
// Pick best block
final int parentHeight = previousBlockData.getHeight();
final byte[] parentBlockSignature = previousBlockData.getSignature();
BigInteger bestWeight = null;
for (int bi = 0; bi < goodBlocks.size(); ++bi) {
BlockData blockData = goodBlocks.get(bi).getBlockData();
BlockSummaryData blockSummaryData = new BlockSummaryData(blockData);
int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
blockSummaryData.setMinterLevel(minterLevel);
BigInteger blockWeight = Block.calcBlockWeight(parentHeight, parentBlockSignature, blockSummaryData);
if (bestWeight == null || blockWeight.compareTo(bestWeight) < 0) {
newBlock = goodBlocks.get(bi);
bestWeight = blockWeight;
}
}
try {
if (this.higherWeightChainExists(repository, bestWeight)) {
// Check if the base block has updated since the last time we were here
if (parentSignatureForLastLowWeightBlock == null || timeOfLastLowWeightBlock == null ||
!Arrays.equals(parentSignatureForLastLowWeightBlock, previousBlockData.getSignature())) {
// We've switched to a different chain, so reset the timer
timeOfLastLowWeightBlock = NTP.getTime();
}
parentSignatureForLastLowWeightBlock = previousBlockData.getSignature();
// If less than 30 seconds has passed since first detection the higher weight chain,
// we should skip our block submission to give us the opportunity to sync to the better chain
if (NTP.getTime() - timeOfLastLowWeightBlock < 30 * 1000L) {
LOGGER.info("Higher weight chain found in peers, so not signing a block this round");
LOGGER.info("Time since detected: {}", NTP.getTime() - timeOfLastLowWeightBlock);
continue;
} else {
// More than 30 seconds have passed, so we should submit our block candidate anyway.
LOGGER.info("More than 30 seconds passed, so proceeding to submit block candidate...");
}
} else {
LOGGER.debug("No higher weight chain found in peers");
}
} catch (DataException e) {
LOGGER.debug("Unable to check for a higher weight chain. Proceeding anyway...");
}
// Discard any uncommitted changes as a result of the higher weight chain detection
repository.discardChanges();
// Clear variables that track low weight blocks
parentSignatureForLastLowWeightBlock = null;
timeOfLastLowWeightBlock = null;
Long unconfirmedStartTime = NTP.getTime();
// Add unconfirmed transactions
addUnconfirmedTransactions(repository, newBlock);
LOGGER.info(String.format("Adding %d unconfirmed transactions took %d ms", newBlock.getTransactions().size(), (NTP.getTime() - unconfirmedStartTime)));
// Sign to create block's signature
newBlock.sign();
// Is newBlock still valid?
ValidationResult validationResult = newBlock.isValid();
if (validationResult != ValidationResult.OK) {
// No longer valid? Report and discard
LOGGER.error(String.format("To-be-minted block now invalid '%s' after adding unconfirmed transactions?", validationResult.name()));
// Rebuild block candidates, just to be sure
newBlocks.clear();
continue;
}
// Add to blockchain - something else will notice and broadcast new block to network
try {
newBlock.process();
repository.saveChanges();
LOGGER.info(String.format("Minted new block: %d", newBlock.getBlockData().getHeight()));
RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(newBlock.getBlockData().getMinterPublicKey());
if (rewardShareData != null) {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s on behalf of %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
rewardShareData.getMinter(),
rewardShareData.getRecipient()));
} else {
LOGGER.info(String.format("Minted block %d, sig %.8s, parent sig: %.8s by %s",
newBlock.getBlockData().getHeight(),
Base58.encode(newBlock.getBlockData().getSignature()),
Base58.encode(newBlock.getParent().getSignature()),
newBlock.getMinter().getAddress()));
}
// Notify network after we're released blockchain lock
newBlockMinted = true;
// Notify Controller
repository.discardChanges(); // clear transaction status to prevent deadlocks
Controller.getInstance().onNewBlock(newBlock.getBlockData());
} catch (DataException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
} catch (ArithmeticException e) {
// Unable to process block - report and discard
LOGGER.error("Unable to process newly minted block?", e);
newBlocks.clear();
}
} finally {
blockchainLock.unlock();
}
} finally {
blockchainLock.unlock();
}
if (newBlockMinted) {
// Broadcast our new chain to network
Network.getInstance().broadcastOurChain();
}
if (newBlockMinted) {
// Broadcast our new chain to network
Network.getInstance().broadcastOurChain();
}
} catch (InterruptedException e) {
// We've been interrupted - time to exit
return;
} catch (InterruptedException e) {
// We've been interrupted - time to exit
return;
}
} catch (DataException e) {
LOGGER.warn("Repository issue while running block minter - NO LONGER MINTING", e);
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
} catch (DataException e) {
LOGGER.warn("Repository issue while running block minter - NO LONGER MINTING", e);
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
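
The candidate selection above keeps whichever block's weight compares lowest against the others for the same parent, per the blockWeight.compareTo(bestWeight) < 0 test. A minimal standalone sketch of that selection rule, using plain BigInteger weights in place of Block.calcBlockWeight(...) results (types and values here are illustrative, not the real Qortal API):

import java.math.BigInteger;
import java.util.List;

public class BestCandidateSketch {
	/** Returns the index of the candidate with the lowest weight, or -1 if there are none. */
	static int pickBest(List<BigInteger> candidateWeights) {
		BigInteger bestWeight = null;
		int bestIndex = -1;
		for (int i = 0; i < candidateWeights.size(); i++) {
			BigInteger weight = candidateWeights.get(i);
			// Mirrors `blockWeight.compareTo(bestWeight) < 0` above: first entry wins, then lower weights replace it
			if (bestWeight == null || weight.compareTo(bestWeight) < 0) {
				bestWeight = weight;
				bestIndex = i;
			}
		}
		return bestIndex;
	}

	public static void main(String[] args) {
		List<BigInteger> weights = List.of(BigInteger.valueOf(42), BigInteger.valueOf(7), BigInteger.valueOf(19));
		System.out.println(pickBest(weights)); // prints 1
	}
}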

View File

@ -13,6 +13,7 @@ import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.controller.arbitrary.*;
import org.qortal.controller.hsqldb.HSQLDBBalanceRecorder;
import org.qortal.controller.hsqldb.HSQLDBDataCacheManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.repository.PruneManager;
@ -36,7 +37,6 @@ import org.qortal.network.Peer;
import org.qortal.network.PeerAddress;
import org.qortal.network.message.*;
import org.qortal.repository.*;
import org.qortal.repository.hsqldb.HSQLDBRepository;
import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
import org.qortal.settings.Settings;
import org.qortal.transaction.Transaction;
@ -73,6 +73,8 @@ import java.util.stream.Collectors;
public class Controller extends Thread {
public static HSQLDBRepositoryFactory REPOSITORY_FACTORY;
static {
// This must go before any calls to LogManager/Logger
System.setProperty("log4j2.formatMsgNoLookups", "true");
@ -403,23 +405,44 @@ public class Controller extends Thread {
LOGGER.info("Starting repository");
try {
RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(getRepositoryUrl());
RepositoryManager.setRepositoryFactory(repositoryFactory);
REPOSITORY_FACTORY = new HSQLDBRepositoryFactory(getRepositoryUrl());
RepositoryManager.setRepositoryFactory(REPOSITORY_FACTORY);
RepositoryManager.setRequestedCheckpoint(Boolean.TRUE);
try (final Repository repository = RepositoryManager.getRepository()) {
// RepositoryManager.rebuildTransactionSequences(repository);
ArbitraryDataCacheManager.getInstance().buildArbitraryResourcesCache(repository, false);
}
if( Settings.getInstance().isDbCacheEnabled() ) {
LOGGER.info("Db Cache Starting ...");
HSQLDBDataCacheManager hsqldbDataCacheManager = new HSQLDBDataCacheManager((HSQLDBRepository) repositoryFactory.getRepository());
hsqldbDataCacheManager.start();
if( Settings.getInstance().isDbCacheEnabled() ) {
LOGGER.info("Db Cache Starting ...");
HSQLDBDataCacheManager hsqldbDataCacheManager = new HSQLDBDataCacheManager();
hsqldbDataCacheManager.start();
}
else {
LOGGER.info("Db Cache Disabled");
}
LOGGER.info("Arbitrary Indexing Starting ...");
ArbitraryIndexUtils.startCaching(
Settings.getInstance().getArbitraryIndexingPriority(),
Settings.getInstance().getArbitraryIndexingFrequency()
);
if( Settings.getInstance().isBalanceRecorderEnabled() ) {
Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
if( recorder.isPresent() ) {
LOGGER.info("Balance Recorder Starting ...");
recorder.get().start();
}
else {
LOGGER.info("Db Cache Disabled");
LOGGER.info("Balance Recorder won't start.");
}
}
else {
LOGGER.info("Balance Recorder Disabled");
}
} catch (DataException e) {
// If exception has no cause or message then repository is in use by some other process.
if (e.getCause() == null && e.getMessage() == null) {
@ -524,6 +547,16 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().start();
ArbitraryDataRenderManager.getInstance().start();
// start rebuild arbitrary resource cache timer task
if( Settings.getInstance().isRebuildArbitraryResourceCacheTaskEnabled() ) {
new Timer().schedule(
new RebuildArbitraryResourceCacheTask(),
Settings.getInstance().getRebuildArbitraryResourceCacheTaskDelay() * RebuildArbitraryResourceCacheTask.MILLIS_IN_MINUTE,
Settings.getInstance().getRebuildArbitraryResourceCacheTaskPeriod() * RebuildArbitraryResourceCacheTask.MILLIS_IN_HOUR
);
}
LOGGER.info("Starting online accounts manager");
OnlineAccountsManager.getInstance().start();
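
The rebuild task above is plain java.util.Timer scheduling, with the delay configured in minutes and the period in hours, both converted to milliseconds via the task's constants. A self-contained sketch under assumed stand-in values (the real delay and period come from Settings, and MILLIS_IN_MINUTE/MILLIS_IN_HOUR live on RebuildArbitraryResourceCacheTask):

import java.util.Timer;
import java.util.TimerTask;

public class RebuildScheduleSketch {
	static final long MILLIS_IN_MINUTE = 60 * 1000L;
	static final long MILLIS_IN_HOUR = 60 * MILLIS_IN_MINUTE;

	public static void main(String[] args) {
		long delayMinutes = 5;  // assumed stand-in for getRebuildArbitraryResourceCacheTaskDelay()
		long periodHours = 24;  // assumed stand-in for getRebuildArbitraryResourceCacheTaskPeriod()

		new Timer().schedule(new TimerTask() {
			@Override
			public void run() {
				System.out.println("rebuild arbitrary resource cache");
			}
		}, delayMinutes * MILLIS_IN_MINUTE, periodHours * MILLIS_IN_HOUR);
	}
}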
@ -639,10 +672,8 @@ public class Controller extends Thread {
boolean canBootstrap = Settings.getInstance().getBootstrap();
boolean needsArchiveRebuild = false;
int checkHeight = 0;
Repository repository = null;
try {
repository = RepositoryManager.getRepository();
try (final Repository repository = RepositoryManager.getRepository()){
needsArchiveRebuild = (repository.getBlockArchiveRepository().fromHeight(2) == null);
checkHeight = repository.getBlockRepository().getBlockchainHeight();
} catch (DataException e) {

View File

@ -13,6 +13,7 @@ import org.qortal.crypto.MemoryPoW;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.group.GroupMemberData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
@ -24,6 +25,7 @@ import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
import org.qortal.utils.Groups;
import org.qortal.utils.NTP;
import org.qortal.utils.NamedThreadFactory;
@ -44,6 +46,7 @@ public class OnlineAccountsManager {
*/
private static final long ONLINE_TIMESTAMP_MODULUS_V1 = 5 * 60 * 1000L;
private static final long ONLINE_TIMESTAMP_MODULUS_V2 = 30 * 60 * 1000L;
private static final long ONLINE_TIMESTAMP_MODULUS_V3 = 10 * 60 * 1000L;
/**
* How many 'current' timestamp-sets of online accounts we cache.
@ -67,12 +70,13 @@ public class OnlineAccountsManager {
private static final long ONLINE_ACCOUNTS_COMPUTE_INITIAL_SLEEP_INTERVAL = 30 * 1000L; // ms
// MemoryPoW - mainnet
public static final int POW_BUFFER_SIZE = 1 * 1024 * 1024; // bytes
public static final int POW_BUFFER_SIZE = 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_V1 = 18; // leading zero bits
public static final int POW_DIFFICULTY_V2 = 19; // leading zero bits
public static final int POW_DIFFICULTY_V3 = 6; // leading zero bits
// MemoryPoW - testnet
public static final int POW_BUFFER_SIZE_TESTNET = 1 * 1024 * 1024; // bytes
public static final int POW_BUFFER_SIZE_TESTNET = 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_TESTNET = 5; // leading zero bits
// IMPORTANT: if we ever need to dynamically modify the buffer size using a feature trigger, the
@ -106,11 +110,15 @@ public class OnlineAccountsManager {
public static long getOnlineTimestampModulus() {
Long now = NTP.getTime();
if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV2Timestamp()) {
if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV2Timestamp() && now < BlockChain.getInstance().getOnlineAccountsModulusV3Timestamp()) {
return ONLINE_TIMESTAMP_MODULUS_V2;
}
if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV3Timestamp()) {
return ONLINE_TIMESTAMP_MODULUS_V3;
}
return ONLINE_TIMESTAMP_MODULUS_V1;
}
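
The modulus selection above is a timestamp-windowed feature trigger: V2 applies between the V2 and V3 trigger timestamps, V3 afterwards, V1 before either; the same windowing reappears below for the PoW difficulty. A standalone sketch with hypothetical trigger timestamps (stand-ins for the BlockChain getters); the final quantization of 'now' down to a window boundary is an assumption, since the body of getCurrentOnlineAccountTimestamp is truncated here:

public class TimestampModulusSketch {
	static final long MODULUS_V1 = 5 * 60 * 1000L;
	static final long MODULUS_V2 = 30 * 60 * 1000L;
	static final long MODULUS_V3 = 10 * 60 * 1000L;

	// Hypothetical feature-trigger timestamps, stand-ins for the BlockChain getters above
	static final long V2_TRIGGER = 1_000_000L;
	static final long V3_TRIGGER = 2_000_000L;

	static long modulusFor(long now) {
		if (now >= V2_TRIGGER && now < V3_TRIGGER)
			return MODULUS_V2;
		if (now >= V3_TRIGGER)
			return MODULUS_V3;
		return MODULUS_V1;
	}

	public static void main(String[] args) {
		long now = 2_345_678L;
		long modulus = modulusFor(now);
		// One plausible quantization: round 'now' down to the start of its window
		System.out.println((now / modulus) * modulus);
	}
}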
public static Long getCurrentOnlineAccountTimestamp() {
Long now = NTP.getTime();
if (now == null)
@ -135,9 +143,12 @@ public class OnlineAccountsManager {
if (Settings.getInstance().isTestNet())
return POW_DIFFICULTY_TESTNET;
if (timestamp >= BlockChain.getInstance().getIncreaseOnlineAccountsDifficultyTimestamp())
if (timestamp >= BlockChain.getInstance().getIncreaseOnlineAccountsDifficultyTimestamp() && timestamp < BlockChain.getInstance().getDecreaseOnlineAccountsDifficultyTimestamp())
return POW_DIFFICULTY_V2;
if (timestamp >= BlockChain.getInstance().getDecreaseOnlineAccountsDifficultyTimestamp())
return POW_DIFFICULTY_V3;
return POW_DIFFICULTY_V1;
}
@ -215,6 +226,15 @@ public class OnlineAccountsManager {
Set<OnlineAccountData> onlineAccountsToAdd = new HashSet<>();
Set<OnlineAccountData> onlineAccountsToRemove = new HashSet<>();
try (final Repository repository = RepositoryManager.getRepository()) {
int blockHeight = repository.getBlockRepository().getBlockchainHeight();
List<String> mintingGroupMemberAddresses
= Groups.getAllMembers(
repository.getGroupRepository(),
Groups.getGroupIdsToMint(BlockChain.getInstance(), blockHeight)
);
for (OnlineAccountData onlineAccountData : this.onlineAccountsImportQueue) {
if (isStopping)
return;
@ -227,7 +247,7 @@ public class OnlineAccountsManager {
continue;
}
boolean isValid = this.isValidCurrentAccount(repository, onlineAccountData);
boolean isValid = this.isValidCurrentAccount(repository, mintingGroupMemberAddresses, onlineAccountData);
if (isValid)
onlineAccountsToAdd.add(onlineAccountData);
@ -306,7 +326,7 @@ public class OnlineAccountsManager {
return inplaceArray;
}
private static boolean isValidCurrentAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
private static boolean isValidCurrentAccount(Repository repository, List<String> mintingGroupMemberAddresses, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return false;
@ -341,9 +361,14 @@ public class OnlineAccountsManager {
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(rewardSharePublicKey)));
return false;
}
// reject account addresses that are not in the MINTER Group
else if( !mintingGroupMemberAddresses.contains(rewardShareData.getMinter())) {
LOGGER.trace(() -> String.format("Rejecting online reward-share that is not in MINTER Group, account %s", rewardShareData.getMinter()));
return false;
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
if (!mintingAccount.canMint(true)) { // group validation is a few lines above
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return false;
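
The new rejection path above boils down to a membership test of the reward-share's minter address against the addresses gathered via Groups.getAllMembers(...). A tiny sketch of that check with hypothetical addresses (the real lookup goes through the group repository and the chain's minting group IDs):

import java.util.List;

public class MinterGroupCheckSketch {
	/** Returns true when the reward-share's minter address belongs to an allowed minting group. */
	static boolean isMinterGroupMember(List<String> mintingGroupMemberAddresses, String minterAddress) {
		return mintingGroupMemberAddresses.contains(minterAddress);
	}

	public static void main(String[] args) {
		List<String> members = List.of("QminterA", "QminterB"); // hypothetical addresses
		System.out.println(isMinterGroupMember(members, "QminterA"));   // true
		System.out.println(isMinterGroupMember(members, "QoutsiderC")); // false
	}
}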
@ -530,7 +555,7 @@ public class OnlineAccountsManager {
}
Account mintingAccount = new Account(repository, rewardShareData.getMinter());
if (!mintingAccount.canMint()) {
if (!mintingAccount.canMint(true)) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;

View File

@ -2,22 +2,30 @@ package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.resource.TransactionsResource;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.event.DataMonitorEvent;
import org.qortal.event.EventBus;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.transaction.ArbitraryTransaction;
import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;
import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;
public class ArbitraryDataCacheManager extends Thread {
@ -29,6 +37,11 @@ public class ArbitraryDataCacheManager extends Thread {
/** Queue of arbitrary transactions that require cache updates */
private final List<ArbitraryTransactionData> updateQueue = Collections.synchronizedList(new ArrayList<>());
private static final NumberFormat FORMATTER = NumberFormat.getNumberInstance();
static {
FORMATTER.setGroupingUsed(true);
}
public static synchronized ArbitraryDataCacheManager getInstance() {
if (instance == null) {
@ -45,17 +58,22 @@ public class ArbitraryDataCacheManager extends Thread {
try {
while (!Controller.isStopping()) {
Thread.sleep(500L);
try {
Thread.sleep(500L);
// Process queue
processResourceQueue();
// Process queue
processResourceQueue();
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
Thread.sleep(600_000L); // wait 10 minutes to continue
}
}
} catch (InterruptedException e) {
// Fall through to exit thread
}
// Clear queue before terminating thread
processResourceQueue();
// Clear queue before terminating thread
processResourceQueue();
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
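
The reworked run() gives each iteration its own try/catch, so one failing pass logs and backs off for ten minutes instead of killing the thread. A generic sketch of this resilient-worker pattern (class and method names here are illustrative, not the Qortal API):

public class ResilientWorkerSketch extends Thread {
	private volatile boolean stopping = false;

	@Override
	public void run() {
		try {
			while (!stopping) {
				try {
					Thread.sleep(500L);
					processQueue();
				} catch (InterruptedException e) {
					throw e; // let the outer handler end the thread
				} catch (Exception e) {
					// Log and back off instead of letting one bad iteration kill the worker
					System.err.println("iteration failed: " + e.getMessage());
					Thread.sleep(600_000L); // wait 10 minutes before retrying
				}
			}
		} catch (InterruptedException e) {
			// Fall through to exit thread
		}
	}

	private void processQueue() {
		// stand-in for processResourceQueue()
	}
}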
public void shutdown() {
@ -85,14 +103,25 @@ public class ArbitraryDataCacheManager extends Thread {
// Update arbitrary resource caches
try {
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
arbitraryTransaction.updateArbitraryResourceCache(repository);
arbitraryTransaction.updateArbitraryMetadataCache(repository);
arbitraryTransaction.updateArbitraryResourceCacheIncludingMetadata(repository, new HashSet<>(0), new HashMap<>(0));
repository.saveChanges();
// Update status as separate commit, as this is more prone to failure
arbitraryTransaction.updateArbitraryResourceStatus(repository);
repository.saveChanges();
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
transactionData.getIdentifier(),
transactionData.getName(),
transactionData.getService().name(),
"updated resource cache and status, queue",
transactionData.getTimestamp(),
transactionData.getTimestamp()
)
);
LOGGER.debug(() -> String.format("Finished processing transaction %.8s in arbitrary resource queue...", Base58.encode(transactionData.getSignature())));
} catch (DataException e) {
@ -103,6 +132,9 @@ public class ArbitraryDataCacheManager extends Thread {
} catch (DataException e) {
LOGGER.error("Repository issue while processing arbitrary resource cache updates", e);
}
catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
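
The DataMonitorEvent notifications added throughout these diffs all follow one shape: construct an immutable event carrying (timestamp, identifier, name, service, description, transaction timestamp, latest PUT timestamp) and hand it to a singleton bus. A reduced sketch of that pattern, not the real Qortal EventBus/DataMonitorEvent classes:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class EventBusSketch {
	// Field order mirrors the DataMonitorEvent constructor calls above; this is an illustration only
	static final class DataEvent {
		final long timestamp;
		final String identifier, name, service, description;
		final long transactionTimestamp, latestPutTimestamp;

		DataEvent(long timestamp, String identifier, String name, String service,
				String description, long transactionTimestamp, long latestPutTimestamp) {
			this.timestamp = timestamp;
			this.identifier = identifier;
			this.name = name;
			this.service = service;
			this.description = description;
			this.transactionTimestamp = transactionTimestamp;
			this.latestPutTimestamp = latestPutTimestamp;
		}
	}

	interface Listener { void listen(DataEvent event); }

	static final List<Listener> LISTENERS = new CopyOnWriteArrayList<>();

	static void publish(DataEvent event) {
		for (Listener listener : LISTENERS)
			listener.listen(event);
	}

	public static void main(String[] args) {
		LISTENERS.add(event -> System.out.println(event.description));
		publish(new DataEvent(System.currentTimeMillis(), "id", "name", "WEBSITE",
				"updated resource cache and status, queue", 0L, 0L));
	}
}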
public void addToUpdateQueue(ArbitraryTransactionData transactionData) {
@ -148,34 +180,66 @@ public class ArbitraryDataCacheManager extends Thread {
LOGGER.info("Building arbitrary resources cache...");
SplashFrame.getInstance().updateStatus("Building QDN cache - please wait...");
final int batchSize = 100;
final int batchSize = Settings.getInstance().getBuildArbitraryResourcesBatchSize();
int offset = 0;
List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository().getLatestArbitraryTransactions();
LOGGER.info("arbitrary transactions: count = " + allArbitraryTransactionsInDescendingOrder.size());
List<ArbitraryResourceData> resources = repository.getArbitraryRepository().getArbitraryResources(null, null, true);
Map<ArbitraryTransactionDataHashWrapper, ArbitraryResourceData> resourceByWrapper = new HashMap<>(resources.size());
for( ArbitraryResourceData resource : resources ) {
resourceByWrapper.put(
new ArbitraryTransactionDataHashWrapper(resource.service.value, resource.name, resource.identifier),
resource
);
}
LOGGER.info("arbitrary resources: count = " + resourceByWrapper.size());
Set<ArbitraryTransactionDataHashWrapper> latestTransactionsWrapped = new HashSet<>(allArbitraryTransactionsInDescendingOrder.size());
// Loop through all ARBITRARY transactions, and determine latest state
while (!Controller.isStopping()) {
LOGGER.info("Fetching arbitrary transactions {} - {}", offset, offset+batchSize-1);
LOGGER.info(
"Fetching arbitrary transactions {} - {} / {} Total",
FORMATTER.format(offset),
FORMATTER.format(offset+batchSize-1),
FORMATTER.format(allArbitraryTransactionsInDescendingOrder.size())
);
List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, List.of(Transaction.TransactionType.ARBITRARY), null, null, null, TransactionsResource.ConfirmationStatus.BOTH, batchSize, offset, false);
if (signatures.isEmpty()) {
List<ArbitraryTransactionData> transactionsToProcess
= allArbitraryTransactionsInDescendingOrder.stream()
.skip(offset)
.limit(batchSize)
.collect(Collectors.toList());
if (transactionsToProcess.isEmpty()) {
// Complete
break;
}
// Expand signatures to transactions
for (byte[] signature : signatures) {
ArbitraryTransactionData transactionData = (ArbitraryTransactionData) repository
.getTransactionRepository().fromSignature(signature);
try {
for( ArbitraryTransactionData transactionData : transactionsToProcess) {
if (transactionData.getService() == null) {
// Unsupported service - ignore this resource
continue;
}
if (transactionData.getService() == null) {
// Unsupported service - ignore this resource
continue;
latestTransactionsWrapped.add(new ArbitraryTransactionDataHashWrapper(transactionData));
// Update arbitrary resource caches
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
arbitraryTransaction.updateArbitraryResourceCacheIncludingMetadata(repository, latestTransactionsWrapped, resourceByWrapper);
}
// Update arbitrary resource caches
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
arbitraryTransaction.updateArbitraryResourceCache(repository);
arbitraryTransaction.updateArbitraryMetadataCache(repository);
repository.saveChanges();
} catch (DataException e) {
repository.discardChanges();
LOGGER.error(e.getMessage(), e);
}
offset += batchSize;
}
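
The rebuild now pages through the pre-fetched transaction list with skip/limit batches, committing or discarding per batch. A minimal sketch of that paging idiom (batch contents are placeholders; the real batch size comes from Settings.getBuildArbitraryResourcesBatchSize()):

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchPagingSketch {
	public static void main(String[] args) {
		List<Integer> all = IntStream.range(0, 1050).boxed().collect(Collectors.toList());
		final int batchSize = 100; // stand-in for getBuildArbitraryResourcesBatchSize()
		int offset = 0;

		while (true) {
			List<Integer> batch = all.stream().skip(offset).limit(batchSize).collect(Collectors.toList());
			if (batch.isEmpty())
				break; // complete

			// process batch here; commit per batch so one failure doesn't lose everything
			System.out.printf("processing %d - %d / %d total%n", offset, offset + batch.size() - 1, all.size());

			offset += batchSize;
		}
	}
}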
@ -193,6 +257,11 @@ public class ArbitraryDataCacheManager extends Thread {
repository.discardChanges();
throw new DataException("Build of arbitrary resources cache failed.");
}
catch (Exception e) {
LOGGER.error(e.getMessage(), e);
return false;
}
}
private boolean refreshArbitraryStatuses(Repository repository) throws DataException {
@ -200,27 +269,48 @@ public class ArbitraryDataCacheManager extends Thread {
LOGGER.info("Refreshing arbitrary resource statuses for locally hosted transactions...");
SplashFrame.getInstance().updateStatus("Refreshing statuses - please wait...");
final int batchSize = 100;
final int batchSize = Settings.getInstance().getBuildArbitraryResourcesBatchSize();
int offset = 0;
List<ArbitraryTransactionData> allHostedTransactions
= ArbitraryDataStorageManager.getInstance()
.listAllHostedTransactions(repository, null, null);
// Loop through all ARBITRARY transactions, and determine latest state
while (!Controller.isStopping()) {
LOGGER.info("Fetching hosted transactions {} - {}", offset, offset+batchSize-1);
LOGGER.info(
"Fetching hosted transactions {} - {} / {} Total",
FORMATTER.format(offset),
FORMATTER.format(offset+batchSize-1),
FORMATTER.format(allHostedTransactions.size())
);
List<ArbitraryTransactionData> hostedTransactions
= allHostedTransactions.stream()
.skip(offset)
.limit(batchSize)
.collect(Collectors.toList());
List<ArbitraryTransactionData> hostedTransactions = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, batchSize, offset);
if (hostedTransactions.isEmpty()) {
// Complete
break;
}
// Loop through hosted transactions
for (ArbitraryTransactionData transactionData : hostedTransactions) {
try {
// Loop through hosted transactions
for (ArbitraryTransactionData transactionData : hostedTransactions) {
// Determine status and update cache
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
arbitraryTransaction.updateArbitraryResourceStatus(repository);
// Determine status and update cache
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
arbitraryTransaction.updateArbitraryResourceStatus(repository);
}
repository.saveChanges();
} catch (DataException e) {
repository.discardChanges();
LOGGER.error(e.getMessage(), e);
}
offset += batchSize;
}
@ -234,6 +324,11 @@ public class ArbitraryDataCacheManager extends Thread {
repository.discardChanges();
throw new DataException("Refresh of arbitrary resource statuses failed.");
}
catch (Exception e) {
LOGGER.error(e.getMessage(), e);
return false;
}
}
}

View File

@ -2,9 +2,10 @@ package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.event.DataMonitorEvent;
import org.qortal.event.EventBus;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@ -21,8 +22,12 @@ import java.nio.file.Paths;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import static org.qortal.controller.arbitrary.ArbitraryDataStorageManager.DELETION_THRESHOLD;
@ -77,6 +82,19 @@ public class ArbitraryDataCleanupManager extends Thread {
final int limit = 100;
int offset = 0;
List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
try (final Repository repository = RepositoryManager.getRepository()) {
allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository()
.getLatestArbitraryTransactions();
} catch( Exception e) {
LOGGER.error(e.getMessage(), e);
allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
}
Set<ArbitraryTransactionData> processedTransactions = new HashSet<>();
try {
while (!isStopping) {
Thread.sleep(30000);
@ -107,27 +125,31 @@ public class ArbitraryDataCleanupManager extends Thread {
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, null, null, ConfirmationStatus.BOTH, limit, offset, true);
// LOGGER.info("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
List<ArbitraryTransactionData> transactions = allArbitraryTransactionsInDescendingOrder.stream().skip(offset).limit(limit).collect(Collectors.toList());
if (isStopping) {
return;
}
if (signatures == null || signatures.isEmpty()) {
if (transactions == null || transactions.isEmpty()) {
offset = 0;
continue;
allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository()
.getLatestArbitraryTransactions();
transactions = allArbitraryTransactionsInDescendingOrder.stream().limit(limit).collect(Collectors.toList());
processedTransactions.clear();
}
offset += limit;
now = NTP.getTime();
// Loop through the signatures in this batch
for (int i=0; i<signatures.size(); i++) {
for (int i=0; i<transactions.size(); i++) {
if (isStopping) {
return;
}
byte[] signature = signatures.get(i);
if (signature == null) {
ArbitraryTransactionData arbitraryTransactionData = transactions.get(i);
if (arbitraryTransactionData == null) {
continue;
}
@ -136,9 +158,7 @@ public class ArbitraryDataCleanupManager extends Thread {
Thread.sleep(5000);
}
// Fetch the transaction data
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
if (arbitraryTransactionData == null || arbitraryTransactionData.getService() == null) {
if (arbitraryTransactionData.getService() == null) {
continue;
}
@ -147,6 +167,8 @@ public class ArbitraryDataCleanupManager extends Thread {
continue;
}
boolean mostRecentTransaction = processedTransactions.add(arbitraryTransactionData);
// Check if we have the complete file
boolean completeFileExists = ArbitraryTransactionUtils.completeFileExists(arbitraryTransactionData);
@ -167,20 +189,54 @@ public class ArbitraryDataCleanupManager extends Thread {
LOGGER.info("Deleting transaction {} because we can't host its data",
Base58.encode(arbitraryTransactionData.getSignature()));
ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"can't store data, deleting",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
continue;
}
// Check to see if we have had a more recent PUT
boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
if (hasMoreRecentPutTransaction) {
if (!mostRecentTransaction) {
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed.
LOGGER.info(String.format("Newer PUT found for %s %s since transaction %s. " +
"Deleting all files associated with the earlier transaction.", arbitraryTransactionData.getService(),
arbitraryTransactionData.getName(), Base58.encode(signature)));
arbitraryTransactionData.getName(), Base58.encode(arbitraryTransactionData.getSignature())));
ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);
Optional<ArbitraryTransactionData> moreRecentPutTransaction
= processedTransactions.stream()
.filter(data -> data.equals(arbitraryTransactionData))
.findAny();
if( moreRecentPutTransaction.isPresent() ) {
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"deleting data due to replacement",
arbitraryTransactionData.getTimestamp(),
moreRecentPutTransaction.get().getTimestamp()
)
);
}
else {
LOGGER.warn("Something went wrong with the most recent put transaction determination!");
}
continue;
}
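
Because the transaction list is walked newest-first, processedTransactions.add(...) returns true only for the first (latest) transaction seen per resource; any later, older transaction for the same resource returns false and its files can be deleted. A tiny sketch of that first-seen-wins idiom, using strings in place of resource-keyed transactions:

import java.util.HashSet;
import java.util.Set;

public class NewestFirstDedupSketch {
	public static void main(String[] args) {
		// Transactions for the same resource, newest first; equality here is by resource name
		Set<String> processed = new HashSet<>();
		String[] newestFirst = { "alice", "bob", "alice" }; // third entry is an older "alice" PUT

		for (String resource : newestFirst) {
			boolean mostRecent = processed.add(resource);
			if (!mostRecent)
				System.out.println("older transaction for " + resource + " - data can be deleted");
		}
	}
}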
@ -199,7 +255,21 @@ public class ArbitraryDataCleanupManager extends Thread {
LOGGER.debug(String.format("Transaction %s has complete file and all chunks",
Base58.encode(arbitraryTransactionData.getSignature())));
ArbitraryTransactionUtils.deleteCompleteFile(arbitraryTransactionData, now, STALE_FILE_TIMEOUT);
boolean wasDeleted = ArbitraryTransactionUtils.deleteCompleteFile(arbitraryTransactionData, now, STALE_FILE_TIMEOUT);
if( wasDeleted ) {
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"deleting file, retaining chunks",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
}
continue;
}
@ -237,17 +307,6 @@ public class ArbitraryDataCleanupManager extends Thread {
this.storageLimitReached(repository);
}
// Delete random data associated with name if we're over our storage limit for this name
// Use the DELETION_THRESHOLD, for the same reasons as above
for (String followedName : ListUtils.followedNames()) {
if (isStopping) {
return;
}
if (!storageManager.isStorageSpaceAvailableForName(repository, followedName, DELETION_THRESHOLD)) {
this.storageLimitReachedForName(repository, followedName);
}
}
} catch (DataException e) {
LOGGER.error("Repository issue when cleaning up arbitrary transaction data", e);
}
@ -326,25 +385,6 @@ public class ArbitraryDataCleanupManager extends Thread {
// FUTURE: consider reducing the expiry time of the reader cache
}
public void storageLimitReachedForName(Repository repository, String name) throws InterruptedException {
// We think that the storage limit has been reached for supplied name - but we should double check
if (ArbitraryDataStorageManager.getInstance().isStorageSpaceAvailableForName(repository, name, DELETION_THRESHOLD)) {
// We have space available for this name, so don't delete anything
return;
}
// Delete a batch of random chunks associated with this name
// This reduces the chance of too many nodes deleting the same chunk
// when they reach their storage limit
Path dataPath = Paths.get(Settings.getInstance().getDataPath());
for (int i=0; i<CHUNK_DELETION_BATCH_SIZE; i++) {
if (isStopping) {
return;
}
this.deleteRandomFile(repository, dataPath.toFile(), name);
}
}
/**
* Iteratively walk through given directory and delete a single random file
*
@ -423,6 +463,7 @@ public class ArbitraryDataCleanupManager extends Thread {
}
LOGGER.info("Deleting random file {} because we have reached max storage capacity...", randomItem.toString());
fireRandomItemDeletionNotification(randomItem, repository, "Deleting random file, because we have reached max storage capacity");
boolean success = randomItem.delete();
if (success) {
try {
@ -437,6 +478,35 @@ public class ArbitraryDataCleanupManager extends Thread {
return false;
}
private void fireRandomItemDeletionNotification(File randomItem, Repository repository, String reason) {
try {
Path parentFileNamePath = randomItem.toPath().toAbsolutePath().getParent().getFileName();
if (parentFileNamePath != null) {
String signature58 = parentFileNamePath.toString();
byte[] signature = Base58.decode(signature58);
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
if (transactionData != null && transactionData.getType() == Transaction.TransactionType.ARBITRARY) {
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) transactionData;
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
reason,
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
}
}
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
private void cleanupTempDirectory(String folder, long now, long minAge) {
String baseDir = Settings.getInstance().getTempDataPath();
Path tempDir = Paths.get(baseDir, folder);

View File

@ -0,0 +1,21 @@
package org.qortal.controller.arbitrary;
public class ArbitraryDataExamination {
private boolean pass;
private String notes;
public ArbitraryDataExamination(boolean pass, String notes) {
this.pass = pass;
this.notes = notes;
}
public boolean isPass() {
return pass;
}
public String getNotes() {
return notes;
}
}
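
This small result type lets shouldPreFetchData(...) (see the ArbitraryDataStorageManager diff below) report the decision and the reason together, which callers then forward into DataMonitorEvent notifications. A hypothetical call-site sketch, not the actual caller:

public class ExaminationUsageSketch {
	public static void main(String[] args) {
		// The real caller is ArbitraryDataManager's fetch loop; "blocked name" is one of the notes used below
		ArbitraryDataExamination examination = new ArbitraryDataExamination(false, "blocked name");
		if (!examination.isPass()) {
			System.out.println("skipping fetch: " + examination.getNotes());
		}
	}
}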

View File

@ -5,6 +5,8 @@ import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.event.DataMonitorEvent;
import org.qortal.event.EventBus;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;

View File

@ -10,6 +10,8 @@ import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.event.DataMonitorEvent;
import org.qortal.event.EventBus;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
@ -28,6 +30,7 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
import java.util.stream.Collectors;
public class ArbitraryDataManager extends Thread {
@ -195,13 +198,35 @@ public class ArbitraryDataManager extends Thread {
final int limit = 100;
int offset = 0;
List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
try (final Repository repository = RepositoryManager.getRepository()) {
if( name == null ) {
allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository()
.getLatestArbitraryTransactions();
}
else {
allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository()
.getLatestArbitraryTransactionsByName(name);
}
} catch( Exception e) {
LOGGER.error(e.getMessage(), e);
allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
}
// collect processed transactions in a set to ensure outdated data transactions do not get fetched
Set<ArbitraryTransactionDataHashWrapper> processedTransactions = new HashSet<>();
while (!isStopping) {
Thread.sleep(1000L);
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, name, null, ConfirmationStatus.BOTH, limit, offset, true);
// LOGGER.trace("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
List<byte[]> signatures = processTransactionsForSignatures(limit, offset, allArbitraryTransactionsInDescendingOrder, processedTransactions);
if (signatures == null || signatures.isEmpty()) {
offset = 0;
break;
@ -223,14 +248,38 @@ public class ArbitraryDataManager extends Thread {
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) arbitraryTransaction.getTransactionData();
// Skip transactions that we don't need to proactively store data for
if (!storageManager.shouldPreFetchData(repository, arbitraryTransactionData)) {
ArbitraryDataExamination arbitraryDataExamination = storageManager.shouldPreFetchData(repository, arbitraryTransactionData);
if (!arbitraryDataExamination.isPass()) {
iterator.remove();
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
arbitraryDataExamination.getNotes(),
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
continue;
}
// Remove transactions that we already have local data for
if (hasLocalData(arbitraryTransaction)) {
iterator.remove();
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"already have local data, skipping",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
}
}
@ -248,8 +297,21 @@ public class ArbitraryDataManager extends Thread {
// Check to see if we have had a more recent PUT
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
if (hasMoreRecentPutTransaction) {
Optional<ArbitraryTransactionData> moreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
if (moreRecentPutTransaction.isPresent()) {
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"not fetching old data",
arbitraryTransactionData.getTimestamp(),
moreRecentPutTransaction.get().getTimestamp()
)
);
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed and we
@ -257,10 +319,34 @@ public class ArbitraryDataManager extends Thread {
continue;
}
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"fetching data",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
// Ask our connected peers if they have files for this signature
// This process automatically then fetches the files themselves if a peer is found
fetchData(arbitraryTransactionData);
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"fetched data",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data", e);
}
@ -274,6 +360,20 @@ public class ArbitraryDataManager extends Thread {
final int limit = 100;
int offset = 0;
List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
try (final Repository repository = RepositoryManager.getRepository()) {
allArbitraryTransactionsInDescendingOrder
= repository.getArbitraryRepository()
.getLatestArbitraryTransactions();
} catch( Exception e) {
LOGGER.error(e.getMessage(), e);
allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
}
// collect processed transactions in a set to ensure outdated data transactions do not get fetched
Set<ArbitraryTransactionDataHashWrapper> processedTransactions = new HashSet<>();
while (!isStopping) {
final int minSeconds = 3;
final int maxSeconds = 10;
@ -282,8 +382,8 @@ public class ArbitraryDataManager extends Thread {
// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, null, null, ConfirmationStatus.BOTH, limit, offset, true);
// LOGGER.trace("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
List<byte[]> signatures = processTransactionsForSignatures(limit, offset, allArbitraryTransactionsInDescendingOrder, processedTransactions);
if (signatures == null || signatures.isEmpty()) {
offset = 0;
break;
@ -328,26 +428,74 @@ public class ArbitraryDataManager extends Thread {
continue;
}
// Check to see if we have had a more recent PUT
// No longer need to check whether we have had a more recent PUT, since the transactions to process
// have already been compared against the previously processed transactions; so we can fetch the
// transaction data, notify the event bus, fetch the metadata and notify the event bus again
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
if (hasMoreRecentPutTransaction) {
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed and we
// shouldn't fetch it from the network.
continue;
}
// Ask our connected peers if they have metadata for this signature
fetchMetadata(arbitraryTransactionData);
EventBus.INSTANCE.notify(
new DataMonitorEvent(
System.currentTimeMillis(),
arbitraryTransactionData.getIdentifier(),
arbitraryTransactionData.getName(),
arbitraryTransactionData.getService().name(),
"fetched metadata",
arbitraryTransactionData.getTimestamp(),
arbitraryTransactionData.getTimestamp()
)
);
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data", e);
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
}
}
private static List<byte[]> processTransactionsForSignatures(
int limit,
int offset,
List<ArbitraryTransactionData> transactionsInDescendingOrder,
Set<ArbitraryTransactionDataHashWrapper> processedTransactions) {
// these transactions are in descending order, latest transactions come first
List<ArbitraryTransactionData> transactions
= transactionsInDescendingOrder.stream()
.skip(offset)
.limit(limit)
.collect(Collectors.toList());
// wrap the transactions, so they can be used for hashing and comparing
// Class ArbitraryTransactionDataHashWrapper supports hashCode() and equals(...) for this purpose
List<ArbitraryTransactionDataHashWrapper> wrappedTransactions
= transactions.stream()
.map(transaction -> new ArbitraryTransactionDataHashWrapper(transaction))
.collect(Collectors.toList());
// create a set of wrappers and populate it first to last, so that only the newest transaction per resource is kept and all outdated ones get rejected
Set<ArbitraryTransactionDataHashWrapper> transactionsToProcess = new HashSet<>(wrappedTransactions.size());
for(ArbitraryTransactionDataHashWrapper wrappedTransaction : wrappedTransactions) {
transactionsToProcess.add(wrappedTransaction);
}
// remove any matches for previously processed transactions,
// since those resources have already had their latest updates processed
transactionsToProcess.removeAll(processedTransactions);
// record these as processed, so that matching transactions are skipped in future processing iterations
processedTransactions.addAll(transactionsToProcess);
List<byte[]> signatures
= transactionsToProcess.stream()
.map(transactionToProcess -> transactionToProcess.getData()
.getSignature())
.collect(Collectors.toList());
return signatures;
}
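The HashSet is what enforces "newest wins" here: the feed is newest-first, and Set.add(...) silently rejects a key that is already present. A self-contained sketch of that idea, where the Key record is a hypothetical stand-in for ArbitraryTransactionDataHashWrapper:

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class NewestFirstDedupSketch {
    // hypothetical stand-in for the wrapper's (service, name, identifier) key
    record Key(int service, String name, String identifier) {}

    public static void main(String[] args) {
        // newest first, mirroring getLatestArbitraryTransactions()
        List<Key> newestFirst = List.of(
                new Key(1, "alice", "doc"),  // latest update for (1, alice, doc)
                new Key(1, "alice", "doc"),  // older update, equal key: add() returns false
                new Key(2, "bob", "doc"));
        Set<Key> toProcess = new HashSet<>();
        for (Key key : newestFirst)
            toProcess.add(key); // the first (newest) occurrence wins
        System.out.println(toProcess.size()); // 2
    }
}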
private ArbitraryTransaction fetchTransaction(final Repository repository, byte[] signature) {
try {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);

View File

@ -155,31 +155,24 @@ public class ArbitraryDataStorageManager extends Thread {
* @param arbitraryTransactionData - the transaction
* @return ArbitraryDataExamination - whether to prefetch or not, with the reason for the decision
*/
public boolean shouldPreFetchData(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
public ArbitraryDataExamination shouldPreFetchData(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
String name = arbitraryTransactionData.getName();
// Only fetch data associated with hashes, as we already have RAW_DATA
if (arbitraryTransactionData.getDataType() != ArbitraryTransactionData.DataType.DATA_HASH) {
return false;
return new ArbitraryDataExamination(false, "Only fetch data associated with hashes");
}
// Don't fetch anything more if we're (nearly) out of space
// Make sure to keep STORAGE_FULL_THRESHOLD considerably less than 1, to
// avoid a fetch/delete loop
if (!this.isStorageSpaceAvailable(STORAGE_FULL_THRESHOLD)) {
return false;
}
// Don't fetch anything if we're (nearly) out of space for this name
// Again, make sure to keep STORAGE_FULL_THRESHOLD considerably less than 1, to
// avoid a fetch/delete loop
if (!this.isStorageSpaceAvailableForName(repository, arbitraryTransactionData.getName(), STORAGE_FULL_THRESHOLD)) {
return false;
return new ArbitraryDataExamination(false, "Don't fetch anything more if we're (nearly) out of space");
}
// Don't store data unless it's an allowed type (public/private)
if (!this.isDataTypeAllowed(arbitraryTransactionData)) {
return false;
return new ArbitraryDataExamination(false, "Don't store data unless it's an allowed type (public/private)");
}
// Handle transactions without names differently
@ -189,21 +182,21 @@ public class ArbitraryDataStorageManager extends Thread {
// Never fetch data from blocked names, even if they are followed
if (ListUtils.isNameBlocked(name)) {
return false;
return new ArbitraryDataExamination(false, "blocked name");
}
switch (Settings.getInstance().getStoragePolicy()) {
case FOLLOWED:
case FOLLOWED_OR_VIEWED:
return ListUtils.isFollowingName(name);
return new ArbitraryDataExamination(ListUtils.isFollowingName(name), Settings.getInstance().getStoragePolicy().name());
case ALL:
return true;
return new ArbitraryDataExamination(true, Settings.getInstance().getStoragePolicy().name());
case NONE:
case VIEWED:
default:
return false;
return new ArbitraryDataExamination(false, Settings.getInstance().getStoragePolicy().name());
}
}
@ -214,17 +207,17 @@ public class ArbitraryDataStorageManager extends Thread {
*
* @return ArbitraryDataExamination - whether the storage policy allows for unnamed data, with the reason
*/
private boolean shouldPreFetchDataWithoutName() {
private ArbitraryDataExamination shouldPreFetchDataWithoutName() {
switch (Settings.getInstance().getStoragePolicy()) {
case ALL:
return true;
return new ArbitraryDataExamination(true, "Fetching all data");
case NONE:
case VIEWED:
case FOLLOWED:
case FOLLOWED_OR_VIEWED:
default:
return false;
return new ArbitraryDataExamination(false, Settings.getInstance().getStoragePolicy().name());
}
}
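The ArbitraryDataExamination type itself does not appear in this diff; judging from these call sites it pairs the boolean decision with a human-readable reason, roughly along these lines (field and accessor names are guesses):

// inferred shape only; the real class in the repository may differ
public class ArbitraryDataExamination {
    private final boolean pass;
    private final String notes;

    public ArbitraryDataExamination(boolean pass, String notes) {
        this.pass = pass;
        this.notes = notes;
    }

    public boolean isPass() { return pass; }
    public String getNotes() { return notes; }
}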
@ -484,51 +477,6 @@ public class ArbitraryDataStorageManager extends Thread {
return true;
}
public boolean isStorageSpaceAvailableForName(Repository repository, String name, double threshold) {
if (!this.isStorageSpaceAvailable(threshold)) {
// No storage space available at all, so no need to check this name
return false;
}
if (Settings.getInstance().getStoragePolicy() == StoragePolicy.ALL) {
// Using storage policy ALL, so don't limit anything per name
return true;
}
if (name == null) {
// This transaction doesn't have a name, so fall back to total space limitations
return true;
}
int followedNamesCount = ListUtils.followedNamesCount();
if (followedNamesCount == 0) {
// Not following any names, so we have space
return true;
}
long totalSizeForName = 0;
long maxStoragePerName = this.storageCapacityPerName(threshold);
// Fetch all hosted transactions
List<ArbitraryTransactionData> hostedTransactions = this.listAllHostedTransactions(repository, null, null);
for (ArbitraryTransactionData transactionData : hostedTransactions) {
String transactionName = transactionData.getName();
if (!Objects.equals(name, transactionName)) {
// Transaction relates to a different name
continue;
}
totalSizeForName += transactionData.getSize();
}
// Have we reached the limit for this name?
if (totalSizeForName > maxStoragePerName) {
return false;
}
return true;
}
public long storageCapacityPerName(double threshold) {
int followedNamesCount = ListUtils.followedNamesCount();
if (followedNamesCount == 0) {

View File

@ -0,0 +1,48 @@
package org.qortal.controller.arbitrary;
import org.qortal.arbitrary.misc.Service;
import org.qortal.data.transaction.ArbitraryTransactionData;
import java.util.Objects;
public class ArbitraryTransactionDataHashWrapper {
private ArbitraryTransactionData data;
private int service;
private String name;
private String identifier;
public ArbitraryTransactionDataHashWrapper(ArbitraryTransactionData data) {
this.data = data;
this.service = data.getService().value;
this.name = data.getName();
this.identifier = data.getIdentifier();
}
public ArbitraryTransactionDataHashWrapper(int service, String name, String identifier) {
this.service = service;
this.name = name;
this.identifier = identifier;
}
public ArbitraryTransactionData getData() {
return data;
}
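// Equality is keyed on (service, name, identifier) only, so a HashSet collapses a chain of
// PUT/UPDATE transactions for the same resource into one entry. name is assumed non-null here;
// the feeding query in getLatestArbitraryTransactions() filters on "name IS NOT NULL".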
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ArbitraryTransactionDataHashWrapper that = (ArbitraryTransactionDataHashWrapper) o;
return service == that.service && name.equals(that.name) && Objects.equals(identifier, that.identifier);
}
@Override
public int hashCode() {
return Objects.hash(service, name, identifier);
}
}

View File

@ -0,0 +1,33 @@
package org.qortal.controller.arbitrary;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import java.util.TimerTask;
public class RebuildArbitraryResourceCacheTask extends TimerTask {
private static final Logger LOGGER = LogManager.getLogger(RebuildArbitraryResourceCacheTask.class);
public static final long MILLIS_IN_HOUR = 60 * 60 * 1000;
public static final long MILLIS_IN_MINUTE = 60 * 1000;
private static final String REBUILD_ARBITRARY_RESOURCE_CACHE_TASK = "Rebuild Arbitrary Resource Cache Task";
@Override
public void run() {
Thread.currentThread().setName(REBUILD_ARBITRARY_RESOURCE_CACHE_TASK);
try (final Repository repository = RepositoryManager.getRepository()) {
ArbitraryDataCacheManager.getInstance().buildArbitraryResourcesCache(repository, true);
}
catch (DataException e) {
LOGGER.error(e.getMessage(), e);
}
}
}
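A TimerTask like this is driven by a java.util.Timer. A minimal scheduling sketch; the one-minute delay and one-hour period are illustrative assumptions, not the node's actual configuration:

import java.util.Timer;

public class RebuildCacheSchedulingSketch {
    public static void main(String[] args) {
        // daemon timer thread; delay/period values are illustrative only
        Timer timer = new Timer("Rebuild Arbitrary Resource Cache Timer", true);
        timer.schedule(
                new RebuildArbitraryResourceCacheTask(),
                RebuildArbitraryResourceCacheTask.MILLIS_IN_MINUTE, // initial delay
                RebuildArbitraryResourceCacheTask.MILLIS_IN_HOUR);  // repeat period
    }
}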

View File

@ -0,0 +1,139 @@
package org.qortal.controller.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.util.PropertySource;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.BlockHeightRange;
import org.qortal.data.account.BlockHeightRangeAddressAmounts;
import org.qortal.repository.hsqldb.HSQLDBCacheUtils;
import org.qortal.settings.Settings;
import org.qortal.utils.BalanceRecorderUtils;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;
public class HSQLDBBalanceRecorder extends Thread {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBBalanceRecorder.class);
private static HSQLDBBalanceRecorder SINGLETON = null;
private ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight = new ConcurrentHashMap<>();
private ConcurrentHashMap<String, List<AccountBalanceData>> balancesByAddress = new ConcurrentHashMap<>();
private CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics = new CopyOnWriteArrayList<>();
private int priorityRequested;
private int frequency;
private int capacity;
private HSQLDBBalanceRecorder( int priorityRequested, int frequency, int capacity) {
super("Balance Recorder");
this.priorityRequested = priorityRequested;
this.frequency = frequency;
this.capacity = capacity;
}
public static Optional<HSQLDBBalanceRecorder> getInstance() {
if( SINGLETON == null ) {
SINGLETON
= new HSQLDBBalanceRecorder(
Settings.getInstance().getBalanceRecorderPriority(),
Settings.getInstance().getBalanceRecorderFrequency(),
Settings.getInstance().getBalanceRecorderCapacity()
);
}
return Optional.of(SINGLETON);
}
@Override
public void run() {
Thread.currentThread().setName("Balance Recorder");
HSQLDBCacheUtils.startRecordingBalances(this.balancesByHeight, this.balanceDynamics, this.priorityRequested, this.frequency, this.capacity);
}
public List<BlockHeightRangeAddressAmounts> getLatestDynamics(int limit, long offset) {
List<BlockHeightRangeAddressAmounts> latest = this.balanceDynamics.stream()
.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR.reversed())
.skip(offset)
.limit(limit)
.collect(Collectors.toList());
return latest;
}
public List<BlockHeightRange> getRanges(Integer offset, Integer limit, Boolean reverse) {
if( reverse ) {
return this.balanceDynamics.stream()
.map(BlockHeightRangeAddressAmounts::getRange)
.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_COMPARATOR.reversed())
.skip(offset)
.limit(limit)
.collect(Collectors.toList());
}
else {
return this.balanceDynamics.stream()
.map(BlockHeightRangeAddressAmounts::getRange)
.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_COMPARATOR)
.skip(offset)
.limit(limit)
.collect(Collectors.toList());
}
}
public Optional<BlockHeightRangeAddressAmounts> getAddressAmounts(BlockHeightRange range) {
return this.balanceDynamics.stream()
.filter( dynamic -> dynamic.getRange().equals(range))
.findAny();
}
public Optional<BlockHeightRange> getRange( int height ) {
return this.balanceDynamics.stream()
.map(BlockHeightRangeAddressAmounts::getRange)
.filter( range -> range.getBegin() < height && range.getEnd() >= height )
.findAny();
}
private Optional<Integer> getLastHeight() {
return this.balancesByHeight.keySet().stream().sorted(Comparator.reverseOrder()).findFirst();
}
public List<Integer> getBlocksRecorded() {
return this.balancesByHeight.keySet().stream().collect(Collectors.toList());
}
public List<AccountBalanceData> getAccountBalanceRecordings(String address) {
return this.balancesByAddress.get(address);
}
@Override
public String toString() {
return "HSQLDBBalanceRecorder{" +
"priorityRequested=" + priorityRequested +
", frequency=" + frequency +
", capacity=" + capacity +
'}';
}
}
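Callers obtain the recorder through the Optional-returning getInstance(). A minimal usage sketch, assuming balance recording is enabled in Settings:

public class BalanceRecorderUsageSketch {
    public static void main(String[] args) {
        HSQLDBBalanceRecorder.getInstance().ifPresent(recorder -> {
            recorder.start(); // begins recording on the "Balance Recorder" thread
            // later: page through recorded dynamics, newest ranges first
            recorder.getLatestDynamics(10, 0).forEach(System.out::println);
        });
    }
}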

View File

@ -8,11 +8,7 @@ import org.qortal.settings.Settings;
public class HSQLDBDataCacheManager extends Thread {
private HSQLDBRepository respository;
public HSQLDBDataCacheManager(HSQLDBRepository respository) {
this.respository = respository;
}
public HSQLDBDataCacheManager() {}
@Override
public void run() {
@ -20,8 +16,7 @@ public class HSQLDBDataCacheManager extends Thread{
HSQLDBCacheUtils.startCaching(
Settings.getInstance().getDbCacheThreadPriority(),
Settings.getInstance().getDbCacheFrequency(),
this.respository
Settings.getInstance().getDbCacheFrequency()
);
}
}

View File

@ -39,15 +39,24 @@ public class AtStatesPruner implements Runnable {
}
}
int pruneStartHeight;
int maxLatestAtStatesHeight;
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
int maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
pruneStartHeight = repository.getATRepository().getAtPruneHeight();
maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates(maxLatestAtStatesHeight);
repository.saveChanges();
} catch (Exception e) {
LOGGER.error("AT States Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
return;
}
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
while (!Controller.isStopping()) {
try {
repository.discardChanges();
@ -102,28 +111,25 @@ public class AtStatesPruner implements Runnable {
final int finalPruneStartHeight = pruneStartHeight;
LOGGER.info(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
}
else {
} else {
// We've pruned up to the upper prunable height
// Back off for a while to save CPU for syncing
repository.discardChanges();
Thread.sleep(5*60*1000L);
Thread.sleep(5 * 60 * 1000L);
}
}
} catch (InterruptedException e) {
if(Controller.isStopping()) {
if (Controller.isStopping()) {
LOGGER.info("AT States Pruning Shutting Down");
}
else {
} else {
LOGGER.warn("AT States Pruning interrupted. Trying again. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.warn("AT States Pruning stopped working. Trying again. Report this error immediately to the developers.", e);
}
} catch(Exception e){
LOGGER.error("AT States Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("AT States Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
}
}

View File

@ -26,15 +26,23 @@ public class AtStatesTrimmer implements Runnable {
return;
}
int trimStartHeight;
int maxLatestAtStatesHeight;
try (final Repository repository = RepositoryManager.getRepository()) {
int trimStartHeight = repository.getATRepository().getAtTrimHeight();
int maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
trimStartHeight = repository.getATRepository().getAtTrimHeight();
maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
repository.discardChanges();
repository.getATRepository().rebuildLatestAtStates(maxLatestAtStatesHeight);
repository.saveChanges();
} catch (Exception e) {
LOGGER.error("AT States Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
return;
}
while (!Controller.isStopping()) {
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
try {
repository.discardChanges();
@ -92,9 +100,9 @@ public class AtStatesTrimmer implements Runnable {
} catch (Exception e) {
LOGGER.warn("AT States Trimming stopped working. Trying again. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("AT States Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("AT States Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
}

View File

@ -30,11 +30,13 @@ public class BlockArchiver implements Runnable {
return;
}
int startHeight;
try (final Repository repository = RepositoryManager.getRepository()) {
// Don't even start building until initial rush has ended
Thread.sleep(INITIAL_SLEEP_PERIOD);
int startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
// Don't attempt to archive if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
@ -43,10 +45,16 @@ public class BlockArchiver implements Runnable {
repository.discardChanges();
return;
}
} catch (Exception e) {
LOGGER.error("Block Archiving is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
return;
}
LOGGER.info("Starting block archiver from height {}...", startHeight);
LOGGER.info("Starting block archiver from height {}...", startHeight);
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
while (!Controller.isStopping()) {
try {
repository.discardChanges();
@ -107,20 +115,17 @@ public class BlockArchiver implements Runnable {
LOGGER.info("Caught exception when creating block cache", e);
}
} catch (InterruptedException e) {
if(Controller.isStopping()) {
if (Controller.isStopping()) {
LOGGER.info("Block Archiving Shutting Down");
}
else {
} else {
LOGGER.warn("Block Archiving interrupted. Trying again. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.warn("Block Archiving stopped working. Trying again. Report this error immediately to the developers.", e);
}
} catch(Exception e){
LOGGER.error("Block Archiving is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("Block Archiving is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
}
}

View File

@ -39,8 +39,10 @@ public class BlockPruner implements Runnable {
}
}
int pruneStartHeight;
try (final Repository repository = RepositoryManager.getRepository()) {
int pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
// Don't attempt to prune if we have no ATStatesHeightIndex, as it will be too slow
boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
@ -48,8 +50,15 @@ public class BlockPruner implements Runnable {
LOGGER.info("Unable to start block pruner due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
return;
}
} catch (Exception e) {
LOGGER.error("Block Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
return;
}
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
while (!Controller.isStopping()) {
try {
repository.discardChanges();
@ -122,10 +131,9 @@ public class BlockPruner implements Runnable {
} catch (Exception e) {
LOGGER.warn("Block Pruning stopped working. Trying again. Report this error immediately to the developers.", e);
}
} catch(Exception e){
LOGGER.error("Block Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("Block Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
}
}

View File

@ -28,13 +28,21 @@ public class OnlineAccountsSignaturesTrimmer implements Runnable {
return;
}
int trimStartHeight;
try (final Repository repository = RepositoryManager.getRepository()) {
// Don't even start trimming until initial rush has ended
Thread.sleep(INITIAL_SLEEP_PERIOD);
int trimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
trimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
} catch (Exception e) {
LOGGER.error("Online Accounts Signatures Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
return;
}
while (!Controller.isStopping()) {
try (final Repository repository = RepositoryManager.getRepository()) {
while (!Controller.isStopping()) {
try {
repository.discardChanges();
@ -88,10 +96,9 @@ public class OnlineAccountsSignaturesTrimmer implements Runnable {
} catch (Exception e) {
LOGGER.warn("Online Accounts Signatures Trimming stopped working. Trying again. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("Online Accounts Signatures Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
} catch (Exception e) {
LOGGER.error("Online Accounts Signatures Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
}
}
}

View File

@ -83,6 +83,7 @@ public abstract class Bitcoiny implements ForeignBlockchain {
return this.bitcoinjContext;
}
@Override
public String getCurrencyCode() {
return this.currencyCode;
}

View File

@ -2,6 +2,8 @@ package org.qortal.crosschain;
public interface ForeignBlockchain {
public String getCurrencyCode();
public boolean isValidAddress(String address);
public boolean isValidWalletKey(String walletKey);

View File

@ -0,0 +1,54 @@
package org.qortal.data.account;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import java.util.Objects;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class AddressAmountData {
private String address;
@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
private long amount;
public AddressAmountData() {
}
public AddressAmountData(String address, long amount) {
this.address = address;
this.amount = amount;
}
public String getAddress() {
return address;
}
public long getAmount() {
return amount;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
AddressAmountData that = (AddressAmountData) o;
return amount == that.amount && Objects.equals(address, that.address);
}
@Override
public int hashCode() {
return Objects.hash(address, amount);
}
@Override
public String toString() {
return "AddressAmountData{" +
"address='" + address + '\'' +
", amount=" + amount +
'}';
}
}

View File

@ -0,0 +1,59 @@
package org.qortal.data.account;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.Objects;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class BlockHeightRange {
private int begin;
private int end;
private boolean isRewardDistribution;
public BlockHeightRange() {
}
public BlockHeightRange(int begin, int end, boolean isRewardDistribution) {
this.begin = begin;
this.end = end;
this.isRewardDistribution = isRewardDistribution;
}
public int getBegin() {
return begin;
}
public int getEnd() {
return end;
}
public boolean isRewardDistribution() {
return isRewardDistribution;
}
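// Note: equals() and hashCode() below are keyed on (begin, end) only; isRewardDistribution is excluded.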
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
BlockHeightRange that = (BlockHeightRange) o;
return begin == that.begin && end == that.end;
}
@Override
public int hashCode() {
return Objects.hash(begin, end);
}
@Override
public String toString() {
return "BlockHeightRange{" +
"begin=" + begin +
", end=" + end +
", isRewardDistribution=" + isRewardDistribution +
'}';
}
}

View File

@ -0,0 +1,52 @@
package org.qortal.data.account;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.List;
import java.util.Objects;
// All properties to be converted to JSON via JAXB
@XmlAccessorType(XmlAccessType.FIELD)
public class BlockHeightRangeAddressAmounts {
private BlockHeightRange range;
private List<AddressAmountData> amounts;
public BlockHeightRangeAddressAmounts() {
}
public BlockHeightRangeAddressAmounts(BlockHeightRange range, List<AddressAmountData> amounts) {
this.range = range;
this.amounts = amounts;
}
public BlockHeightRange getRange() {
return range;
}
public List<AddressAmountData> getAmounts() {
return amounts;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
BlockHeightRangeAddressAmounts that = (BlockHeightRangeAddressAmounts) o;
return Objects.equals(range, that.range) && Objects.equals(amounts, that.amounts);
}
@Override
public int hashCode() {
return Objects.hash(range, amounts);
}
@Override
public String toString() {
return "BlockHeightRangeAddressAmounts{" +
"range=" + range +
", amounts=" + amounts +
'}';
}
}

View File

@ -0,0 +1,34 @@
package org.qortal.data.arbitrary;
import org.qortal.arbitrary.misc.Service;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class ArbitraryDataIndex {
public String t;
public String n;
public int c;
public String l;
public ArbitraryDataIndex() {}
public ArbitraryDataIndex(String t, String n, int c, String l) {
this.t = t;
this.n = n;
this.c = c;
this.l = l;
}
@Override
public String toString() {
return "ArbitraryDataIndex{" +
"t='" + t + '\'' +
", n='" + n + '\'' +
", c=" + c +
", l='" + l + '\'' +
'}';
}
}
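From the constructor and toString() above, building one entry looks like this; all values are hypothetical:

public class IndexEntrySketch {
    public static void main(String[] args) {
        // t = term, n = name, c = category code, l = link
        ArbitraryDataIndex entry = new ArbitraryDataIndex("qortal", "alice", 1, "qortal://WEBSITE/alice");
        System.out.println(entry); // ArbitraryDataIndex{t='qortal', n='alice', c=1, l='qortal://WEBSITE/alice'}
    }
}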

View File

@ -0,0 +1,41 @@
package org.qortal.data.arbitrary;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class ArbitraryDataIndexDetail {
public String issuer;
public int rank;
public String term;
public String name;
public int category;
public String link;
public String indexIdentifer;
public ArbitraryDataIndexDetail() {}
public ArbitraryDataIndexDetail(String issuer, int rank, ArbitraryDataIndex index, String indexIdentifer) {
this.issuer = issuer;
this.rank = rank;
this.term = index.t;
this.name = index.n;
this.category = index.c;
this.link = index.l;
this.indexIdentifer = indexIdentifer;
}
@Override
public String toString() {
return "ArbitraryDataIndexDetail{" +
"issuer='" + issuer + '\'' +
", rank=" + rank +
", term='" + term + '\'' +
", name='" + name + '\'' +
", category=" + category +
", link='" + link + '\'' +
", indexIdentifer='" + indexIdentifer + '\'' +
'}';
}
}

View File

@ -0,0 +1,38 @@
package org.qortal.data.arbitrary;
import org.qortal.arbitrary.misc.Service;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.Objects;
@XmlAccessorType(XmlAccessType.FIELD)
public class ArbitraryDataIndexScoreKey {
public String name;
public int category;
public String link;
public ArbitraryDataIndexScoreKey() {}
public ArbitraryDataIndexScoreKey(String name, int category, String link) {
this.name = name;
this.category = category;
this.link = link;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
ArbitraryDataIndexScoreKey that = (ArbitraryDataIndexScoreKey) o;
return category == that.category && Objects.equals(name, that.name) && Objects.equals(link, that.link);
}
@Override
public int hashCode() {
return Objects.hash(name, category, link);
}
}

View File

@ -0,0 +1,38 @@
package org.qortal.data.arbitrary;
import org.qortal.arbitrary.misc.Service;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class ArbitraryDataIndexScorecard {
public double score;
public String name;
public int category;
public String link;
public ArbitraryDataIndexScorecard() {}
public ArbitraryDataIndexScorecard(double score, String name, int category, String link) {
this.score = score;
this.name = name;
this.category = category;
this.link = link;
}
public double getScore() {
return score;
}
@Override
public String toString() {
return "ArbitraryDataIndexScorecard{" +
"score=" + score +
", name='" + name + '\'' +
", category=" + category +
", link='" + link + '\'' +
'}';
}
}

View File

@ -0,0 +1,57 @@
package org.qortal.data.arbitrary;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class DataMonitorInfo {
private long timestamp;
private String identifier;
private String name;
private String service;
private String description;
private long transactionTimestamp;
private long latestPutTimestamp;
public DataMonitorInfo() {
}
public DataMonitorInfo(long timestamp, String identifier, String name, String service, String description, long transactionTimestamp, long latestPutTimestamp) {
this.timestamp = timestamp;
this.identifier = identifier;
this.name = name;
this.service = service;
this.description = description;
this.transactionTimestamp = transactionTimestamp;
this.latestPutTimestamp = latestPutTimestamp;
}
public long getTimestamp() {
return timestamp;
}
public String getIdentifier() {
return identifier;
}
public String getName() {
return name;
}
public String getService() {
return service;
}
public String getDescription() {
return description;
}
public long getTransactionTimestamp() {
return transactionTimestamp;
}
public long getLatestPutTimestamp() {
return latestPutTimestamp;
}
}

View File

@ -0,0 +1,23 @@
package org.qortal.data.arbitrary;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
public class IndexCache {
public static final IndexCache SINGLETON = new IndexCache();
private ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> indicesByTerm = new ConcurrentHashMap<>();
private ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> indicesByIssuer = new ConcurrentHashMap<>();
public static IndexCache getInstance() {
return SINGLETON;
}
public ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> getIndicesByTerm() {
return indicesByTerm;
}
public ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> getIndicesByIssuer() {
return indicesByIssuer;
}
}
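A minimal read sketch against the term map; "qortal" is a hypothetical search term:

import java.util.List;

public class IndexCacheReadSketch {
    public static void main(String[] args) {
        List<ArbitraryDataIndexDetail> hits =
                IndexCache.getInstance().getIndicesByTerm().getOrDefault("qortal", List.of());
        for (ArbitraryDataIndexDetail hit : hits)
            System.out.println(hit.name + " -> " + hit.link);
    }
}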

View File

@ -1,8 +1,11 @@
package org.qortal.data.block;
import com.google.common.primitives.Bytes;
import org.qortal.account.Account;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.NTP;
@ -224,7 +227,7 @@ public class BlockData implements Serializable {
}
return 0;
}
public boolean isTrimmed() {
long onlineAccountSignaturesTrimmedTimestamp = NTP.getTime() - BlockChain.getInstance().getOnlineAccountSignaturesMaxLifetime();
long currentTrimmableTimestamp = NTP.getTime() - Settings.getInstance().getAtStatesMaxLifetime();
@ -232,11 +235,31 @@ public class BlockData implements Serializable {
return blockTimestamp < onlineAccountSignaturesTrimmedTimestamp && blockTimestamp < currentTrimmableTimestamp;
}
public String getMinterAddressFromPublicKey() {
try (final Repository repository = RepositoryManager.getRepository()) {
return Account.getRewardShareMintingAddress(repository, this.minterPublicKey);
} catch (DataException e) {
return "Unknown";
}
}
public int getMinterLevelFromPublicKey() {
try (final Repository repository = RepositoryManager.getRepository()) {
return Account.getRewardShareEffectiveMintingLevel(repository, this.minterPublicKey);
} catch (DataException e) {
return 0;
}
}
// JAXB special
@XmlElement(name = "minterAddress")
protected String getMinterAddress() {
return Crypto.toAddress(this.minterPublicKey);
return getMinterAddressFromPublicKey();
}
@XmlElement(name = "minterLevel")
protected int getMinterLevel() {
return getMinterLevelFromPublicKey();
}
}

View File

@ -0,0 +1,85 @@
package org.qortal.data.block;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import java.util.Objects;
// All properties to be converted to JSON via JAX-RS
@XmlAccessorType(XmlAccessType.FIELD)
public class DecodedOnlineAccountData {
private long onlineTimestamp;
private String minter;
private String recipient;
private int sharePercent;
private boolean minterGroupMember;
private String name;
private int level;
public DecodedOnlineAccountData() {
}
public DecodedOnlineAccountData(long onlineTimestamp, String minter, String recipient, int sharePercent, boolean minterGroupMember, String name, int level) {
this.onlineTimestamp = onlineTimestamp;
this.minter = minter;
this.recipient = recipient;
this.sharePercent = sharePercent;
this.minterGroupMember = minterGroupMember;
this.name = name;
this.level = level;
}
public long getOnlineTimestamp() {
return onlineTimestamp;
}
public String getMinter() {
return minter;
}
public String getRecipient() {
return recipient;
}
public int getSharePercent() {
return sharePercent;
}
public boolean isMinterGroupMember() {
return minterGroupMember;
}
public String getName() {
return name;
}
public int getLevel() {
return level;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
DecodedOnlineAccountData that = (DecodedOnlineAccountData) o;
return onlineTimestamp == that.onlineTimestamp && sharePercent == that.sharePercent && minterGroupMember == that.minterGroupMember && level == that.level && Objects.equals(minter, that.minter) && Objects.equals(recipient, that.recipient) && Objects.equals(name, that.name);
}
@Override
public int hashCode() {
return Objects.hash(onlineTimestamp, minter, recipient, sharePercent, minterGroupMember, name, level);
}
@Override
public String toString() {
return "DecodedOnlineAccountData{" +
"onlineTimestamp=" + onlineTimestamp +
", minter='" + minter + '\'' +
", recipient='" + recipient + '\'' +
", sharePercent=" + sharePercent +
", minterGroupMember=" + minterGroupMember +
", name='" + name + '\'' +
", level=" + level +
'}';
}
}

View File

@ -0,0 +1,35 @@
package org.qortal.data.system;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class DbConnectionInfo {
private long updated;
private String owner;
private String state;
public DbConnectionInfo() {
}
public DbConnectionInfo(long timeOpened, String owner, String state) {
this.updated = timeOpened;
this.owner = owner;
this.state = state;
}
public long getUpdated() {
return updated;
}
public String getOwner() {
return owner;
}
public String getState() {
return state;
}
}

View File

@ -0,0 +1,49 @@
package org.qortal.data.system;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class SystemInfo {
private long freeMemory;
private long memoryInUse;
private long totalMemory;
private long maxMemory;
private int availableProcessors;
public SystemInfo() {
}
public SystemInfo(long freeMemory, long memoryInUse, long totalMemory, long maxMemory, int availableProcessors) {
this.freeMemory = freeMemory;
this.memoryInUse = memoryInUse;
this.totalMemory = totalMemory;
this.maxMemory = maxMemory;
this.availableProcessors = availableProcessors;
}
public long getFreeMemory() {
return freeMemory;
}
public long getMemoryInUse() {
return memoryInUse;
}
public long getTotalMemory() {
return totalMemory;
}
public long getMaxMemory() {
return maxMemory;
}
public int getAvailableProcessors() {
return availableProcessors;
}
}

View File

@ -200,4 +200,26 @@ public class ArbitraryTransactionData extends TransactionData {
return this.payments;
}
@Override
public String toString() {
return "ArbitraryTransactionData{" +
"version=" + version +
", service=" + service +
", nonce=" + nonce +
", size=" + size +
", name='" + name + '\'' +
", identifier='" + identifier + '\'' +
", method=" + method +
", compression=" + compression +
", dataType=" + dataType +
", type=" + type +
", timestamp=" + timestamp +
", fee=" + fee +
", txGroupId=" + txGroupId +
", blockHeight=" + blockHeight +
", blockSequence=" + blockSequence +
", approvalStatus=" + approvalStatus +
", approvalHeight=" + approvalHeight +
'}';
}
}

View File

@ -0,0 +1,57 @@
package org.qortal.event;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
@XmlAccessorType(XmlAccessType.FIELD)
public class DataMonitorEvent implements Event {
private long timestamp;
private String identifier;
private String name;
private String service;
private String description;
private long transactionTimestamp;
private long latestPutTimestamp;
public DataMonitorEvent() {
}
public DataMonitorEvent(long timestamp, String identifier, String name, String service, String description, long transactionTimestamp, long latestPutTimestamp) {
this.timestamp = timestamp;
this.identifier = identifier;
this.name = name;
this.service = service;
this.description = description;
this.transactionTimestamp = transactionTimestamp;
this.latestPutTimestamp = latestPutTimestamp;
}
public long getTimestamp() {
return timestamp;
}
public String getIdentifier() {
return identifier;
}
public String getName() {
return name;
}
public String getService() {
return service;
}
public String getDescription() {
return description;
}
public long getTransactionTimestamp() {
return transactionTimestamp;
}
public long getLatestPutTimestamp() {
return latestPutTimestamp;
}
}

View File

@ -2,6 +2,7 @@ package org.qortal.group;
import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.block.BlockChain;
import org.qortal.controller.Controller;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.*;
@ -150,7 +151,12 @@ public class Group {
// Adminship
private GroupAdminData getAdmin(String admin) throws DataException {
return groupRepository.getAdmin(this.groupData.getGroupId(), admin);
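// Height-gated fix: below the admin-query fix height, keep using the faulty query so earlier
// blocks revalidate exactly as they originally did; from that height on, use the corrected query.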
if( repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getAdminQueryFixHeight()) {
return groupRepository.getAdminFaulty(this.groupData.getGroupId(), admin);
}
else {
return groupRepository.getAdmin(this.groupData.getGroupId(), admin);
}
}
private boolean adminExists(String admin) throws DataException {
@ -668,8 +674,8 @@ public class Group {
public void uninvite(GroupInviteTransactionData groupInviteTransactionData) throws DataException {
String invitee = groupInviteTransactionData.getInvitee();
// If member exists then they were added when invite matched join request
if (this.memberExists(invitee)) {
// If member exists and the join request is present then they were added when invite matched join request
if (this.memberExists(invitee) && groupInviteTransactionData.getJoinReference() != null) {
// Rebuild join request using cached reference to transaction that created join request.
this.rebuildJoinRequest(invitee, groupInviteTransactionData.getJoinReference());

View File

@ -4,10 +4,15 @@ import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.utils.DaemonThreadFactory;
import org.qortal.utils.ExecuteProduceConsume.Task;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class PeerConnectTask implements Task {
private static final Logger LOGGER = LogManager.getLogger(PeerConnectTask.class);
private static final ExecutorService connectionExecutor = Executors.newCachedThreadPool(new DaemonThreadFactory(8));
private final Peer peer;
private final String name;
@ -24,6 +29,24 @@ public class PeerConnectTask implements Task {
@Override
public void perform() throws InterruptedException {
Network.getInstance().connectPeer(peer);
// Submit connection task to a dedicated thread pool for non-blocking I/O
connectionExecutor.submit(() -> {
try {
connectPeerAsync(peer);
} catch (InterruptedException e) {
LOGGER.error("Connection attempt interrupted for peer {}", peer, e);
Thread.currentThread().interrupt(); // Reset interrupt flag
}
});
}
private void connectPeerAsync(Peer peer) throws InterruptedException {
// Perform peer connection in a separate thread to avoid blocking main task execution
try {
Network.getInstance().connectPeer(peer);
LOGGER.trace("Successfully connected to peer {}", peer);
} catch (Exception e) {
LOGGER.error("Error connecting to peer {}", peer, e);
}
}
}

View File

@ -76,9 +76,9 @@ public interface ATRepository {
* Although <tt>expectedValue</tt>, if provided, is natively an unsigned long,
* the data segment comparison is done via unsigned hex string.
*/
public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
Integer limit, Integer offset, Boolean reverse) throws DataException;
public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, byte[] buyerPublicKey, byte[] sellerPublicKey, Boolean isFinished,
Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
Integer limit, Integer offset, Boolean reverse) throws DataException;
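Existing call sites adapt by passing null for the two new filter parameters. A minimal call sketch; repository, codeHash, dataByteOffset and expectedValue are assumed to be in scope:

List<ATStateData> states = repository.getATRepository().getMatchingFinalATStates(
        codeHash, null, null,                // null buyer/seller keys keep the old, unfiltered behaviour
        Boolean.TRUE,                        // finished ATs only
        dataByteOffset, expectedValue, null, // optional data-segment match; no minimum final height
        10, 0, true);                        // limit, offset, newest first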
/**
* Returns final ATStateData for ATs matching codeHash (required)

View File

@ -27,6 +27,10 @@ public interface ArbitraryRepository {
public List<ArbitraryTransactionData> getArbitraryTransactions(String name, Service service, String identifier, long since) throws DataException;
List<ArbitraryTransactionData> getLatestArbitraryTransactions() throws DataException;
List<ArbitraryTransactionData> getLatestArbitraryTransactionsByName(String name) throws DataException;
public ArbitraryTransactionData getInitialTransaction(String name, Service service, Method method, String identifier) throws DataException;
public ArbitraryTransactionData getLatestTransaction(String name, Service service, Method method, String identifier) throws DataException;
@ -42,7 +46,7 @@ public interface ArbitraryRepository {
public List<ArbitraryResourceData> getArbitraryResources(Service service, String identifier, List<String> names, boolean defaultResource, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Integer limit, Integer offset, Boolean reverse) throws DataException;
public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, boolean prefixOnly, List<String> namesFilter, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException;
public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, List<String> keywords, boolean prefixOnly, List<String> namesFilter, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException;
List<ArbitraryResourceData> searchArbitraryResourcesSimple(
Service service,

View File

@ -22,6 +22,6 @@ public interface ChatRepository {
public ChatMessage toChatMessage(ChatTransactionData chatTransactionData, Encoding encoding) throws DataException;
public ActiveChats getActiveChats(String address, Encoding encoding) throws DataException;
public ActiveChats getActiveChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException;
}

View File

@ -48,6 +48,8 @@ public interface GroupRepository {
// Group Admins
public GroupAdminData getAdminFaulty(int groupId, String address) throws DataException;
public GroupAdminData getAdmin(int groupId, String address) throws DataException;
public boolean adminExists(int groupId, String address) throws DataException;

View File

@ -1,9 +1,11 @@
package org.qortal.repository.hsqldb;
import com.google.common.primitives.Longs;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.crypto.Crypto;
import org.qortal.data.at.ATData;
import org.qortal.data.at.ATStateData;
import org.qortal.repository.ATRepository;
@ -16,6 +18,8 @@ import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import org.qortal.data.account.AccountData;
public class HSQLDBATRepository implements ATRepository {
private static final Logger LOGGER = LogManager.getLogger(HSQLDBATRepository.class);
@ -400,9 +404,9 @@ public class HSQLDBATRepository implements ATRepository {
}
@Override
public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, Boolean isFinished,
Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
Integer limit, Integer offset, Boolean reverse) throws DataException {
public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, byte[] buyerPublicKey, byte[] sellerPublicKey, Boolean isFinished,
Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
Integer limit, Integer offset, Boolean reverse) throws DataException {
StringBuilder sql = new StringBuilder(1024);
List<Object> bindParams = new ArrayList<>();
@ -421,10 +425,14 @@ public class HSQLDBATRepository implements ATRepository {
// Order by AT_address and height to use compound primary key as index
// Both must be the same direction (DESC) also
sql.append("ORDER BY ATStates.AT_address DESC, ATStates.height DESC "
+ "LIMIT 1 "
+ ") AS FinalATStates "
+ "WHERE code_hash = ? ");
sql.append("ORDER BY ATStates.height DESC LIMIT 1) AS FinalATStates ");
// Optional JOIN with ATTRANSACTIONS for buyerAddress
if (buyerPublicKey != null && buyerPublicKey.length > 0) {
sql.append("JOIN ATTRANSACTIONS tx ON tx.at_address = ATs.AT_address ");
}
sql.append("WHERE ATs.code_hash = ? ");
bindParams.add(codeHash);
if (isFinished != null) {
@ -443,6 +451,20 @@ public class HSQLDBATRepository implements ATRepository {
bindParams.add(rawExpectedValue);
}
if (buyerPublicKey != null && buyerPublicKey.length > 0 ) {
// the buyer must be the recipient of the transaction and not the creator of the AT
sql.append("AND tx.recipient = ? AND ATs.creator != ? ");
bindParams.add(Crypto.toAddress(buyerPublicKey));
bindParams.add(buyerPublicKey);
}
if (sellerPublicKey != null && sellerPublicKey.length > 0) {
sql.append("AND ATs.creator = ? ");
bindParams.add(sellerPublicKey);
}
sql.append(" ORDER BY FinalATStates.height ");
if (reverse != null && reverse)
sql.append("DESC");
@ -483,7 +505,7 @@ public class HSQLDBATRepository implements ATRepository {
Integer dataByteOffset, Long expectedValue,
int minimumCount, int maximumCount, long minimumPeriod) throws DataException {
// We need most recent entry first so we can use its timestamp to slice further results
List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, isFinished,
List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, null, null, isFinished,
dataByteOffset, expectedValue, null,
1, 0, true);

View File

@ -7,7 +7,6 @@ import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Category;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.arbitrary.ArbitraryDataManager;
import org.qortal.data.arbitrary.ArbitraryResourceCache;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
@ -29,6 +28,7 @@ import org.qortal.utils.ListUtils;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
@ -227,6 +227,144 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
}
}
@Override
public List<ArbitraryTransactionData> getLatestArbitraryTransactions() throws DataException {
String sql = "SELECT type, reference, signature, creator, created_when, fee, " +
"tx_group_id, block_height, approval_status, approval_height, " +
"version, nonce, service, size, is_data_raw, data, metadata_hash, " +
"name, identifier, update_method, secret, compression FROM ArbitraryTransactions " +
"JOIN Transactions USING (signature) " +
"WHERE name IS NOT NULL " +
"ORDER BY created_when DESC";
List<ArbitraryTransactionData> arbitraryTransactionData = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
if (resultSet == null)
return new ArrayList<>(0);
do {
byte[] reference = resultSet.getBytes(2);
byte[] signature = resultSet.getBytes(3);
byte[] creatorPublicKey = resultSet.getBytes(4);
long timestamp = resultSet.getLong(5);
Long fee = resultSet.getLong(6);
if (fee == 0 && resultSet.wasNull())
fee = null;
int txGroupId = resultSet.getInt(7);
Integer blockHeight = resultSet.getInt(8);
if (blockHeight == 0 && resultSet.wasNull())
blockHeight = null;
ApprovalStatus approvalStatus = ApprovalStatus.valueOf(resultSet.getInt(9));
Integer approvalHeight = resultSet.getInt(10);
if (approvalHeight == 0 && resultSet.wasNull())
approvalHeight = null;
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, approvalStatus, blockHeight, approvalHeight, signature);
int version = resultSet.getInt(11);
int nonce = resultSet.getInt(12);
int serviceInt = resultSet.getInt(13);
int size = resultSet.getInt(14);
boolean isDataRaw = resultSet.getBoolean(15); // column is NOT NULL, so false here never means SQL NULL
DataType dataType = isDataRaw ? DataType.RAW_DATA : DataType.DATA_HASH;
byte[] data = resultSet.getBytes(16);
byte[] metadataHash = resultSet.getBytes(17);
String nameResult = resultSet.getString(18);
String identifierResult = resultSet.getString(19);
Method method = Method.valueOf(resultSet.getInt(20));
byte[] secret = resultSet.getBytes(21);
Compression compression = Compression.valueOf(resultSet.getInt(22));
// FUTURE: get payments from signature if needed. Avoiding for now to reduce database calls.
ArbitraryTransactionData transactionData = new ArbitraryTransactionData(baseTransactionData,
version, serviceInt, nonce, size, nameResult, identifierResult, method, secret,
compression, data, dataType, metadataHash, null);
arbitraryTransactionData.add(transactionData);
} while (resultSet.next());
return arbitraryTransactionData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary transactions from repository", e);
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
return new ArrayList<>(0);
}
}
@Override
public List<ArbitraryTransactionData> getLatestArbitraryTransactionsByName( String name ) throws DataException {
String sql = "SELECT type, reference, signature, creator, created_when, fee, " +
"tx_group_id, block_height, approval_status, approval_height, " +
"version, nonce, service, size, is_data_raw, data, metadata_hash, " +
"name, identifier, update_method, secret, compression FROM ArbitraryTransactions " +
"JOIN Transactions USING (signature) " +
"WHERE name = ? " +
"ORDER BY created_when DESC";
List<ArbitraryTransactionData> arbitraryTransactionData = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(sql, name)) {
if (resultSet == null)
return new ArrayList<>(0);
do {
byte[] reference = resultSet.getBytes(2);
byte[] signature = resultSet.getBytes(3);
byte[] creatorPublicKey = resultSet.getBytes(4);
long timestamp = resultSet.getLong(5);
Long fee = resultSet.getLong(6);
if (fee == 0 && resultSet.wasNull())
fee = null;
int txGroupId = resultSet.getInt(7);
Integer blockHeight = resultSet.getInt(8);
if (blockHeight == 0 && resultSet.wasNull())
blockHeight = null;
ApprovalStatus approvalStatus = ApprovalStatus.valueOf(resultSet.getInt(9));
Integer approvalHeight = resultSet.getInt(10);
if (approvalHeight == 0 && resultSet.wasNull())
approvalHeight = null;
BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, approvalStatus, blockHeight, approvalHeight, signature);
int version = resultSet.getInt(11);
int nonce = resultSet.getInt(12);
int serviceInt = resultSet.getInt(13);
int size = resultSet.getInt(14);
boolean isDataRaw = resultSet.getBoolean(15); // column is NOT NULL, so false here never means SQL NULL
DataType dataType = isDataRaw ? DataType.RAW_DATA : DataType.DATA_HASH;
byte[] data = resultSet.getBytes(16);
byte[] metadataHash = resultSet.getBytes(17);
String nameResult = resultSet.getString(18);
String identifierResult = resultSet.getString(19);
Method method = Method.valueOf(resultSet.getInt(20));
byte[] secret = resultSet.getBytes(21);
Compression compression = Compression.valueOf(resultSet.getInt(22));
// FUTURE: get payments from signature if needed. Avoiding for now to reduce database calls.
ArbitraryTransactionData transactionData = new ArbitraryTransactionData(baseTransactionData,
version, serviceInt, nonce, size, nameResult, identifierResult, method, secret,
compression, data, dataType, metadataHash, null);
arbitraryTransactionData.add(transactionData);
} while (resultSet.next());
return arbitraryTransactionData;
} catch (SQLException e) {
throw new DataException("Unable to fetch arbitrary transactions from repository", e);
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
return new ArrayList<>(0);
}
}
private ArbitraryTransactionData getSingleTransaction(String name, Service service, Method method, String identifier, boolean firstNotLast) throws DataException {
if (name == null || service == null) {
// Required fields
@ -724,12 +862,11 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
}
@Override
public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, boolean prefixOnly,
public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, List<String> keywords, boolean prefixOnly,
List<String> exactMatchNames, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked,
Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException {
if(Settings.getInstance().isDbCacheEnabled()) {
List<ArbitraryResourceData> list
= HSQLDBCacheUtils.callCache(
ArbitraryResourceCache.getInstance(),
@ -751,6 +888,7 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
Optional.ofNullable(description),
prefixOnly,
Optional.ofNullable(exactMatchNames),
Optional.ofNullable(keywords),
defaultResource,
Optional.ofNullable(minLevel),
Optional.ofNullable(() -> ListUtils.followedNames()),
@ -771,6 +909,7 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
}
}
StringBuilder sql = new StringBuilder(512);
List<Object> bindParams = new ArrayList<>();
@ -857,6 +996,26 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
bindParams.add(queryWildcard);
}
if (keywords != null && !keywords.isEmpty()) {
List<String> searchKeywords = new ArrayList<>(keywords);
List<String> conditions = new ArrayList<>();
List<String> bindValues = new ArrayList<>();
for (int i = 0; i < searchKeywords.size(); i++) {
conditions.add("LOWER(description) LIKE ?");
bindValues.add("%" + searchKeywords.get(i).trim().toLowerCase() + "%");
}
String finalCondition = String.join(" OR ", conditions);
sql.append(" AND (").append(finalCondition).append(")");
bindParams.addAll(bindValues);
}
// Handle name searches
if (names != null && !names.isEmpty()) {
sql.append(" AND (");

View File

@ -3,12 +3,24 @@ package org.qortal.repository.hsqldb;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.SearchMode;
import org.qortal.api.resource.TransactionsResource;
import org.qortal.arbitrary.misc.Category;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AddressAmountData;
import org.qortal.data.account.BlockHeightRange;
import org.qortal.data.account.BlockHeightRangeAddressAmounts;
import org.qortal.data.arbitrary.ArbitraryResourceCache;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
import org.qortal.data.arbitrary.ArbitraryResourceStatus;
import org.qortal.data.transaction.TransactionData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.BalanceRecorderUtils;
import java.sql.ResultSet;
import java.sql.SQLException;
@ -25,6 +37,7 @@ import java.util.Optional;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
@ -48,6 +61,11 @@ public class HSQLDBCacheUtils {
}
};
private static final String DEFAULT_IDENTIFIER = "default";
private static final int ZERO = 0;
public static final String DB_CACHE_TIMER = "DB Cache Timer";
public static final String DB_CACHE_TIMER_TASK = "DB Cache Timer Task";
public static final String BALANCE_RECORDER_TIMER = "Balance Recorder Timer";
public static final String BALANCE_RECORDER_TIMER_TASK = "Balance Recorder Timer Task";
/**
*
@ -149,6 +167,7 @@ public class HSQLDBCacheUtils {
Optional<String> description,
boolean prefixOnly,
Optional<List<String>> exactMatchNames,
Optional<List<String>> keywords,
boolean defaultResource,
Optional<Integer> minLevel,
Optional<Supplier<List<String>>> includeOnly,
@ -162,7 +181,18 @@ public class HSQLDBCacheUtils {
Optional<Boolean> reverse) {
// retain only candidates with names
Stream<ArbitraryResourceData> stream = candidates.stream().filter(candidate -> candidate.name != null);
if(after.isPresent()) {
stream = stream.filter( candidate -> candidate.created > after.get().longValue() );
}
if(before.isPresent()) {
stream = stream.filter( candidate -> candidate.created < before.get().longValue() );
}
if(exclude.isPresent())
stream = stream.filter( candidate -> !exclude.get().get().contains( candidate.name ));
// filter by service
if( service.isPresent() )
@ -186,6 +216,36 @@ public class HSQLDBCacheUtils {
stream = filterTerm(title, data -> data.metadata != null ? data.metadata.getTitle() : null, prefixOnly, stream);
stream = filterTerm(description, data -> data.metadata != null ? data.metadata.getDescription() : null, prefixOnly, stream);
// New: Filter by keywords if provided
if (keywords.isPresent() && !keywords.get().isEmpty()) {
List<String> searchKeywords = keywords.get().stream()
.map(String::toLowerCase)
.collect(Collectors.toList());
stream = stream.filter(candidate -> {
if (candidate.metadata != null && candidate.metadata.getDescription() != null) {
String descriptionLower = candidate.metadata.getDescription().toLowerCase();
return searchKeywords.stream().anyMatch(descriptionLower::contains);
}
return false;
});
}
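The in-memory cache path applies the same rule as the SQL path above: a case-insensitive substring match of any keyword against the metadata description. A standalone sketch of that predicate, with a plain String standing in for the metadata type (illustrative only):

    import java.util.List;
    import java.util.function.Predicate;

    public class KeywordPredicateSketch {
        public static void main(String[] args) {
            // keywords are assumed to be lower-cased already, as in the code above
            List<String> searchKeywords = List.of("blog", "video");
            Predicate<String> matches = description -> {
                if (description == null) return false;
                String lower = description.toLowerCase();
                return searchKeywords.stream().anyMatch(lower::contains);
            };
            System.out.println(matches.test("My Video Channel")); // true
            System.out.println(matches.test(null));               // false
        }
    }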
// if exact names is set, retain resources with exact names
if( exactMatchNames.isPresent() && !exactMatchNames.get().isEmpty()) {
@ -241,15 +301,58 @@ public class HSQLDBCacheUtils {
// truncate to limit
if( limit.isPresent() && limit.get() > 0 ) stream = stream.limit(limit.get());
// include metadata
if( includeMetadata.isEmpty() || !includeMetadata.get() )
stream = stream.peek( candidate -> candidate.metadata = null );
// include status
if( includeStatus.isEmpty() || !includeStatus.get() )
stream = stream.peek( candidate -> candidate.status = null);
return stream.collect(Collectors.toList());
List<ArbitraryResourceData> listCopy1 = stream.collect(Collectors.toList());
List<ArbitraryResourceData> listCopy2 = new ArrayList<>(listCopy1.size());
// remove metadata from the first copy
if( includeMetadata.isEmpty() || !includeMetadata.get() ) {
for( ArbitraryResourceData data : listCopy1 ) {
ArbitraryResourceData copy = new ArbitraryResourceData();
copy.name = data.name;
copy.service = data.service;
copy.identifier = data.identifier;
copy.status = data.status;
copy.metadata = null;
copy.size = data.size;
copy.created = data.created;
copy.updated = data.updated;
listCopy2.add(copy);
}
}
// put the list copy 1 into the second copy
else {
listCopy2.addAll(listCopy1);
}
// remove status from final copy
if( includeStatus.isEmpty() || !includeStatus.get() ) {
List<ArbitraryResourceData> finalCopy = new ArrayList<>(listCopy2.size());
for( ArbitraryResourceData data : listCopy2 ) {
ArbitraryResourceData copy = new ArbitraryResourceData();
copy.name = data.name;
copy.service = data.service;
copy.identifier = data.identifier;
copy.status = null;
copy.metadata = data.metadata;
copy.size = data.size;
copy.created = data.created;
copy.updated = data.updated;
finalCopy.add(copy);
}
return finalCopy;
}
// keep status included by returning the second copy
else {
return listCopy2;
}
}
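The copy-before-stripping above appears intended to stop callers from mutating objects shared with the cache; the replaced peek-based code nulled fields on the cached instances themselves. A minimal sketch of that aliasing hazard (simplified type, illustrative only):

    import java.util.List;
    import java.util.stream.Collectors;

    public class AliasingSketch {
        static class Resource { String metadata = "m"; }

        public static void main(String[] args) {
            List<Resource> cache = List.of(new Resource());
            // Mutating elements while streaming also mutates the cached objects...
            List<Resource> view = cache.stream()
                    .peek(r -> r.metadata = null) // same object as in 'cache'
                    .collect(Collectors.toList());
            System.out.println(cache.get(0).metadata); // null - cache corrupted
        }
    }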
/**
@ -351,13 +454,200 @@ public class HSQLDBCacheUtils {
* Start Caching
*
* @param priorityRequested the thread priority to fill cache in
* @param frequency the frequency to fill the cache (in seconds)
*
*/
public static void startCaching(int priorityRequested, int frequency, HSQLDBRepository respository) {
public static void startCaching(int priorityRequested, int frequency) {
Timer timer = buildTimer(DB_CACHE_TIMER, priorityRequested);
TimerTask task = new TimerTask() {
@Override
public void run() {
Thread.currentThread().setName(DB_CACHE_TIMER_TASK);
try (final HSQLDBRepository respository = (HSQLDBRepository) Controller.REPOSITORY_FACTORY.getRepository()) {
fillCache(ArbitraryResourceCache.getInstance(), respository);
}
catch( DataException e ) {
LOGGER.error(e.getMessage(), e);
}
}
};
// delay 1 second
timer.scheduleAtFixedRate(task, 1000, frequency * 1000);
}
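A hedged usage sketch for the new two-argument form; the concrete priority and frequency values would presumably come from Settings, and the literals below are placeholders:

    public class CacheStartupSketch {
        public static void main(String[] args) {
            // Placeholder values; the node presumably sources these from its settings
            int priority = 2;          // timer thread priority (1-10)
            int frequencySeconds = 60; // cache refill period, in seconds
            org.qortal.repository.hsqldb.HSQLDBCacheUtils.startCaching(priority, frequencySeconds);
        }
    }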
/**
* Start Recording Balances
*
* @param balancesByHeight height -> account balances
* @param balanceDynamics every balance dynamic
* @param priorityRequested the requested thread priority
* @param frequency the recording frequencies, in minutes
* @param capacity the maximum size of balanceDynamics
*/
public static void startRecordingBalances(
final ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight,
CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics,
int priorityRequested,
int frequency,
int capacity) {
Timer timer = buildTimer(BALANCE_RECORDER_TIMER, priorityRequested);
TimerTask task = new TimerTask() {
@Override
public void run() {
Thread.currentThread().setName(BALANCE_RECORDER_TIMER_TASK);
int currentHeight = recordCurrentBalances(balancesByHeight);
LOGGER.debug("recorded balances: height = " + currentHeight);
// remove invalidated recordings, recording after current height
BalanceRecorderUtils.removeRecordingsAboveHeight(currentHeight, balancesByHeight);
// remove invalidated dynamics, on or after current height
BalanceRecorderUtils.removeDynamicsOnOrAboveHeight(currentHeight, balanceDynamics);
// if there are 2 or more recordings, then produce balance dynamics for the first 2 recordings
if( balancesByHeight.size() > 1 ) {
Optional<Integer> priorHeight = BalanceRecorderUtils.getPriorHeight(currentHeight, balancesByHeight);
// if there is a prior height
if(priorHeight.isPresent()) {
boolean isRewardDistribution = BalanceRecorderUtils.isRewardDistributionRange(priorHeight.get(), currentHeight);
// if this range has a reward recording block or if other blocks are enabled for recording
if( isRewardDistribution || !Settings.getInstance().isRewardRecordingOnly() ) {
produceBalanceDynamics(currentHeight, priorHeight, isRewardDistribution, balancesByHeight, balanceDynamics, capacity);
}
}
else {
LOGGER.warn("Expecting prior height and nothing was discovered, current height = " + currentHeight);
}
}
// else this should be the first recording
else {
LOGGER.info("first balance recording completed");
}
}
};
// wait 5 minutes
timer.scheduleAtFixedRate(task, 300_000, frequency * 60_000);
}
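A hedged startup sketch; the literals mirror the defaults that appear in Settings further down, but the wiring itself is illustrative:

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    import org.qortal.data.account.AccountBalanceData;
    import org.qortal.data.account.BlockHeightRangeAddressAmounts;
    import org.qortal.repository.hsqldb.HSQLDBCacheUtils;

    public class BalanceRecorderStartupSketch {
        public static void main(String[] args) {
            ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight = new ConcurrentHashMap<>();
            CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics = new CopyOnWriteArrayList<>();
            // priority 1, every 20 minutes, retain up to 1000 ranges (placeholder values)
            HSQLDBCacheUtils.startRecordingBalances(balancesByHeight, balanceDynamics, 1, 20, 1000);
        }
    }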
private static void produceBalanceDynamics(int currentHeight, Optional<Integer> priorHeight, boolean isRewardDistribution, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight, CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics, int capacity) {
BlockHeightRange blockHeightRange = new BlockHeightRange(priorHeight.get(), currentHeight, isRewardDistribution);
LOGGER.debug("building dynamics for block heights: range = " + blockHeightRange);
List<AccountBalanceData> currentBalances = balancesByHeight.get(currentHeight);
ArrayList<TransactionData> transactions = getTransactionDataForBlocks(blockHeightRange);
LOGGER.info("transactions counted for balance adjustments: count = " + transactions.size());
List<AddressAmountData> currentDynamics
= BalanceRecorderUtils.buildBalanceDynamics(
currentBalances,
balancesByHeight.get(priorHeight.get()),
Settings.getInstance().getMinimumBalanceRecording(),
transactions);
LOGGER.debug("dynamics built: count = " + currentDynamics.size());
if(LOGGER.isDebugEnabled())
currentDynamics.stream()
.sorted(Comparator.comparingLong(AddressAmountData::getAmount).reversed())
.limit(Settings.getInstance().getTopBalanceLoggingLimit())
.forEach(topDynamic -> LOGGER.debug("Top Dynamics = " + topDynamic));
BlockHeightRangeAddressAmounts amounts
= new BlockHeightRangeAddressAmounts( blockHeightRange, currentDynamics );
balanceDynamics.add(amounts);
BalanceRecorderUtils.removeRecordingsBelowHeight(currentHeight - Settings.getInstance().getBalanceRecorderRollbackAllowance(), balancesByHeight);
while(balanceDynamics.size() > capacity) {
BlockHeightRangeAddressAmounts oldestDynamics = BalanceRecorderUtils.removeOldestDynamics(balanceDynamics);
LOGGER.debug("removing oldest dynamics: range " + oldestDynamics.getRange());
}
}
private static ArrayList<TransactionData> getTransactionDataForBlocks(BlockHeightRange blockHeightRange) {
ArrayList<TransactionData> transactions;
try (final Repository repository = RepositoryManager.getRepository()) {
List<byte[]> signatures
= repository.getTransactionRepository().getSignaturesMatchingCriteria(
blockHeightRange.getBegin() + 1, blockHeightRange.getEnd() - blockHeightRange.getBegin(),
null, null,null, null, null,
TransactionsResource.ConfirmationStatus.CONFIRMED,
null, null, null);
transactions = new ArrayList<>(signatures.size());
for (byte[] signature : signatures) {
transactions.add(repository.getTransactionRepository().fromSignature(signature));
}
LOGGER.debug(String.format("Found %s transactions for " + blockHeightRange, transactions.size()));
} catch (Exception e) {
transactions = new ArrayList<>(0);
LOGGER.warn("Problems getting transactions for balance recording: " + e.getMessage());
}
return transactions;
}
private static int recordCurrentBalances(ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
int currentHeight;
try (final HSQLDBRepository repository = (HSQLDBRepository) Controller.REPOSITORY_FACTORY.getRepository()) {
// get current balances
List<AccountBalanceData> accountBalances = getAccountBalances(repository);
// get any one of the balances
Optional<AccountBalanceData> data = accountBalances.stream().findAny();
// if there are any balances, then record them
if (data.isPresent()) {
// map all new balances to the current height
balancesByHeight.put(data.get().getHeight(), accountBalances);
currentHeight = data.get().getHeight();
}
else {
currentHeight = Integer.MAX_VALUE;
}
} catch (DataException e) {
LOGGER.error(e.getMessage(), e);
currentHeight = Integer.MAX_VALUE;
}
return currentHeight;
}
/**
* Build Timer
*
* Build a timer for scheduling a timer task.
*
* @param name the name for the thread running the timer task
* @param priorityRequested the priority for the thread running the timer task
*
* @return a timer for scheduling a timer task
*/
private static Timer buildTimer( final String name, int priorityRequested) {
// ensure priority is within 1-10 (0 would make Thread.setPriority throw)
final int priority = Math.max(1, Math.min(10, priorityRequested));
@ -365,7 +655,7 @@ public class HSQLDBCacheUtils {
Timer timer = new Timer(true) { // 'true' to make the Timer daemon
@Override
public void schedule(TimerTask task, long delay) {
Thread thread = new Thread(task) {
Thread thread = new Thread(task, name) {
@Override
public void run() {
this.setPriority(priority);
@ -376,17 +666,7 @@ public class HSQLDBCacheUtils {
thread.start();
}
};
TimerTask task = new TimerTask() {
@Override
public void run() {
fillCache(ArbitraryResourceCache.getInstance(), respository);
}
};
// delay 1 second
timer.scheduleAtFixedRate(task, 1000, frequency * 1000);
return timer;
}
/**
@ -541,4 +821,43 @@ public class HSQLDBCacheUtils {
return resources;
}
public static List<AccountBalanceData> getAccountBalances(HSQLDBRepository repository) {
StringBuilder sql = new StringBuilder();
sql.append("SELECT account, balance, height ");
sql.append("FROM ACCOUNTBALANCES as balances ");
sql.append("JOIN (SELECT height FROM BLOCKS ORDER BY height DESC LIMIT 1) AS max_height ON true ");
sql.append("WHERE asset_id=0");
List<AccountBalanceData> data = new ArrayList<>();
LOGGER.info( "Getting account balances ...");
try (Statement statement = repository.connection.createStatement();
ResultSet resultSet = statement.executeQuery(sql.toString())) {
if (resultSet == null || !resultSet.next())
return new ArrayList<>(0);
do {
String account = resultSet.getString(1);
long balance = resultSet.getLong(2);
int height = resultSet.getInt(3);
data.add(new AccountBalanceData(account, ZERO, balance, height));
} while (resultSet.next());
} catch (SQLException e) {
LOGGER.warn(e.getMessage());
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
LOGGER.info("Retrieved account balances: count = " + data.size());
return data;
}
}

View File

@ -23,7 +23,7 @@ public class HSQLDBChatRepository implements ChatRepository {
public HSQLDBChatRepository(HSQLDBRepository repository) {
this.repository = repository;
}
@Override
public List<ChatMessage> getMessagesMatchingCriteria(Long before, Long after, Integer txGroupId, byte[] referenceBytes,
byte[] chatReferenceBytes, Boolean hasChatReference, List<String> involving, String senderAddress,
@ -176,14 +176,14 @@ public class HSQLDBChatRepository implements ChatRepository {
}
@Override
public ActiveChats getActiveChats(String address, Encoding encoding) throws DataException {
List<GroupChat> groupChats = getActiveGroupChats(address, encoding);
List<DirectChat> directChats = getActiveDirectChats(address);
public ActiveChats getActiveChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException {
List<GroupChat> groupChats = getActiveGroupChats(address, encoding, hasChatReference);
List<DirectChat> directChats = getActiveDirectChats(address, hasChatReference);
return new ActiveChats(groupChats, directChats);
}
private List<GroupChat> getActiveGroupChats(String address, Encoding encoding) throws DataException {
private List<GroupChat> getActiveGroupChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException {
// Find groups where address is a member and potential latest message details
String groupsSql = "SELECT group_id, group_name, latest_timestamp, sender, sender_name, signature, data "
+ "FROM GroupMembers "
@ -194,11 +194,19 @@ public class HSQLDBChatRepository implements ChatRepository {
+ "JOIN Transactions USING (signature) "
+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
// NOTE: We need to qualify "Groups.group_id" here to avoid "General error" bug in HSQLDB v2.5.0
+ "WHERE tx_group_id = Groups.group_id AND type = " + TransactionType.CHAT.value + " "
+ "ORDER BY created_when DESC "
+ "WHERE tx_group_id = Groups.group_id AND type = " + TransactionType.CHAT.value + " ";
if (hasChatReference != null) {
if (hasChatReference) {
groupsSql += "AND chat_reference IS NOT NULL ";
} else {
groupsSql += "AND chat_reference IS NULL ";
}
}
groupsSql += "ORDER BY created_when DESC "
+ "LIMIT 1"
+ ") AS LatestMessages ON TRUE "
+ "WHERE address = ?";
+ ") AS LatestMessages ON TRUE "
+ "WHERE address = ?";
List<GroupChat> groupChats = new ArrayList<>();
try (ResultSet resultSet = this.repository.checkedExecute(groupsSql, address)) {
@ -230,8 +238,16 @@ public class HSQLDBChatRepository implements ChatRepository {
+ "JOIN Transactions USING (signature) "
+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
+ "WHERE tx_group_id = 0 "
+ "AND recipient IS NULL "
+ "ORDER BY created_when DESC "
+ "AND recipient IS NULL ";
if (hasChatReference != null) {
if (hasChatReference) {
grouplessSql += "AND chat_reference IS NOT NULL ";
} else {
grouplessSql += "AND chat_reference IS NULL ";
}
}
grouplessSql += "ORDER BY created_when DESC "
+ "LIMIT 1";
try (ResultSet resultSet = this.repository.checkedExecute(grouplessSql)) {
@ -259,7 +275,7 @@ public class HSQLDBChatRepository implements ChatRepository {
return groupChats;
}
private List<DirectChat> getActiveDirectChats(String address) throws DataException {
private List<DirectChat> getActiveDirectChats(String address, Boolean hasChatReference) throws DataException {
// Find chat messages involving address
String directSql = "SELECT other_address, name, latest_timestamp, sender, sender_name "
+ "FROM ("
@ -275,11 +291,21 @@ public class HSQLDBChatRepository implements ChatRepository {
+ "NATURAL JOIN Transactions "
+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
+ "WHERE (sender = other_address AND recipient = ?) "
+ "OR (sender = ? AND recipient = other_address) "
+ "ORDER BY created_when DESC "
+ "LIMIT 1"
+ ") AS LatestMessages "
+ "LEFT OUTER JOIN Names ON owner = other_address";
+ "OR (sender = ? AND recipient = other_address) ";
// Apply hasChatReference filter
if (hasChatReference != null) {
if (hasChatReference) {
directSql += "AND chat_reference IS NOT NULL ";
} else {
directSql += "AND chat_reference IS NULL ";
}
}
directSql += "ORDER BY created_when DESC "
+ "LIMIT 1"
+ ") AS LatestMessages "
+ "LEFT OUTER JOIN Names ON owner = other_address";
Object[] bindParams = new Object[] { address, address, address, address };
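For illustration, with hasChatReference == Boolean.TRUE the filter lands in the inner query roughly as follows (reconstructed from the strings above, not copied from a test):

    public class ChatReferenceClauseSketch {
        public static void main(String[] args) {
            Boolean hasChatReference = Boolean.TRUE;
            String sql = "WHERE (sender = other_address AND recipient = ?) "
                    + "OR (sender = ? AND recipient = other_address) ";
            if (hasChatReference != null)
                sql += hasChatReference ? "AND chat_reference IS NOT NULL " : "AND chat_reference IS NULL ";
            sql += "ORDER BY created_when DESC LIMIT 1";
            System.out.println(sql);
        }
    }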

View File

@ -454,40 +454,41 @@ public class HSQLDBDatabaseUpdates {
case 12:
// Groups
stmt.execute("CREATE TABLE Groups (group_id GroupID, owner QortalAddress NOT NULL, group_name GroupName NOT NULL, "
// NOTE: We need to set Groups to `GROUPS` here to avoid SQL Standard Keywords in HSQLDB v2.7.4
stmt.execute("CREATE TABLE `GROUPS` (group_id GroupID, owner QortalAddress NOT NULL, group_name GroupName NOT NULL, "
+ "created_when EpochMillis NOT NULL, updated_when EpochMillis, is_open BOOLEAN NOT NULL, "
+ "approval_threshold TINYINT NOT NULL, min_block_delay INTEGER NOT NULL, max_block_delay INTEGER NOT NULL, "
+ "reference Signature, creation_group_id GroupID, reduced_group_name GroupName NOT NULL, "
+ "description GenericDescription NOT NULL, PRIMARY KEY (group_id))");
// For finding groups by name
stmt.execute("CREATE INDEX GroupNameIndex on Groups (group_name)");
stmt.execute("CREATE INDEX GroupNameIndex on `GROUPS` (group_name)");
// For finding groups by reduced name
stmt.execute("CREATE INDEX GroupReducedNameIndex on Groups (reduced_group_name)");
stmt.execute("CREATE INDEX GroupReducedNameIndex on `GROUPS` (reduced_group_name)");
// For finding groups by owner
stmt.execute("CREATE INDEX GroupOwnerIndex ON Groups (owner)");
stmt.execute("CREATE INDEX GroupOwnerIndex ON `GROUPS` (owner)");
// We need a corresponding trigger to make sure new group_id values are assigned sequentially starting from 1
stmt.execute("CREATE TRIGGER Group_ID_Trigger BEFORE INSERT ON Groups "
stmt.execute("CREATE TRIGGER Group_ID_Trigger BEFORE INSERT ON `GROUPS` "
+ "REFERENCING NEW ROW AS new_row FOR EACH ROW WHEN (new_row.group_id IS NULL) "
+ "SET new_row.group_id = (SELECT IFNULL(MAX(group_id) + 1, 1) FROM Groups)");
+ "SET new_row.group_id = (SELECT IFNULL(MAX(group_id) + 1, 1) FROM `GROUPS`)");
// Admins
stmt.execute("CREATE TABLE GroupAdmins (group_id GroupID, admin QortalAddress, reference Signature NOT NULL, "
+ "PRIMARY KEY (group_id, admin), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
+ "PRIMARY KEY (group_id, admin), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
// For finding groups by admin address
stmt.execute("CREATE INDEX GroupAdminIndex ON GroupAdmins (admin)");
// Members
stmt.execute("CREATE TABLE GroupMembers (group_id GroupID, address QortalAddress, "
+ "joined_when EpochMillis NOT NULL, reference Signature NOT NULL, "
+ "PRIMARY KEY (group_id, address), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
+ "PRIMARY KEY (group_id, address), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
// For finding groups by member address
stmt.execute("CREATE INDEX GroupMemberIndex ON GroupMembers (address)");
// Invites
stmt.execute("CREATE TABLE GroupInvites (group_id GroupID, inviter QortalAddress, invitee QortalAddress, "
+ "expires_when EpochMillis, reference Signature, "
+ "PRIMARY KEY (group_id, invitee), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
+ "PRIMARY KEY (group_id, invitee), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
// For finding invites sent by inviter
stmt.execute("CREATE INDEX GroupInviteInviterIndex ON GroupInvites (inviter)");
// For finding invites by group
@ -503,7 +504,7 @@ public class HSQLDBDatabaseUpdates {
// NULL expires_when means does not expire!
stmt.execute("CREATE TABLE GroupBans (group_id GroupID, offender QortalAddress, admin QortalAddress NOT NULL, "
+ "banned_when EpochMillis NOT NULL, reason GenericDescription NOT NULL, expires_when EpochMillis, reference Signature NOT NULL, "
+ "PRIMARY KEY (group_id, offender), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
+ "PRIMARY KEY (group_id, offender), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
// For expiry maintenance
stmt.execute("CREATE INDEX GroupBanExpiryIndex ON GroupBans (expires_when)");
break;
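Any later statement that touches this table presumably needs the same quoting. A hedged example, assuming the same identifier-quoting mode as the DDL above:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class QuotedGroupsSketch {
        // GROUPS is a reserved keyword in HSQLDB 2.7.4, so the table name must stay quoted
        static PreparedStatement findByOwner(Connection connection, String owner) throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(
                    "SELECT group_id, group_name FROM `GROUPS` WHERE owner = ?");
            stmt.setString(1, owner);
            return stmt;
        }
    }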

View File

@ -351,7 +351,7 @@ public class HSQLDBGroupRepository implements GroupRepository {
// Group Admins
@Override
public GroupAdminData getAdmin(int groupId, String address) throws DataException {
public GroupAdminData getAdminFaulty(int groupId, String address) throws DataException {
try (ResultSet resultSet = this.repository.checkedExecute("SELECT admin, reference FROM GroupAdmins WHERE group_id = ?", groupId)) {
if (resultSet == null)
return null;
@ -365,6 +365,21 @@ public class HSQLDBGroupRepository implements GroupRepository {
}
}
@Override
public GroupAdminData getAdmin(int groupId, String address) throws DataException {
try (ResultSet resultSet = this.repository.checkedExecute("SELECT admin, reference FROM GroupAdmins WHERE group_id = ? AND admin = ?", groupId, address)) {
if (resultSet == null)
return null;
String admin = resultSet.getString(1);
byte[] reference = resultSet.getBytes(2);
return new GroupAdminData(groupId, admin, reference);
} catch (SQLException e) {
throw new DataException("Unable to fetch group admin from repository", e);
}
}
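The replaced query lacked the AND admin = ? predicate, so it effectively returned the group's first admin row regardless of the address argument. A hedged sketch of the behavioral difference, assuming both methods are exposed on the GroupRepository interface:

    // Illustrative only
    static void compare(Repository repository, int groupId, String address) throws DataException {
        // Old query: returns the group's first admin row, whatever 'address' was passed
        GroupAdminData any = repository.getGroupRepository().getAdminFaulty(groupId, address);
        // Fixed query: returns null unless 'address' really is an admin of groupId
        GroupAdminData exact = repository.getGroupRepository().getAdmin(groupId, address);
        System.out.println(any + " vs " + exact);
    }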
@Override
public boolean adminExists(int groupId, String address) throws DataException {
try {

View File

@ -5,6 +5,8 @@ import org.apache.logging.log4j.Logger;
import org.hsqldb.HsqlException;
import org.hsqldb.error.ErrorCode;
import org.hsqldb.jdbc.HSQLDBPool;
import org.hsqldb.jdbc.HSQLDBPoolMonitored;
import org.qortal.data.system.DbConnectionInfo;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryFactory;
@ -14,6 +16,8 @@ import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
public class HSQLDBRepositoryFactory implements RepositoryFactory {
@ -57,7 +61,13 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
HSQLDBRepository.attemptRecovery(connectionUrl, "backup");
}
this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
if(Settings.getInstance().isConnectionPoolMonitorEnabled()) {
this.connectionPool = new HSQLDBPoolMonitored(Settings.getInstance().getRepositoryConnectionPoolSize());
}
else {
this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
}
this.connectionPool.setUrl(this.connectionUrl);
Properties properties = new Properties();
@ -153,4 +163,19 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
return HSQLDBRepository.isDeadlockException(e);
}
/**
* Get Connection States
*
* Get the database connection states, if database connection pool monitoring is enabled.
*
* @return the connection states if enabled, otherwise an empty list
*/
public List<DbConnectionInfo> getDbConnectionsStates() {
if( Settings.getInstance().isConnectionPoolMonitorEnabled() ) {
return ((HSQLDBPoolMonitored) this.connectionPool).getDbConnectionsStates();
}
else {
return new ArrayList<>(0);
}
}
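A hedged caller sketch; an empty list simply means monitoring is switched off:

    import java.util.List;
    import org.qortal.data.system.DbConnectionInfo;

    public class PoolMonitorSketch {
        static void logConnectionStates(HSQLDBRepositoryFactory factory) {
            // Empty when connectionPoolMonitorEnabled is false
            List<DbConnectionInfo> states = factory.getDbConnectionsStates();
            states.forEach(state -> System.out.println(state));
        }
    }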
}

View File

@ -213,7 +213,7 @@ public class Settings {
public long recoveryModeTimeout = 9999999999999L;
/** Minimum peer version number required in order to sync with them */
private String minPeerVersion = "4.6.0";
private String minPeerVersion = "4.6.5";
/** Whether to allow connections with peers below minPeerVersion
* If true, we won't sync with them but they can still sync with us, and will show in the peers list
* If false, sync will be blocked both ways, and they will not appear in the peers list */
@ -222,7 +222,7 @@ public class Settings {
/** Minimum time (in seconds) that we should attempt to remain connected to a peer for */
private int minPeerConnectionTime = 2 * 60 * 60; // seconds
/** Maximum time (in seconds) that we should attempt to remain connected to a peer for */
private int maxPeerConnectionTime = 4 * 60 * 60; // seconds
private int maxPeerConnectionTime = 6 * 60 * 60; // seconds
/** Maximum time (in seconds) that a peer should remain connected when requesting QDN data */
private int maxDataPeerConnectionTime = 30 * 60; // seconds
@ -281,7 +281,10 @@ public class Settings {
// Auto-update sources
private String[] autoUpdateRepos = new String[] {
"https://github.com/Qortal/qortal/raw/%s/qortal.update",
"https://raw.githubusercontent.com@151.101.16.133/Qortal/qortal/%s/qortal.update"
"https://raw.githubusercontent.com@151.101.16.133/Qortal/qortal/%s/qortal.update",
"https://qortal.link/Auto-Update/%s/qortal.update",
"https://qortal.name/Auto-Update/%s/qortal.update",
"https://update.qortal.org/Auto-Update/%s/qortal.update"
};
// Lists
@ -383,7 +386,7 @@ public class Settings {
/**
* DB Cache Enabled?
*/
private boolean dbCacheEnabled = false;
private boolean dbCacheEnabled = true;
/**
* DB Cache Thread Priority
@ -441,6 +444,107 @@ public class Settings {
*/
private long archivingPause = 3000;
/**
* Enable Balance Recorder?
*
* True for balance recording, otherwise false.
*/
private boolean balanceRecorderEnabled = false;
/**
* Balance Recorder Priority
*
* The thread priority (1 is lowest, 10 is highest) of the balance recorder thread, if enabled.
*/
private int balanceRecorderPriority = 1;
/**
* Balance Recorder Frequency
*
* How often the balances will be recorded, if enabled, measured in minutes.
*/
private int balanceRecorderFrequency = 20;
/**
* Balance Recorder Capacity
*
* The number of balance recorder ranges that will be held in memory.
*/
private int balanceRecorderCapacity = 1000;
/**
* Minimum Balance Recording
*
* The minimum recorded balance change, in Qortoshis (1/100000000 QORT)
*/
private long minimumBalanceRecording = 100000000;
/**
* Top Balance Logging Limit
*
* The maximum number of top balance changes to show in the logs for any given block range.
*/
private long topBalanceLoggingLimit = 100;
/**
* Balance Recorder Rollback Allowance
*
* If the balance recorder is enabled, it must retain its prior balances for this number of blocks in case of
* a blockchain rollback and reorganization.
*/
private int balanceRecorderRollbackAllowance = 100;
/**
* Is Reward Recording Only
*
* Set true to only retain the recordings that cover reward distributions, otherwise set false.
*/
private boolean rewardRecordingOnly = true;
/**
* Is The Connection Monitored?
*
* Is the database connection pool monitored?
*/
private boolean connectionPoolMonitorEnabled = false;
/**
* Build Arbitrary Resources Batch Size
*
* The number of resources to batch per iteration when rebuilding.
*/
private int buildArbitraryResourcesBatchSize = 200;
/**
* Arbitrary Indexing Priority
*
* The thread priority when indexing arbitrary resources.
*/
private int arbitraryIndexingPriority = 5;
/**
* Arbitrary Indexing Frequency (In Minutes)
*
* The frequency at which the arbitrary indices are cached.
*/
private int arbitraryIndexingFrequency = 10;
private boolean rebuildArbitraryResourceCacheTaskEnabled = false;
/**
* Rebuild Arbitrary Resource Cache Task Delay (In Minutes)
*
* Waiting period before the first rebuild task is started.
*/
private int rebuildArbitraryResourceCacheTaskDelay = 300;
/**
* Rebuild Arbitrary Resource Cache Task Period (In Hours)
*
* The frequency at which the arbitrary resource cache is rebuilt.
*/
private int rebuildArbitraryResourceCacheTaskPeriod = 24;
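A short sketch of how a caller might consult the new knobs, using the accessors added further down in this file (the wiring itself is illustrative):

    public class RecorderWiringSketch {
        static void maybeStartRecorder() {
            Settings settings = Settings.getInstance();
            if (settings.isBalanceRecorderEnabled()) {
                int priority = settings.getBalanceRecorderPriority();   // 1-10
                int frequency = settings.getBalanceRecorderFrequency(); // minutes
                int capacity = settings.getBalanceRecorderCapacity();   // ranges kept in memory
                // ... hand these to HSQLDBCacheUtils.startRecordingBalances(...)
            }
        }
    }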
// Domain mapping
public static class ThreadLimit {
private String messageType;
@ -1230,4 +1334,64 @@ public class Settings {
public long getArchivingPause() {
return archivingPause;
}
public int getBalanceRecorderPriority() {
return balanceRecorderPriority;
}
public int getBalanceRecorderFrequency() {
return balanceRecorderFrequency;
}
public int getBalanceRecorderCapacity() {
return balanceRecorderCapacity;
}
public boolean isBalanceRecorderEnabled() {
return balanceRecorderEnabled;
}
public long getMinimumBalanceRecording() {
return minimumBalanceRecording;
}
public long getTopBalanceLoggingLimit() {
return topBalanceLoggingLimit;
}
public int getBalanceRecorderRollbackAllowance() {
return balanceRecorderRollbackAllowance;
}
public boolean isRewardRecordingOnly() {
return rewardRecordingOnly;
}
public boolean isConnectionPoolMonitorEnabled() {
return connectionPoolMonitorEnabled;
}
public int getBuildArbitraryResourcesBatchSize() {
return buildArbitraryResourcesBatchSize;
}
public int getArbitraryIndexingPriority() {
return arbitraryIndexingPriority;
}
public int getArbitraryIndexingFrequency() {
return arbitraryIndexingFrequency;
}
public boolean isRebuildArbitraryResourceCacheTaskEnabled() {
return rebuildArbitraryResourceCacheTaskEnabled;
}
public int getRebuildArbitraryResourceCacheTaskDelay() {
return rebuildArbitraryResourceCacheTaskDelay;
}
public int getRebuildArbitraryResourceCacheTaskPeriod() {
return rebuildArbitraryResourceCacheTaskPeriod;
}
}

View File

@ -9,6 +9,7 @@ import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
import org.qortal.arbitrary.misc.Service;
import org.qortal.block.BlockChain;
import org.qortal.controller.arbitrary.ArbitraryDataManager;
import org.qortal.controller.arbitrary.ArbitraryTransactionDataHashWrapper;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.crypto.Crypto;
import org.qortal.crypto.MemoryPoW;
@ -31,8 +32,12 @@ import org.qortal.utils.ArbitraryTransactionUtils;
import org.qortal.utils.NTP;
import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;
public class ArbitraryTransaction extends Transaction {
@ -303,8 +308,13 @@ public class ArbitraryTransaction extends Transaction {
// Add/update arbitrary resource caches, but don't update the status as this involves time-consuming
// disk reads, and is more prone to failure. The status will be updated on metadata retrieval, or when
// accessing the resource.
this.updateArbitraryResourceCache(repository);
this.updateArbitraryMetadataCache(repository);
// Also, must add this transaction as a latest transaction, since it has not been saved to the
// repository yet.
this.updateArbitraryResourceCacheIncludingMetadata(
repository,
Set.of(new ArbitraryTransactionDataHashWrapper(arbitraryTransactionData)),
new HashMap<>(0)
);
repository.saveChanges();
@ -360,7 +370,10 @@ public class ArbitraryTransaction extends Transaction {
*
* @throws DataException
*/
public void updateArbitraryResourceCache(Repository repository) throws DataException {
public void updateArbitraryResourceCacheIncludingMetadata(
Repository repository,
Set<ArbitraryTransactionDataHashWrapper> latestTransactionWrappers,
Map<ArbitraryTransactionDataHashWrapper, ArbitraryResourceData> resourceByWrapper) throws DataException {
// Don't cache resources without a name (such as auto updates)
if (arbitraryTransactionData.getName() == null) {
return;
@ -385,17 +398,33 @@ public class ArbitraryTransaction extends Transaction {
arbitraryResourceData.name = name;
arbitraryResourceData.identifier = identifier;
// Get the latest transaction
ArbitraryTransactionData latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
if (latestTransactionData == null) {
// We don't have a latest transaction, so delete from cache
repository.getArbitraryRepository().delete(arbitraryResourceData);
return;
}
final ArbitraryTransactionDataHashWrapper wrapper = new ArbitraryTransactionDataHashWrapper(arbitraryTransactionData);
// Get existing cached entry if it exists
ArbitraryResourceData existingArbitraryResourceData = repository.getArbitraryRepository()
.getArbitraryResource(service, name, identifier);
ArbitraryTransactionData latestTransactionData;
if( latestTransactionWrappers.contains(wrapper)) {
latestTransactionData
= latestTransactionWrappers.stream()
.filter( latestWrapper -> latestWrapper.equals(wrapper))
.findAny().get()
.getData();
}
else {
// Get the latest transaction
latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
if (latestTransactionData == null) {
LOGGER.info("We don't have a latest transaction, so delete from cache: arbitraryResourceData = " + arbitraryResourceData);
// We don't have a latest transaction, so delete from cache
repository.getArbitraryRepository().delete(arbitraryResourceData);
return;
}
}
ArbitraryResourceData existingArbitraryResourceData = resourceByWrapper.get(wrapper);
if( existingArbitraryResourceData == null ) {
// Get existing cached entry if it exists
existingArbitraryResourceData = repository.getArbitraryRepository()
.getArbitraryResource(service, name, identifier);
}
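This short-circuit only works if ArbitraryTransactionDataHashWrapper keys on the same fields the repository lookup uses (name, service, identifier). A plausible but unverified sketch of that shape:

    // Hedged sketch - the real class lives in org.qortal.controller.arbitrary and may differ
    import java.util.Objects;

    public class HashWrapperSketch {
        private final String name;
        private final int service;
        private final String identifier; // null-as-"default" handling is an assumption

        HashWrapperSketch(String name, int service, String identifier) {
            this.name = name;
            this.service = service;
            this.identifier = identifier;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof HashWrapperSketch)) return false;
            HashWrapperSketch other = (HashWrapperSketch) o;
            return service == other.service
                    && Objects.equals(name, other.name)
                    && Objects.equals(identifier, other.identifier);
        }

        @Override
        public int hashCode() {
            return Objects.hash(name, service, identifier);
        }
    }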
// Check for existing cached data
if (existingArbitraryResourceData == null) {
@ -404,6 +433,7 @@ public class ArbitraryTransaction extends Transaction {
arbitraryResourceData.updated = null;
}
else {
resourceByWrapper.put(wrapper, existingArbitraryResourceData);
// An entry already exists - update created time from current transaction if this is older
arbitraryResourceData.created = Math.min(existingArbitraryResourceData.created, arbitraryTransactionData.getTimestamp());
@ -421,6 +451,34 @@ public class ArbitraryTransaction extends Transaction {
// Save
repository.getArbitraryRepository().save(arbitraryResourceData);
// Update metadata for latest transaction if it is local
if (latestTransactionData.getMetadataHash() != null) {
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(latestTransactionData.getMetadataHash(), latestTransactionData.getSignature());
if (metadataFile.exists()) {
ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
try {
transactionMetadata.read();
ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
metadata.setArbitraryResourceData(arbitraryResourceData);
metadata.setTitle(transactionMetadata.getTitle());
metadata.setDescription(transactionMetadata.getDescription());
metadata.setCategory(transactionMetadata.getCategory());
metadata.setTags(transactionMetadata.getTags());
repository.getArbitraryRepository().save(metadata);
} catch (IOException e) {
// Ignore, as we can add it again later
}
} else {
// We don't have a local copy of this metadata file, so delete it from the cache
// It will be re-added if the file later arrives via the network
ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
metadata.setArbitraryResourceData(arbitraryResourceData);
repository.getArbitraryRepository().delete(metadata);
}
}
}
public void updateArbitraryResourceStatus(Repository repository) throws DataException {
@ -455,60 +513,4 @@ public class ArbitraryTransaction extends Transaction {
repository.getArbitraryRepository().setStatus(arbitraryResourceData, status);
}
public void updateArbitraryMetadataCache(Repository repository) throws DataException {
// Get the latest transaction
ArbitraryTransactionData latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
if (latestTransactionData == null) {
// We don't have a latest transaction, so give up
return;
}
Service service = latestTransactionData.getService();
String name = latestTransactionData.getName();
String identifier = latestTransactionData.getIdentifier();
if (service == null) {
// Unsupported service - ignore this resource
return;
}
// In the cache we store null identifiers as "default", as it is part of the primary key
if (identifier == null) {
identifier = "default";
}
ArbitraryResourceData arbitraryResourceData = new ArbitraryResourceData();
arbitraryResourceData.service = service;
arbitraryResourceData.name = name;
arbitraryResourceData.identifier = identifier;
// Update metadata for latest transaction if it is local
if (latestTransactionData.getMetadataHash() != null) {
ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(latestTransactionData.getMetadataHash(), latestTransactionData.getSignature());
if (metadataFile.exists()) {
ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
try {
transactionMetadata.read();
ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
metadata.setArbitraryResourceData(arbitraryResourceData);
metadata.setTitle(transactionMetadata.getTitle());
metadata.setDescription(transactionMetadata.getDescription());
metadata.setCategory(transactionMetadata.getCategory());
metadata.setTags(transactionMetadata.getTags());
repository.getArbitraryRepository().save(metadata);
} catch (IOException e) {
// Ignore, as we can add it again later
}
} else {
// We don't have a local copy of this metadata file, so delete it from the cache
// It will be re-added if the file later arrives via the network
ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
metadata.setArbitraryResourceData(arbitraryResourceData);
repository.getArbitraryRepository().delete(metadata);
}
}
}
}

View File

@ -2,6 +2,7 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.CancelGroupBanTransactionData;
@ -12,6 +13,7 @@ import org.qortal.repository.Repository;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class CancelGroupBanTransaction extends Transaction {
@ -70,9 +72,26 @@ public class CancelGroupBanTransaction extends Transaction {
if (!this.repository.getGroupRepository().adminExists(groupId, admin.getAddress()))
return ValidationResult.NOT_GROUP_ADMIN;
// Can't unban if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
// Can't cancel ban if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
else {
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// if null ownership group, then check for admin approval
if(groupOwnedByNullAccount ) {
// Require approval if transaction relates to a group owned by the null account
if (!this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
}
// Can't cancel ban if not group's current owner
else if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
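The same height-gated branch recurs in GroupBanTransaction, GroupKickTransaction, and the invite transactions below. A hedged refactoring sketch of the shared rule (helper name invented for illustration):

    // Illustrative helper only, not part of the diff
    private ValidationResult checkOwnerOrNullGroupApproval(GroupData groupData, Account admin) throws DataException {
        String groupOwner = this.repository.getGroupRepository().getOwner(groupData.getGroupId());
        boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
        if (groupOwnedByNullAccount)
            // Null-owned groups fall back to group-admin approval
            return this.needsGroupApproval() ? ValidationResult.OK : ValidationResult.GROUP_APPROVAL_REQUIRED;
        // Otherwise only the group's current owner may perform the action
        return admin.getAddress().equals(groupData.getOwner())
                ? ValidationResult.OK : ValidationResult.INVALID_GROUP_OWNER;
    }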
Account member = getMember();

View File

@ -2,6 +2,7 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.CancelGroupInviteTransactionData;
@ -12,6 +13,7 @@ import org.qortal.repository.Repository;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class CancelGroupInviteTransaction extends Transaction {
@ -80,6 +82,16 @@ public class CancelGroupInviteTransaction extends Transaction {
if (admin.getConfirmedBalance(Asset.QORT) < this.cancelGroupInviteTransactionData.getFee())
return ValidationResult.NO_BALANCE;
// if null ownership group, then check for admin approval
if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() ) {
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// Require approval if transaction relates to a group owned by the null account
if (groupOwnedByNullAccount && !this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
}
return ValidationResult.OK;
}

View File

@ -2,6 +2,7 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.GroupBanTransactionData;
@ -12,6 +13,7 @@ import org.qortal.repository.Repository;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class GroupBanTransaction extends Transaction {
@ -70,9 +72,25 @@ public class GroupBanTransaction extends Transaction {
if (!this.repository.getGroupRepository().adminExists(groupId, admin.getAddress()))
return ValidationResult.NOT_GROUP_ADMIN;
// Can't ban if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
// Can't ban if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
else {
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// if null ownership group, then check for admin approval
if(groupOwnedByNullAccount ) {
// Require approval if transaction relates to a group owned by the null account
if (!this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
}
else if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
Account offender = getOffender();

View File

@ -2,6 +2,7 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.GroupInviteTransactionData;
import org.qortal.data.transaction.TransactionData;
@ -11,6 +12,7 @@ import org.qortal.repository.Repository;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class GroupInviteTransaction extends Transaction {
@ -85,6 +87,16 @@ public class GroupInviteTransaction extends Transaction {
if (admin.getConfirmedBalance(Asset.QORT) < this.groupInviteTransactionData.getFee())
return ValidationResult.NO_BALANCE;
// if null ownership group, then check for admin approval
if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() ) {
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// Require approval if transaction relates to a group owned by the null account
if (groupOwnedByNullAccount && !this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
}
return ValidationResult.OK;
}

View File

@ -3,6 +3,7 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.asset.Asset;
import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.GroupKickTransactionData;
@ -14,6 +15,7 @@ import org.qortal.repository.Repository;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class GroupKickTransaction extends Transaction {
@ -82,9 +84,26 @@ public class GroupKickTransaction extends Transaction {
if (!admin.getAddress().equals(groupData.getOwner()) && groupRepository.adminExists(groupId, member.getAddress()))
return ValidationResult.INVALID_GROUP_OWNER;
// Can't kick if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
// Can't kick if not group's current owner
if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
else {
String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
// if null ownership group, then check for admin approval
if(groupOwnedByNullAccount ) {
// Require approval if transaction relates to a group owned by the null account
if (!this.needsGroupApproval())
return ValidationResult.GROUP_APPROVAL_REQUIRED;
}
// Can't kick if not group's current owner
else if (!admin.getAddress().equals(groupData.getOwner()))
return ValidationResult.INVALID_GROUP_OWNER;
}
// Check creator has enough funds
if (admin.getConfirmedBalance(Asset.QORT) < this.groupKickTransactionData.getFee())

View File

@ -123,7 +123,7 @@ public class RewardShareTransaction extends Transaction {
final boolean isCancellingSharePercent = this.rewardShareTransactionData.getSharePercent() < 0;
// Creator themselves needs to be allowed to mint (unless cancelling)
if (!isCancellingSharePercent && !creator.canMint())
if (!isCancellingSharePercent && !creator.canMint(false))
return ValidationResult.NOT_MINTING_ACCOUNT;
// Qortal: special rules in play depending whether recipient is also minter

View File

@ -65,11 +65,11 @@ public abstract class Transaction {
UPDATE_GROUP(23, true),
ADD_GROUP_ADMIN(24, true),
REMOVE_GROUP_ADMIN(25, true),
GROUP_BAN(26, false),
CANCEL_GROUP_BAN(27, false),
GROUP_KICK(28, false),
GROUP_INVITE(29, false),
CANCEL_GROUP_INVITE(30, false),
GROUP_BAN(26, true),
CANCEL_GROUP_BAN(27, true),
GROUP_KICK(28, true),
GROUP_INVITE(29, true),
CANCEL_GROUP_INVITE(30, true),
JOIN_GROUP(31, false),
LEAVE_GROUP(32, false),
GROUP_APPROVAL(33, false),
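The second enum argument appears to mark transaction types that can be routed through group approval; flipping these five to true lines up with the approval checks added to the corresponding transaction classes above. A hedged reconstruction of the pattern (field name assumed, not confirmed):

    // Sketch only; the real enum carries more state than this
    enum TransactionTypeSketch {
        GROUP_BAN(26, true),   // now routed through group approval
        JOIN_GROUP(31, false); // unchanged

        final int value;
        final boolean needsApproval; // field name assumed

        TransactionTypeSketch(int value, boolean needsApproval) {
            this.value = value;
            this.needsApproval = needsApproval;
        }
    }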

View File

@ -0,0 +1,250 @@
package org.qortal.utils;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.exc.InvalidFormatException;
import com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException;
import org.apache.commons.lang3.ArrayUtils;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.api.SearchMode;
import org.qortal.arbitrary.ArbitraryDataFile;
import org.qortal.arbitrary.ArbitraryDataReader;
import org.qortal.arbitrary.exception.MissingDataException;
import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryDataIndex;
import org.qortal.data.arbitrary.ArbitraryDataIndexDetail;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.arbitrary.IndexCache;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Timer;
import java.util.TimerTask;
import java.util.stream.Collectors;
import java.util.stream.Stream;
public class ArbitraryIndexUtils {
public static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
private static final Logger LOGGER = LogManager.getLogger(ArbitraryIndexUtils.class);
public static final String INDEX_CACHE_TIMER = "Arbitrary Index Cache Timer";
public static final String INDEX_CACHE_TIMER_TASK = "Arbitrary Index Cache Timer Task";
public static void startCaching(int priorityRequested, int frequency) {
Timer timer = buildTimer(INDEX_CACHE_TIMER, priorityRequested);
TimerTask task = new TimerTask() {
@Override
public void run() {
Thread.currentThread().setName(INDEX_CACHE_TIMER_TASK);
try {
fillCache(IndexCache.getInstance());
} catch (IOException | DataException e) {
LOGGER.error(e.getMessage(), e);
}
}
};
// delay 1 second
timer.scheduleAtFixedRate(task, 1_000, frequency * 60_000);
}
private static void fillCache(IndexCache instance) throws DataException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
List<ArbitraryResourceData> indexResources
= repository.getArbitraryRepository().searchArbitraryResources(
Service.JSON,
null,
"idx-",
null,
null,
null,
null,
true,
null,
false,
SearchMode.ALL,
0,
null,
null,
null,
null,
null,
null,
null,
null,
true);
List<ArbitraryDataIndexDetail> indexDetails = new ArrayList<>();
LOGGER.debug("processing index resource data: count = " + indexResources.size());
// process all index resources
for( ArbitraryResourceData indexResource : indexResources ) {
try {
LOGGER.debug("processing index resource: name = " + indexResource.name + ", identifier = " + indexResource.identifier);
String json = ArbitraryIndexUtils.getJson(indexResource.name, indexResource.identifier);
// map the JSON string to a list of Java objects
List<ArbitraryDataIndex> indices = OBJECT_MAPPER.readValue(json, new TypeReference<List<ArbitraryDataIndex>>() {});
LOGGER.debug("processed indices = " + indices);
// rank and create index detail for each index in this index resource
for( int rank = 1; rank <= indices.size(); rank++ ) {
indexDetails.add( new ArbitraryDataIndexDetail(indexResource.name, rank, indices.get(rank - 1), indexResource.identifier ));
}
} catch (InvalidFormatException e) {
LOGGER.debug("invalid format, skipping: " + indexResource);
} catch (UnrecognizedPropertyException e) {
LOGGER.debug("unrecognized property, skipping " + indexResource);
}
}
LOGGER.debug("processing indices by term ...");
Map<String, List<ArbitraryDataIndexDetail>> indicesByTerm
= indexDetails.stream().collect(
Collectors.toMap(
detail -> detail.term, // map by term
detail -> List.of(detail), // create list for term
(list1, list2) // merge lists for same term
-> Stream.of(list1, list2)
.flatMap(List::stream)
.collect(Collectors.toList())
)
);
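Hedged aside: assuming detail.term is never null, the merge-based collect above is equivalent to the simpler groupingBy form:

    Map<String, List<ArbitraryDataIndexDetail>> indicesByTerm =
            indexDetails.stream().collect(Collectors.groupingBy(detail -> detail.term));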
LOGGER.info("processed indices by term: count = " + indicesByTerm.size());
// lock, clear old, load new
synchronized( IndexCache.getInstance().getIndicesByTerm() ) {
IndexCache.getInstance().getIndicesByTerm().clear();
IndexCache.getInstance().getIndicesByTerm().putAll(indicesByTerm);
}
LOGGER.info("loaded indices by term");
LOGGER.debug("processing indices by issuer ...");
Map<String, List<ArbitraryDataIndexDetail>> indicesByIssuer
= indexDetails.stream().collect(
Collectors.toMap(
detail -> detail.issuer, // map by issuer
detail -> List.of(detail), // create list for issuer
(list1, list2) // merge lists for same issuer
-> Stream.of(list1, list2)
.flatMap(List::stream)
.collect(Collectors.toList())
)
);
LOGGER.info("processed indices by issuer: count = " + indicesByIssuer.size());
// lock, clear old, load new
synchronized( IndexCache.getInstance().getIndicesByIssuer() ) {
IndexCache.getInstance().getIndicesByIssuer().clear();
IndexCache.getInstance().getIndicesByIssuer().putAll(indicesByIssuer);
}
LOGGER.info("loaded indices by issuer");
}
}
private static Timer buildTimer( final String name, int priorityRequested) {
// ensure priority is within 1-10 (0 would make Thread.setPriority throw)
final int priority = Math.max(1, Math.min(10, priorityRequested));
// Create a custom Timer with updated priority threads
Timer timer = new Timer(true) { // 'true' to make the Timer daemon
@Override
public void schedule(TimerTask task, long delay) {
Thread thread = new Thread(task, name) {
@Override
public void run() {
this.setPriority(priority);
super.run();
}
};
thread.setPriority(priority);
thread.start();
}
};
return timer;
}
public static String getJsonWithExceptionHandling( String name, String identifier ) {
try {
return getJson(name, identifier);
}
catch( Exception e ) {
LOGGER.error(e.getMessage(), e);
return e.getMessage();
}
}
public static String getJson(String name, String identifier) throws IOException {
try {
ArbitraryDataReader arbitraryDataReader
= new ArbitraryDataReader(name, ArbitraryDataFile.ResourceIdType.NAME, Service.JSON, identifier);
int attempts = 0;
int maxAttempts = 5;
while (!Controller.isStopping()) {
attempts++;
if (!arbitraryDataReader.isBuilding()) {
try {
arbitraryDataReader.loadSynchronously(false);
break;
} catch (MissingDataException e) {
if (attempts > maxAttempts) {
// Give up after 5 attempts
throw new IOException("Data unavailable. Please try again later.");
}
}
}
Thread.sleep(3000L);
}
java.nio.file.Path outputPath = arbitraryDataReader.getFilePath();
if (outputPath == null) {
// Assume the resource doesn't exist
throw new IOException( "File not found");
}
// List the output directory contents, ignoring the .qortal metadata entry, and use the first file
String[] files = ArrayUtils.removeElement(outputPath.toFile().list(), ".qortal");
String filepath = files[0];
java.nio.file.Path path = Paths.get(outputPath.toString(), filepath);
if (!Files.exists(path)) {
String message = String.format("No file exists at filepath: %s", filepath);
throw new IOException( message );
}
String data = Files.readString(path);
return data;
} catch (Exception e) {
throw new IOException(String.format("Unable to load %s %s: %s", Service.JSON, name, e.getMessage()));
}
}
}

View File

@ -24,6 +24,7 @@ import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;
@ -72,23 +73,23 @@ public class ArbitraryTransactionUtils {
return latestPut;
}
public static boolean hasMoreRecentPutTransaction(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
public static Optional<ArbitraryTransactionData> hasMoreRecentPutTransaction(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
byte[] signature = arbitraryTransactionData.getSignature();
if (signature == null) {
// We can't make a sensible decision without a signature
// so it's best to assume there is nothing newer
return false;
return Optional.empty();
}
ArbitraryTransactionData latestPut = ArbitraryTransactionUtils.fetchLatestPut(repository, arbitraryTransactionData);
if (latestPut == null) {
return false;
return Optional.empty();
}
// If the latest PUT transaction has a newer timestamp, it will override the existing transaction
// Any data relating to the older transaction is no longer needed
boolean hasNewerPut = (latestPut.getTimestamp() > arbitraryTransactionData.getTimestamp());
return hasNewerPut;
return hasNewerPut ? Optional.of(latestPut) : Optional.empty();
}
public static boolean completeFileExists(ArbitraryTransactionData transactionData) throws DataException {
@ -208,7 +209,15 @@ public class ArbitraryTransactionUtils {
return ArbitraryTransactionUtils.isFileRecent(filePath, now, cleanupAfter);
}
public static void deleteCompleteFile(ArbitraryTransactionData arbitraryTransactionData, long now, long cleanupAfter) throws DataException {
/**
* Delete Complete File
*
* @param arbitraryTransactionData the transaction whose complete data file may be deleted
* @param now the current time, in milliseconds
* @param cleanupAfter the age threshold, in milliseconds, used to decide whether the file is still too recent to delete
* @return true if the file was deleted, otherwise false
* @throws DataException
*/
public static boolean deleteCompleteFile(ArbitraryTransactionData arbitraryTransactionData, long now, long cleanupAfter) throws DataException {
byte[] completeHash = arbitraryTransactionData.getData();
byte[] signature = arbitraryTransactionData.getSignature();
@ -219,6 +228,11 @@ public class ArbitraryTransactionUtils {
"if needed", Base58.encode(completeHash));
arbitraryDataFile.delete();
return true;
}
else {
return false;
}
}
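A hedged migration sketch for the two changes above: hasMoreRecentPutTransaction now returns the superseding PUT itself rather than a bare boolean, and deleteCompleteFile reports whether anything was actually removed. The repository, transactionData, now and cleanupAfter variables are assumed to be in scope, as in the existing cleanup code, and the logging line is illustrative only.
Optional<ArbitraryTransactionData> latestPut =
        ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, transactionData);
if (latestPut.isPresent()) {
    // a newer PUT supersedes this transaction, so its data file is no longer needed
    boolean deleted = ArbitraryTransactionUtils.deleteCompleteFile(transactionData, now, cleanupAfter);
    LOGGER.debug("superseded by " + Base58.encode(latestPut.get().getSignature()) + ", deleted: " + deleted);
}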

View File

@ -0,0 +1,319 @@
package org.qortal.utils;
import org.qortal.block.Block;
import org.qortal.crypto.Crypto;
import org.qortal.data.PaymentData;
import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AddressAmountData;
import org.qortal.data.account.BlockHeightRange;
import org.qortal.data.account.BlockHeightRangeAddressAmounts;
import org.qortal.data.transaction.ATTransactionData;
import org.qortal.data.transaction.BaseTransactionData;
import org.qortal.data.transaction.BuyNameTransactionData;
import org.qortal.data.transaction.CreateAssetOrderTransactionData;
import org.qortal.data.transaction.DeployAtTransactionData;
import org.qortal.data.transaction.MultiPaymentTransactionData;
import org.qortal.data.transaction.PaymentTransactionData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.data.transaction.TransferAssetTransactionData;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Predicate;
import java.util.stream.Collectors;
public class BalanceRecorderUtils {
public static final Predicate<AddressAmountData> ADDRESS_AMOUNT_DATA_NOT_ZERO = addressAmount -> addressAmount.getAmount() != 0;
public static final Comparator<BlockHeightRangeAddressAmounts> BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR = new Comparator<BlockHeightRangeAddressAmounts>() {
@Override
public int compare(BlockHeightRangeAddressAmounts amounts1, BlockHeightRangeAddressAmounts amounts2) {
// order by range end height; Integer.compare avoids any overflow from subtraction
return Integer.compare(amounts1.getRange().getEnd(), amounts2.getRange().getEnd());
}
};
public static final Comparator<AddressAmountData> ADDRESS_AMOUNT_DATA_COMPARATOR = new Comparator<AddressAmountData>() {
@Override
public int compare(AddressAmountData addressAmountData, AddressAmountData other) {
// order by amount, ascending
return Long.compare(addressAmountData.getAmount(), other.getAmount());
}
};
public static final Comparator<BlockHeightRange> BLOCK_HEIGHT_RANGE_COMPARATOR = new Comparator<BlockHeightRange>() {
@Override
public int compare(BlockHeightRange range1, BlockHeightRange range2) {
return Integer.compare(range1.getEnd(), range2.getEnd());
}
};
/**
* Build Balance Dynamics For Account
*
* @param priorBalances the balances prior to the current height, assuming only one balance per address
* @param accountBalance the current balance
*
* @return the difference between the current balance and the prior balance for the current balance address
*/
public static AddressAmountData buildBalanceDynamicsForAccount(List<AccountBalanceData> priorBalances, AccountBalanceData accountBalance) {
Optional<AccountBalanceData> matchingAccountPriorBalance
= priorBalances.stream()
.filter(priorBalance -> accountBalance.getAddress().equals(priorBalance.getAddress()))
.findFirst();
if(matchingAccountPriorBalance.isPresent()) {
return new AddressAmountData(accountBalance.getAddress(), accountBalance.getBalance() - matchingAccountPriorBalance.get().getBalance());
}
else {
return new AddressAmountData(accountBalance.getAddress(), accountBalance.getBalance());
}
}
public static List<AddressAmountData> buildBalanceDynamics(
final List<AccountBalanceData> balances,
final List<AccountBalanceData> priorBalances,
long minimum,
List<TransactionData> transactions) {
Map<String, Long> amountsByAddress = new HashMap<>(transactions.size());
for( TransactionData transactionData : transactions ) {
mapBalanceModificationsForTransaction(amountsByAddress, transactionData);
}
List<AddressAmountData> addressAmounts
= balances.stream()
.map(balance -> buildBalanceDynamicsForAccount(priorBalances, balance))
.map( data -> adjustAddressAmount(amountsByAddress.getOrDefault(data.getAddress(), 0L), data))
.filter(ADDRESS_AMOUNT_DATA_NOT_ZERO)
.filter(data -> data.getAmount() >= minimum)
.collect(Collectors.toList());
return addressAmounts;
}
public static AddressAmountData adjustAddressAmount(long adjustment, AddressAmountData data) {
return new AddressAmountData(data.getAddress(), data.getAmount() - adjustment);
}
public static void mapBalanceModificationsForTransaction(Map<String, Long> amountsByAddress, TransactionData transactionData) {
String creatorAddress;
// AT Transaction
if( transactionData instanceof ATTransactionData) {
creatorAddress = mapBalanceModificationsForAtTransaction(amountsByAddress, (ATTransactionData) transactionData);
}
// Buy Name Transaction
else if( transactionData instanceof BuyNameTransactionData) {
creatorAddress = mapBalanceModificationsForBuyNameTransaction(amountsByAddress, (BuyNameTransactionData) transactionData);
}
// Create Asset Order Transaction
else if( transactionData instanceof CreateAssetOrderTransactionData) {
//TODO I'm not sure how to handle this one. This hasn't been used at this point in the blockchain.
creatorAddress = Crypto.toAddress(transactionData.getCreatorPublicKey());
}
// Deploy AT Transaction
else if( transactionData instanceof DeployAtTransactionData ) {
creatorAddress = mapBalanceModificationsForDeployAtTransaction(amountsByAddress, (DeployAtTransactionData) transactionData);
}
// Multi Payment Transaction
else if( transactionData instanceof MultiPaymentTransactionData) {
creatorAddress = mapBalanceModificationsForMultiPaymentTransaction(amountsByAddress, (MultiPaymentTransactionData) transactionData);
}
// Payment Transaction
else if( transactionData instanceof PaymentTransactionData ) {
creatorAddress = mapBalanceModicationsForPaymentTransaction(amountsByAddress, (PaymentTransactionData) transactionData);
}
// Transfer Asset Transaction
else if( transactionData instanceof TransferAssetTransactionData) {
creatorAddress = mapBalanceModificationsForTransferAssetTransaction(amountsByAddress, (TransferAssetTransactionData) transactionData);
}
// Other Transactions
else {
creatorAddress = Crypto.toAddress(transactionData.getCreatorPublicKey());
}
// all transactions modify the balance for fees
mapBalanceModifications(amountsByAddress, transactionData.getFee(), creatorAddress, Optional.empty());
}
public static String mapBalanceModificationsForTransferAssetTransaction(Map<String, Long> amountsByAddress, TransferAssetTransactionData transferAssetData) {
String creatorAddress = Crypto.toAddress(transferAssetData.getSenderPublicKey());
if( transferAssetData.getAssetId() == 0) {
mapBalanceModifications(
amountsByAddress,
transferAssetData.getAmount(),
creatorAddress,
Optional.of(transferAssetData.getRecipient())
);
}
return creatorAddress;
}
public static String mapBalanceModicationsForPaymentTransaction(Map<String, Long> amountsByAddress, PaymentTransactionData paymentData) {
String creatorAddress = Crypto.toAddress(paymentData.getCreatorPublicKey());
mapBalanceModifications(amountsByAddress,
paymentData.getAmount(),
creatorAddress,
Optional.of(paymentData.getRecipient())
);
return creatorAddress;
}
public static String mapBalanceModificationsForMultiPaymentTransaction(Map<String, Long> amountsByAddress, MultiPaymentTransactionData multiPaymentData) {
String creatorAddress = Crypto.toAddress(multiPaymentData.getCreatorPublicKey());
for(PaymentData payment : multiPaymentData.getPayments() ) {
mapBalanceModificationsForTransaction(
amountsByAddress,
getPaymentTransactionData(multiPaymentData, payment)
);
}
return creatorAddress;
}
public static String mapBalanceModificationsForDeployAtTransaction(Map<String, Long> amountsByAddress, DeployAtTransactionData deployAtData) {
String creatorAddress = Crypto.toAddress(deployAtData.getCreatorPublicKey());
if( deployAtData.getAssetId() == 0 ) {
mapBalanceModifications(
amountsByAddress,
deployAtData.getAmount(),
creatorAddress,
Optional.of(deployAtData.getAtAddress())
);
}
return creatorAddress;
}
public static String mapBalanceModificationsForBuyNameTransaction(Map<String, Long> amountsByAddress, BuyNameTransactionData buyNameData) {
String creatorAddress = Crypto.toAddress(buyNameData.getCreatorPublicKey());
mapBalanceModifications(
amountsByAddress,
buyNameData.getAmount(),
creatorAddress,
Optional.of(buyNameData.getSeller())
);
return creatorAddress;
}
public static String mapBalanceModificationsForAtTransaction(Map<String, Long> amountsByAddress, ATTransactionData atData) {
String creatorAddress = atData.getATAddress();
if( atData.getAssetId() != null && atData.getAssetId() == 0) {
mapBalanceModifications(
amountsByAddress,
atData.getAmount(),
creatorAddress,
Optional.of(atData.getRecipient())
);
}
return creatorAddress;
}
public static PaymentTransactionData getPaymentTransactionData(MultiPaymentTransactionData multiPaymentData, PaymentData payment) {
return new PaymentTransactionData(
new BaseTransactionData(
multiPaymentData.getTimestamp(),
multiPaymentData.getTxGroupId(),
multiPaymentData.getReference(),
multiPaymentData.getCreatorPublicKey(),
0L,
multiPaymentData.getSignature()
),
payment.getRecipient(),
payment.getAmount()
);
}
public static void mapBalanceModifications(Map<String, Long> amountsByAddress, Long amount, String sender, Optional<String> recipient) {
amountsByAddress.put(
sender,
amountsByAddress.getOrDefault(sender, 0L) - amount
);
if( recipient.isPresent() ) {
amountsByAddress.put(
recipient.get(),
amountsByAddress.getOrDefault(recipient.get(), 0L) + amount
);
}
}
public static void removeRecordingsAboveHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
balancesByHeight.entrySet().stream()
.filter(heightWithBalances -> heightWithBalances.getKey() > currentHeight)
.forEach(heightWithBalances -> balancesByHeight.remove(heightWithBalances.getKey()));
}
public static void removeRecordingsBelowHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
balancesByHeight.entrySet().stream()
.filter(heightWithBalances -> heightWithBalances.getKey() < currentHeight)
.forEach(heightWithBalances -> balancesByHeight.remove(heightWithBalances.getKey()));
}
public static void removeDynamicsOnOrAboveHeight(int currentHeight, CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics) {
balanceDynamics.stream()
.filter(addressAmounts -> addressAmounts.getRange().getEnd() >= currentHeight)
.forEach(addressAmounts -> balanceDynamics.remove(addressAmounts));
}
public static BlockHeightRangeAddressAmounts removeOldestDynamics(CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics) {
// assumes the list is non-empty; min() by end height finds the oldest recording
BlockHeightRangeAddressAmounts oldestDynamics
= balanceDynamics.stream().min(BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR).get();
balanceDynamics.remove(oldestDynamics);
return oldestDynamics;
}
public static Optional<Integer> getPriorHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
Optional<Integer> priorHeight
= balancesByHeight.keySet().stream()
.filter(height -> height < currentHeight)
.sorted(Comparator.reverseOrder()).findFirst();
return priorHeight;
}
/**
* Is Reward Distribution Range?
*
* @param start start height, exclusive
* @param end end height, inclusive
*
* @return true if there is a reward distribution block within this block range, otherwise false
*/
public static boolean isRewardDistributionRange(int start, int end) {
// iterate through the block heights until a reward distribution block is found or the end of the range is reached
for( int i = start + 1; i <= end; i++) {
if( Block.isRewardDistributionBlock(i) ) return true;
}
// no reward distribution blocks found within range
return false;
}
}
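To make the bookkeeping above concrete, here is a small worked sketch (the addresses and amounts are invented): mapBalanceModifications accumulates a signed delta per address, and buildBalanceDynamics later subtracts those deltas from the raw balance differences, leaving only changes that the block's transactions do not explain.
Map<String, Long> amountsByAddress = new HashMap<>();

// a payment of 100 from "Qsender" to "Qrecipient"
BalanceRecorderUtils.mapBalanceModifications(amountsByAddress, 100L, "Qsender", Optional.of("Qrecipient"));
// amountsByAddress => { Qsender=-100, Qrecipient=100 }

// a payment of 40 back the other way nets the entries out
BalanceRecorderUtils.mapBalanceModifications(amountsByAddress, 40L, "Qrecipient", Optional.of("Qsender"));
// amountsByAddress => { Qsender=-60, Qrecipient=60 }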

View File

@ -0,0 +1,99 @@
package org.qortal.utils;
import io.druid.extendedset.intset.ConciseSet;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.qortal.block.BlockChain;
import org.qortal.data.account.AddressLevelPairing;
import org.qortal.data.account.RewardShareData;
import org.qortal.data.block.BlockData;
import org.qortal.data.block.DecodedOnlineAccountData;
import org.qortal.data.group.GroupMemberData;
import org.qortal.data.naming.NameData;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.transform.block.BlockTransformer;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
/**
* Class Blocks
*
* Methods for block related logic.
*/
public class Blocks {
private static final Logger LOGGER = LogManager.getLogger(Blocks.class);
/**
* Get Decoded Online Accounts For Block
*
* @param repository the data repository
* @param blockData the block data
*
* @return the set of online accounts decoded from the block
*
* @throws DataException
*/
public static Set<DecodedOnlineAccountData> getDecodedOnlineAccountsForBlock(Repository repository, BlockData blockData) throws DataException {
try {
// get all online account indices from block
ConciseSet onlineAccountIndices = BlockTransformer.decodeOnlineAccounts(blockData.getEncodedOnlineAccounts());
// get online reward shares from the online accounts on the block
List<RewardShareData> onlineRewardShares = repository.getAccountRepository().getRewardSharesByIndexes(onlineAccountIndices.toArray());
// online timestamp for block
long onlineTimestamp = blockData.getOnlineAccountsTimestamp();
Set<DecodedOnlineAccountData> onlineAccounts = new HashSet<>();
// all minting group member addresses
List<String> mintingGroupAddresses
= Groups.getAllMembers(
repository.getGroupRepository(),
Groups.getGroupIdsToMint(BlockChain.getInstance(), blockData.getHeight())
);
// all names, indexed by address
Map<String, String> nameByAddress
= repository.getNameRepository()
.getAllNames().stream()
.collect(Collectors.toMap(NameData::getOwner, NameData::getName));
// all accounts at level 1 or higher, indexed by address
Map<String, Integer> levelByAddress
= repository.getAccountRepository().getAddressLevelPairings(1).stream()
.collect(Collectors.toMap(AddressLevelPairing::getAddress, AddressLevelPairing::getLevel));
// for each reward share where the minter is online,
// construct the data object and add it to the return list
for (RewardShareData onlineRewardShare : onlineRewardShares) {
String minter = onlineRewardShare.getMinter();
DecodedOnlineAccountData onlineAccountData
= new DecodedOnlineAccountData(
onlineTimestamp,
minter,
onlineRewardShare.getRecipient(),
onlineRewardShare.getSharePercent(),
mintingGroupAddresses.contains(minter),
nameByAddress.get(minter),
levelByAddress.get(minter)
);
onlineAccounts.add(onlineAccountData);
}
return onlineAccounts;
} catch (DataException e) {
throw e;
} catch (Exception e ) {
LOGGER.error(e.getMessage(), e);
return new HashSet<>(0);
}
}
}
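A hedged usage sketch for the helper above; the height is an arbitrary example, the repository plumbing mirrors the test code elsewhere in this change set, and the logging line is illustrative.
try (final Repository repository = RepositoryManager.getRepository()) {
    BlockData blockData = repository.getBlockRepository().fromHeight(1000); // example height
    Set<DecodedOnlineAccountData> onlineAccounts = Blocks.getDecodedOnlineAccountsForBlock(repository, blockData);
    // on any unexpected (non-DataException) error the helper logs and returns an empty set rather than throwing
    LOGGER.info("block 1000 had " + onlineAccounts.size() + " decoded online accounts");
}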

View File

@ -0,0 +1,122 @@
package org.qortal.utils;
import org.qortal.block.BlockChain;
import org.qortal.data.group.GroupAdminData;
import org.qortal.data.group.GroupMemberData;
import org.qortal.repository.DataException;
import org.qortal.repository.GroupRepository;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
/**
* Class Groups
*
* A utility class for group related functionality.
*/
public class Groups {
/**
* Does the member exist in any of these groups?
*
* @param groupRepository the group data repository
* @param groupsIds the group IDs to search for the address
* @param address the address
*
* @return true if the address is a member of any of the listed groups, otherwise false
* @throws DataException
*/
public static boolean memberExistsInAnyGroup(GroupRepository groupRepository, List<Integer> groupsIds, String address) throws DataException {
// if any of the listed groups have the address as a member, then return true
for( Integer groupId : groupsIds) {
if( groupRepository.memberExists(groupId, address) ) {
return true;
}
}
// if none of the listed groups have the address as a member, then return false
return false;
}
/**
* Get All Members
*
* Get all the group members from a list of groups.
*
* @param groupRepository the group data repository
* @param groupIds the list of group Ids to look at
*
* @return the list of all members belonging to any of the groups, no duplicates
* @throws DataException
*/
public static List<String> getAllMembers( GroupRepository groupRepository, List<Integer> groupIds ) throws DataException {
// collect all the members in a set, the set keeps out duplicates
Set<String> allMembers = new HashSet<>();
// add all members from each group to the all members set
for( int groupId : groupIds ) {
allMembers.addAll( groupRepository.getGroupMembers(groupId).stream().map(GroupMemberData::getMember).collect(Collectors.toList()));
}
return new ArrayList<>(allMembers);
}
/**
* Get All Admins
*
* Get all the admins from a list of groups.
*
* @param groupRepository the group data repository
* @param groupIds the list of group Ids to look at
*
* @return the list of all admins of any of the groups, no duplicates
* @throws DataException
*/
public static List<String> getAllAdmins( GroupRepository groupRepository, List<Integer> groupIds ) throws DataException {
// collect all the admins in a set, the set keeps out duplicates
Set<String> allAdmins = new HashSet<>();
// collect admins for each group
for( int groupId : groupIds ) {
allAdmins.addAll( groupRepository.getGroupAdmins(groupId).stream().map(GroupAdminData::getAdmin).collect(Collectors.toList()) );
}
return new ArrayList<>(allAdmins);
}
/**
* Get Group Ids To Mint
*
* @param blockchain the blockchain
* @param blockchainHeight the height of the block being minted
*
* @return the group Ids for the minting groups at the height given
*/
public static List<Integer> getGroupIdsToMint(BlockChain blockchain, int blockchainHeight) {
// sort heights lowest to highest
Comparator<BlockChain.IdsForHeight> compareByHeight = Comparator.comparingInt(entry -> entry.height);
// sort heights highest to lowest
Comparator<BlockChain.IdsForHeight> compareByHeightReversed = compareByHeight.reversed();
// get highest height that is less than the blockchain height
Optional<BlockChain.IdsForHeight> ids = blockchain.getMintingGroupIds().stream()
.filter(entry -> entry.height < blockchainHeight)
.sorted(compareByHeightReversed)
.findFirst();
if( ids.isPresent()) {
return ids.get().ids;
}
else {
return new ArrayList<>(0);
}
}
}
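A hedged sketch tying getGroupIdsToMint to the new mintingGroupIds blockchain setting (see the config hunk below): entries are indexed by height, and the lookup selects the entry with the greatest height strictly below the given block height, so group 694 applies from the genesis block onward until a later entry is added. Here repository, blockHeight and minterAddress are assumed to be in scope.
// resolve the minting groups in force at this height, then check membership (may throw DataException)
List<Integer> mintingGroupIds = Groups.getGroupIdsToMint(BlockChain.getInstance(), blockHeight);
boolean canMint = Groups.memberExistsInAnyGroup(repository.getGroupRepository(), mintingGroupIds, minterAddress);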

View File

@ -29,6 +29,7 @@
"onlineAccountSignaturesMinLifetime": 43200000,
"onlineAccountSignaturesMaxLifetime": 86400000,
"onlineAccountsModulusV2Timestamp": 1659801600000,
"onlineAccountsModulusV3Timestamp": 1731961800000,
"selfSponsorshipAlgoV1SnapshotTimestamp": 1670230000000,
"selfSponsorshipAlgoV2SnapshotTimestamp": 1708360200000,
"selfSponsorshipAlgoV3SnapshotTimestamp": 1708432200000,
@ -37,7 +38,9 @@
"blockRewardBatchStartHeight": 1508000,
"blockRewardBatchSize": 1000,
"blockRewardBatchAccountsBlockCount": 25,
"mintingGroupId": 694,
"mintingGroupIds": [
{ "height": 0, "ids": [ 694 ]}
],
"rewardsByHeight": [
{ "height": 1, "reward": 5.00 },
{ "height": 259201, "reward": 4.75 },
@ -95,6 +98,7 @@
"transactionV6Timestamp": 9999999999999,
"disableReferenceTimestamp": 1655222400000,
"increaseOnlineAccountsDifficultyTimestamp": 9999999999999,
"decreaseOnlineAccountsDifficultyTimestamp": 1731958200000,
"onlineAccountMinterLevelValidationHeight": 1092000,
"selfSponsorshipAlgoV1Height": 1092400,
"selfSponsorshipAlgoV2Height": 1611200,
@ -109,7 +113,13 @@
"disableRewardshareHeight": 1899100,
"enableRewardshareHeight": 1905100,
"onlyMintWithNameHeight": 1900300,
"groupMemberCheckHeight": 1902700
"removeOnlyMintWithNameHeight": 1935500,
"groupMemberCheckHeight": 1902700,
"fixBatchRewardHeight": 1945900,
"adminsReplaceFoundersHeight": 2012800,
"nullGroupMembershipHeight": 2012800,
"ignoreLevelForRewardShareHeight": 2012800,
"adminQueryFixHeight": 2012800
},
"checkpoints": [
{ "height": 1136300, "signature": "3BbwawEF2uN8Ni5ofpJXkukoU8ctAPxYoFB7whq9pKfBnjfZcpfEJT4R95NvBDoTP8WDyWvsUvbfHbcr9qSZuYpSKZjUQTvdFf6eqznHGEwhZApWfvXu6zjGCxYCp65F4jsVYYJjkzbjmkCg5WAwN5voudngA23kMK6PpTNygapCzXt" }

View File

@ -20,17 +20,21 @@
width: 100%;
text-align: center;
z-index: 1000;
top: 45%;
top: 50%;
-ms-transform: translateY(-50%);
transform: translateY(-50%);
transform: translate(-50% , -50%);
left: 50%;
}
#panel {
text-align: center;
background: white;
word-wrap: break-word;
width: 350px;
max-width: 100%;
margin: auto;
padding: 25px;
border-radius: 30px;
box-sizing: border-box;
}
#status {
color: #03a9f4;

View File

@ -84,6 +84,7 @@ isDOMContentLoaded: isDOMContentLoaded ? true : false
function handleQDNResourceDisplayed(pathurl, isDOMContentLoaded) {
// make sure that an empty string maps to the root path
if(pathurl?.startsWith('/render/hash/')) return;
const path = pathurl || '/'
if (!isManualNavigation) {
isManualNavigation = true
@ -284,11 +285,9 @@ window.addEventListener("message", async (event) => {
return;
}
console.log("Core received action: " + JSON.stringify(event.data.action));
let url;
let data = event.data;
let identifier;
switch (data.action) {
case "GET_ACCOUNT_DATA":
return httpGetAsyncWithEvent(event, "/addresses/" + data.address);
@ -383,6 +382,7 @@ window.addEventListener("message", async (event) => {
if (data.identifier != null) url = url.concat("&identifier=" + data.identifier);
if (data.name != null) url = url.concat("&name=" + data.name);
if (data.names != null) data.names.forEach((x, i) => url = url.concat("&name=" + x));
if (data.keywords != null) data.keywords.forEach((x, i) => url = url.concat("&keywords=" + x));
if (data.title != null) url = url.concat("&title=" + data.title);
if (data.description != null) url = url.concat("&description=" + data.description);
if (data.prefix != null) url = url.concat("&prefix=" + new Boolean(data.prefix).toString());
@ -419,7 +419,7 @@ window.addEventListener("message", async (event) => {
return httpGetAsyncWithEvent(event, url);
case "GET_QDN_RESOURCE_PROPERTIES":
let identifier = (data.identifier != null) ? data.identifier : "default";
identifier = (data.identifier != null) ? data.identifier : "default";
url = "/arbitrary/resource/properties/" + data.service + "/" + data.name + "/" + identifier;
return httpGetAsyncWithEvent(event, url);
@ -456,7 +456,7 @@ window.addEventListener("message", async (event) => {
return httpGetAsyncWithEvent(event, url);
case "GET_AT":
url = "/at" + data.atAddress;
url = "/at/" + data.atAddress;
return httpGetAsyncWithEvent(event, url);
case "GET_AT_DATA":
@ -473,7 +473,7 @@ window.addEventListener("message", async (event) => {
case "FETCH_BLOCK":
if (data.signature != null) {
url = "/blocks/" + data.signature;
url = "/blocks/signature/" + data.signature;
} else if (data.height != null) {
url = "/blocks/byheight/" + data.height;
}
@ -614,6 +614,7 @@ function getDefaultTimeout(action) {
switch (action) {
case "GET_USER_ACCOUNT":
case "SAVE_FILE":
case "SIGN_TRANSACTION":
case "DECRYPT_DATA":
// User may take a long time to accept/deny the popup
return 60 * 60 * 1000;
@ -635,6 +636,11 @@ function getDefaultTimeout(action) {
// Chat messages rely on PoW computations, so allow extra time
return 60 * 1000;
case "CREATE_TRADE_BUY_ORDER":
case "CREATE_TRADE_SELL_ORDER":
case "CANCEL_TRADE_SELL_ORDER":
case "VOTE_ON_POLL":
case "CREATE_POLL":
case "JOIN_GROUP":
case "DEPLOY_AT":
case "SEND_COIN":
@ -649,7 +655,7 @@ function getDefaultTimeout(action) {
break;
}
}
return 10 * 1000;
return 30 * 1000;
}
/**
@ -688,6 +694,7 @@ const qortalRequestWithTimeout = (request, timeout) =>
* Send current page details to UI
*/
document.addEventListener('DOMContentLoaded', (event) => {
resetVariables()
qortalRequest({
action: "QDN_RESOURCE_DISPLAYED",
@ -706,6 +713,7 @@ resetVariables()
* Handle app navigation
*/
navigation.addEventListener('navigate', (event) => {
const url = new URL(event.destination.url);
let fullpath = url.pathname + url.hash;

View File

@ -54,26 +54,39 @@ public class BlockArchiveV1Tests extends Common {
public void testWriter() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testWriter");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@ -84,6 +97,9 @@ public class BlockArchiveV1Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
System.out.println("testWriter completed successfully.");
}
}
@ -91,26 +107,39 @@ public class BlockArchiveV1Tests extends Common {
public void testWriterAndReader() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testWriterAndReader");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@ -121,8 +150,10 @@ public class BlockArchiveV1Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Read block 2 from the archive
System.out.println("Reading block 2 from the archive...");
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getBlockData();
@ -131,6 +162,7 @@ public class BlockArchiveV1Tests extends Common {
BlockData block2RepositoryData = repository.getBlockRepository().fromHeight(2);
// Ensure the values match
System.out.println("Comparing block 2 data...");
assertEquals(block2ArchiveData.getHeight(), block2RepositoryData.getHeight());
assertArrayEquals(block2ArchiveData.getSignature(), block2RepositoryData.getSignature());
@ -138,6 +170,7 @@ public class BlockArchiveV1Tests extends Common {
assertEquals(1, block2ArchiveData.getOnlineAccountsCount());
// Read block 900 from the archive
System.out.println("Reading block 900 from the archive...");
BlockTransformation block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getBlockData();
@ -145,12 +178,14 @@ public class BlockArchiveV1Tests extends Common {
BlockData block900RepositoryData = repository.getBlockRepository().fromHeight(900);
// Ensure the values match
System.out.println("Comparing block 900 data...");
assertEquals(block900ArchiveData.getHeight(), block900RepositoryData.getHeight());
assertArrayEquals(block900ArchiveData.getSignature(), block900RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block900ArchiveData.getOnlineAccountsCount());
System.out.println("testWriterAndReader completed successfully.");
}
}
@ -158,33 +193,48 @@ public class BlockArchiveV1Tests extends Common {
public void testArchivedAtStates() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testArchivedAtStates");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
System.out.println("AT deployed at address: " + atAddress);
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 9 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(10);
repository.getATRepository().setAtTrimHeight(10);
System.out.println("Set trim heights to 10.");
// Check the max archive height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 9): " + maximumArchiveHeight);
assertEquals(9, maximumArchiveHeight);
// Write blocks 2-9 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 8)");
assertEquals(9 - 1, writer.getWrittenCount());
// Increment block archive height
@ -195,10 +245,13 @@ public class BlockArchiveV1Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Check blocks 2-9
System.out.println("Checking blocks 2 to 9...");
for (int testHeight = 2; testHeight <= 9; testHeight++) {
System.out.println("Reading block " + testHeight + " from the archive...");
// Read a block from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation blockInfo = reader.fetchBlockAtHeight(testHeight);
@ -216,6 +269,7 @@ public class BlockArchiveV1Tests extends Common {
// Check the archived AT state
if (testHeight == 2) {
System.out.println("Checking block " + testHeight + " AT state data (expected null)...");
// Block 2 won't have an AT state hash because it's initial (and has the DEPLOY_AT in the same block)
assertNull(archivedAtStateData);
@ -223,6 +277,7 @@ public class BlockArchiveV1Tests extends Common {
assertEquals(Transaction.TransactionType.DEPLOY_AT, archivedTransactions.get(0).getType());
}
else {
System.out.println("Checking block " + testHeight + " AT state data...");
// For blocks 3+, ensure the archive has the AT state hash, but not the full state data
assertNotNull(archivedAtStateData.getStateHash());
assertNull(archivedAtStateData.getStateData());
@ -255,10 +310,12 @@ public class BlockArchiveV1Tests extends Common {
}
// Check block 10 (unarchived)
System.out.println("Checking block 10 (should not be in archive)...");
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation blockInfo = reader.fetchBlockAtHeight(10);
assertNull(blockInfo);
System.out.println("testArchivedAtStates completed successfully.");
}
}
@ -267,32 +324,46 @@ public class BlockArchiveV1Tests extends Common {
public void testArchiveAndPrune() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testArchiveAndPrune");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@ -303,17 +374,21 @@ public class BlockArchiveV1Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Ensure the SQL repository contains blocks 2 and 900...
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(900));
System.out.println("Blocks 2 and 900 exist in the repository.");
// Prune all the archived blocks
System.out.println("Pruning blocks 2 to 900...");
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 900);
assertEquals(900-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(901);
// Prune the AT states for the archived blocks
System.out.println("Pruning AT states up to height 900...");
repository.getATRepository().rebuildLatestAtStates(900);
repository.saveChanges();
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 900);
@ -323,14 +398,19 @@ public class BlockArchiveV1Tests extends Common {
// Now ensure the SQL repository is missing blocks 2 and 900...
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(900));
System.out.println("Blocks 2 and 900 have been pruned from the repository.");
// ... but it's not missing blocks 1 and 901 (we don't prune the genesis block)
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(901));
System.out.println("Blocks 1 and 901 still exist in the repository.");
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
int lastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("Latest block height in repository (Expected 1002): " + lastBlockHeight);
assertEquals(1002, lastBlockHeight);
System.out.println("testArchiveAndPrune completed successfully.");
}
}
@ -338,137 +418,190 @@ public class BlockArchiveV1Tests extends Common {
public void testTrimArchivePruneAndOrphan() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testTrimArchivePruneAndOrphan");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
System.out.println("AT deployed successfully.");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// Make sure that block 500 has full AT state data and data hash
System.out.println("Verifying block 500 AT state data...");
List<ATStateData> block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 500 AT state data verified.");
// Trim the first 500 blocks
System.out.println("Trimming first 500 blocks...");
repository.getBlockRepository().trimOldOnlineAccountsSignatures(0, 500);
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(501);
repository.getATRepository().rebuildLatestAtStates(500);
repository.getATRepository().trimAtStates(0, 500, 1000);
repository.getATRepository().setAtTrimHeight(501);
System.out.println("Trimming completed.");
// Now block 499 should only have the AT state data hash
System.out.println("Checking block 499 AT state data...");
List<ATStateData> block499AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(499);
atStatesData = repository.getATRepository().getATStateAtHeight(block499AtStatesData.get(0).getATAddress(), 499);
assertNotNull(atStatesData.getStateHash());
assertNull(atStatesData.getStateData());
System.out.println("Block 499 AT state data contains only state hash as expected.");
// ... but block 500 should have the full data (due to being retained as the "latest" AT state in the trimmed range)
System.out.println("Verifying block 500 AT state data again...");
block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 500 AT state data contains full data.");
// ... and block 501 should also have the full data
System.out.println("Verifying block 501 AT state data...");
List<ATStateData> block501AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(501);
atStatesData = repository.getATRepository().getATStateAtHeight(block501AtStatesData.get(0).getATAddress(), 501);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 501 AT state data contains full data.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height determined (Expected 500): " + maximumArchiveHeight);
assertEquals(500, maximumArchiveHeight);
BlockData block3DataPreArchive = repository.getBlockRepository().fromHeight(3);
// Write blocks 2-500 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Number of blocks written to archive (Expected 499): " + writer.getWrittenCount());
assertEquals(500 - 1, writer.getWrittenCount()); // -1 for the genesis block
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(500 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
System.out.println("Block archive height updated to: " + (500 - 1));
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Ensure the SQL repository contains blocks 2 and 500...
System.out.println("Verifying that blocks 2 and 500 exist in the repository...");
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(500));
System.out.println("Blocks 2 and 500 are present in the repository.");
// Prune all the archived blocks
System.out.println("Pruning blocks 2 to 500...");
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 500);
System.out.println("Number of blocks pruned (Expected 499): " + numBlocksPruned);
assertEquals(500-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(501);
// Prune the AT states for the archived blocks
System.out.println("Pruning AT states up to height 500...");
repository.getATRepository().rebuildLatestAtStates(500);
repository.saveChanges();
int numATStatesPruned = repository.getATRepository().pruneAtStates(2, 500);
System.out.println("Number of AT states pruned (Expected 498): " + numATStatesPruned);
assertEquals(498, numATStatesPruned); // Minus 1 for genesis block, and another for the latest AT state
repository.getATRepository().setAtPruneHeight(501);
// Now ensure the SQL repository is missing blocks 2 and 500...
System.out.println("Verifying that blocks 2 and 500 have been pruned...");
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(500));
System.out.println("Blocks 2 and 500 have been successfully pruned.");
// ... but it's not missing blocks 1 and 501 (we don't prune the genesis block)
System.out.println("Verifying that blocks 1 and 501 still exist...");
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(501));
System.out.println("Blocks 1 and 501 are present in the repository.");
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
int lastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("Latest block height in repository (Expected 1002): " + lastBlockHeight);
assertEquals(1002, lastBlockHeight);
// Now orphan some unarchived blocks.
System.out.println("Orphaning 500 blocks...");
BlockUtils.orphanBlocks(repository, 500);
assertEquals(502, (int) repository.getBlockRepository().getLastBlock().getHeight());
int currentLastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("New last block height after orphaning (Expected 502): " + currentLastBlockHeight);
assertEquals(502, currentLastBlockHeight);
// We're close to the lower limit of the SQL database now, so
// we need to import some blocks from the archive
System.out.println("Importing blocks 401 to 500 from the archive...");
BlockArchiveUtils.importFromArchive(401, 500, repository);
// Ensure the SQL repository now contains block 401 but not 400...
System.out.println("Verifying that block 401 exists and block 400 does not...");
assertNotNull(repository.getBlockRepository().fromHeight(401));
assertNull(repository.getBlockRepository().fromHeight(400));
System.out.println("Block 401 exists, block 400 does not.");
// Import the remaining 399 blocks
System.out.println("Importing blocks 2 to 400 from the archive...");
BlockArchiveUtils.importFromArchive(2, 400, repository);
// Verify that block 3 matches the original
System.out.println("Verifying that block 3 matches the original data...");
BlockData block3DataPostArchive = repository.getBlockRepository().fromHeight(3);
assertArrayEquals(block3DataPreArchive.getSignature(), block3DataPostArchive.getSignature());
assertEquals(block3DataPreArchive.getHeight(), block3DataPostArchive.getHeight());
System.out.println("Block 3 data matches the original.");
// Orphan 1 more block, which should be the last one that is possible to be orphaned
System.out.println("Orphaning 1 more block...");
BlockUtils.orphanBlocks(repository, 1);
System.out.println("Orphaned 1 block successfully.");
// Orphan another block, which should fail
System.out.println("Attempting to orphan another block, which should fail...");
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
System.out.println("Caught expected DataException: " + e.getMessage());
}
// Ensure that a DataException is thrown because there is no more AT states data available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
System.out.println("DataException confirmed due to lack of AT states data.");
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
System.out.println("testTrimArchivePruneAndOrphan completed successfully.");
}
}
@ -482,16 +615,26 @@ public class BlockArchiveV1Tests extends Common {
public void testMissingAtStatesHeightIndex() throws DataException, SQLException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
System.out.println("Starting testMissingAtStatesHeightIndex");
// Firstly check that we're able to prune or archive when the index exists
System.out.println("Checking existence of ATStatesHeightIndex...");
assertTrue(repository.getATRepository().hasAtStatesHeightIndex());
assertTrue(RepositoryManager.canArchiveOrPrune());
System.out.println("ATStatesHeightIndex exists. Archiving and pruning are possible.");
// Delete the index
System.out.println("Dropping ATStatesHeightIndex...");
repository.prepareStatement("DROP INDEX ATSTATESHEIGHTINDEX").execute();
System.out.println("ATStatesHeightIndex dropped.");
// Ensure that we're unable to prune or archive when the index doesn't exist
System.out.println("Verifying that ATStatesHeightIndex no longer exists...");
assertFalse(repository.getATRepository().hasAtStatesHeightIndex());
assertFalse(RepositoryManager.canArchiveOrPrune());
System.out.println("ATStatesHeightIndex does not exist. Archiving and pruning are disabled.");
System.out.println("testMissingAtStatesHeightIndex completed successfully.");
}
}
@ -501,8 +644,10 @@ public class BlockArchiveV1Tests extends Common {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
FileUtils.deleteDirectory(archivePath.toFile());
System.out.println("Deleted archive directory at: " + archivePath);
} catch (IOException e) {
System.out.println("Failed to delete archive directory: " + e.getMessage());
}
}

View File

@ -54,26 +54,39 @@ public class BlockArchiveV2Tests extends Common {
public void testWriter() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testWriter");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@ -84,6 +97,9 @@ public class BlockArchiveV2Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
System.out.println("testWriter completed successfully.");
}
}
@ -91,26 +107,39 @@ public class BlockArchiveV2Tests extends Common {
public void testWriterAndReader() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testWriterAndReader");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@@ -121,8 +150,10 @@ public class BlockArchiveV2Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Read block 2 from the archive
System.out.println("Reading block 2 from the archive...");
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation block2Info = reader.fetchBlockAtHeight(2);
BlockData block2ArchiveData = block2Info.getBlockData();
@@ -131,6 +162,7 @@ public class BlockArchiveV2Tests extends Common {
BlockData block2RepositoryData = repository.getBlockRepository().fromHeight(2);
// Ensure the values match
System.out.println("Comparing block 2 data...");
assertEquals(block2ArchiveData.getHeight(), block2RepositoryData.getHeight());
assertArrayEquals(block2ArchiveData.getSignature(), block2RepositoryData.getSignature());
@@ -138,6 +170,7 @@ public class BlockArchiveV2Tests extends Common {
assertEquals(1, block2ArchiveData.getOnlineAccountsCount());
// Read block 900 from the archive
System.out.println("Reading block 900 from the archive...");
BlockTransformation block900Info = reader.fetchBlockAtHeight(900);
BlockData block900ArchiveData = block900Info.getBlockData();
@@ -145,12 +178,14 @@ public class BlockArchiveV2Tests extends Common {
BlockData block900RepositoryData = repository.getBlockRepository().fromHeight(900);
// Ensure the values match
System.out.println("Comparing block 900 data...");
assertEquals(block900ArchiveData.getHeight(), block900RepositoryData.getHeight());
assertArrayEquals(block900ArchiveData.getSignature(), block900RepositoryData.getSignature());
// Test some values in the archive
assertEquals(1, block900ArchiveData.getOnlineAccountsCount());
System.out.println("testWriterAndReader completed successfully.");
}
}
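The read-back verification is the same at both heights; a minimal sketch using only the calls visible above (height 2 shown):
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation archived = reader.fetchBlockAtHeight(2);
BlockData fromArchive = archived.getBlockData();
BlockData fromRepository = repository.getBlockRepository().fromHeight(2);
// The archive copy must agree with the repository copy
assertEquals(fromArchive.getHeight(), fromRepository.getHeight());
assertArrayEquals(fromArchive.getSignature(), fromRepository.getSignature());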
@@ -158,47 +193,66 @@ public class BlockArchiveV2Tests extends Common {
public void testArchivedAtStates() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testArchivedAtStates");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
DeployAtTransaction deployAtTransaction = AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
String atAddress = deployAtTransaction.getATAccount().getAddress();
System.out.println("AT deployed at address: " + atAddress);
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// 9 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(10);
repository.getATRepository().setAtTrimHeight(10);
System.out.println("Set trim heights to 10.");
// Check the max archive height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 9): " + maximumArchiveHeight);
assertEquals(9, maximumArchiveHeight);
// Write blocks 2-9 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 8)");
assertEquals(9 - 1, writer.getWrittenCount());
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(9 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
System.out.println("Block archive height updated to: " + (9 - 1));
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Check blocks 2-9
System.out.println("Checking blocks 2 to 9...");
for (Integer testHeight = 2; testHeight <= 9; testHeight++) {
System.out.println("Reading block " + testHeight + " from the archive...");
// Read a block from the archive
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation blockInfo = reader.fetchBlockAtHeight(testHeight);
@@ -216,15 +270,18 @@ public class BlockArchiveV2Tests extends Common {
// Check the archived AT state
if (testHeight == 2) {
System.out.println("Checking block " + testHeight + " AT state data (expected transactions)...");
assertEquals(1, archivedTransactions.size());
assertEquals(Transaction.TransactionType.DEPLOY_AT, archivedTransactions.get(0).getType());
}
else {
System.out.println("Checking block " + testHeight + " AT state data (no transactions expected)...");
// Blocks 3+ shouldn't have any transactions
assertTrue(archivedTransactions.isEmpty());
}
// Ensure the archive has the AT states hash
System.out.println("Checking block " + testHeight + " AT states hash...");
assertNotNull(archivedAtStateHash);
// Also check the online accounts count and height
@@ -232,6 +289,7 @@ public class BlockArchiveV2Tests extends Common {
assertEquals(testHeight, archivedBlockData.getHeight());
// Ensure the values match
System.out.println("Comparing block " + testHeight + " data...");
assertEquals(archivedBlockData.getHeight(), repositoryBlockData.getHeight());
assertArrayEquals(archivedBlockData.getSignature(), repositoryBlockData.getSignature());
assertEquals(archivedBlockData.getOnlineAccountsCount(), repositoryBlockData.getOnlineAccountsCount());
@@ -249,10 +307,12 @@ public class BlockArchiveV2Tests extends Common {
}
// Check block 10 (unarchived)
System.out.println("Checking block 10 (should not be in archive)...");
BlockArchiveReader reader = BlockArchiveReader.getInstance();
BlockTransformation blockInfo = reader.fetchBlockAtHeight(10);
assertNull(blockInfo);
System.out.println("testArchivedAtStates completed successfully.");
}
}
@@ -261,32 +321,47 @@ public class BlockArchiveV2Tests extends Common {
public void testArchiveAndPrune() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testArchiveAndPrune");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
System.out.println("AT deployed successfully.");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// Assume 900 blocks are trimmed (this specifies the first untrimmed height)
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(901);
repository.getATRepository().setAtTrimHeight(901);
System.out.println("Set trim heights to 901.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height (Expected 900): " + maximumArchiveHeight);
assertEquals(900, maximumArchiveHeight);
// Write blocks 2-900 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Archive contains " + writer.getWrittenCount() + " blocks. (Expected 899)");
assertEquals(900 - 1, writer.getWrittenCount());
// Increment block archive height
@@ -297,34 +372,48 @@ public class BlockArchiveV2Tests extends Common {
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Ensure the SQL repository contains blocks 2 and 900...
System.out.println("Verifying that blocks 2 and 900 exist in the repository...");
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(900));
System.out.println("Blocks 2 and 900 are present in the repository.");
// Prune all the archived blocks
System.out.println("Pruning blocks 2 to 900...");
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 900);
System.out.println("Number of blocks pruned (Expected 899): " + numBlocksPruned);
assertEquals(900-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(901);
// Prune the AT states for the archived blocks
System.out.println("Pruning AT states up to height 900...");
repository.getATRepository().rebuildLatestAtStates(900);
repository.saveChanges();
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 900);
System.out.println("Number of AT states pruned (Expected 898): " + numATStatesPruned);
assertEquals(900-2, numATStatesPruned); // Minus 1 for genesis block, and another for the latest AT state
repository.getATRepository().setAtPruneHeight(901);
// Now ensure the SQL repository is missing blocks 2 and 900...
System.out.println("Verifying that blocks 2 and 900 have been pruned...");
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(900));
System.out.println("Blocks 2 and 900 have been successfully pruned.");
// ... but it's not missing blocks 1 and 901 (we don't prune the genesis block)
System.out.println("Verifying that blocks 1 and 901 still exist...");
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(901));
System.out.println("Blocks 1 and 901 are present in the repository.");
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
int lastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("Latest block height in repository (Expected 1002): " + lastBlockHeight);
assertEquals(1002, lastBlockHeight);
System.out.println("testArchiveAndPrune completed successfully.");
}
}
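The prune arithmetic above is worth spelling out: pruneBlocks(0, 900) removes blocks 2-900 (899 blocks, since genesis is kept), while pruneAtStates(0, 900) removes one fewer (898), because the latest AT state in the range is also retained. A condensed sketch of the bookkeeping, using only the calls shown:
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 900); // 899: blocks 2-900
repository.getBlockRepository().setBlockPruneHeight(901);
repository.getATRepository().rebuildLatestAtStates(900); // mark the newest AT state per AT so it survives pruning
repository.saveChanges();
int numATStatesPruned = repository.getATRepository().pruneAtStates(0, 900); // 898: genesis plus the latest AT state retained
repository.getATRepository().setAtPruneHeight(901);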
@@ -332,138 +421,191 @@ public class BlockArchiveV2Tests extends Common {
public void testTrimArchivePruneAndOrphan() throws DataException, InterruptedException, TransformationException, IOException {
try (final Repository repository = RepositoryManager.getRepository()) {
System.out.println("Starting testTrimArchivePruneAndOrphan");
// Deploy an AT so that we have AT state data
System.out.println("Deploying AT...");
PrivateKeyAccount deployer = Common.getTestAccount(repository, "alice");
byte[] creationBytes = AtUtils.buildSimpleAT();
long fundingAmount = 1_00000000L;
AtUtils.doDeployAT(repository, deployer, creationBytes, fundingAmount);
System.out.println("AT deployed successfully.");
// Mint some blocks so that we are able to archive them later
System.out.println("Minting 1000 blocks...");
for (int i = 0; i < 1000; i++) {
BlockMinter.mintTestingBlock(repository, Common.getTestAccount(repository, "alice-reward-share"));
// Log every 100 blocks
if ((i + 1) % 100 == 0) {
System.out.println("Minted block " + (i + 1));
}
}
System.out.println("Finished minting blocks.");
// Make sure that block 500 has full AT state data and data hash
System.out.println("Verifying block 500 AT state data...");
List<ATStateData> block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
ATStateData atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 500 AT state data verified.");
// Trim the first 500 blocks
System.out.println("Trimming first 500 blocks...");
repository.getBlockRepository().trimOldOnlineAccountsSignatures(0, 500);
repository.getBlockRepository().setOnlineAccountsSignaturesTrimHeight(501);
repository.getATRepository().rebuildLatestAtStates(500);
repository.getATRepository().trimAtStates(0, 500, 1000);
repository.getATRepository().setAtTrimHeight(501);
System.out.println("Trimming completed.");
// Now block 499 should only have the AT state data hash
System.out.println("Checking block 499 AT state data...");
List<ATStateData> block499AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(499);
atStatesData = repository.getATRepository().getATStateAtHeight(block499AtStatesData.get(0).getATAddress(), 499);
assertNotNull(atStatesData.getStateHash());
assertNull(atStatesData.getStateData());
System.out.println("Block 499 AT state data contains only state hash as expected.");
// ... but block 500 should have the full data (due to being retained as the "latest" AT state in the trimmed range)
System.out.println("Verifying block 500 AT state data again...");
block500AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(500);
atStatesData = repository.getATRepository().getATStateAtHeight(block500AtStatesData.get(0).getATAddress(), 500);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 500 AT state data contains full data.");
// ... and block 501 should also have the full data
System.out.println("Verifying block 501 AT state data...");
List<ATStateData> block501AtStatesData = repository.getATRepository().getBlockATStatesAtHeight(501);
atStatesData = repository.getATRepository().getATStateAtHeight(block501AtStatesData.get(0).getATAddress(), 501);
assertNotNull(atStatesData.getStateHash());
assertNotNull(atStatesData.getStateData());
System.out.println("Block 501 AT state data contains full data.");
// Check the max archive height - this should be one less than the first untrimmed height
final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
System.out.println("Maximum archive height determined (Expected 500): " + maximumArchiveHeight);
assertEquals(500, maximumArchiveHeight);
BlockData block3DataPreArchive = repository.getBlockRepository().fromHeight(3);
// Write blocks 2-500 to the archive
System.out.println("Writing blocks 2 to " + maximumArchiveHeight + " to the archive...");
BlockArchiveWriter writer = new BlockArchiveWriter(0, maximumArchiveHeight, repository);
writer.setShouldEnforceFileSizeTarget(false); // To avoid the need to pre-calculate file sizes
BlockArchiveWriter.BlockArchiveWriteResult result = writer.write();
System.out.println("Finished writing blocks to archive. Result: " + result);
assertEquals(BlockArchiveWriter.BlockArchiveWriteResult.OK, result);
// Make sure that the archive contains the correct number of blocks
System.out.println("Number of blocks written to archive (Expected 499): " + writer.getWrittenCount());
assertEquals(500 - 1, writer.getWrittenCount()); // -1 for the genesis block
// Increment block archive height
repository.getBlockArchiveRepository().setBlockArchiveHeight(writer.getWrittenCount());
repository.saveChanges();
assertEquals(500 - 1, repository.getBlockArchiveRepository().getBlockArchiveHeight());
System.out.println("Block archive height updated to: " + (500 - 1));
// Ensure the file exists
File outputFile = writer.getOutputPath().toFile();
assertTrue(outputFile.exists());
System.out.println("Archive file exists at: " + outputFile.getAbsolutePath());
// Ensure the SQL repository contains blocks 2 and 500...
System.out.println("Verifying that blocks 2 and 500 exist in the repository...");
assertNotNull(repository.getBlockRepository().fromHeight(2));
assertNotNull(repository.getBlockRepository().fromHeight(500));
System.out.println("Blocks 2 and 500 are present in the repository.");
// Prune all the archived blocks
System.out.println("Pruning blocks 2 to 500...");
int numBlocksPruned = repository.getBlockRepository().pruneBlocks(0, 500);
System.out.println("Number of blocks pruned (Expected 499): " + numBlocksPruned);
assertEquals(500-1, numBlocksPruned);
repository.getBlockRepository().setBlockPruneHeight(501);
// Prune the AT states for the archived blocks
System.out.println("Pruning AT states up to height 500...");
repository.getATRepository().rebuildLatestAtStates(500);
repository.saveChanges();
int numATStatesPruned = repository.getATRepository().pruneAtStates(2, 500);
System.out.println("Number of AT states pruned (Expected 498): " + numATStatesPruned);
assertEquals(498, numATStatesPruned); // Minus 1 for genesis block, and another for the latest AT state
repository.getATRepository().setAtPruneHeight(501);
// Now ensure the SQL repository is missing blocks 2 and 500...
System.out.println("Verifying that blocks 2 and 500 have been pruned...");
assertNull(repository.getBlockRepository().fromHeight(2));
assertNull(repository.getBlockRepository().fromHeight(500));
System.out.println("Blocks 2 and 500 have been successfully pruned.");
// ... but it's not missing blocks 1 and 501 (we don't prune the genesis block)
System.out.println("Verifying that blocks 1 and 501 still exist...");
assertNotNull(repository.getBlockRepository().fromHeight(1));
assertNotNull(repository.getBlockRepository().fromHeight(501));
System.out.println("Blocks 1 and 501 are present in the repository.");
// Validate the latest block height in the repository
assertEquals(1002, (int) repository.getBlockRepository().getLastBlock().getHeight());
int lastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("Latest block height in repository (Expected 1002): " + lastBlockHeight);
assertEquals(1002, lastBlockHeight);
// Now orphan some unarchived blocks.
System.out.println("Orphaning 500 blocks...");
BlockUtils.orphanBlocks(repository, 500);
assertEquals(502, (int) repository.getBlockRepository().getLastBlock().getHeight());
int currentLastBlockHeight = repository.getBlockRepository().getLastBlock().getHeight();
System.out.println("New last block height after orphaning (Expected 502): " + currentLastBlockHeight);
assertEquals(502, currentLastBlockHeight);
// We're close to the lower limit of the SQL database now, so
// we need to import some blocks from the archive
System.out.println("Importing blocks 401 to 500 from the archive...");
BlockArchiveUtils.importFromArchive(401, 500, repository);
// Ensure the SQL repository now contains block 401 but not 400...
System.out.println("Verifying that block 401 exists and block 400 does not...");
assertNotNull(repository.getBlockRepository().fromHeight(401));
assertNull(repository.getBlockRepository().fromHeight(400));
System.out.println("Block 401 exists, block 400 does not.");
// Import the remaining 399 blocks
System.out.println("Importing blocks 2 to 400 from the archive...");
BlockArchiveUtils.importFromArchive(2, 400, repository);
// Verify that block 3 matches the original
System.out.println("Verifying that block 3 matches the original data...");
BlockData block3DataPostArchive = repository.getBlockRepository().fromHeight(3);
assertArrayEquals(block3DataPreArchive.getSignature(), block3DataPostArchive.getSignature());
assertEquals(block3DataPreArchive.getHeight(), block3DataPostArchive.getHeight());
System.out.println("Block 3 data matches the original.");
// Orphan 2 more blocks, the last of which should be the final block that can be orphaned
// TODO: figure out why this is 1 block more than in the equivalent block archive V1 test
System.out.println("Orphaning 2 more blocks...");
BlockUtils.orphanBlocks(repository, 2);
System.out.println("Orphaned 2 blocks successfully.");
// Orphan another block, which should fail
System.out.println("Attempting to orphan another block, which should fail...");
Exception exception = null;
try {
BlockUtils.orphanBlocks(repository, 1);
} catch (DataException e) {
exception = e;
System.out.println("Caught expected DataException: " + e.getMessage());
}
// Ensure that a DataException is thrown because there is no more AT state data available
assertNotNull(exception);
assertEquals(DataException.class, exception.getClass());
System.out.println("DataException confirmed due to lack of AT states data.");
// FUTURE: we may be able to retain unique AT states when trimming, to avoid this exception
// and allow orphaning back through blocks with trimmed AT states.
System.out.println("testTrimArchivePruneAndOrphan completed successfully.");
}
}
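The recovery pattern exercised above generalises: once orphaning approaches the prune height, pruned blocks must be re-imported from the archive before orphaning can continue. A sketch, assuming the same test utilities:
BlockUtils.orphanBlocks(repository, 500); // walk back towards the prune height (1002 -> 502)
// Refill the repository from the archive, newest window first
BlockArchiveUtils.importFromArchive(401, 500, repository);
BlockArchiveUtils.importFromArchive(2, 400, repository);
// Orphaning can now continue until AT state data runs out, at which point a DataException is thrown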
@@ -477,16 +619,26 @@ public class BlockArchiveV2Tests extends Common {
public void testMissingAtStatesHeightIndex() throws DataException, SQLException {
try (final HSQLDBRepository repository = (HSQLDBRepository) RepositoryManager.getRepository()) {
System.out.println("Starting testMissingAtStatesHeightIndex");
// Firstly check that we're able to prune or archive when the index exists
System.out.println("Checking existence of ATStatesHeightIndex...");
assertTrue(repository.getATRepository().hasAtStatesHeightIndex());
assertTrue(RepositoryManager.canArchiveOrPrune());
System.out.println("ATStatesHeightIndex exists. Archiving and pruning are possible.");
// Delete the index
System.out.println("Dropping ATStatesHeightIndex...");
repository.prepareStatement("DROP INDEX ATSTATESHEIGHTINDEX").execute();
System.out.println("ATStatesHeightIndex dropped.");
// Now check that we're unable to prune or archive when the index doesn't exist
System.out.println("Verifying that ATStatesHeightIndex no longer exists...");
assertFalse(repository.getATRepository().hasAtStatesHeightIndex());
assertFalse(RepositoryManager.canArchiveOrPrune());
System.out.println("ATStatesHeightIndex does not exist. Archiving and pruning are disabled.");
System.out.println("testMissingAtStatesHeightIndex completed successfully.");
}
}
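The test implies a guard that must pass before any archive or prune work is attempted; sketched here from the two calls exercised above:
// Both conditions must hold before archiving or pruning
if (repository.getATRepository().hasAtStatesHeightIndex() && RepositoryManager.canArchiveOrPrune()) {
    // safe to archive or prune
}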
@@ -496,8 +648,10 @@ public class BlockArchiveV2Tests extends Common {
Path archivePath = Paths.get(Settings.getInstance().getRepositoryPath(), "archive").toAbsolutePath();
try {
FileUtils.deleteDirectory(archivePath.toFile());
System.out.println("Deleted archive directory at: " + archivePath);
} catch (IOException e) {
System.out.println("Failed to delete archive directory: " + e.getMessage());
}
}

View File

@@ -405,19 +405,26 @@ public class RepositoryTests extends Common {
Integer offset = null;
Boolean reverse = null;
hsqldb.getATRepository().getMatchingFinalATStates(codeHash, isFinished, dataByteOffset, expectedValue, minimumFinalHeight, limit, offset, reverse);
hsqldb.getATRepository().getMatchingFinalATStates(codeHash, null, null, isFinished, dataByteOffset, expectedValue, minimumFinalHeight, limit, offset, reverse);
} catch (DataException e) {
fail("HSQLDB bug #1580");
}
}
/** Specifically test LATERAL() usage in Chat repository */
/** Specifically test LATERAL() usage in Chat repository with hasChatReference */
@Test
public void testChatLateral() {
try (final HSQLDBRepository hsqldb = (HSQLDBRepository) RepositoryManager.getRepository()) {
String address = Crypto.toAddress(new byte[32]);
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58);
// Test without hasChatReference
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, null);
// Test with hasChatReference = true
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, true);
// Test with hasChatReference = false
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, false);
} catch (DataException e) {
fail("HSQLDB bug #1580");
}
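The new third argument is a three-valued Boolean filter; passing null preserves the old unfiltered behaviour. The true/false semantics below are inferred from the test comments, so treat them as an assumption:
Boolean hasChatReference = null; // null: no filtering on chat reference
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, hasChatReference);
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, true);  // presumably: only chats with a chat reference
hsqldb.getChatRepository().getActiveChats(address, ChatMessage.Encoding.BASE58, false); // presumably: only chats without one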

View File

@@ -74,7 +74,7 @@ public class TransferPrivsTests extends Common {
public void testAliceIntoNewAccountTransferPrivs() throws DataException {
try (final Repository repository = RepositoryManager.getRepository()) {
TestAccount alice = Common.getTestAccount(repository, "alice");
assertTrue(alice.canMint());
assertTrue(alice.canMint(false));
PrivateKeyAccount aliceMintingAccount = Common.getTestAccount(repository, "alice-reward-share");
@@ -86,8 +86,8 @@ public class TransferPrivsTests extends Common {
combineAccounts(repository, alice, randomAccount, aliceMintingAccount);
assertFalse(alice.canMint());
assertTrue(randomAccount.canMint());
assertFalse(alice.canMint(false));
assertTrue(randomAccount.canMint(false));
}
}
@@ -97,8 +97,8 @@ public class TransferPrivsTests extends Common {
TestAccount alice = Common.getTestAccount(repository, "alice");
TestAccount dilbert = Common.getTestAccount(repository, "dilbert");
assertTrue(alice.canMint());
assertTrue(dilbert.canMint());
assertTrue(alice.canMint(false));
assertTrue(dilbert.canMint(false));
// Dilbert has level, Alice does not so we need Alice to mint enough blocks to bump Dilbert's level post-combine
final int expectedPostCombineLevel = dilbert.getLevel() + 1;
@@ -118,11 +118,11 @@ public class TransferPrivsTests extends Common {
// Post-combine sender checks
checkSenderPostTransfer(postCombineAliceData);
assertFalse(alice.canMint());
assertFalse(alice.canMint(false));
// Post-combine recipient checks
checkRecipientPostTransfer(preCombineAliceData, preCombineDilbertData, postCombineDilbertData, expectedPostCombineLevel);
assertTrue(dilbert.canMint());
assertTrue(dilbert.canMint(false));
// Orphan previous block
BlockUtils.orphanLastBlock(repository);
@@ -130,12 +130,12 @@ public class TransferPrivsTests extends Common {
// Sender checks
AccountData orphanedAliceData = repository.getAccountRepository().getAccount(alice.getAddress());
checkAccountDataRestored("sender", preCombineAliceData, orphanedAliceData);
assertTrue(alice.canMint());
assertTrue(alice.canMint(false));
// Recipient checks
AccountData orphanedDilbertData = repository.getAccountRepository().getAccount(dilbert.getAddress());
checkAccountDataRestored("recipient", preCombineDilbertData, orphanedDilbertData);
assertTrue(dilbert.canMint());
assertTrue(dilbert.canMint(false));
}
}
@@ -145,8 +145,8 @@ public class TransferPrivsTests extends Common {
TestAccount alice = Common.getTestAccount(repository, "alice");
TestAccount dilbert = Common.getTestAccount(repository, "dilbert");
assertTrue(dilbert.canMint());
assertTrue(alice.canMint());
assertTrue(dilbert.canMint(false));
assertTrue(alice.canMint(false));
// Dilbert has level, Alice does not so we need Alice to mint enough blocks to surpass Dilbert's level post-combine
final int expectedPostCombineLevel = dilbert.getLevel() + 1;
@@ -166,11 +166,11 @@ public class TransferPrivsTests extends Common {
// Post-combine sender checks
checkSenderPostTransfer(postCombineDilbertData);
assertFalse(dilbert.canMint());
assertFalse(dilbert.canMint(false));
// Post-combine recipient checks
checkRecipientPostTransfer(preCombineDilbertData, preCombineAliceData, postCombineAliceData, expectedPostCombineLevel);
assertTrue(alice.canMint());
assertTrue(alice.canMint(false));
// Orphan previous block
BlockUtils.orphanLastBlock(repository);
@@ -178,12 +178,12 @@ public class TransferPrivsTests extends Common {
// Sender checks
AccountData orphanedDilbertData = repository.getAccountRepository().getAccount(dilbert.getAddress());
checkAccountDataRestored("sender", preCombineDilbertData, orphanedDilbertData);
assertTrue(dilbert.canMint());
assertTrue(dilbert.canMint(false));
// Recipient checks
AccountData orphanedAliceData = repository.getAccountRepository().getAccount(alice.getAddress());
checkAccountDataRestored("recipient", preCombineAliceData, orphanedAliceData);
assertTrue(alice.canMint());
assertTrue(alice.canMint(false));
}
}
@@ -202,8 +202,8 @@ public class TransferPrivsTests extends Common {
TestAccount chloe = Common.getTestAccount(repository, "chloe");
TestAccount dilbert = Common.getTestAccount(repository, "dilbert");
assertTrue(dilbert.canMint());
assertFalse(chloe.canMint());
assertTrue(dilbert.canMint(false));
assertFalse(chloe.canMint(false));
// COMBINE DILBERT INTO CHLOE
@@ -225,16 +225,16 @@ public class TransferPrivsTests extends Common {
// Post-combine sender checks
checkSenderPostTransfer(post1stCombineDilbertData);
assertFalse(dilbert.canMint());
assertFalse(dilbert.canMint(false));
// Post-combine recipient checks
checkRecipientPostTransfer(pre1stCombineDilbertData, pre1stCombineChloeData, post1stCombineChloeData, expectedPost1stCombineLevel);
assertTrue(chloe.canMint());
assertTrue(chloe.canMint(false));
// COMBINE ALICE INTO CHLOE
assertTrue(alice.canMint());
assertTrue(chloe.canMint());
assertTrue(alice.canMint(false));
assertTrue(chloe.canMint(false));
// Alice needs to mint enough blocks to surpass Chloe's level post-combine
final int expectedPost2ndCombineLevel = chloe.getLevel() + 1;
@@ -254,11 +254,11 @@ public class TransferPrivsTests extends Common {
// Post-combine sender checks
checkSenderPostTransfer(post2ndCombineAliceData);
assertFalse(alice.canMint());
assertFalse(alice.canMint(false));
// Post-combine recipient checks
checkRecipientPostTransfer(pre2ndCombineAliceData, pre2ndCombineChloeData, post2ndCombineChloeData, expectedPost2ndCombineLevel);
assertTrue(chloe.canMint());
assertTrue(chloe.canMint(false));
// Orphan 2nd combine
BlockUtils.orphanLastBlock(repository);
@@ -266,12 +266,12 @@ public class TransferPrivsTests extends Common {
// Sender checks
AccountData orphanedAliceData = repository.getAccountRepository().getAccount(alice.getAddress());
checkAccountDataRestored("sender", pre2ndCombineAliceData, orphanedAliceData);
assertTrue(alice.canMint());
assertTrue(alice.canMint(false));
// Recipient checks
AccountData orphanedChloeData = repository.getAccountRepository().getAccount(chloe.getAddress());
checkAccountDataRestored("recipient", pre2ndCombineChloeData, orphanedChloeData);
assertTrue(chloe.canMint());
assertTrue(chloe.canMint(false));
// Orphan 1st combine
BlockUtils.orphanToBlock(repository, pre1stCombineBlockHeight);
@@ -279,7 +279,7 @@ public class TransferPrivsTests extends Common {
// Sender checks
AccountData orphanedDilbertData = repository.getAccountRepository().getAccount(dilbert.getAddress());
checkAccountDataRestored("sender", pre1stCombineDilbertData, orphanedDilbertData);
assertTrue(dilbert.canMint());
assertTrue(dilbert.canMint(false));
// Recipient checks
orphanedChloeData = repository.getAccountRepository().getAccount(chloe.getAddress());
@@ -287,7 +287,7 @@ public class TransferPrivsTests extends Common {
// Chloe canMint() would return true here due to Alice-Chloe reward-share minting at top of method, so undo that minting by orphaning back to block 1
BlockUtils.orphanToBlock(repository, 1);
assertFalse(chloe.canMint());
assertFalse(chloe.canMint(false));
}
}
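Every call site in this file migrates from canMint() to canMint(false). The new boolean's meaning is not visible in this diff, so the sketch below only records the mechanical change:
// Old: alice.canMint()
// New: canMint takes a boolean parameter; these tests always pass false
boolean mintable = alice.canMint(false);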

View File

@@ -26,7 +26,7 @@ public class CrossChainApiTests extends ApiCommon {
@Test
public void testGetCompletedTrades() {
long minimumTimestamp = System.currentTimeMillis();
assertNoApiError((limit, offset, reverse) -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, minimumTimestamp, limit, offset, reverse));
assertNoApiError((limit, offset, reverse) -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, minimumTimestamp, null, null, limit, offset, reverse));
}
@Test
@@ -35,8 +35,8 @@ public class CrossChainApiTests extends ApiCommon {
Integer offset = null;
Boolean reverse = null;
assertApiError(ApiError.INVALID_CRITERIA, () -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, -1L /*minimumTimestamp*/, limit, offset, reverse));
assertApiError(ApiError.INVALID_CRITERIA, () -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, 0L /*minimumTimestamp*/, limit, offset, reverse));
assertApiError(ApiError.INVALID_CRITERIA, () -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, -1L /*minimumTimestamp*/, null, null, limit, offset, reverse));
assertApiError(ApiError.INVALID_CRITERIA, () -> this.crossChainResource.getCompletedTrades(SPECIFIC_BLOCKCHAIN, 0L /*minimumTimestamp*/, null, null, limit, offset, reverse));
}
}

View File

@@ -3,10 +3,15 @@ package org.qortal.test.api;
import org.json.simple.JSONObject;
import org.junit.Assert;
import org.junit.Test;
import org.qortal.api.model.CrossChainTradeLedgerEntry;
import org.qortal.api.resource.CrossChainUtils;
import org.qortal.test.common.ApiCommon;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
public class CrossChainUtilsTests extends ApiCommon {
@@ -137,4 +142,53 @@ public class CrossChainUtilsTests extends ApiCommon {
Assert.assertEquals(5, versionDecimal, 0.001);
Assert.assertFalse(thrown);
}
@Test
public void testWriteToLedgerHeaderOnly() throws IOException {
CrossChainUtils.writeToLedger(new PrintWriter(System.out), new ArrayList<>());
}
@Test
public void testWriteToLedgerOneRow() throws IOException {
CrossChainUtils.writeToLedger(
new PrintWriter(System.out),
List.of(
new CrossChainTradeLedgerEntry(
"QORT",
"LTC",
1000,
0,
"LTC",
1,
System.currentTimeMillis())
)
);
}
@Test
public void testWriteToLedgerTwoRows() throws IOException {
CrossChainUtils.writeToLedger(
new PrintWriter(System.out),
List.of(
new CrossChainTradeLedgerEntry(
"QORT",
"LTC",
1000,
0,
"LTC",
1,
System.currentTimeMillis()
),
new CrossChainTradeLedgerEntry(
"LTC",
"QORT",
1,
0,
"LTC",
1000,
System.currentTimeMillis()
)
)
);
}
}
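The ledger tests write to System.out, so they only prove writeToLedger doesn't throw. To assert on the output you could capture it instead; a sketch assuming only the writeToLedger(PrintWriter, List) signature used above, plus java.io.StringWriter from the JDK:
StringWriter buffer = new StringWriter();
PrintWriter out = new PrintWriter(buffer);
CrossChainUtils.writeToLedger(out, new ArrayList<>()); // header-only ledger
out.flush();
String ledger = buffer.toString();
// assert on the header line, column count, etc.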

View File

@@ -145,56 +145,6 @@ public class ArbitraryDataStorageCapacityTests extends Common {
}
}
@Test
public void testDeleteRandomFilesForName() throws DataException, IOException, InterruptedException, IllegalAccessException {
try (final Repository repository = RepositoryManager.getRepository()) {
String identifier = null; // Not used for this test
Service service = Service.ARBITRARY_DATA;
int chunkSize = 100;
int dataLength = 900; // Actual data length will be longer due to encryption
// Set originalCopyIndicatorFileEnabled to false, otherwise nothing will be deleted as it all originates from this node
FieldUtils.writeField(Settings.getInstance(), "originalCopyIndicatorFileEnabled", false, true);
// Alice hosts some data (with 10 chunks)
PrivateKeyAccount alice = Common.getTestAccount(repository, "alice");
String aliceName = "alice";
RegisterNameTransactionData transactionData = new RegisterNameTransactionData(TestTransaction.generateBase(alice), aliceName, "");
transactionData.setFee(new RegisterNameTransaction(null, null).getUnitFee(transactionData.getTimestamp()));
TransactionUtils.signAndMint(repository, transactionData, alice);
Path alicePath = ArbitraryUtils.generateRandomDataPath(dataLength);
ArbitraryDataFile aliceArbitraryDataFile = ArbitraryUtils.createAndMintTxn(repository, Base58.encode(alice.getPublicKey()), alicePath, aliceName, identifier, ArbitraryTransactionData.Method.PUT, service, alice, chunkSize);
// Bob hosts some data too (also with 10 chunks)
PrivateKeyAccount bob = Common.getTestAccount(repository, "bob");
String bobName = "bob";
transactionData = new RegisterNameTransactionData(TestTransaction.generateBase(bob), bobName, "");
transactionData.setFee(new RegisterNameTransaction(null, null).getUnitFee(transactionData.getTimestamp()));
TransactionUtils.signAndMint(repository, transactionData, bob);
Path bobPath = ArbitraryUtils.generateRandomDataPath(dataLength);
ArbitraryDataFile bobArbitraryDataFile = ArbitraryUtils.createAndMintTxn(repository, Base58.encode(bob.getPublicKey()), bobPath, bobName, identifier, ArbitraryTransactionData.Method.PUT, service, bob, chunkSize);
// All 20 chunks should exist
assertEquals(10, aliceArbitraryDataFile.chunkCount());
assertTrue(aliceArbitraryDataFile.allChunksExist());
assertEquals(10, bobArbitraryDataFile.chunkCount());
assertTrue(bobArbitraryDataFile.allChunksExist());
// Now pretend that Bob has reached his storage limit - this should delete random files
// Run it 10 times to remove the likelihood of the randomizer always picking Alice's files
for (int i=0; i<10; i++) {
ArbitraryDataCleanupManager.getInstance().storageLimitReachedForName(repository, bobName);
}
// Alice should still have all chunks
assertTrue(aliceArbitraryDataFile.allChunksExist());
// Bob should be missing some chunks
assertFalse(bobArbitraryDataFile.allChunksExist());
}
}
private void deleteListsDirectory() {
// Delete lists directory if exists
Path listsPath = Paths.get(Settings.getInstance().getListsPath());

View File

@@ -73,14 +73,14 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We should store and pre-fetch data for this transaction
assertEquals(StoragePolicy.FOLLOWED_OR_VIEWED, Settings.getInstance().getStoragePolicy());
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
// Now unfollow the name
assertTrue(ResourceListManager.getInstance().removeFromList("followedNames", name, false));
// We should store but not pre-fetch data for this transaction
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
}
}
@@ -108,14 +108,14 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We should store and pre-fetch data for this transaction
assertEquals(StoragePolicy.FOLLOWED, Settings.getInstance().getStoragePolicy());
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
// Now unfollow the name
assertTrue(ResourceListManager.getInstance().removeFromList("followedNames", name, false));
// We shouldn't store or pre-fetch data for this transaction
assertFalse(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
}
}
@@ -143,14 +143,14 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We should store but not pre-fetch data for this transaction
assertEquals(StoragePolicy.VIEWED, Settings.getInstance().getStoragePolicy());
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
// Now unfollow the name
assertTrue(ResourceListManager.getInstance().removeFromList("followedNames", name, false));
// We should store but not pre-fetch data for this transaction
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
}
}
@@ -178,14 +178,14 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We should store and pre-fetch data for this transaction
assertEquals(StoragePolicy.ALL, Settings.getInstance().getStoragePolicy());
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
// Now unfollow the name
assertTrue(ResourceListManager.getInstance().removeFromList("followedNames", name, false));
// We should store and pre-fetch data for this transaction
assertTrue(storageManager.canStoreData(arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertTrue(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
}
}
@@ -213,14 +213,14 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We shouldn't store or pre-fetch data for this transaction
assertEquals(StoragePolicy.NONE, Settings.getInstance().getStoragePolicy());
assertFalse(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
// Now unfollow the name
assertTrue(ResourceListManager.getInstance().removeFromList("followedNames", name, false));
// We shouldn't store or pre-fetch data for this transaction
assertFalse(storageManager.canStoreData(arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData));
assertFalse(storageManager.shouldPreFetchData(repository, arbitraryTransactionData).isPass());
}
}
@@ -236,7 +236,7 @@ public class ArbitraryDataStoragePolicyTests extends Common {
// We should store but not pre-fetch data for this transaction
assertTrue(storageManager.canStoreData(transactionData));
assertFalse(storageManager.shouldPreFetchData(repository, transactionData));
assertFalse(storageManager.shouldPreFetchData(repository, transactionData).isPass());
}
}
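shouldPreFetchData now returns a decision object rather than a raw boolean; the tests only rely on its isPass() accessor, and that is all this sketch assumes:
// The concrete return type isn't named in this diff; only isPass() is exercised
var decision = storageManager.shouldPreFetchData(repository, arbitraryTransactionData);
if (decision.isPass()) {
    // schedule the transaction's data for pre-fetching
}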

Some files were not shown because too many files have changed in this diff.