Mirror of https://github.com/Qortal/qortal.git (synced 2025-04-21 18:37:50 +00:00)

Compare commits

No commits in common. "master" and "v4.6.0" have entirely different histories.

README.md (35 changed lines)
@@ -1,19 +1,4 @@
-# Qortal Project - Qortal Core - Primary Repository
+# Qortal Project - Official Repo

-The Qortal Core is the blockchain and node component of the overall project. It contains the primary API, and the ability to make calls to create transactions and interact with the Qortal Blockchain Network.

-In order to run the Qortal Core, a machine with Java 11+ installed is required. Minimum RAM specs will vary depending on settings, but as little as 4GB of RAM should be acceptable in most scenarios.

-Qortal is a complete infrastructure platform with a blockchain backend. It is capable of indefinite web and application hosting with no continual fees, replacement of DNS and centralized name and communications systems, and is the foundation of the next generation of digital infrastructure for the world. Qortal is unique in nearly every way, and was written from scratch to address as many concerns from both the existing 'blockchain space' and the 'typical internet' as possible, while maintaining a system that is easy to use and able to run on 'any' computer.

-Qortal contains extensive functionality geared toward complete decentralization of the digital world: removal of 'middlemen' of any kind from all transactions, and the ability to publish websites and applications that require no continual fees, on a name that is truly owned by the account that registered it, or purchased it from another. A single name on Qortal is capable of being both a namespace and a 'username'. That single name can have an application, website, public and private data, communications, authentication, the namespace itself and more, all of which can subsequently be sold to anyone else without the need to change any 'hosting' or DNS entries (which do not exist), email (which doesn't exist), etc., while maintaining the same functionality as those replaced web 2.0 features.

-Over time Qortal has progressed into a fully featured environment catering to any and all types of people and organizations, and will continue to advance as time goes on, bringing more features, capability, device support, and available replacements for web 2.0, ultimately building a new, completely user-controlled digital world without limits.

-Qortal has no owner, no company on top of it, and is completely community built, run, and funded. A community-established and run group of developers known as the 'dev-group' or Qortal Development Group makes group-approval-based decisions for the project's future. If you are a developer interested in assisting with the project, you may reach out to the Qortal Development Group in any of the available Qortal community locations, either on the Qortal network itself or on one of the temporary centralized social media locations.

-Building the future one block at a time. Welcome to Qortal.

-# Building the Qortal Core from Source

 ## Build / run

@@ -25,21 +10,3 @@ Building the future one block at a time. Welcome to Qortal.
 - Run JAR in same working directory as *settings.json*: `java -jar target/qortal-1.0.jar`
 - Wrap in shell script, add JVM flags, redirection, backgrounding, etc. as necessary.
 - Or use supplied example shell script: *start.sh*

-# Using a pre-built Qortal 'jar' binary

-If you would prefer to utilize a released version of Qortal, you may do so by downloading one of the available releases from the releases page, which are also linked on https://qortal.org and https://qortal.dev.

-# Learning Q-App Development

-https://qortal.dev contains dev documentation for building JS/React (and other client-side languages) applications, or 'Q-Apps', on Qortal. Q-Apps are published on Registered Qortal Names and, aside from a single Name Registration fee and a fraction of QORT for a publish transaction, require zero continual costs. These applications become more redundant with each new access from a new Qortal Node, making your application faster for the next user to download and stronger as time goes on. Q-Apps live indefinitely in the history of the blockchain-secured Qortal Data Network (QDN).

-# How to learn more

-If the project interests you, you may learn more from the various web2 and QDN based websites focused on introductory information.

-https://qortal.org - primary internet presence
-https://qortal.dev - secondary and development focused website with links to many new developments and documentation
-https://wiki.qortal.org - community built and managed wiki with detailed information regarding the project

-Links to the Telegram and Discord communities are at the top of https://qortal.org as well.
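The removed build/run notes above suggest wrapping the JAR invocation in a shell script (the repo ships a *start.sh* example). Below is a minimal sketch of such a wrapper, assuming a Unix-like host; the JVM flags, log/PID file names, and backgrounding approach are illustrative assumptions, and the actual start.sh in the repository may differ.

```bash
#!/usr/bin/env bash
# Minimal wrapper for the Qortal Core JAR (sketch; not the repository's start.sh).
set -euo pipefail

# Run from the directory that holds settings.json, as the README instructs.
cd "$(dirname "$0")"

# Illustrative JVM sizing; the README notes ~4GB of system RAM can be enough.
JVM_OPTS="-Xss256k -Xmx2g"

# Background the node, redirect output, and record the PID (file names are assumptions).
nohup java ${JVM_OPTS} -jar target/qortal-1.0.jar >> qortal.log 2>&1 &
echo $! > qortal.pid
```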
@@ -1,4 +1,3 @@
 {
-  "apiDocumentationEnabled": true,
-  "apiWhitelistEnabled": false
+  "apiDocumentationEnabled": true
 }
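The small settings diff above only toggles API-related flags. For a local run, the README's "same working directory as settings.json" step can start from a minimal file containing just the keys visible in this diff; everything else falls back to defaults. A sketch:

```bash
# Write a minimal settings.json next to the JAR (only the keys shown in the diff above).
cat > settings.json <<'EOF'
{
  "apiDocumentationEnabled": true,
  "apiWhitelistEnabled": false
}
EOF
```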
Binary file not shown (before: 1.7 MiB).
Binary file not shown (before: 160 KiB).
File diff suppressed because it is too large.
@@ -2,9 +2,7 @@

 ## Prerequisites

-* AdvancedInstaller v19.4 or better, and enterprise licence.
+* AdvancedInstaller v19.4 or better, and enterprise licence if translations are required
-* Qortal has an open source license; however, it currently (as of December 2024) only supports up to version 19. (We may need to reach out to Advanced Installer again to obtain a new license at some point, if needed.)
-* Reach out to @crowetic for links to the installer install files, and license.
 * Installed AdoptOpenJDK v17 64bit, full JDK *not* JRE

 ## General build instructions

@@ -12,12 +10,6 @@
 If this is your first time opening the `qortal.aip` file then you might need to adjust
 configured paths, or create a dummy `D:` drive with the expected layout.

-Opening the aip file from within a clone of the qortal repo also works, if you have a separate Windows machine set up to do the build.

-You may need to change the location of the 'jre64' files inside Advanced Installer, if it is set to a path that your build machine doesn't have.

-The Java memory arguments can be set manually, but as of December 2024 they have been reset back to system defaults. This should include the G1GC garbage collector.

 Typical build procedure:

 * Place the `qortal.jar` file in `Install-Files\`
pom.xml (20 changed lines)

@@ -3,7 +3,7 @@
     <modelVersion>4.0.0</modelVersion>
     <groupId>org.qortal</groupId>
     <artifactId>qortal</artifactId>
-    <version>4.7.1</version>
+    <version>4.6.0</version>
     <packaging>jar</packaging>
     <properties>
         <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
@@ -16,19 +16,19 @@
         <ciyam-at.version>1.4.2</ciyam-at.version>
         <commons-net.version>3.8.0</commons-net.version>
         <commons-text.version>1.12.0</commons-text.version>
-        <commons-io.version>2.18.0</commons-io.version>
+        <commons-io.version>2.17.0</commons-io.version>
         <commons-compress.version>1.27.1</commons-compress.version>
         <commons-lang3.version>3.17.0</commons-lang3.version>
         <dagger.version>1.2.2</dagger.version>
         <extendedset.version>0.12.3</extendedset.version>
         <git-commit-id-plugin.version>4.9.10</git-commit-id-plugin.version>
-        <grpc.version>1.68.1</grpc.version>
+        <grpc.version>1.66.0</grpc.version>
-        <guava.version>33.3.1-jre</guava.version>
+        <guava.version>33.3.0-jre</guava.version>
         <hamcrest-library.version>2.2</hamcrest-library.version>
         <homoglyph.version>1.2.1</homoglyph.version>
-        <hsqldb.version>2.7.4</hsqldb.version>
+        <hsqldb.version>2.5.1</hsqldb.version>
-        <icu4j.version>76.1</icu4j.version>
+        <icu4j.version>75.1</icu4j.version>
-        <java-diff-utils.version>4.15</java-diff-utils.version>
+        <java-diff-utils.version>4.12</java-diff-utils.version>
         <javax.servlet-api.version>4.0.1</javax.servlet-api.version>
         <jaxb-runtime.version>2.3.9</jaxb-runtime.version>
         <jersey.version>2.42</jersey.version>
@@ -45,17 +45,17 @@
         <maven-dependency-plugin.version>3.6.1</maven-dependency-plugin.version>
         <maven-jar-plugin.version>3.4.2</maven-jar-plugin.version>
         <maven-package-info-plugin.version>1.1.0</maven-package-info-plugin.version>
-        <maven-plugin.version>2.18.0</maven-plugin.version>
+        <maven-plugin.version>2.17.1</maven-plugin.version>
         <maven-reproducible-build-plugin.version>0.17</maven-reproducible-build-plugin.version>
         <maven-resources-plugin.version>3.3.1</maven-resources-plugin.version>
         <maven-shade-plugin.version>3.6.0</maven-shade-plugin.version>
-        <maven-surefire-plugin.version>3.5.2</maven-surefire-plugin.version>
+        <maven-surefire-plugin.version>3.5.0</maven-surefire-plugin.version>
         <protobuf.version>3.25.3</protobuf.version>
         <replacer.version>1.5.3</replacer.version>
         <simplemagic.version>1.17</simplemagic.version>
         <slf4j.version>1.7.36</slf4j.version>
         <swagger-api.version>2.0.10</swagger-api.version>
-        <swagger-ui.version>5.18.2</swagger-ui.version>
+        <swagger-ui.version>5.17.14</swagger-ui.version>
         <upnp.version>1.2</upnp.version>
         <xz.version>1.10</xz.version>
     </properties>
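The pom.xml above drives a standard Maven build, so producing the runnable JAR referenced in the README is a matter of the usual lifecycle. A sketch, assuming Maven and a JDK are already installed; the exact artifact name follows the `<artifactId>` and `<version>` properties (4.7.1 on master, 4.6.0 on the tag):

```bash
# Build the Core from source using the standard Maven lifecycle.
mvn clean package

# The resulting JAR lands under target/, named <artifactId>-<version>.jar.
ls target/qortal-*.jar
```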
@@ -1,173 +0,0 @@
-package org.hsqldb.jdbc;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.hsqldb.jdbc.pool.JDBCPooledConnection;
-import org.qortal.data.system.DbConnectionInfo;
-import org.qortal.repository.hsqldb.HSQLDBRepositoryFactory;
-
-import javax.sql.ConnectionEvent;
-import javax.sql.PooledConnection;
-import java.sql.Connection;
-import java.sql.SQLException;
-import java.util.Comparator;
-import java.util.List;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.stream.Collectors;
-
-/**
- * Class HSQLDBPoolMonitored
- *
- * This class uses the same logic as HSQLDBPool. The only difference is it monitors the state of every connection
- * to the database. This is used for debugging purposes only.
- */
-public class HSQLDBPoolMonitored extends HSQLDBPool {
-
-    private static final Logger LOGGER = LogManager.getLogger(HSQLDBRepositoryFactory.class);
-
-    private static final String EMPTY = "Empty";
-    private static final String AVAILABLE = "Available";
-    private static final String ALLOCATED = "Allocated";
-
-    private ConcurrentHashMap<Integer, DbConnectionInfo> infoByIndex;
-
-    public HSQLDBPoolMonitored(int poolSize) {
-        super(poolSize);
-
-        this.infoByIndex = new ConcurrentHashMap<>(poolSize);
-    }
-
-    /**
-     * Tries to retrieve a new connection using the properties that have already been
-     * set.
-     *
-     * @return a connection to the data source, or null if no spare connections in pool
-     * @exception SQLException if a database access error occurs
-     */
-    public Connection tryConnection() throws SQLException {
-        for (int i = 0; i < states.length(); i++) {
-            if (states.compareAndSet(i, RefState.available, RefState.allocated)) {
-                JDBCPooledConnection pooledConnection = connections[i];
-
-                if (pooledConnection == null)
-                    // Probably shutdown situation
-                    return null;
-
-                infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
-
-                return pooledConnection.getConnection();
-            }
-
-            if (states.compareAndSet(i, RefState.empty, RefState.allocated)) {
-                try {
-                    JDBCPooledConnection pooledConnection = (JDBCPooledConnection) source.getPooledConnection();
-
-                    if (pooledConnection == null)
-                        // Probably shutdown situation
-                        return null;
-
-                    pooledConnection.addConnectionEventListener(this);
-                    pooledConnection.addStatementEventListener(this);
-                    connections[i] = pooledConnection;
-
-                    infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
-
-                    return pooledConnection.getConnection();
-                } catch (SQLException e) {
-                    states.set(i, RefState.empty);
-                    infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
-                }
-            }
-        }
-
-        return null;
-    }
-
-    public Connection getConnection() throws SQLException {
-        int var1 = 300;
-        if (this.source.loginTimeout != 0) {
-            var1 = this.source.loginTimeout * 10;
-        }
-
-        if (this.closed) {
-            throw new SQLException("connection pool is closed");
-        } else {
-            for (int var2 = 0; var2 < var1; ++var2) {
-                for (int var3 = 0; var3 < this.states.length(); ++var3) {
-                    if (this.states.compareAndSet(var3, 1, 2)) {
-                        infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
-                        return this.connections[var3].getConnection();
-                    }
-
-                    if (this.states.compareAndSet(var3, 0, 2)) {
-                        try {
-                            JDBCPooledConnection var4 = (JDBCPooledConnection) this.source.getPooledConnection();
-                            var4.addConnectionEventListener(this);
-                            var4.addStatementEventListener(this);
-                            this.connections[var3] = var4;
-
-                            infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), ALLOCATED));
-
-                            return this.connections[var3].getConnection();
-                        } catch (SQLException var6) {
-                            this.states.set(var3, 0);
-                            infoByIndex.put(var3, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
-                        }
-                    }
-                }
-
-                try {
-                    Thread.sleep(100L);
-                } catch (InterruptedException var5) {
-                }
-            }
-
-            throw JDBCUtil.invalidArgument();
-        }
-    }
-
-    public void connectionClosed(ConnectionEvent event) {
-        PooledConnection connection = (PooledConnection) event.getSource();
-
-        for (int i = 0; i < connections.length; i++) {
-            if (connections[i] == connection) {
-                states.set(i, RefState.available);
-                infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), AVAILABLE));
-                break;
-            }
-        }
-    }
-
-    public void connectionErrorOccurred(ConnectionEvent event) {
-        PooledConnection connection = (PooledConnection) event.getSource();
-
-        for (int i = 0; i < connections.length; i++) {
-            if (connections[i] == connection) {
-                states.set(i, RefState.allocated);
-                connections[i] = null;
-                states.set(i, RefState.empty);
-                infoByIndex.put(i, new DbConnectionInfo(System.currentTimeMillis(), Thread.currentThread().getName(), EMPTY));
-                break;
-            }
-        }
-    }
-
-    public List<DbConnectionInfo> getDbConnectionsStates() {
-        return infoByIndex.values().stream()
-            .sorted(Comparator.comparingLong(DbConnectionInfo::getUpdated))
-            .collect(Collectors.toList());
-    }
-
-    private int findConnectionIndex(ConnectionEvent connectionEvent) {
-        PooledConnection pooledConnection = (PooledConnection) connectionEvent.getSource();
-
-        for (int i = 0; i < this.connections.length; ++i) {
-            if (this.connections[i] == pooledConnection) {
-                return i;
-            }
-        }
-
-        return -1;
-    }
-}
@@ -1,17 +1,14 @@
 package org.qortal;

-import org.apache.commons.io.FileUtils;
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
 import org.bouncycastle.jce.provider.BouncyCastleProvider;
 import org.bouncycastle.jsse.provider.BouncyCastleJsseProvider;
 import org.qortal.api.ApiKey;
 import org.qortal.api.ApiRequest;
-import org.qortal.controller.Controller;
 import org.qortal.controller.RestartNode;
 import org.qortal.settings.Settings;

-import java.io.File;
 import java.io.IOException;
 import java.lang.management.ManagementFactory;
 import java.nio.file.Files;
@@ -19,8 +16,6 @@ import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.security.Security;
 import java.util.*;
-import java.util.concurrent.TimeUnit;
-import java.util.concurrent.locks.ReentrantLock;
 import java.util.stream.Collectors;

 import static org.qortal.controller.RestartNode.AGENTLIB_JVM_HOLDER_ARG;
@@ -43,7 +38,7 @@ public class ApplyRestart {
     private static final String JAVA_TOOL_OPTIONS_NAME = "JAVA_TOOL_OPTIONS";
     private static final String JAVA_TOOL_OPTIONS_VALUE = "";

-    private static final long CHECK_INTERVAL = 30 * 1000L; // ms
+    private static final long CHECK_INTERVAL = 10 * 1000L; // ms
     private static final int MAX_ATTEMPTS = 12;

     public static void main(String[] args) {
@@ -56,38 +51,21 @@
         else
             Settings.getInstance();

-        LOGGER.info("Applying restart this can take up to 5 minutes...");
+        LOGGER.info("Applying restart...");

         // Shutdown node using API
         if (!shutdownNode())
             return;

-        try {
-            // Give some time for shutdown
-            TimeUnit.SECONDS.sleep(60);
-
-            // Remove blockchain lock if exist
-            ReentrantLock blockchainLock = Controller.getInstance().getBlockchainLock();
-            if (blockchainLock.isLocked())
-                blockchainLock.unlock();
-
-            // Remove blockchain lock file if still exist
-            TimeUnit.SECONDS.sleep(60);
-            deleteLock();
-
         // Restart node
-            TimeUnit.SECONDS.sleep(15);
         restartNode(args);

         LOGGER.info("Restarting...");
-        } catch (InterruptedException e) {
-            LOGGER.error("Unable to restart", e);
-        }
     }

     private static boolean shutdownNode() {
         String baseUri = "http://localhost:" + Settings.getInstance().getApiPort() + "/";
-        LOGGER.debug(() -> String.format("Shutting down node using API via %s", baseUri));
+        LOGGER.info(() -> String.format("Shutting down node using API via %s", baseUri));

         // The /admin/stop endpoint requires an API key, which may or may not be already generated
         boolean apiKeyNewlyGenerated = false;
@@ -117,17 +95,10 @@
         String response = ApiRequest.perform(baseUri + "admin/stop", params);
         if (response == null) {
             // No response - consider node shut down
-            try {
-                TimeUnit.SECONDS.sleep(30);
-            } catch (InterruptedException e) {
-                throw new RuntimeException(e);
-            }
-
             if (apiKeyNewlyGenerated) {
                 // API key was newly generated for restarting node, so we need to remove it
                 ApplyRestart.removeGeneratedApiKey();
             }

             return true;
         }

@@ -163,22 +134,7 @@
             apiKey.delete();

         } catch (IOException e) {
-            LOGGER.error("Error loading or deleting API key: {}", e.getMessage());
+            LOGGER.info("Error loading or deleting API key: {}", e.getMessage());
-        }
-    }
-
-    private static void deleteLock() {
-        // Get the repository path from settings
-        String repositoryPath = Settings.getInstance().getRepositoryPath();
-        LOGGER.debug(String.format("Repository path is: %s", repositoryPath));
-
-        try {
-            Path root = Paths.get(repositoryPath);
-            File lockFile = new File(root.resolve("blockchain.lck").toUri());
-            LOGGER.debug("Lockfile is: {}", lockFile);
-            FileUtils.forceDelete(FileUtils.getFile(lockFile));
-        } catch (IOException e) {
-            LOGGER.debug("Error deleting blockchain lock file: {}", e.getMessage());
         }
     }

@@ -194,10 +150,9 @@

         List<String> javaCmd;
         if (Files.exists(exeLauncher)) {
-            javaCmd = List.of(exeLauncher.toString());
+            javaCmd = Arrays.asList(exeLauncher.toString());
         } else {
             javaCmd = new ArrayList<>();

             // Java runtime binary itself
             javaCmd.add(javaBinary.toString());
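ApplyRestart's shutdownNode() drives the node through its own HTTP API rather than killing the process. The request it performs is roughly the sketch below; the port (12391), the X-API-KEY header name, and the key file location are assumptions based on a typical Qortal setup, since the real code reads the port from settings.json and loads the key via the ApiKey class.

```bash
# Roughly what shutdownNode() does: ask the local node to stop via /admin/stop.
# Port, header name, and key file path are assumptions; adjust to your own node.
API_KEY="$(cat apikey.txt)"   # hypothetical key file location
curl -s "http://localhost:12391/admin/stop" -H "X-API-KEY: ${API_KEY}"
```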
@@ -14,7 +14,6 @@ import org.qortal.repository.NameRepository;
 import org.qortal.repository.Repository;
 import org.qortal.settings.Settings;
 import org.qortal.utils.Base58;
-import org.qortal.utils.Groups;

 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
@@ -199,85 +198,66 @@ public class Account {

     /** Returns whether account can be considered a "minting account".
      * <p>
-     * To be considered a "minting account", the account needs to pass some of these tests:<br>
+     * To be considered a "minting account", the account needs to pass all of these tests:<br>
      * <ul>
      * <li>account's level is at least <tt>minAccountLevelToMint</tt> from blockchain config</li>
-     * <li>account's address has registered a name</li>
-     * <li>account's address is a member of the minter group</li>
+     * <li>account's address have registered a name</li>
+     * <li>account's address is member of minter group</li>
      * </ul>
      *
-     * @param isGroupValidated true if this account has already been validated for MINTER Group membership
      * @return true if account can be considered "minting account"
      * @throws DataException
      */
-    public boolean canMint(boolean isGroupValidated) throws DataException {
-        AccountData accountData = this.repository.getAccountRepository().getAccount(this.address);
-        NameRepository nameRepository = this.repository.getNameRepository();
-        GroupRepository groupRepository = this.repository.getGroupRepository();
-        String myAddress = accountData.getAddress();
-
-        int blockchainHeight = this.repository.getBlockRepository().getBlockchainHeight();
-
-        int levelToMint;
-
-        if( blockchainHeight >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() ) {
-            levelToMint = 0;
-        }
-        else {
-            levelToMint = BlockChain.getInstance().getMinAccountLevelToMint();
-        }
-
-        int level = accountData.getLevel();
-        List<Integer> groupIdsToMint = Groups.getGroupIdsToMint( BlockChain.getInstance(), blockchainHeight );
-        int nameCheckHeight = BlockChain.getInstance().getOnlyMintWithNameHeight();
-        int groupCheckHeight = BlockChain.getInstance().getGroupMemberCheckHeight();
-        int removeNameCheckHeight = BlockChain.getInstance().getRemoveOnlyMintWithNameHeight();
-
-        // Can only mint if:
-        // Account's level is at least minAccountLevelToMint from blockchain config
-        if (blockchainHeight < nameCheckHeight) {
-            if (Account.isFounder(accountData.getFlags())) {
-                return accountData.getBlocksMintedPenalty() == 0;
-            } else {
-                return level >= levelToMint;
-            }
-        }
-
-        // Can only mint on onlyMintWithNameHeight from blockchain config if:
-        // Account's level is at least minAccountLevelToMint from blockchain config
-        // Account's address has registered a name
-        if (blockchainHeight >= nameCheckHeight && blockchainHeight < groupCheckHeight) {
-            List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
-            if (Account.isFounder(accountData.getFlags())) {
-                return accountData.getBlocksMintedPenalty() == 0 && !myName.isEmpty();
-            } else {
-                return level >= levelToMint && !myName.isEmpty();
-            }
-        }
-
-        // Can only mint on groupMemberCheckHeight from blockchain config if:
-        // Account's level is at least minAccountLevelToMint from blockchain config
-        // Account's address has registered a name
-        // Account's address is a member of the minter group
-        if (blockchainHeight >= groupCheckHeight && blockchainHeight < removeNameCheckHeight) {
-            List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
-            if (Account.isFounder(accountData.getFlags())) {
-                return accountData.getBlocksMintedPenalty() == 0 && !myName.isEmpty() && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
-            } else {
-                return level >= levelToMint && !myName.isEmpty() && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
-            }
-        }
-
-        // Can only mint on removeOnlyMintWithNameHeight from blockchain config if:
-        // Account's level is at least minAccountLevelToMint from blockchain config
-        // Account's address is a member of the minter group
-        if (blockchainHeight >= removeNameCheckHeight) {
-            if (Account.isFounder(accountData.getFlags())) {
-                return accountData.getBlocksMintedPenalty() == 0 && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
-            } else {
-                return level >= levelToMint && (isGroupValidated || Groups.memberExistsInAnyGroup(groupRepository, groupIdsToMint, myAddress));
-            }
-        }
-
+    public boolean canMint() throws DataException {
+        AccountData accountData = this.repository.getAccountRepository().getAccount(this.address);
+        NameRepository nameRepository = this.repository.getNameRepository();
+        GroupRepository groupRepository = this.repository.getGroupRepository();
+
+        int blockchainHeight = this.repository.getBlockRepository().getBlockchainHeight();
+        int nameCheckHeight = BlockChain.getInstance().getOnlyMintWithNameHeight();
+        int levelToMint = BlockChain.getInstance().getMinAccountLevelToMint();
+        int level = accountData.getLevel();
+        int groupIdToMint = BlockChain.getInstance().getMintingGroupId();
+        int groupCheckHeight = BlockChain.getInstance().getGroupMemberCheckHeight();
+
+        String myAddress = accountData.getAddress();
+        List<NameData> myName = nameRepository.getNamesByOwner(myAddress);
+        boolean isMember = groupRepository.memberExists(groupIdToMint, myAddress);
+
+        if (accountData == null)
+            return false;
+
+        // Can only mint if level is at least minAccountLevelToMint< from blockchain config
+        if (blockchainHeight < nameCheckHeight && level >= levelToMint)
+            return true;
+
+        // Can only mint if have registered a name
+        if (blockchainHeight >= nameCheckHeight && blockchainHeight < groupCheckHeight && level >= levelToMint && !myName.isEmpty())
+            return true;
+
+        // Can only mint if have registered a name and is member of minter group id
+        if (blockchainHeight >= groupCheckHeight && level >= levelToMint && !myName.isEmpty() && isMember)
+            return true;
+
+        // Founders needs to pass same tests like minters
+        if (blockchainHeight < nameCheckHeight &&
+                Account.isFounder(accountData.getFlags()) &&
+                accountData.getBlocksMintedPenalty() == 0)
+            return true;
+
+        if (blockchainHeight >= nameCheckHeight &&
+                blockchainHeight < groupCheckHeight &&
+                Account.isFounder(accountData.getFlags()) &&
+                accountData.getBlocksMintedPenalty() == 0 &&
+                !myName.isEmpty())
+            return true;
+
+        if (blockchainHeight >= groupCheckHeight &&
+                Account.isFounder(accountData.getFlags()) &&
+                accountData.getBlocksMintedPenalty() == 0 &&
+                !myName.isEmpty() &&
+                isMember)
+            return true;
+
         return false;
     }
@@ -292,6 +272,7 @@ public class Account {
         return this.repository.getAccountRepository().getBlocksMintedPenaltyCount(this.address);
     }


     /** Returns whether account can build reward-shares.
      * <p>
      * To be able to create reward-shares, the account needs to pass at least one of these tests:<br>
@@ -305,7 +286,6 @@
      */
     public boolean canRewardShare() throws DataException {
         AccountData accountData = this.repository.getAccountRepository().getAccount(this.address);

         if (accountData == null)
             return false;

@@ -316,9 +296,6 @@
         if (Account.isFounder(accountData.getFlags()) && accountData.getBlocksMintedPenalty() == 0)
             return true;

-        if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() )
-            return true;
-
         return false;
     }

@@ -361,24 +338,6 @@
         return accountData.getLevel();
     }

-    /**
-     * Returns reward-share minting address, or unknown if reward-share does not exist.
-     *
-     * @param repository
-     * @param rewardSharePublicKey
-     * @return address or unknown
-     * @throws DataException
-     */
-    public static String getRewardShareMintingAddress(Repository repository, byte[] rewardSharePublicKey) throws DataException {
-        // Find actual minter address
-        RewardShareData rewardShareData = repository.getAccountRepository().getRewardShare(rewardSharePublicKey);
-
-        if (rewardShareData == null)
-            return "Unknown";
-
-        return rewardShareData.getMinter();
-    }
-
     /**
      * Returns 'effective' minting level, or zero if reward-share does not exist.
      *
@@ -396,7 +355,6 @@
         Account rewardShareMinter = new Account(repository, rewardShareData.getMinter());
         return rewardShareMinter.getEffectiveMintingLevel();
     }

     /**
      * Returns 'effective' minting level, with a fix for the zero level.
      * <p>
@@ -194,7 +194,6 @@ public class ApiService {

         context.addServlet(AdminStatusWebSocket.class, "/websockets/admin/status");
         context.addServlet(BlocksWebSocket.class, "/websockets/blocks");
-        context.addServlet(DataMonitorSocket.class, "/websockets/datamonitor");
         context.addServlet(ActiveChatsWebSocket.class, "/websockets/chat/active/*");
         context.addServlet(ChatMessagesWebSocket.class, "/websockets/chat/messages");
         context.addServlet(TradeOffersWebSocket.class, "/websockets/crosschain/tradeoffers");
@@ -1,13 +1,7 @@
 package org.qortal.api.model;

-import org.qortal.account.Account;
-import org.qortal.repository.DataException;
-import org.qortal.repository.RepositoryManager;
-import org.qortal.repository.Repository;

 import javax.xml.bind.annotation.XmlAccessType;
 import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.XmlElement;

 // All properties to be converted to JSON via JAXB
 @XmlAccessorType(XmlAccessType.FIELD)
@@ -53,31 +47,4 @@ public class ApiOnlineAccount {
         return this.recipientAddress;
     }

-    public int getMinterLevelFromPublicKey() {
-        try (final Repository repository = RepositoryManager.getRepository()) {
-            return Account.getRewardShareEffectiveMintingLevel(repository, this.rewardSharePublicKey);
-        } catch (DataException e) {
-            return 0;
-        }
-    }
-
-    public boolean getIsMember() {
-        try (final Repository repository = RepositoryManager.getRepository()) {
-            return repository.getGroupRepository().memberExists(694, getMinterAddress());
-        } catch (DataException e) {
-            return false;
-        }
-    }
-
-    // JAXB special
-
-    @XmlElement(name = "minterLevel")
-    protected int getMinterLevel() {
-        return getMinterLevelFromPublicKey();
-    }
-
-    @XmlElement(name = "isMinterMember")
-    protected boolean getMinterMember() {
-        return getIsMember();
-    }
 }
@@ -9,7 +9,6 @@ import java.math.BigInteger;
 public class BlockMintingInfo {

     public byte[] minterPublicKey;
-    public String minterAddress;
     public int minterLevel;
     public int onlineAccountsCount;
     public BigDecimal maxDistance;
@@ -20,4 +19,5 @@ public class BlockMintingInfo {

     public BlockMintingInfo() {
     }

 }
@@ -1,72 +0,0 @@
-package org.qortal.api.model;
-
-import io.swagger.v3.oas.annotations.media.Schema;
-import org.qortal.data.crosschain.CrossChainTradeData;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-
-// All properties to be converted to JSON via JAXB
-@XmlAccessorType(XmlAccessType.FIELD)
-public class CrossChainTradeLedgerEntry {
-
-    private String market;
-
-    private String currency;
-
-    @XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
-    private long quantity;
-
-    @XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
-    private long feeAmount;
-
-    private String feeCurrency;
-
-    @XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
-    private long totalPrice;
-
-    private long tradeTimestamp;
-
-    protected CrossChainTradeLedgerEntry() {
-        /* For JAXB */
-    }
-
-    public CrossChainTradeLedgerEntry(String market, String currency, long quantity, long feeAmount, String feeCurrency, long totalPrice, long tradeTimestamp) {
-        this.market = market;
-        this.currency = currency;
-        this.quantity = quantity;
-        this.feeAmount = feeAmount;
-        this.feeCurrency = feeCurrency;
-        this.totalPrice = totalPrice;
-        this.tradeTimestamp = tradeTimestamp;
-    }
-
-    public String getMarket() {
-        return market;
-    }
-
-    public String getCurrency() {
-        return currency;
-    }
-
-    public long getQuantity() {
-        return quantity;
-    }
-
-    public long getFeeAmount() {
-        return feeAmount;
-    }
-
-    public String getFeeCurrency() {
-        return feeCurrency;
-    }
-
-    public long getTotalPrice() {
-        return totalPrice;
-    }
-
-    public long getTradeTimestamp() {
-        return tradeTimestamp;
-    }
-}
@@ -1,50 +0,0 @@
-package org.qortal.api.model;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import java.util.Objects;
-
-// All properties to be converted to JSON via JAXB
-@XmlAccessorType(XmlAccessType.FIELD)
-public class DatasetStatus {
-
-    private String name;
-
-    private long count;
-
-    public DatasetStatus() {}
-
-    public DatasetStatus(String name, long count) {
-        this.name = name;
-        this.count = count;
-    }
-
-    public String getName() {
-        return name;
-    }
-
-    public long getCount() {
-        return count;
-    }
-
-    @Override
-    public boolean equals(Object o) {
-        if (this == o) return true;
-        if (o == null || getClass() != o.getClass()) return false;
-        DatasetStatus that = (DatasetStatus) o;
-        return count == that.count && Objects.equals(name, that.name);
-    }
-
-    @Override
-    public int hashCode() {
-        return Objects.hash(name, count);
-    }
-
-    @Override
-    public String toString() {
-        return "DatasetStatus{" +
-                "name='" + name + '\'' +
-                ", count=" + count +
-                '}';
-    }
-}
@@ -33,13 +33,9 @@ import org.qortal.controller.arbitrary.ArbitraryDataStorageManager;
 import org.qortal.controller.arbitrary.ArbitraryMetadataManager;
 import org.qortal.data.account.AccountData;
 import org.qortal.data.arbitrary.ArbitraryCategoryInfo;
-import org.qortal.data.arbitrary.ArbitraryDataIndexDetail;
-import org.qortal.data.arbitrary.ArbitraryDataIndexScoreKey;
-import org.qortal.data.arbitrary.ArbitraryDataIndexScorecard;
 import org.qortal.data.arbitrary.ArbitraryResourceData;
 import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
 import org.qortal.data.arbitrary.ArbitraryResourceStatus;
-import org.qortal.data.arbitrary.IndexCache;
 import org.qortal.data.naming.NameData;
 import org.qortal.data.transaction.ArbitraryTransactionData;
 import org.qortal.data.transaction.TransactionData;
@@ -73,11 +69,8 @@ import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Comparator;
 import java.util.List;
-import java.util.Map;
 import java.util.Objects;
-import java.util.stream.Collectors;

 @Path("/arbitrary")
 @Tag(name = "Arbitrary")
@@ -179,7 +172,6 @@ public class ArbitraryResource {
             @Parameter(description = "Name (searches name field only)") @QueryParam("name") List<String> names,
             @Parameter(description = "Title (searches title metadata field only)") @QueryParam("title") String title,
             @Parameter(description = "Description (searches description metadata field only)") @QueryParam("description") String description,
-            @Parameter(description = "Keyword (searches description metadata field by keywords)") @QueryParam("keywords") List<String> keywords,
             @Parameter(description = "Prefix only (if true, only the beginning of fields are matched)") @QueryParam("prefix") Boolean prefixOnly,
             @Parameter(description = "Exact match names only (if true, partial name matches are excluded)") @QueryParam("exactmatchnames") Boolean exactMatchNamesOnly,
             @Parameter(description = "Default resources (without identifiers) only") @QueryParam("default") Boolean defaultResource,
@@ -220,7 +212,7 @@ public class ArbitraryResource {
             }

             List<ArbitraryResourceData> resources = repository.getArbitraryRepository()
-                    .searchArbitraryResources(service, query, identifier, names, title, description, keywords, usePrefixOnly,
+                    .searchArbitraryResources(service, query, identifier, names, title, description, usePrefixOnly,
                             exactMatchNames, defaultRes, mode, minLevel, followedOnly, excludeBlocked, includeMetadata, includeStatus,
                             before, after, limit, offset, reverse);

@@ -1193,90 +1185,6 @@ public class ArbitraryResource {
         }
     }

-    @GET
-    @Path("/indices")
-    @Operation(
-            summary = "Find matching arbitrary resource indices",
-            description = "",
-            responses = {
-                    @ApiResponse(
-                            description = "indices",
-                            content = @Content(
-                                    array = @ArraySchema(
-                                            schema = @Schema(
-                                                    implementation = ArbitraryDataIndexScorecard.class
-                                            )
-                                    )
-                            )
-                    )
-            }
-    )
-    public List<ArbitraryDataIndexScorecard> searchIndices(@QueryParam("terms") String[] terms) {
-
-        List<ArbitraryDataIndexDetail> indices = new ArrayList<>();
-
-        // get index details for each term
-        for( String term : terms ) {
-            List<ArbitraryDataIndexDetail> details = IndexCache.getInstance().getIndicesByTerm().get(term);
-
-            if( details != null ) {
-                indices.addAll(details);
-            }
-        }
-
-        // sum up the scores for each index with identical attributes
-        Map<ArbitraryDataIndexScoreKey, Double> scoreForKey
-            = indices.stream()
-                .collect(
-                    Collectors.groupingBy(
-                        index -> new ArbitraryDataIndexScoreKey(index.name, index.category, index.link),
-                        Collectors.summingDouble(detail -> 1.0 / detail.rank)
-                    )
-                );
-
-        // create scorecards for each index group and put them in descending order by score
-        List<ArbitraryDataIndexScorecard> scorecards
-            = scoreForKey.entrySet().stream().map(
-                entry
-                ->
-                new ArbitraryDataIndexScorecard(
-                    entry.getValue(),
-                    entry.getKey().name,
-                    entry.getKey().category,
-                    entry.getKey().link)
-            )
-            .sorted(Comparator.comparingDouble(ArbitraryDataIndexScorecard::getScore).reversed())
-            .collect(Collectors.toList());
-
-        return scorecards;
-    }
-
-    @GET
-    @Path("/indices/{name}/{idPrefix}")
-    @Operation(
-            summary = "Find matching arbitrary resource indices for a registered name and identifier prefix",
-            description = "",
-            responses = {
-                    @ApiResponse(
-                            description = "indices",
-                            content = @Content(
-                                    array = @ArraySchema(
-                                            schema = @Schema(
-                                                    implementation = ArbitraryDataIndexDetail.class
-                                            )
-                                    )
-                            )
-                    )
-            }
-    )
-    public List<ArbitraryDataIndexDetail> searchIndicesByName(@PathParam("name") String name, @PathParam("idPrefix") String idPrefix) {
-
-        return
-            IndexCache.getInstance().getIndicesByIssuer()
-                .getOrDefault(name, new ArrayList<>(0)).stream()
-                .filter( indexDetail -> indexDetail.indexIdentifer.startsWith(idPrefix))
-                .collect(Collectors.toList());
-    }
-
     // Shared methods

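The removed searchIndices endpoints above are plain GET resources under /arbitrary, scoring each index group by the sum of 1/rank. On the master branch they could be exercised roughly as follows; the port is an assumption (the node's configured API port applies) and the terms/name/prefix values are only examples:

```bash
# Query the index scorecards removed in this diff (master-only endpoint; port is an assumption).
curl -s "http://localhost:12391/arbitrary/indices?terms=example"

# Per-name index details for an identifier prefix, per the second removed endpoint (example values).
curl -s "http://localhost:12391/arbitrary/indices/SomeRegisteredName/idx-"
```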
@@ -16,13 +16,9 @@ import org.qortal.api.model.AggregatedOrder;
 import org.qortal.api.model.TradeWithOrderInfo;
 import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
 import org.qortal.asset.Asset;
-import org.qortal.controller.hsqldb.HSQLDBBalanceRecorder;
 import org.qortal.crypto.Crypto;
 import org.qortal.data.account.AccountBalanceData;
 import org.qortal.data.account.AccountData;
-import org.qortal.data.account.AddressAmountData;
-import org.qortal.data.account.BlockHeightRange;
-import org.qortal.data.account.BlockHeightRangeAddressAmounts;
 import org.qortal.data.asset.AssetData;
 import org.qortal.data.asset.OrderData;
 import org.qortal.data.asset.RecentTradeData;
@@ -37,7 +33,6 @@ import org.qortal.transaction.Transaction;
 import org.qortal.transaction.Transaction.ValidationResult;
 import org.qortal.transform.TransformationException;
 import org.qortal.transform.transaction.*;
-import org.qortal.utils.BalanceRecorderUtils;
 import org.qortal.utils.Base58;

 import javax.servlet.http.HttpServletRequest;
@@ -47,7 +42,6 @@ import javax.ws.rs.core.MediaType;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
-import java.util.Optional;
 import java.util.stream.Collectors;

 @Path("/assets")
@@ -185,122 +179,6 @@ public class AssetsResource {
         }
     }

-    @GET
-    @Path("/balancedynamicranges")
-    @Operation(
-            summary = "Get balance dynamic ranges listed.",
-            description = ".",
-            responses = {
-                    @ApiResponse(
-                            content = @Content(
-                                    array = @ArraySchema(
-                                            schema = @Schema(
-                                                    implementation = BlockHeightRange.class
-                                            )
-                                    )
-                            )
-                    )
-            }
-    )
-    public List<BlockHeightRange> getBalanceDynamicRanges(
-            @Parameter(ref = "offset") @QueryParam("offset") Integer offset,
-            @Parameter(ref = "limit") @QueryParam("limit") Integer limit,
-            @Parameter(ref = "reverse") @QueryParam("reverse") Boolean reverse) {
-
-        Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
-
-        if( recorder.isPresent()) {
-            return recorder.get().getRanges(offset, limit, reverse);
-        }
-        else {
-            return new ArrayList<>(0);
-        }
-    }
-
-    @GET
-    @Path("/balancedynamicrange/{height}")
-    @Operation(
-            summary = "Get balance dynamic range for a given height.",
-            description = ".",
-            responses = {
-                    @ApiResponse(
-                            content = @Content(
-                                    schema = @Schema(
-                                            implementation = BlockHeightRange.class
-                                    )
-                            )
-                    )
-            }
-    )
-    @ApiErrors({
-            ApiError.INVALID_CRITERIA, ApiError.INVALID_DATA
-    })
-    public BlockHeightRange getBalanceDynamicRange(@PathParam("height") int height) {
-
-        Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
-
-        if( recorder.isPresent()) {
-            Optional<BlockHeightRange> range = recorder.get().getRange(height);
-
-            if( range.isPresent() ) {
-                return range.get();
-            }
-            else {
-                throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
-            }
-        }
-        else {
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
-        }
-    }
-
-    @GET
-    @Path("/balancedynamicamounts/{begin}/{end}")
-    @Operation(
-            summary = "Get balance dynamic ranges address amounts listed.",
-            description = ".",
-            responses = {
-                    @ApiResponse(
-                            content = @Content(
-                                    array = @ArraySchema(
-                                            schema = @Schema(
-                                                    implementation = AddressAmountData.class
-                                            )
-                                    )
-                            )
-                    )
-            }
-    )
-    @ApiErrors({
-            ApiError.INVALID_CRITERIA, ApiError.INVALID_DATA
-    })
-    public List<AddressAmountData> getBalanceDynamicAddressAmounts(
-            @PathParam("begin") int begin,
-            @PathParam("end") int end,
-            @Parameter(ref = "offset") @QueryParam("offset") Integer offset,
-            @Parameter(ref = "limit") @QueryParam("limit") Integer limit) {
-
-        Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
-
-        if( recorder.isPresent()) {
-            Optional<BlockHeightRangeAddressAmounts> addressAmounts = recorder.get().getAddressAmounts(new BlockHeightRange(begin, end, false));
-
-            if( addressAmounts.isPresent() ) {
-                return addressAmounts.get().getAmounts().stream()
-                    .sorted(BalanceRecorderUtils.ADDRESS_AMOUNT_DATA_COMPARATOR.reversed())
-                    .skip(offset)
-                    .limit(limit)
-                    .collect(Collectors.toList());
-            }
-            else {
-                throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
-            }
-        }
-        else {
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_DATA);
-        }
-    }
-
     @GET
     @Path("/openorders/{assetid}/{otherassetid}")
     @Operation(
@@ -19,8 +19,6 @@ import org.qortal.crypto.Crypto;
 import org.qortal.data.account.AccountData;
 import org.qortal.data.block.BlockData;
 import org.qortal.data.block.BlockSummaryData;
-import org.qortal.data.block.DecodedOnlineAccountData;
-import org.qortal.data.network.OnlineAccountData;
 import org.qortal.data.transaction.TransactionData;
 import org.qortal.repository.BlockArchiveReader;
 import org.qortal.repository.DataException;
@@ -29,7 +27,6 @@ import org.qortal.repository.RepositoryManager;
 import org.qortal.transform.TransformationException;
 import org.qortal.transform.block.BlockTransformer;
 import org.qortal.utils.Base58;
-import org.qortal.utils.Blocks;
 import org.qortal.utils.Triple;
 
 import javax.servlet.http.HttpServletRequest;
@@ -48,7 +45,6 @@ import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Comparator;
 import java.util.List;
-import java.util.Set;
 
 @Path("/blocks")
 @Tag(name = "Blocks")
@@ -546,7 +542,6 @@ public class BlocksResource {
             }
         }
 
-        String minterAddress = Account.getRewardShareMintingAddress(repository, blockData.getMinterPublicKey());
         int minterLevel = Account.getRewardShareEffectiveMintingLevel(repository, blockData.getMinterPublicKey());
         if (minterLevel == 0)
             // This may be unavailable when requesting a trimmed block
@@ -559,7 +554,6 @@ public class BlocksResource {
 
         BlockMintingInfo blockMintingInfo = new BlockMintingInfo();
         blockMintingInfo.minterPublicKey = blockData.getMinterPublicKey();
-        blockMintingInfo.minterAddress = minterAddress;
         blockMintingInfo.minterLevel = minterLevel;
         blockMintingInfo.onlineAccountsCount = blockData.getOnlineAccountsCount();
         blockMintingInfo.maxDistance = new BigDecimal(block.MAX_DISTANCE);
@@ -894,49 +888,4 @@ public class BlocksResource {
         }
     }
 
-    @GET
-    @Path("/onlineaccounts/{height}")
-    @Operation(
-            summary = "Get online accounts for block",
-            description = "Returns the online accounts who submitted signatures for this block",
-            responses = {
-                    @ApiResponse(
-                            description = "online accounts",
-                            content = @Content(
-                                    array = @ArraySchema(
-                                            schema = @Schema(
-                                                    implementation = DecodedOnlineAccountData.class
-                                            )
-                                    )
-                            )
-                    )
-            }
-    )
-    @ApiErrors({
-            ApiError.BLOCK_UNKNOWN, ApiError.REPOSITORY_ISSUE
-    })
-    public Set<DecodedOnlineAccountData> getOnlineAccounts(@PathParam("height") int height) {
-
-        try (final Repository repository = RepositoryManager.getRepository()) {
-
-            // get block from database
-            BlockData blockData = repository.getBlockRepository().fromHeight(height);
-
-            // if block data is not in the database, then try the archive
-            if (blockData == null) {
-                blockData = repository.getBlockArchiveRepository().fromHeight(height);
-
-                // if the block is not in the database or the archive, then the block is unknown
-                if( blockData == null ) {
-                    throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.BLOCK_UNKNOWN);
-                }
-            }
-
-            Set<DecodedOnlineAccountData> onlineAccounts = Blocks.getDecodedOnlineAccountsForBlock(repository, blockData);
-
-            return onlineAccounts;
-        } catch (DataException e) {
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE);
-        }
-    }
 }
@@ -234,16 +234,12 @@ public class ChatResource {
             }
     )
     @ApiErrors({ApiError.INVALID_CRITERIA, ApiError.INVALID_ADDRESS, ApiError.REPOSITORY_ISSUE})
-    public ActiveChats getActiveChats(
-            @PathParam("address") String address,
-            @QueryParam("encoding") Encoding encoding,
-            @QueryParam("haschatreference") Boolean hasChatReference
-    ) {
+    public ActiveChats getActiveChats(@PathParam("address") String address, @QueryParam("encoding") Encoding encoding) {
         if (address == null || !Crypto.isValidAddress(address))
             throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_ADDRESS);
 
         try (final Repository repository = RepositoryManager.getRepository()) {
-            return repository.getChatRepository().getActiveChats(address, encoding, hasChatReference);
+            return repository.getChatRepository().getActiveChats(address, encoding);
         } catch (DataException e) {
             throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
         }
@@ -10,13 +10,11 @@ import io.swagger.v3.oas.annotations.parameters.RequestBody;
 import io.swagger.v3.oas.annotations.responses.ApiResponse;
 import io.swagger.v3.oas.annotations.security.SecurityRequirement;
 import io.swagger.v3.oas.annotations.tags.Tag;
-import org.glassfish.jersey.media.multipart.ContentDisposition;
 import org.qortal.api.ApiError;
 import org.qortal.api.ApiErrors;
 import org.qortal.api.ApiExceptionFactory;
 import org.qortal.api.Security;
 import org.qortal.api.model.CrossChainCancelRequest;
-import org.qortal.api.model.CrossChainTradeLedgerEntry;
 import org.qortal.api.model.CrossChainTradeSummary;
 import org.qortal.controller.tradebot.TradeBot;
 import org.qortal.crosschain.ACCT;
@@ -46,20 +44,14 @@ import org.qortal.utils.Base58;
 import org.qortal.utils.ByteArray;
 import org.qortal.utils.NTP;
 
-import javax.servlet.ServletContext;
 import javax.servlet.http.HttpServletRequest;
-import javax.servlet.http.HttpServletResponse;
 import javax.ws.rs.*;
 import javax.ws.rs.core.Context;
-import javax.ws.rs.core.HttpHeaders;
 import javax.ws.rs.core.MediaType;
-import java.io.IOException;
 import java.util.*;
 import java.util.function.Supplier;
 import java.util.stream.Collectors;
-
-
 
 @Path("/crosschain")
 @Tag(name = "Cross-Chain")
 public class CrossChainResource {
@@ -67,13 +59,6 @@ public class CrossChainResource {
     @Context
     HttpServletRequest request;
 
-    @Context
-    HttpServletResponse response;
-
-    @Context
-    ServletContext context;
-
-
     @GET
     @Path("/tradeoffers")
     @Operation(
@@ -270,12 +255,6 @@ public class CrossChainResource {
                 description = "Only return trades that completed on/after this timestamp (milliseconds since epoch)",
                 example = "1597310000000"
             ) @QueryParam("minimumTimestamp") Long minimumTimestamp,
-            @Parameter(
-                description = "Optionally filter by buyer Qortal public key"
-            ) @QueryParam("buyerPublicKey") String buyerPublicKey58,
-            @Parameter(
-                description = "Optionally filter by seller Qortal public key"
-            ) @QueryParam("sellerPublicKey") String sellerPublicKey58,
             @Parameter( ref = "limit") @QueryParam("limit") Integer limit,
             @Parameter( ref = "offset" ) @QueryParam("offset") Integer offset,
             @Parameter( ref = "reverse" ) @QueryParam("reverse") Boolean reverse) {
@@ -287,10 +266,6 @@ public class CrossChainResource {
         if (minimumTimestamp != null && minimumTimestamp <= 0)
             throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
 
-        // Decode public keys
-        byte[] buyerPublicKey = decodePublicKey(buyerPublicKey58);
-        byte[] sellerPublicKey = decodePublicKey(sellerPublicKey58);
-
         final Boolean isFinished = Boolean.TRUE;
 
         try (final Repository repository = RepositoryManager.getRepository()) {
@@ -321,7 +296,7 @@ public class CrossChainResource {
             byte[] codeHash = acctInfo.getKey().value;
             ACCT acct = acctInfo.getValue().get();
 
-            List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash, buyerPublicKey, sellerPublicKey,
+            List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
                     isFinished, acct.getModeByteOffset(), (long) AcctMode.REDEEMED.value, minimumFinalHeight,
                     limit, offset, reverse);
 
@@ -360,120 +335,6 @@ public class CrossChainResource {
         }
     }
 
-    /**
-     * Decode Public Key
-     *
-     * @param publicKey58 the public key in a string
-     *
-     * @return the public key in bytes
-     */
-    private byte[] decodePublicKey(String publicKey58) {
-
-        if( publicKey58 == null ) return null;
-        if( publicKey58.isEmpty() ) return new byte[0];
-
-        byte[] publicKey;
-        try {
-            publicKey = Base58.decode(publicKey58);
-        } catch (NumberFormatException e) {
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY, e);
-        }
-
-        // Correct size for public key?
-        if (publicKey.length != Transformer.PUBLIC_KEY_LENGTH)
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_PUBLIC_KEY);
-
-        return publicKey;
-    }
-
-    @GET
-    @Path("/ledger/{publicKey}")
-    @Operation(
-            summary = "Accounting entries for all trades.",
-            description = "Returns accounting entries for all completed cross-chain trades",
-            responses = {
-                    @ApiResponse(
-                            content = @Content(
-                                    schema = @Schema(
-                                            type = "string",
-                                            format = "byte"
-                                    )
-                            )
-                    )
-            }
-    )
-    @ApiErrors({ApiError.INVALID_CRITERIA, ApiError.REPOSITORY_ISSUE})
-    public HttpServletResponse getLedgerEntries(
-            @PathParam("publicKey") String publicKey58,
-            @Parameter(
-                description = "Only return trades that completed on/after this timestamp (milliseconds since epoch)",
-                example = "1597310000000"
-            ) @QueryParam("minimumTimestamp") Long minimumTimestamp) {
-
-        byte[] publicKey = decodePublicKey(publicKey58);
-
-        // minimumTimestamp (if given) needs to be positive
-        if (minimumTimestamp != null && minimumTimestamp <= 0)
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.INVALID_CRITERIA);
-
-        try (final Repository repository = RepositoryManager.getRepository()) {
-            Integer minimumFinalHeight = null;
-
-            if (minimumTimestamp != null) {
-                minimumFinalHeight = repository.getBlockRepository().getHeightFromTimestamp(minimumTimestamp);
-                // If not found in the block repository it will return either 0 or 1
-                if (minimumFinalHeight == 0 || minimumFinalHeight == 1) {
-                    // Try the archive
-                    minimumFinalHeight = repository.getBlockArchiveRepository().getHeightFromTimestamp(minimumTimestamp);
-                }
-
-                if (minimumFinalHeight == 0)
-                    // We don't have any blocks since minimumTimestamp, let alone trades, so nothing to return
-                    return response;
-
-                // height returned from repository is for block BEFORE timestamp
-                // but we want trades AFTER timestamp so bump height accordingly
-                minimumFinalHeight++;
-            }
-
-            List<CrossChainTradeLedgerEntry> crossChainTradeLedgerEntries = new ArrayList<>();
-
-            Map<ByteArray, Supplier<ACCT>> acctsByCodeHash = SupportedBlockchain.getAcctMap();
-
-            // collect ledger entries for each ACCT
-            for (Map.Entry<ByteArray, Supplier<ACCT>> acctInfo : acctsByCodeHash.entrySet()) {
-                byte[] codeHash = acctInfo.getKey().value;
-                ACCT acct = acctInfo.getValue().get();
-
-                // collect buys and sells
-                CrossChainUtils.collectLedgerEntries(publicKey, repository, minimumFinalHeight, crossChainTradeLedgerEntries, codeHash, acct, true);
-                CrossChainUtils.collectLedgerEntries(publicKey, repository, minimumFinalHeight, crossChainTradeLedgerEntries, codeHash, acct, false);
-            }
-
-            crossChainTradeLedgerEntries.sort((a, b) -> Longs.compare(a.getTradeTimestamp(), b.getTradeTimestamp()));
-
-            response.setStatus(HttpServletResponse.SC_OK);
-            response.setContentType("text/csv");
-            response.setHeader(
-                HttpHeaders.CONTENT_DISPOSITION,
-                ContentDisposition
-                    .type("attachment")
-                    .fileName(CrossChainUtils.createLedgerFileName(Crypto.toAddress(publicKey)))
-                    .build()
-                    .toString()
-            );
-
-            CrossChainUtils.writeToLedger( response.getWriter(), crossChainTradeLedgerEntries);
-
-            return response;
-        } catch (DataException e) {
-            throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.REPOSITORY_ISSUE, e);
-        } catch (IOException e) {
-            response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
-            return response;
-        }
-    }
-
     @GET
     @Path("/price/{blockchain}")
     @Operation(
@@ -10,36 +10,21 @@ import org.bitcoinj.script.ScriptBuilder;
 
 import org.bouncycastle.util.Strings;
 import org.json.simple.JSONObject;
-import org.qortal.api.model.CrossChainTradeLedgerEntry;
 import org.qortal.api.model.crosschain.BitcoinyTBDRequest;
 import org.qortal.crosschain.*;
 import org.qortal.data.at.ATData;
-import org.qortal.data.at.ATStateData;
 import org.qortal.data.crosschain.*;
 import org.qortal.repository.DataException;
 import org.qortal.repository.Repository;
-import org.qortal.utils.Amounts;
 import org.qortal.utils.BitTwiddling;
 
-import java.io.BufferedWriter;
-import java.io.IOException;
-import java.io.OutputStreamWriter;
-import java.io.PrintWriter;
-import java.io.Writer;
-import java.text.DateFormat;
-import java.text.SimpleDateFormat;
-import java.time.Instant;
-import java.time.ZoneId;
-import java.time.ZonedDateTime;
 import java.util.*;
 import java.util.stream.Collectors;
 
 
 public class CrossChainUtils {
-    public static final String QORT_CURRENCY_CODE = "QORT";
     private static final Logger LOGGER = LogManager.getLogger(CrossChainUtils.class);
     public static final String CORE_API_CALL = "Core API Call";
-    public static final String QORTAL_EXCHANGE_LABEL = "Qortal";
 
     public static ServerConfigurationInfo buildServerConfigurationInfo(Bitcoiny blockchain) {
 
@@ -647,128 +632,4 @@ public class CrossChainUtils {
         byte[] lockTimeABytes = BitTwiddling.toBEByteArray((long) lockTimeA);
         return Bytes.concat(partnerBitcoinPKH, hashOfSecretA, lockTimeABytes);
     }
-
-    /**
-     * Write To Ledger
-     *
-     * @param writer the writer to the ledger
-     * @param entries the entries to write to the ledger
-     *
-     * @throws IOException
-     */
-    public static void writeToLedger(Writer writer, List<CrossChainTradeLedgerEntry> entries) throws IOException {
-
-        BufferedWriter bufferedWriter = new BufferedWriter(writer);
-
-        StringJoiner header = new StringJoiner(",");
-        header.add("Market");
-        header.add("Currency");
-        header.add("Quantity");
-        header.add("Commission Paid");
-        header.add("Commission Currency");
-        header.add("Total Price");
-        header.add("Date Time");
-        header.add("Exchange");
-
-        bufferedWriter.append(header.toString());
-
-        DateFormat dateFormatter = new SimpleDateFormat("yyyyMMdd HH:mm");
-        dateFormatter.setTimeZone(TimeZone.getTimeZone("UTC"));
-
-        for( CrossChainTradeLedgerEntry entry : entries ) {
-            StringJoiner joiner = new StringJoiner(",");
-
-            joiner.add(entry.getMarket());
-            joiner.add(entry.getCurrency());
-            joiner.add(String.valueOf(Amounts.prettyAmount(entry.getQuantity())));
-            joiner.add(String.valueOf(Amounts.prettyAmount(entry.getFeeAmount())));
-            joiner.add(entry.getFeeCurrency());
-            joiner.add(String.valueOf(Amounts.prettyAmount(entry.getTotalPrice())));
-            joiner.add(dateFormatter.format(new Date(entry.getTradeTimestamp())));
-            joiner.add(QORTAL_EXCHANGE_LABEL);
-
-            bufferedWriter.newLine();
-            bufferedWriter.append(joiner.toString());
-        }
-
-        bufferedWriter.newLine();
-        bufferedWriter.flush();
-    }
-
-    /**
-     * Create Ledger File Name
-     *
-     * Create a file name the includes timestamp and address.
-     *
-     * @param address the address
-     *
-     * @return the file name created
-     */
-    public static String createLedgerFileName(String address) {
-        DateFormat dateFormatter = new SimpleDateFormat("yyyyMMddHHmmss");
-        String fileName = "ledger-" + address + "-" + dateFormatter.format(new Date());
-        return fileName;
-    }
-
-    /**
-     * Collect Ledger Entries
-     *
-     * @param publicKey the public key for the ledger entries, buy and sell
-     * @param repository the data repository
-     * @param minimumFinalHeight the minimum block height for entries to be collected
-     * @param entries the ledger entries to add to
-     * @param codeHash code hash for the entry blockchain
-     * @param acct the ACCT for the entry blockchain
-     * @param isBuy true collecting entries for a buy, otherwise false
-     *
-     * @throws DataException
-     */
-    public static void collectLedgerEntries(
-            byte[] publicKey,
-            Repository repository,
-            Integer minimumFinalHeight,
-            List<CrossChainTradeLedgerEntry> entries,
-            byte[] codeHash,
-            ACCT acct,
-            boolean isBuy) throws DataException {
-
-        // get all the final AT states for the code hash (foreign coin)
-        List<ATStateData> atStates
-            = repository.getATRepository().getMatchingFinalATStates(
-                codeHash,
-                isBuy ? publicKey : null,
-                !isBuy ? publicKey : null,
-                Boolean.TRUE, acct.getModeByteOffset(),
-                (long) AcctMode.REDEEMED.value,
-                minimumFinalHeight,
-                null, null, false
-            );
-
-        String foreignBlockchainCurrencyCode = acct.getBlockchain().getCurrencyCode();
-
-        // for each trade, build ledger entry, collect ledger entry
-        for (ATStateData atState : atStates) {
-            CrossChainTradeData crossChainTradeData = acct.populateTradeData(repository, atState);
-
-            // We also need block timestamp for use as trade timestamp
-            long localTimestamp = repository.getBlockRepository().getTimestampFromHeight(atState.getHeight());
-
-            if (localTimestamp == 0) {
-                // Try the archive
-                localTimestamp = repository.getBlockArchiveRepository().getTimestampFromHeight(atState.getHeight());
-            }
-
-            CrossChainTradeLedgerEntry ledgerEntry
-                = new CrossChainTradeLedgerEntry(
-                    isBuy ? QORT_CURRENCY_CODE : foreignBlockchainCurrencyCode,
-                    isBuy ? foreignBlockchainCurrencyCode : QORT_CURRENCY_CODE,
-                    isBuy ? crossChainTradeData.qortAmount : crossChainTradeData.expectedForeignAmount,
-                    0,
-                    foreignBlockchainCurrencyCode,
-                    isBuy ? crossChainTradeData.expectedForeignAmount : crossChainTradeData.qortAmount,
-                    localTimestamp);
-
-            entries.add(ledgerEntry);
-        }
-    }
 }
@@ -32,7 +32,6 @@ import org.qortal.controller.Synchronizer.SynchronizationResult;
 import org.qortal.controller.repository.BlockArchiveRebuilder;
 import org.qortal.data.account.MintingAccountData;
 import org.qortal.data.account.RewardShareData;
-import org.qortal.data.system.DbConnectionInfo;
 import org.qortal.network.Network;
 import org.qortal.network.Peer;
 import org.qortal.network.PeerAddress;
@@ -41,7 +40,6 @@ import org.qortal.repository.DataException;
 import org.qortal.repository.Repository;
 import org.qortal.repository.RepositoryManager;
 import org.qortal.settings.Settings;
-import org.qortal.data.system.SystemInfo;
 import org.qortal.utils.Base58;
 import org.qortal.utils.NTP;
 
@@ -54,7 +52,6 @@ import java.net.InetSocketAddress;
 import java.net.UnknownHostException;
 import java.nio.file.Files;
 import java.nio.file.Paths;
-import java.util.ArrayList;
 import java.util.List;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.TimeoutException;
@@ -462,7 +459,7 @@ public class AdminResource {
 
             // Qortal: check reward-share's minting account is still allowed to mint
             Account rewardShareMintingAccount = new Account(repository, rewardShareData.getMinter());
-            if (!rewardShareMintingAccount.canMint(false))
+            if (!rewardShareMintingAccount.canMint())
                 throw ApiExceptionFactory.INSTANCE.createException(request, ApiError.CANNOT_MINT);
 
             MintingAccountData mintingAccountData = new MintingAccountData(mintingAccount.getPrivateKey(), mintingAccount.getPublicKey());
@@ -1067,50 +1064,4 @@ public class AdminResource {
         return "true";
     }
 
-    @GET
-    @Path("/systeminfo")
-    @Operation(
-            summary = "System Information",
-            description = "System memory usage and available processors.",
-            responses = {
-                    @ApiResponse(
-                            description = "memory usage and available processors",
-                            content = @Content(mediaType = MediaType.APPLICATION_JSON, schema = @Schema(implementation = SystemInfo.class))
-                    )
-            }
-    )
-    @ApiErrors({ApiError.REPOSITORY_ISSUE})
-    public SystemInfo getSystemInformation() {
-
-        SystemInfo info
-            = new SystemInfo(
-                Runtime.getRuntime().freeMemory(),
-                Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory(),
-                Runtime.getRuntime().totalMemory(),
-                Runtime.getRuntime().maxMemory(),
-                Runtime.getRuntime().availableProcessors());
-
-        return info;
-    }
-
-    @GET
-    @Path("/dbstates")
-    @Operation(
-            summary = "Get DB States",
-            description = "Get DB States",
-            responses = {
-                    @ApiResponse(
-                            content = @Content(mediaType = MediaType.APPLICATION_JSON, array = @ArraySchema(schema = @Schema(implementation = DbConnectionInfo.class)))
-                    )
-            }
-    )
-    public List<DbConnectionInfo> getDbConnectionsStates() {
-
-        try {
-            return Controller.REPOSITORY_FACTORY.getDbConnectionsStates();
-        } catch (Exception e) {
-            LOGGER.error(e.getMessage(), e);
-            return new ArrayList<>(0);
-        }
-    }
 }
@@ -77,9 +77,7 @@ public class ActiveChatsWebSocket extends ApiWebSocket {
         }
 
         try (final Repository repository = RepositoryManager.getRepository()) {
-            Boolean hasChatReference = getHasChatReference(session);
-
-            ActiveChats activeChats = repository.getChatRepository().getActiveChats(ourAddress, getTargetEncoding(session), hasChatReference);
+            ActiveChats activeChats = repository.getChatRepository().getActiveChats(ourAddress, getTargetEncoding(session));
 
             StringWriter stringWriter = new StringWriter();
 
@@ -105,20 +103,4 @@ public class ActiveChatsWebSocket extends ApiWebSocket {
         return Encoding.valueOf(encoding);
     }
 
-    private Boolean getHasChatReference(Session session) {
-        Map<String, List<String>> queryParams = session.getUpgradeRequest().getParameterMap();
-        List<String> hasChatReferenceList = queryParams.get("haschatreference");
-
-        // Return null if not specified
-        if (hasChatReferenceList != null && hasChatReferenceList.size() == 1) {
-            String value = hasChatReferenceList.get(0).toLowerCase();
-            if (value.equals("true")) {
-                return true;
-            } else if (value.equals("false")) {
-                return false;
-            }
-        }
-        return null; // Ignored if not present
-    }
-
 }
@@ -1,102 +0,0 @@
-package org.qortal.api.websocket;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.eclipse.jetty.websocket.api.Session;
-import org.eclipse.jetty.websocket.api.WebSocketException;
-import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
-import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
-import org.eclipse.jetty.websocket.api.annotations.OnWebSocketError;
-import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
-import org.eclipse.jetty.websocket.api.annotations.WebSocket;
-import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;
-import org.qortal.api.ApiError;
-import org.qortal.controller.Controller;
-import org.qortal.data.arbitrary.DataMonitorInfo;
-import org.qortal.event.DataMonitorEvent;
-import org.qortal.event.Event;
-import org.qortal.event.EventBus;
-import org.qortal.event.Listener;
-import org.qortal.repository.DataException;
-import org.qortal.repository.Repository;
-import org.qortal.repository.RepositoryManager;
-import org.qortal.utils.Base58;
-
-import java.io.IOException;
-import java.io.StringWriter;
-import java.util.List;
-
-@WebSocket
-@SuppressWarnings("serial")
-public class DataMonitorSocket extends ApiWebSocket implements Listener {
-
-    private static final Logger LOGGER = LogManager.getLogger(DataMonitorSocket.class);
-
-    @Override
-    public void configure(WebSocketServletFactory factory) {
-        LOGGER.info("configure");
-
-        factory.register(DataMonitorSocket.class);
-
-        EventBus.INSTANCE.addListener(this);
-    }
-
-    @Override
-    public void listen(Event event) {
-        if (!(event instanceof DataMonitorEvent))
-            return;
-
-        DataMonitorEvent dataMonitorEvent = (DataMonitorEvent) event;
-
-        for (Session session : getSessions())
-            sendDataEventSummary(session, buildInfo(dataMonitorEvent));
-    }
-
-    private DataMonitorInfo buildInfo(DataMonitorEvent dataMonitorEvent) {
-
-        return new DataMonitorInfo(
-            dataMonitorEvent.getTimestamp(),
-            dataMonitorEvent.getIdentifier(),
-            dataMonitorEvent.getName(),
-            dataMonitorEvent.getService(),
-            dataMonitorEvent.getDescription(),
-            dataMonitorEvent.getTransactionTimestamp(),
-            dataMonitorEvent.getLatestPutTimestamp()
-        );
-    }
-
-    @OnWebSocketConnect
-    @Override
-    public void onWebSocketConnect(Session session) {
-        super.onWebSocketConnect(session);
-    }
-
-    @OnWebSocketClose
-    @Override
-    public void onWebSocketClose(Session session, int statusCode, String reason) {
-        super.onWebSocketClose(session, statusCode, reason);
-    }
-
-    @OnWebSocketError
-    public void onWebSocketError(Session session, Throwable throwable) {
-        /* We ignore errors for now, but method here to silence log spam */
-    }
-
-    @OnWebSocketMessage
-    public void onWebSocketMessage(Session session, String message) {
-        LOGGER.info("onWebSocketMessage: message = " + message);
-    }
-
-    private void sendDataEventSummary(Session session, DataMonitorInfo dataMonitorInfo) {
-        StringWriter stringWriter = new StringWriter();
-
-        try {
-            marshall(stringWriter, dataMonitorInfo);
-
-            session.getRemote().sendStringByFuture(stringWriter.toString());
-        } catch (IOException | WebSocketException e) {
-            // No output this time
-        }
-    }
-
-}
@@ -98,7 +98,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
             byte[] codeHash = acctInfo.getKey().value;
             ACCT acct = acctInfo.getValue().get();
 
-            List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
+            List<ATStateData> atStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
                     isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
                     null, null, null);
 
@@ -259,7 +259,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
             ACCT acct = acctInfo.getValue().get();
 
             Integer dataByteOffset = acct.getModeByteOffset();
-            List<ATStateData> initialAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
+            List<ATStateData> initialAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
                     isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
                     null, null, null);
 
@@ -298,7 +298,7 @@ public class TradeOffersWebSocket extends ApiWebSocket implements Listener {
             byte[] codeHash = acctInfo.getKey().value;
             ACCT acct = acctInfo.getValue().get();
 
-            List<ATStateData> historicAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash, null, null,
+            List<ATStateData> historicAtStates = repository.getATRepository().getMatchingFinalATStates(codeHash,
                    isFinished, dataByteOffset, expectedValue, minimumFinalHeight,
                     null, null, null);
 
@@ -439,15 +439,7 @@ public class ArbitraryDataReader {
             // Ensure the complete hash matches the joined chunks
             if (!Arrays.equals(arbitraryDataFile.digest(), transactionData.getData())) {
                 // Delete the invalid file
-                LOGGER.info("Deleting invalid file: path = " + arbitraryDataFile.getFilePath());
-
-                if( arbitraryDataFile.delete() ) {
-                    LOGGER.info("Deleted invalid file successfully: path = " + arbitraryDataFile.getFilePath());
-                }
-                else {
-                    LOGGER.warn("Could not delete invalid file: path = " + arbitraryDataFile.getFilePath());
-                }
-
+                arbitraryDataFile.delete();
                 throw new DataException("Unable to validate complete file hash");
             }
         }
@@ -168,7 +168,7 @@ public class ArbitraryDataRenderer {
                 byte[] data = Files.readAllBytes(filePath); // TODO: limit file size that can be read into memory
                 HTMLParser htmlParser = new HTMLParser(resourceId, inPath, prefix, includeResourceIdInPrefix, data, qdnContext, service, identifier, theme, usingCustomRouting);
                 htmlParser.addAdditionalHeaderTags();
-                response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; font-src 'self' data:; media-src 'self' data: blob:; img-src 'self' data: blob:; connect-src 'self' wss:;");
+                response.addHeader("Content-Security-Policy", "default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' data: blob:; img-src 'self' data: blob:;");
                 response.setContentType(context.getMimeType(filename));
                 response.setContentLength(htmlParser.getData().length);
                 response.getOutputStream().write(htmlParser.getData());
@@ -23,11 +23,12 @@ import org.qortal.data.at.ATStateData;
 import org.qortal.data.block.BlockData;
 import org.qortal.data.block.BlockSummaryData;
 import org.qortal.data.block.BlockTransactionData;
-import org.qortal.data.group.GroupAdminData;
 import org.qortal.data.network.OnlineAccountData;
 import org.qortal.data.transaction.TransactionData;
-import org.qortal.group.Group;
-import org.qortal.repository.*;
+import org.qortal.repository.ATRepository;
+import org.qortal.repository.DataException;
+import org.qortal.repository.Repository;
+import org.qortal.repository.TransactionRepository;
 import org.qortal.settings.Settings;
 import org.qortal.transaction.AtTransaction;
 import org.qortal.transaction.Transaction;
@@ -39,7 +40,6 @@ import org.qortal.transform.block.BlockTransformer;
 import org.qortal.transform.transaction.TransactionTransformer;
 import org.qortal.utils.Amounts;
 import org.qortal.utils.Base58;
-import org.qortal.utils.Groups;
 import org.qortal.utils.NTP;
 
 import java.io.ByteArrayOutputStream;
@@ -144,14 +144,11 @@ public class Block {
         private final Account mintingAccount;
         private final AccountData mintingAccountData;
         private final boolean isMinterFounder;
-        private final boolean isMinterMember;
 
         private final Account recipientAccount;
         private final AccountData recipientAccountData;
 
-        final BlockChain blockChain = BlockChain.getInstance();
-
-        ExpandedAccount(Repository repository, RewardShareData rewardShareData, int blockHeight) throws DataException {
+        ExpandedAccount(Repository repository, RewardShareData rewardShareData) throws DataException {
             this.rewardShareData = rewardShareData;
             this.sharePercent = this.rewardShareData.getSharePercent();
 
@@ -160,12 +157,6 @@ public class Block {
             this.isMinterFounder = Account.isFounder(mintingAccountData.getFlags());
 
             this.isRecipientAlsoMinter = this.rewardShareData.getRecipient().equals(this.mintingAccount.getAddress());
-            this.isMinterMember
-                = Groups.memberExistsInAnyGroup(
-                    repository.getGroupRepository(),
-                    Groups.getGroupIdsToMint(BlockChain.getInstance(), blockHeight),
-                    this.mintingAccount.getAddress()
-                );
 
             if (this.isRecipientAlsoMinter) {
                 // Self-share: minter is also recipient
@@ -178,19 +169,6 @@ public class Block {
             }
         }
 
-        /**
-         * Get Effective Minting Level
-         *
-         * @return the effective minting level, if a data exception is thrown, it catches the exception and returns a zero
-         */
-        public int getEffectiveMintingLevel() {
-            try {
-                return this.mintingAccount.getEffectiveMintingLevel();
-            } catch (DataException e) {
-                return 0;
-            }
-        }
-
         public Account getMintingAccount() {
             return this.mintingAccount;
         }
@@ -207,19 +185,15 @@ public class Block {
         * @return account-level share "bin" from blockchain config, or null if founder / none found
         */
        public AccountLevelShareBin getShareBin(int blockHeight) {
-            if (this.isMinterFounder && blockHeight < BlockChain.getInstance().getAdminsReplaceFoundersHeight())
+            if (this.isMinterFounder)
                 return null;
 
             final int accountLevel = this.mintingAccountData.getLevel();
             if (accountLevel <= 0)
                 return null; // level 0 isn't included in any share bins
 
-            if (blockHeight >= blockChain.getFixBatchRewardHeight()) {
-                if (!this.isMinterMember)
-                    return null; // not member of minter group isn't included in any share bins
-            }
-
             // Select the correct set of share bins based on block height
+            final BlockChain blockChain = BlockChain.getInstance();
             final AccountLevelShareBin[] shareBinsByLevel = (blockHeight >= blockChain.getSharesByLevelV2Height()) ?
                     blockChain.getShareBinsByAccountLevelV2() : blockChain.getShareBinsByAccountLevelV1();
 
@@ -424,9 +398,7 @@ public class Block {
         onlineAccounts.removeIf(a -> a.getNonce() == null || a.getNonce() < 0);
 
         // After feature trigger, remove any online accounts that are level 0
-        // but only if they are before the ignore level feature trigger
-        if (height < BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() &&
-                height >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
+        if (height >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
             onlineAccounts.removeIf(a -> {
                 try {
                     return Account.getRewardShareEffectiveMintingLevel(repository, a.getPublicKey()) == 0;
@@ -437,21 +409,6 @@ public class Block {
             });
         }
 
-        // After feature trigger, remove any online accounts that are not minter group member
-        if (height >= BlockChain.getInstance().getGroupMemberCheckHeight()) {
-            onlineAccounts.removeIf(a -> {
-                try {
-                    List<Integer> groupIdsToMint = Groups.getGroupIdsToMint(BlockChain.getInstance(), height);
-                    String address = Account.getRewardShareMintingAddress(repository, a.getPublicKey());
-                    boolean isMinterGroupMember = Groups.memberExistsInAnyGroup(repository.getGroupRepository(), groupIdsToMint, address);
-                    return !isMinterGroupMember;
-                } catch (DataException e) {
-                    // Something went wrong, so remove the account
-                    return true;
-                }
-            });
-        }
-
         if (onlineAccounts.isEmpty()) {
             LOGGER.debug("No online accounts - not even our own?");
             return null;
@@ -758,12 +715,10 @@ public class Block {
 
         List<ExpandedAccount> expandedAccounts = new ArrayList<>();
 
-        for (RewardShareData rewardShare : this.cachedOnlineRewardShares) {
-            expandedAccounts.add(new ExpandedAccount(repository, rewardShare, this.blockData.getHeight()));
-        }
+        for (RewardShareData rewardShare : this.cachedOnlineRewardShares)
+            expandedAccounts.add(new ExpandedAccount(repository, rewardShare));
 
         this.cachedExpandedAccounts = expandedAccounts;
-        LOGGER.trace(() -> String.format("Online reward-shares after expanded accounts %s", this.cachedOnlineRewardShares));
 
         return this.cachedExpandedAccounts;
     }
@@ -1169,32 +1124,14 @@ public class Block {
         if (onlineRewardShares == null)
             return ValidationResult.ONLINE_ACCOUNT_UNKNOWN;
 
-        // After feature trigger, require all online account minters to be greater than level 0,
-        // but only if it is before the feature trigger where we ignore level again
-        if (this.blockData.getHeight() < BlockChain.getInstance().getIgnoreLevelForRewardShareHeight() &&
-                this.getBlockData().getHeight() >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
-            List<ExpandedAccount> expandedAccounts
-                = this.getExpandedAccounts().stream()
-                    .filter(expandedAccount -> expandedAccount.isMinterMember)
-                    .collect(Collectors.toList());
-
+        // After feature trigger, require all online account minters to be greater than level 0
+        if (this.getBlockData().getHeight() >= BlockChain.getInstance().getOnlineAccountMinterLevelValidationHeight()) {
+            List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
             for (ExpandedAccount account : expandedAccounts) {
                 if (account.getMintingAccount().getEffectiveMintingLevel() == 0)
                     return ValidationResult.ONLINE_ACCOUNTS_INVALID;
-
-                if (this.getBlockData().getHeight() >= BlockChain.getInstance().getFixBatchRewardHeight()) {
-                    if (!account.isMinterMember)
-                        return ValidationResult.ONLINE_ACCOUNTS_INVALID;
-                }
             }
         }
-        else if (this.blockData.getHeight() >= BlockChain.getInstance().getIgnoreLevelForRewardShareHeight()){
-            Optional<ExpandedAccount> anyInvalidAccount
-                = this.getExpandedAccounts().stream()
-                    .filter(account -> !account.isMinterMember)
-                    .findAny();
-            if( anyInvalidAccount.isPresent() ) return ValidationResult.ONLINE_ACCOUNTS_INVALID;
-        }
 
         // If block is past a certain age then we simply assume the signatures were correct
         long signatureRequirementThreshold = NTP.getTime() - BlockChain.getInstance().getOnlineAccountSignaturesMinLifetime();
@@ -1321,7 +1258,6 @@ public class Block {
 
         // Online Accounts
         ValidationResult onlineAccountsResult = this.areOnlineAccountsValid();
-        LOGGER.trace("Accounts valid = {}", onlineAccountsResult);
         if (onlineAccountsResult != ValidationResult.OK)
             return onlineAccountsResult;
 
@@ -1410,7 +1346,7 @@ public class Block {
             // Check transaction can even be processed
             validationResult = transaction.isProcessable();
             if (validationResult != Transaction.ValidationResult.OK) {
-                LOGGER.debug(String.format("Error during transaction validation, tx %s: %s", Base58.encode(transactionData.getSignature()), validationResult.name()));
+                LOGGER.info(String.format("Error during transaction validation, tx %s: %s", Base58.encode(transactionData.getSignature()), validationResult.name()));
                 return ValidationResult.TRANSACTION_INVALID;
             }
 
@@ -1582,7 +1518,7 @@ public class Block {
             return false;
 
         Account mintingAccount = new PublicKeyAccount(this.repository, rewardShareData.getMinterPublicKey());
-        return mintingAccount.canMint(false);
+        return mintingAccount.canMint();
     }
 
     /**
@@ -1611,7 +1547,6 @@ public class Block {
         this.blockData.setHeight(blockchainHeight + 1);
 
         LOGGER.trace(() -> String.format("Processing block %d", this.blockData.getHeight()));
-        LOGGER.trace(() -> String.format("Online Reward Shares in process %s", this.cachedOnlineRewardShares));
 
         if (this.blockData.getHeight() > 1) {
 
@@ -1683,17 +1618,7 @@ public class Block {
         final List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
         final int maximumLevel = cumulativeBlocksByLevel.size() - 1;
 
-        final List<ExpandedAccount> expandedAccounts;
-
-        if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
-            expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
-        }
-        else {
-            expandedAccounts
-                = this.getExpandedAccounts().stream()
-                    .filter(expandedAccount -> expandedAccount.isMinterMember)
-                    .collect(Collectors.toList());
-        }
+        final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
 
         Set<AccountData> allUniqueExpandedAccounts = new HashSet<>();
         for (ExpandedAccount expandedAccount : expandedAccounts) {
@@ -2093,17 +2018,7 @@ public class Block {
         final List<Integer> cumulativeBlocksByLevel = BlockChain.getInstance().getCumulativeBlocksByLevel();
         final int maximumLevel = cumulativeBlocksByLevel.size() - 1;
 
-        final List<ExpandedAccount> expandedAccounts;
-
-        if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
-            expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
-        }
-        else {
-            expandedAccounts
-                = this.getExpandedAccounts().stream()
-                    .filter(expandedAccount -> expandedAccount.isMinterMember)
-                    .collect(Collectors.toList());
-        }
+        final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
 
         Set<AccountData> allUniqueExpandedAccounts = new HashSet<>();
         for (ExpandedAccount expandedAccount : expandedAccounts) {
@@ -2298,7 +2213,6 @@ public class Block {
         List<AccountBalanceData> accountBalanceDeltas = balanceChanges.entrySet().stream()
|
List<AccountBalanceData> accountBalanceDeltas = balanceChanges.entrySet().stream()
|
||||||
.map(entry -> new AccountBalanceData(entry.getKey(), Asset.QORT, entry.getValue()))
|
.map(entry -> new AccountBalanceData(entry.getKey(), Asset.QORT, entry.getValue()))
|
||||||
.collect(Collectors.toList());
|
.collect(Collectors.toList());
|
||||||
LOGGER.trace("Account Balance Deltas: {}", accountBalanceDeltas);
|
|
||||||
this.repository.getAccountRepository().modifyAssetBalances(accountBalanceDeltas);
|
this.repository.getAccountRepository().modifyAssetBalances(accountBalanceDeltas);
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -2307,17 +2221,7 @@ public class Block {
|
|||||||
List<BlockRewardCandidate> rewardCandidates = new ArrayList<>();
|
List<BlockRewardCandidate> rewardCandidates = new ArrayList<>();
|
||||||
|
|
||||||
// All online accounts
|
// All online accounts
|
||||||
final List<ExpandedAccount> expandedAccounts;
|
final List<ExpandedAccount> expandedAccounts = this.getExpandedAccounts();
|
||||||
|
|
||||||
if (this.getBlockData().getHeight() < BlockChain.getInstance().getFixBatchRewardHeight()) {
|
|
||||||
expandedAccounts = this.getExpandedAccounts().stream().collect(Collectors.toList());
|
|
||||||
}
|
|
||||||
else {
|
|
||||||
expandedAccounts
|
|
||||||
= this.getExpandedAccounts().stream()
|
|
||||||
.filter(expandedAccount -> expandedAccount.isMinterMember)
|
|
||||||
.collect(Collectors.toList());
|
|
||||||
}
|
|
||||||
|
|
||||||
/*
|
/*
|
||||||
* Distribution rules:
|
* Distribution rules:
|
||||||
@ -2360,6 +2264,7 @@ public class Block {
|
|||||||
// Select the correct set of share bins based on block height
|
// Select the correct set of share bins based on block height
|
||||||
List<AccountLevelShareBin> accountLevelShareBinsForBlock = (this.blockData.getHeight() >= BlockChain.getInstance().getSharesByLevelV2Height()) ?
|
List<AccountLevelShareBin> accountLevelShareBinsForBlock = (this.blockData.getHeight() >= BlockChain.getInstance().getSharesByLevelV2Height()) ?
|
||||||
BlockChain.getInstance().getAccountLevelShareBinsV2() : BlockChain.getInstance().getAccountLevelShareBinsV1();
|
BlockChain.getInstance().getAccountLevelShareBinsV2() : BlockChain.getInstance().getAccountLevelShareBinsV1();
|
||||||
|
|
||||||
// Determine reward candidates based on account level
|
// Determine reward candidates based on account level
|
||||||
// This needs a deep copy, so the shares can be modified when tiers aren't activated yet
|
// This needs a deep copy, so the shares can be modified when tiers aren't activated yet
|
||||||
List<AccountLevelShareBin> accountLevelShareBins = new ArrayList<>();
|
List<AccountLevelShareBin> accountLevelShareBins = new ArrayList<>();
|
||||||
@ -2442,7 +2347,7 @@ public class Block {
|
|||||||
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(this.blockData.getHeight());
|
final long qoraHoldersShare = BlockChain.getInstance().getQoraHoldersShareAtHeight(this.blockData.getHeight());
|
||||||
|
|
||||||
// Perform account-level-based reward scaling if appropriate
|
// Perform account-level-based reward scaling if appropriate
|
||||||
if (!haveFounders && this.blockData.getHeight() < BlockChain.getInstance().getAdminsReplaceFoundersHeight() ) {
|
if (!haveFounders) {
|
||||||
// Recalculate distribution ratios based on candidates
|
// Recalculate distribution ratios based on candidates
|
||||||
|
|
||||||
// Nothing shared? This shouldn't happen
|
// Nothing shared? This shouldn't happen
|
||||||
@ -2478,103 +2383,18 @@ public class Block {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Add founders as reward candidate if appropriate
|
// Add founders as reward candidate if appropriate
|
||||||
if (haveFounders && this.blockData.getHeight() < BlockChain.getInstance().getAdminsReplaceFoundersHeight()) {
|
if (haveFounders) {
|
||||||
// Yes: add to reward candidates list
|
// Yes: add to reward candidates list
|
||||||
BlockRewardDistributor founderDistributor = (distributionAmount, balanceChanges) -> distributeBlockRewardShare(distributionAmount, onlineFounderAccounts, balanceChanges);
|
BlockRewardDistributor founderDistributor = (distributionAmount, balanceChanges) -> distributeBlockRewardShare(distributionAmount, onlineFounderAccounts, balanceChanges);
|
||||||
|
|
||||||
final long foundersShare = 1_00000000 - totalShares;
|
final long foundersShare = 1_00000000 - totalShares;
|
||||||
BlockRewardCandidate rewardCandidate = new BlockRewardCandidate("Founders", foundersShare, founderDistributor);
|
BlockRewardCandidate rewardCandidate = new BlockRewardCandidate("Founders", foundersShare, founderDistributor);
|
||||||
rewardCandidates.add(rewardCandidate);
|
rewardCandidates.add(rewardCandidate);
|
||||||
LOGGER.info("logging foundersShare prior to reward modifications {}",foundersShare);
|
|
||||||
}
|
|
||||||
else if (this.blockData.getHeight() >= BlockChain.getInstance().getAdminsReplaceFoundersHeight()) {
|
|
||||||
try (final Repository repository = RepositoryManager.getRepository()) {
|
|
||||||
GroupRepository groupRepository = repository.getGroupRepository();
|
|
||||||
|
|
||||||
List<Integer> mintingGroupIds = Groups.getGroupIdsToMint(BlockChain.getInstance(), this.blockData.getHeight());
|
|
||||||
|
|
||||||
// all minter admins
|
|
||||||
List<String> minterAdmins = Groups.getAllAdmins(groupRepository, mintingGroupIds);
|
|
||||||
|
|
||||||
// all minter admins that are online
|
|
||||||
List<ExpandedAccount> onlineMinterAdminAccounts
|
|
||||||
= expandedAccounts.stream()
|
|
||||||
.filter(expandedAccount -> minterAdmins.contains(expandedAccount.getMintingAccount().getAddress()))
|
|
||||||
.collect(Collectors.toList());
|
|
||||||
|
|
||||||
long minterAdminShare;
|
|
||||||
|
|
||||||
if( onlineMinterAdminAccounts.isEmpty() ) {
|
|
||||||
minterAdminShare = 0;
|
|
||||||
}
|
|
||||||
else {
|
|
||||||
BlockRewardDistributor minterAdminDistributor
|
|
||||||
= (distributionAmount, balanceChanges)
|
|
||||||
->
|
|
||||||
distributeBlockRewardShare(distributionAmount, onlineMinterAdminAccounts, balanceChanges);
|
|
||||||
|
|
||||||
long adminShare = 1_00000000 - totalShares;
|
|
||||||
LOGGER.info("initial total Shares: {}", totalShares);
|
|
||||||
LOGGER.info("logging adminShare after hardfork, this is the primary reward that will be split {}", adminShare);
|
|
||||||
|
|
||||||
minterAdminShare = adminShare / 2;
|
|
||||||
BlockRewardCandidate minterAdminRewardCandidate
|
|
||||||
= new BlockRewardCandidate("Minter Admins", minterAdminShare, minterAdminDistributor);
|
|
||||||
rewardCandidates.add(minterAdminRewardCandidate);
|
|
||||||
|
|
||||||
totalShares += minterAdminShare;
|
|
||||||
}
|
|
||||||
|
|
||||||
LOGGER.info("MINTER ADMIN SHARE: {}",minterAdminShare);
|
|
||||||
|
|
||||||
// all dev admins
|
|
||||||
List<String> devAdminAddresses
|
|
||||||
= groupRepository.getGroupAdmins(1).stream()
|
|
||||||
.map(GroupAdminData::getAdmin)
|
|
||||||
.collect(Collectors.toList());
|
|
||||||
|
|
||||||
LOGGER.info("Removing NULL Account Address, Dev Admin Count = {}", devAdminAddresses.size());
|
|
||||||
devAdminAddresses.removeIf( address -> Group.NULL_OWNER_ADDRESS.equals(address) );
|
|
||||||
LOGGER.info("Removed NULL Account Address, Dev Admin Count = {}", devAdminAddresses.size());
|
|
||||||
|
|
||||||
BlockRewardDistributor devAdminDistributor
|
|
||||||
= (distributionAmount, balanceChanges) -> distributeToAccounts(distributionAmount, devAdminAddresses, balanceChanges);
|
|
||||||
|
|
||||||
long devAdminShare = 1_00000000 - totalShares;
|
|
||||||
LOGGER.info("DEV ADMIN SHARE: {}",devAdminShare);
|
|
||||||
BlockRewardCandidate devAdminRewardCandidate
|
|
||||||
= new BlockRewardCandidate("Dev Admins", devAdminShare,devAdminDistributor);
|
|
||||||
rewardCandidates.add(devAdminRewardCandidate);
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
return rewardCandidates;
|
return rewardCandidates;
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
|
||||||
* Distribute To Accounts
|
|
||||||
*
|
|
||||||
* Merges distribute shares to a map of distribution shares.
|
|
||||||
*
|
|
||||||
* @param distributionAmount the amount to distribute
|
|
||||||
* @param accountAddressess the addresses to distribute to
|
|
||||||
* @param balanceChanges the map of distribution shares, this gets appended to
|
|
||||||
*
|
|
||||||
* @return the total amount mapped to addresses for distribution
|
|
||||||
*/
|
|
||||||
public static long distributeToAccounts(long distributionAmount, List<String> accountAddressess, Map<String, Long> balanceChanges) {
|
|
||||||
|
|
||||||
if( accountAddressess.isEmpty() ) return 0;
|
|
||||||
|
|
||||||
long distibutionShare = distributionAmount / accountAddressess.size();
|
|
||||||
|
|
||||||
for(String accountAddress : accountAddressess ) {
|
|
||||||
balanceChanges.merge(accountAddress, distibutionShare, Long::sum);
|
|
||||||
}
|
|
||||||
|
|
||||||
return distibutionShare * accountAddressess.size();
|
|
||||||
}
|
|
||||||
|
|
||||||
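The master-only distributeToAccounts helper above splits a reward equally across a list of addresses using integer division, so the value it returns (share multiplied by recipient count) can be slightly less than the requested distributionAmount, leaving a small remainder undistributed. A minimal, self-contained sketch of that rounding behaviour follows; the method body is copied from the diff, while the main method, the sample amount, and the "Qaaa"/"Qbbb"/"Qccc" addresses are purely illustrative.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DistributeToAccountsSketch {
    // Copied from the master-side method shown in the diff above
    public static long distributeToAccounts(long distributionAmount, List<String> accountAddressess, Map<String, Long> balanceChanges) {
        if (accountAddressess.isEmpty()) return 0;

        long distibutionShare = distributionAmount / accountAddressess.size();

        for (String accountAddress : accountAddressess) {
            balanceChanges.merge(accountAddress, distibutionShare, Long::sum);
        }

        return distibutionShare * accountAddressess.size();
    }

    public static void main(String[] args) {
        Map<String, Long> balanceChanges = new HashMap<>();
        // 1 QORT in the same 1_00000000 fixed-point units used by the diff, split across three hypothetical addresses
        long distributed = distributeToAccounts(1_00000000L, List.of("Qaaa", "Qbbb", "Qccc"), balanceChanges);
        System.out.println(distributed);    // 99999999: one unit of dust is not distributed
        System.out.println(balanceChanges); // each address mapped to 33333333
    }
}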
private static long distributeBlockRewardShare(long distributionAmount, List<ExpandedAccount> accounts, Map<String, Long> balanceChanges) {
// Collate all expanded accounts by minting account
Map<String, List<ExpandedAccount>> accountsByMinter = new HashMap<>();
@@ -2734,11 +2554,9 @@ public class Block {
return;

int minterLevel = Account.getRewardShareEffectiveMintingLevel(this.repository, this.getMinter().getPublicKey());
- String minterAddress = Account.getRewardShareMintingAddress(this.repository, this.getMinter().getPublicKey());

LOGGER.debug(String.format("======= BLOCK %d (%.8s) =======", this.getBlockData().getHeight(), Base58.encode(this.getSignature())));
LOGGER.debug(String.format("Timestamp: %d", this.getBlockData().getTimestamp()));
- LOGGER.debug(String.format("Minter address: %s", minterAddress));
LOGGER.debug(String.format("Minter level: %d", minterLevel));
LOGGER.debug(String.format("Online accounts: %d", this.getBlockData().getOnlineAccountsCount()));
LOGGER.debug(String.format("AT count: %d", this.getBlockData().getATCount()));
@@ -71,7 +71,6 @@ public class BlockChain {
transactionV6Timestamp,
disableReferenceTimestamp,
increaseOnlineAccountsDifficultyTimestamp,
- decreaseOnlineAccountsDifficultyTimestamp,
onlineAccountMinterLevelValidationHeight,
selfSponsorshipAlgoV1Height,
selfSponsorshipAlgoV2Height,
@@ -86,13 +85,7 @@ public class BlockChain {
disableRewardshareHeight,
enableRewardshareHeight,
onlyMintWithNameHeight,
- removeOnlyMintWithNameHeight,
- groupMemberCheckHeight,
- fixBatchRewardHeight,
- adminsReplaceFoundersHeight,
- nullGroupMembershipHeight,
- ignoreLevelForRewardShareHeight,
- adminQueryFixHeight
+ groupMemberCheckHeight
}

// Custom transaction fees
@@ -212,13 +205,7 @@ public class BlockChain {
private int minAccountLevelToRewardShare;
private int maxRewardSharesPerFounderMintingAccount;
private int founderEffectiveMintingLevel;
- public static class IdsForHeight {
- public int height;
- public List<Integer> ids;
- }

- private List<IdsForHeight> mintingGroupIds;
+ private int mintingGroupId;

/** Minimum time to retain online account signatures (ms) for block validity checks. */
private long onlineAccountSignaturesMinLifetime;
@@ -230,10 +217,6 @@ public class BlockChain {
* featureTriggers because unit tests need to set this value via Reflection. */
private long onlineAccountsModulusV2Timestamp;

- /** Feature trigger timestamp for ONLINE_ACCOUNTS_MODULUS time interval decrease. Can't use
- * featureTriggers because unit tests need to set this value via Reflection. */
- private long onlineAccountsModulusV3Timestamp;

/** Snapshot timestamp for self sponsorship algo V1 */
private long selfSponsorshipAlgoV1SnapshotTimestamp;

@@ -420,10 +403,6 @@ public class BlockChain {
return this.onlineAccountsModulusV2Timestamp;
}

- public long getOnlineAccountsModulusV3Timestamp() {
- return this.onlineAccountsModulusV3Timestamp;
- }

/* Block reward batching */
public long getBlockRewardBatchStartHeight() {
return this.blockRewardBatchStartHeight;
@@ -550,8 +529,8 @@ public class BlockChain {
return this.onlineAccountSignaturesMaxLifetime;
}

- public List<IdsForHeight> getMintingGroupIds() {
- return mintingGroupIds;
+ public int getMintingGroupId() {
+ return this.mintingGroupId;
}

public CiyamAtSettings getCiyamAtSettings() {
@@ -600,10 +579,6 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.increaseOnlineAccountsDifficultyTimestamp.name()).longValue();
}

- public long getDecreaseOnlineAccountsDifficultyTimestamp() {
- return this.featureTriggers.get(FeatureTrigger.decreaseOnlineAccountsDifficultyTimestamp.name()).longValue();
- }

public int getSelfSponsorshipAlgoV1Height() {
return this.featureTriggers.get(FeatureTrigger.selfSponsorshipAlgoV1Height.name()).intValue();
}
@@ -660,34 +635,10 @@ public class BlockChain {
return this.featureTriggers.get(FeatureTrigger.onlyMintWithNameHeight.name()).intValue();
}

- public int getRemoveOnlyMintWithNameHeight() {
- return this.featureTriggers.get(FeatureTrigger.removeOnlyMintWithNameHeight.name()).intValue();
- }

public int getGroupMemberCheckHeight() {
return this.featureTriggers.get(FeatureTrigger.groupMemberCheckHeight.name()).intValue();
}

- public int getFixBatchRewardHeight() {
- return this.featureTriggers.get(FeatureTrigger.fixBatchRewardHeight.name()).intValue();
- }

- public int getAdminsReplaceFoundersHeight() {
- return this.featureTriggers.get(FeatureTrigger.adminsReplaceFoundersHeight.name()).intValue();
- }

- public int getNullGroupMembershipHeight() {
- return this.featureTriggers.get(FeatureTrigger.nullGroupMembershipHeight.name()).intValue();
- }

- public int getIgnoreLevelForRewardShareHeight() {
- return this.featureTriggers.get(FeatureTrigger.ignoreLevelForRewardShareHeight.name()).intValue();
- }

- public int getAdminQueryFixHeight() {
- return this.featureTriggers.get(FeatureTrigger.adminQueryFixHeight.name()).intValue();
- }

// More complex getters for aspects that change by height or timestamp

public long getRewardAtHeight(int ourHeight) {
@@ -64,7 +64,6 @@ public class BlockMinter extends Thread {
@Override
public void run() {
Thread.currentThread().setName("BlockMinter");
- Thread.currentThread().setPriority(MAX_PRIORITY);

if (Settings.getInstance().isTopOnly() || Settings.getInstance().isLite()) {
// Top only and lite nodes do not sign blocks
@@ -97,27 +96,21 @@ public class BlockMinter extends Thread {

final boolean isSingleNodeTestnet = Settings.getInstance().isSingleNodeTestnet();

+ try (final Repository repository = RepositoryManager.getRepository()) {
+ // Going to need this a lot...
+ BlockRepository blockRepository = repository.getBlockRepository();

// Flags for tracking change in whether minting is possible,
// so we can notify Controller, and further update SysTray, etc.
boolean isMintingPossible = false;
boolean wasMintingPossible = isMintingPossible;
- try {
while (running) {
- // recreate repository for new loop iteration
- try (final Repository repository = RepositoryManager.getRepository()) {

- // Going to need this a lot...
- BlockRepository blockRepository = repository.getBlockRepository();

if (isMintingPossible != wasMintingPossible)
Controller.getInstance().onMintingPossibleChange(isMintingPossible);

wasMintingPossible = isMintingPossible;

try {
- // reset the repository, to the repository recreated for this loop iteration
- for( Block newBlock : newBlocks ) newBlock.setRepository(repository);

// Free up any repository locks
repository.discardChanges();

@@ -154,7 +147,7 @@ public class BlockMinter extends Thread {
}

Account mintingAccount = new Account(repository, rewardShareData.getMinter());
- if (!mintingAccount.canMint(true)) {
+ if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
madi.remove();
continue;
@@ -389,7 +382,7 @@ public class BlockMinter extends Thread {
// Add unconfirmed transactions
addUnconfirmedTransactions(repository, newBlock);

- LOGGER.info(String.format("Adding %d unconfirmed transactions took %d ms", newBlock.getTransactions().size(), (NTP.getTime() - unconfirmedStartTime)));
+ LOGGER.info(String.format("Adding %d unconfirmed transactions took %d ms", newBlock.getTransactions().size(), (NTP.getTime()-unconfirmedStartTime)));

// Sign to create block's signature
newBlock.sign();
@@ -458,14 +451,9 @@ public class BlockMinter extends Thread {
// We've been interrupted - time to exit
return;
}
+ }
} catch (DataException e) {
LOGGER.warn("Repository issue while running block minter - NO LONGER MINTING", e);
- } catch (Exception e) {
- LOGGER.error(e.getMessage(), e);
- }
- }
- } catch (Exception e) {
- LOGGER.error(e.getMessage(), e);
}
}

@@ -13,8 +13,6 @@ import org.qortal.block.Block;
import org.qortal.block.BlockChain;
import org.qortal.block.BlockChain.BlockTimingByHeight;
import org.qortal.controller.arbitrary.*;
- import org.qortal.controller.hsqldb.HSQLDBBalanceRecorder;
- import org.qortal.controller.hsqldb.HSQLDBDataCacheManager;
import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
import org.qortal.controller.repository.PruneManager;
import org.qortal.controller.tradebot.TradeBot;
@@ -73,8 +71,6 @@ import java.util.stream.Collectors;

public class Controller extends Thread {

- public static HSQLDBRepositoryFactory REPOSITORY_FACTORY;

static {
// This must go before any calls to LogManager/Logger
System.setProperty("log4j2.formatMsgNoLookups", "true");
@@ -103,7 +99,7 @@ public class Controller extends Thread {
private final long buildTimestamp; // seconds
private final String[] savedArgs;

- private ExecutorService callbackExecutor = Executors.newFixedThreadPool(4);
+ private ExecutorService callbackExecutor = Executors.newFixedThreadPool(3);
private volatile boolean notifyGroupMembershipChange = false;

/** Latest blocks on our chain. Note: tail/last is the latest block. */
@@ -405,44 +401,14 @@ public class Controller extends Thread {

LOGGER.info("Starting repository");
try {
- REPOSITORY_FACTORY = new HSQLDBRepositoryFactory(getRepositoryUrl());
- RepositoryManager.setRepositoryFactory(REPOSITORY_FACTORY);
+ RepositoryFactory repositoryFactory = new HSQLDBRepositoryFactory(getRepositoryUrl());
+ RepositoryManager.setRepositoryFactory(repositoryFactory);
RepositoryManager.setRequestedCheckpoint(Boolean.TRUE);

try (final Repository repository = RepositoryManager.getRepository()) {
- // RepositoryManager.rebuildTransactionSequences(repository);
+ RepositoryManager.rebuildTransactionSequences(repository);
ArbitraryDataCacheManager.getInstance().buildArbitraryResourcesCache(repository, false);
}

- if( Settings.getInstance().isDbCacheEnabled() ) {
- LOGGER.info("Db Cache Starting ...");
- HSQLDBDataCacheManager hsqldbDataCacheManager = new HSQLDBDataCacheManager();
- hsqldbDataCacheManager.start();
- }
- else {
- LOGGER.info("Db Cache Disabled");
- }

- LOGGER.info("Arbitrary Indexing Starting ...");
- ArbitraryIndexUtils.startCaching(
- Settings.getInstance().getArbitraryIndexingPriority(),
- Settings.getInstance().getArbitraryIndexingFrequency()
- );

- if( Settings.getInstance().isBalanceRecorderEnabled() ) {
- Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();

- if( recorder.isPresent() ) {
- LOGGER.info("Balance Recorder Starting ...");
- recorder.get().start();
- }
- else {
- LOGGER.info("Balance Recorder won't start.");
- }
- }
- else {
- LOGGER.info("Balance Recorder Disabled");
- }
} catch (DataException e) {
// If exception has no cause or message then repository is in use by some other process.
if (e.getCause() == null && e.getMessage() == null) {
@@ -523,6 +489,7 @@ public class Controller extends Thread {
@Override
public void run() {
Thread.currentThread().setName("Shutdown hook");

Controller.getInstance().shutdown();
}
});
@@ -547,16 +514,6 @@ public class Controller extends Thread {
ArbitraryDataStorageManager.getInstance().start();
ArbitraryDataRenderManager.getInstance().start();

- // start rebuild arbitrary resource cache timer task
- if( Settings.getInstance().isRebuildArbitraryResourceCacheTaskEnabled() ) {
- new Timer().schedule(
- new RebuildArbitraryResourceCacheTask(),
- Settings.getInstance().getRebuildArbitraryResourceCacheTaskDelay() * RebuildArbitraryResourceCacheTask.MILLIS_IN_MINUTE,
- Settings.getInstance().getRebuildArbitraryResourceCacheTaskPeriod() * RebuildArbitraryResourceCacheTask.MILLIS_IN_HOUR
- );
- }

LOGGER.info("Starting online accounts manager");
OnlineAccountsManager.getInstance().start();

@@ -612,33 +569,10 @@ public class Controller extends Thread {
// If GUI is enabled, we're no longer starting up but actually running now
Gui.getInstance().notifyRunning();

- if (Settings.getInstance().isAutoRestartEnabled()) {
- // Check every 10 minutes if we have enough connected peers
- Timer checkConnectedPeers = new Timer();

- checkConnectedPeers.schedule(new TimerTask() {
- @Override
- public void run() {
- // Get the connected peers
- int myConnectedPeers = Network.getInstance().getImmutableHandshakedPeers().size();
- LOGGER.debug("Node have {} connected peers", myConnectedPeers);
- if (myConnectedPeers == 0) {
- // Restart node if we have 0 peers
- LOGGER.info("Node have no connected peers, restarting node");
- try {
- RestartNode.attemptToRestart();
- } catch (Exception e) {
- LOGGER.error("Unable to restart the node", e);
- }
- }
- }
- }, 10*60*1000, 10*60*1000);
- }

// Check every 10 minutes to see if the block minter is running
- Timer checkBlockMinter = new Timer();
+ Timer timer = new Timer();

- checkBlockMinter.schedule(new TimerTask() {
+ timer.schedule(new TimerTask() {
@Override
public void run() {
if (blockMinter.isAlive()) {
@@ -672,8 +606,10 @@ public class Controller extends Thread {
boolean canBootstrap = Settings.getInstance().getBootstrap();
boolean needsArchiveRebuild = false;
int checkHeight = 0;
+ Repository repository = null;

- try (final Repository repository = RepositoryManager.getRepository()){
+ try {
+ repository = RepositoryManager.getRepository();
needsArchiveRebuild = (repository.getBlockArchiveRepository().fromHeight(2) == null);
checkHeight = repository.getBlockRepository().getBlockchainHeight();
} catch (DataException e) {
@@ -13,7 +13,6 @@ import org.qortal.crypto.MemoryPoW;
import org.qortal.crypto.Qortal25519Extras;
import org.qortal.data.account.MintingAccountData;
import org.qortal.data.account.RewardShareData;
- import org.qortal.data.group.GroupMemberData;
import org.qortal.data.network.OnlineAccountData;
import org.qortal.network.Network;
import org.qortal.network.Peer;
@@ -25,7 +24,6 @@ import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.utils.Base58;
- import org.qortal.utils.Groups;
import org.qortal.utils.NTP;
import org.qortal.utils.NamedThreadFactory;

@@ -46,7 +44,6 @@ public class OnlineAccountsManager {
*/
private static final long ONLINE_TIMESTAMP_MODULUS_V1 = 5 * 60 * 1000L;
private static final long ONLINE_TIMESTAMP_MODULUS_V2 = 30 * 60 * 1000L;
- private static final long ONLINE_TIMESTAMP_MODULUS_V3 = 10 * 60 * 1000L;

/**
* How many 'current' timestamp-sets of online accounts we cache.
@@ -70,13 +67,12 @@ public class OnlineAccountsManager {
private static final long ONLINE_ACCOUNTS_COMPUTE_INITIAL_SLEEP_INTERVAL = 30 * 1000L; // ms

// MemoryPoW - mainnet
- public static final int POW_BUFFER_SIZE = 1024 * 1024; // bytes
+ public static final int POW_BUFFER_SIZE = 1 * 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_V1 = 18; // leading zero bits
public static final int POW_DIFFICULTY_V2 = 19; // leading zero bits
- public static final int POW_DIFFICULTY_V3 = 6; // leading zero bits

// MemoryPoW - testnet
- public static final int POW_BUFFER_SIZE_TESTNET = 1024 * 1024; // bytes
+ public static final int POW_BUFFER_SIZE_TESTNET = 1 * 1024 * 1024; // bytes
public static final int POW_DIFFICULTY_TESTNET = 5; // leading zero bits

// IMPORTANT: if we ever need to dynamically modify the buffer size using a feature trigger, the
@@ -84,7 +80,7 @@ public class OnlineAccountsManager {
// one for the transition period.
private static long[] POW_VERIFY_WORK_BUFFER = new long[getPoWBufferSize() / 8];

- private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4, new NamedThreadFactory("OnlineAccounts", Thread.NORM_PRIORITY));
+ private final ScheduledExecutorService executor = Executors.newScheduledThreadPool(4, new NamedThreadFactory("OnlineAccounts"));
private volatile boolean isStopping = false;

private final Set<OnlineAccountData> onlineAccountsImportQueue = ConcurrentHashMap.newKeySet();
@@ -110,15 +106,11 @@ public class OnlineAccountsManager {

public static long getOnlineTimestampModulus() {
Long now = NTP.getTime();
- if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV2Timestamp() && now < BlockChain.getInstance().getOnlineAccountsModulusV3Timestamp()) {
+ if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV2Timestamp()) {
return ONLINE_TIMESTAMP_MODULUS_V2;
}
- if (now != null && now >= BlockChain.getInstance().getOnlineAccountsModulusV3Timestamp()) {
- return ONLINE_TIMESTAMP_MODULUS_V3;
- }
return ONLINE_TIMESTAMP_MODULUS_V1;
}

public static Long getCurrentOnlineAccountTimestamp() {
Long now = NTP.getTime();
if (now == null)
@@ -143,12 +135,9 @@ public class OnlineAccountsManager {
if (Settings.getInstance().isTestNet())
return POW_DIFFICULTY_TESTNET;

- if (timestamp >= BlockChain.getInstance().getIncreaseOnlineAccountsDifficultyTimestamp() && timestamp < BlockChain.getInstance().getDecreaseOnlineAccountsDifficultyTimestamp())
+ if (timestamp >= BlockChain.getInstance().getIncreaseOnlineAccountsDifficultyTimestamp())
return POW_DIFFICULTY_V2;

- if (timestamp >= BlockChain.getInstance().getDecreaseOnlineAccountsDifficultyTimestamp())
- return POW_DIFFICULTY_V3;

return POW_DIFFICULTY_V1;
}

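On the master side, getPoWDifficulty picks the online-accounts MemoryPoW difficulty from two feature-trigger timestamps: V2 (19 leading zero bits) applies between the increase and decrease triggers, V3 (6 bits) after the decrease trigger, and V1 (18 bits) before either. The following stand-alone sketch only illustrates that window selection; the constants are taken from the diff above, while the trigger values, class name, and sample timestamps are invented for the example (the real values come from BlockChain's feature triggers).

public class PowDifficultyWindowSketch {
    // Difficulty constants as shown on the master side of the diff
    static final int POW_DIFFICULTY_V1 = 18; // leading zero bits
    static final int POW_DIFFICULTY_V2 = 19; // leading zero bits
    static final int POW_DIFFICULTY_V3 = 6;  // leading zero bits

    // Hypothetical trigger timestamps standing in for the BlockChain feature triggers
    static final long INCREASE_TRIGGER = 1_000L;
    static final long DECREASE_TRIGGER = 2_000L;

    static int getPoWDifficulty(long timestamp) {
        if (timestamp >= INCREASE_TRIGGER && timestamp < DECREASE_TRIGGER)
            return POW_DIFFICULTY_V2;
        if (timestamp >= DECREASE_TRIGGER)
            return POW_DIFFICULTY_V3;
        return POW_DIFFICULTY_V1;
    }

    public static void main(String[] args) {
        System.out.println(getPoWDifficulty(500L));   // 18: before the increase trigger
        System.out.println(getPoWDifficulty(1_500L)); // 19: between the two triggers
        System.out.println(getPoWDifficulty(2_500L)); // 6: after the decrease trigger
    }
}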
@@ -226,15 +215,6 @@ public class OnlineAccountsManager {
Set<OnlineAccountData> onlineAccountsToAdd = new HashSet<>();
Set<OnlineAccountData> onlineAccountsToRemove = new HashSet<>();
try (final Repository repository = RepositoryManager.getRepository()) {

- int blockHeight = repository.getBlockRepository().getBlockchainHeight();

- List<String> mintingGroupMemberAddresses
- = Groups.getAllMembers(
- repository.getGroupRepository(),
- Groups.getGroupIdsToMint(BlockChain.getInstance(), blockHeight)
- );

for (OnlineAccountData onlineAccountData : this.onlineAccountsImportQueue) {
if (isStopping)
return;
@@ -247,7 +227,7 @@ public class OnlineAccountsManager {
continue;
}

- boolean isValid = this.isValidCurrentAccount(repository, mintingGroupMemberAddresses, onlineAccountData);
+ boolean isValid = this.isValidCurrentAccount(repository, onlineAccountData);
if (isValid)
onlineAccountsToAdd.add(onlineAccountData);

@@ -326,7 +306,7 @@ public class OnlineAccountsManager {
return inplaceArray;
}

- private static boolean isValidCurrentAccount(Repository repository, List<String> mintingGroupMemberAddresses, OnlineAccountData onlineAccountData) throws DataException {
+ private static boolean isValidCurrentAccount(Repository repository, OnlineAccountData onlineAccountData) throws DataException {
final Long now = NTP.getTime();
if (now == null)
return false;
@@ -361,14 +341,9 @@ public class OnlineAccountsManager {
LOGGER.trace(() -> String.format("Rejecting unknown online reward-share public key %s", Base58.encode(rewardSharePublicKey)));
return false;
}
- // reject account address that are not in the MINTER Group
- else if( !mintingGroupMemberAddresses.contains(rewardShareData.getMinter())) {
- LOGGER.trace(() -> String.format("Rejecting online reward-share that is not in MINTER Group, account %s", rewardShareData.getMinter()));
- return false;
- }

Account mintingAccount = new Account(repository, rewardShareData.getMinter());
- if (!mintingAccount.canMint(true)) { // group validation is a few lines above
+ if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
LOGGER.trace(() -> String.format("Rejecting online reward-share with non-minting account %s", mintingAccount.getAddress()));
return false;
@@ -555,7 +530,7 @@ public class OnlineAccountsManager {
}

Account mintingAccount = new Account(repository, rewardShareData.getMinter());
- if (!mintingAccount.canMint(true)) {
+ if (!mintingAccount.canMint()) {
// Minting-account component of reward-share can no longer mint - disregard
iterator.remove();
continue;
@@ -65,7 +65,6 @@ public class PirateChainWalletController extends Thread {
@Override
public void run() {
Thread.currentThread().setName("Pirate Chain Wallet Controller");
- Thread.currentThread().setPriority(MIN_PRIORITY);

try {
while (running && !Controller.isStopping()) {
@@ -118,12 +118,8 @@ public class Synchronizer extends Thread {
}

public static Synchronizer getInstance() {
- if (instance == null) {
+ if (instance == null)
instance = new Synchronizer();
- instance.setPriority(Settings.getInstance().getSynchronizerThreadPriority());

- LOGGER.info("thread priority = " + instance.getPriority());
- }

return instance;
}
@@ -14,7 +14,6 @@ import java.io.IOException;
import java.util.Comparator;
import java.util.Map;

- import static java.lang.Thread.NORM_PRIORITY;
import static org.qortal.data.arbitrary.ArbitraryResourceStatus.Status.NOT_PUBLISHED;

@@ -29,7 +28,6 @@ public class ArbitraryDataBuilderThread implements Runnable {
@Override
public void run() {
Thread.currentThread().setName("Arbitrary Data Builder Thread");
- Thread.currentThread().setPriority(NORM_PRIORITY);
ArbitraryDataBuildManager buildManager = ArbitraryDataBuildManager.getInstance();

while (!Controller.isStopping()) {
@@ -2,30 +2,22 @@ package org.qortal.controller.arbitrary;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
+ import org.qortal.api.resource.TransactionsResource;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryResourceData;
import org.qortal.data.transaction.ArbitraryTransactionData;
- import org.qortal.event.DataMonitorEvent;
- import org.qortal.event.EventBus;
import org.qortal.gui.SplashFrame;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
import org.qortal.settings.Settings;
import org.qortal.transaction.ArbitraryTransaction;
+ import org.qortal.transaction.Transaction;
import org.qortal.utils.Base58;

- import java.text.NumberFormat;
import java.util.ArrayList;
import java.util.Collections;
- import java.util.HashMap;
- import java.util.HashSet;
import java.util.List;
- import java.util.Map;
- import java.util.Optional;
- import java.util.Set;
- import java.util.function.Function;
- import java.util.stream.Collectors;

public class ArbitraryDataCacheManager extends Thread {

@@ -37,11 +29,6 @@ public class ArbitraryDataCacheManager extends Thread {
/** Queue of arbitrary transactions that require cache updates */
private final List<ArbitraryTransactionData> updateQueue = Collections.synchronizedList(new ArrayList<>());

- private static final NumberFormat FORMATTER = NumberFormat.getNumberInstance();

- static {
- FORMATTER.setGroupingUsed(true);
- }

public static synchronized ArbitraryDataCacheManager getInstance() {
if (instance == null) {
@@ -54,26 +41,20 @@ public class ArbitraryDataCacheManager extends Thread {
@Override
public void run() {
Thread.currentThread().setName("Arbitrary Data Cache Manager");
- Thread.currentThread().setPriority(NORM_PRIORITY);

try {
while (!Controller.isStopping()) {
- try {
Thread.sleep(500L);

// Process queue
processResourceQueue();
- } catch (Exception e) {
- LOGGER.error(e.getMessage(), e);
- Thread.sleep(600_000L); // wait 10 minutes to continue
}
+ } catch (InterruptedException e) {
+ // Fall through to exit thread
}

// Clear queue before terminating thread
processResourceQueue();
- } catch (Exception e) {
- LOGGER.error(e.getMessage(), e);
- }
}

public void shutdown() {
@@ -103,25 +84,14 @@ public class ArbitraryDataCacheManager extends Thread {
// Update arbitrary resource caches
try {
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
- arbitraryTransaction.updateArbitraryResourceCacheIncludingMetadata(repository, new HashSet<>(0), new HashMap<>(0));
+ arbitraryTransaction.updateArbitraryResourceCache(repository);
+ arbitraryTransaction.updateArbitraryMetadataCache(repository);
repository.saveChanges();

// Update status as separate commit, as this is more prone to failure
arbitraryTransaction.updateArbitraryResourceStatus(repository);
repository.saveChanges();

- EventBus.INSTANCE.notify(
- new DataMonitorEvent(
- System.currentTimeMillis(),
- transactionData.getIdentifier(),
- transactionData.getName(),
- transactionData.getService().name(),
- "updated resource cache and status, queue",
- transactionData.getTimestamp(),
- transactionData.getTimestamp()
- )
- );

LOGGER.debug(() -> String.format("Finished processing transaction %.8s in arbitrary resource queue...", Base58.encode(transactionData.getSignature())));

} catch (DataException e) {
@@ -132,9 +102,6 @@ public class ArbitraryDataCacheManager extends Thread {
} catch (DataException e) {
LOGGER.error("Repository issue while processing arbitrary resource cache updates", e);
}
- catch (Exception e) {
- LOGGER.error(e.getMessage(), e);
- }
}

public void addToUpdateQueue(ArbitraryTransactionData transactionData) {
@@ -180,66 +147,34 @@ public class ArbitraryDataCacheManager extends Thread {
LOGGER.info("Building arbitrary resources cache...");
SplashFrame.getInstance().updateStatus("Building QDN cache - please wait...");

- final int batchSize = Settings.getInstance().getBuildArbitraryResourcesBatchSize();
+ final int batchSize = 100;
int offset = 0;

- List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder
- = repository.getArbitraryRepository().getLatestArbitraryTransactions();

- LOGGER.info("arbitrary transactions: count = " + allArbitraryTransactionsInDescendingOrder.size());

- List<ArbitraryResourceData> resources = repository.getArbitraryRepository().getArbitraryResources(null, null, true);

- Map<ArbitraryTransactionDataHashWrapper, ArbitraryResourceData> resourceByWrapper = new HashMap<>(resources.size());
- for( ArbitraryResourceData resource : resources ) {
- resourceByWrapper.put(
- new ArbitraryTransactionDataHashWrapper(resource.service.value, resource.name, resource.identifier),
- resource
- );
- }

- LOGGER.info("arbitrary resources: count = " + resourceByWrapper.size());

- Set<ArbitraryTransactionDataHashWrapper> latestTransactionsWrapped = new HashSet<>(allArbitraryTransactionsInDescendingOrder.size());

// Loop through all ARBITRARY transactions, and determine latest state
while (!Controller.isStopping()) {
- LOGGER.info(
- "Fetching arbitrary transactions {} - {} / {} Total",
- FORMATTER.format(offset),
- FORMATTER.format(offset+batchSize-1),
- FORMATTER.format(allArbitraryTransactionsInDescendingOrder.size())
- );
+ LOGGER.info("Fetching arbitrary transactions {} - {}", offset, offset+batchSize-1);

- List<ArbitraryTransactionData> transactionsToProcess
- = allArbitraryTransactionsInDescendingOrder.stream()
- .skip(offset)
- .limit(batchSize)
- .collect(Collectors.toList());
+ List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, List.of(Transaction.TransactionType.ARBITRARY), null, null, null, TransactionsResource.ConfirmationStatus.BOTH, batchSize, offset, false);

- if (transactionsToProcess.isEmpty()) {
+ if (signatures.isEmpty()) {
// Complete
break;
}

- try {
- for( ArbitraryTransactionData transactionData : transactionsToProcess) {
+ // Expand signatures to transactions
+ for (byte[] signature : signatures) {
+ ArbitraryTransactionData transactionData = (ArbitraryTransactionData) repository
+ .getTransactionRepository().fromSignature(signature);

if (transactionData.getService() == null) {
// Unsupported service - ignore this resource
continue;
}

- latestTransactionsWrapped.add(new ArbitraryTransactionDataHashWrapper(transactionData));

// Update arbitrary resource caches
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
- arbitraryTransaction.updateArbitraryResourceCacheIncludingMetadata(repository, latestTransactionsWrapped, resourceByWrapper);
- }
+ arbitraryTransaction.updateArbitraryResourceCache(repository);
+ arbitraryTransaction.updateArbitraryMetadataCache(repository);
repository.saveChanges();
- } catch (DataException e) {
- repository.discardChanges();

- LOGGER.error(e.getMessage(), e);
- }
}
offset += batchSize;
}
@@ -257,11 +192,6 @@ public class ArbitraryDataCacheManager extends Thread {
repository.discardChanges();
throw new DataException("Build of arbitrary resources cache failed.");
}
- catch (Exception e) {
- LOGGER.error(e.getMessage(), e);

- return false;
- }
}

private boolean refreshArbitraryStatuses(Repository repository) throws DataException {
@@ -269,48 +199,27 @@ public class ArbitraryDataCacheManager extends Thread {
LOGGER.info("Refreshing arbitrary resource statuses for locally hosted transactions...");
SplashFrame.getInstance().updateStatus("Refreshing statuses - please wait...");

- final int batchSize = Settings.getInstance().getBuildArbitraryResourcesBatchSize();
+ final int batchSize = 100;
int offset = 0;

- List<ArbitraryTransactionData> allHostedTransactions
- = ArbitraryDataStorageManager.getInstance()
- .listAllHostedTransactions(repository, null, null);

// Loop through all ARBITRARY transactions, and determine latest state
while (!Controller.isStopping()) {
|
while (!Controller.isStopping()) {
|
||||||
LOGGER.info(
|
LOGGER.info("Fetching hosted transactions {} - {}", offset, offset+batchSize-1);
|
||||||
"Fetching hosted transactions {} - {} / {} Total",
|
|
||||||
FORMATTER.format(offset),
|
|
||||||
FORMATTER.format(offset+batchSize-1),
|
|
||||||
FORMATTER.format(allHostedTransactions.size())
|
|
||||||
);
|
|
||||||
|
|
||||||
List<ArbitraryTransactionData> hostedTransactions
|
|
||||||
= allHostedTransactions.stream()
|
|
||||||
.skip(offset)
|
|
||||||
.limit(batchSize)
|
|
||||||
.collect(Collectors.toList());
|
|
||||||
|
|
||||||
|
List<ArbitraryTransactionData> hostedTransactions = ArbitraryDataStorageManager.getInstance().listAllHostedTransactions(repository, batchSize, offset);
|
||||||
if (hostedTransactions.isEmpty()) {
|
if (hostedTransactions.isEmpty()) {
|
||||||
// Complete
|
// Complete
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
|
||||||
try {
|
|
||||||
// Loop through hosted transactions
|
// Loop through hosted transactions
|
||||||
for (ArbitraryTransactionData transactionData : hostedTransactions) {
|
for (ArbitraryTransactionData transactionData : hostedTransactions) {
|
||||||
|
|
||||||
// Determine status and update cache
|
// Determine status and update cache
|
||||||
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
|
ArbitraryTransaction arbitraryTransaction = new ArbitraryTransaction(repository, transactionData);
|
||||||
arbitraryTransaction.updateArbitraryResourceStatus(repository);
|
arbitraryTransaction.updateArbitraryResourceStatus(repository);
|
||||||
}
|
|
||||||
repository.saveChanges();
|
repository.saveChanges();
|
||||||
} catch (DataException e) {
|
|
||||||
repository.discardChanges();
|
|
||||||
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
offset += batchSize;
|
offset += batchSize;
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -324,11 +233,6 @@ public class ArbitraryDataCacheManager extends Thread {
|
|||||||
repository.discardChanges();
|
repository.discardChanges();
|
||||||
throw new DataException("Refresh of arbitrary resource statuses failed.");
|
throw new DataException("Refresh of arbitrary resource statuses failed.");
|
||||||
}
|
}
|
||||||
catch (Exception e) {
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
}
|
}
|
||||||
|
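Note on the ArbitraryDataCacheManager hunks above: on the master side, buildArbitraryResourcesCache() and refreshArbitraryStatuses() no longer page through the repository per batch; they fetch the full transaction list once and page it in memory with skip/limit, and the batch size comes from settings instead of a hardcoded 100. A minimal, self-contained sketch of that in-memory batching pattern follows; the list contents and the fixed batch size are illustrative stand-ins, not Qortal APIs.

import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BatchingSketch {
    public static void main(String[] args) {
        // Stand-in for allArbitraryTransactionsInDescendingOrder (really fetched from the repository)
        List<Integer> allItems = IntStream.range(0, 250).boxed().collect(Collectors.toList());

        final int batchSize = 100; // master reads this from the build-batch-size setting instead
        int offset = 0;

        while (true) {
            // Page the already-fetched list instead of issuing a new repository query per batch
            List<Integer> batch = allItems.stream()
                    .skip(offset)
                    .limit(batchSize)
                    .collect(Collectors.toList());

            if (batch.isEmpty())
                break; // complete

            // ... process the batch, then save changes per batch ...
            offset += batchSize;
        }
    }
}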
@@ -2,10 +2,9 @@ package org.qortal.controller.arbitrary;

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
+import org.qortal.api.resource.TransactionsResource.ConfirmationStatus;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
-import org.qortal.event.DataMonitorEvent;
-import org.qortal.event.EventBus;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
import org.qortal.repository.RepositoryManager;
@@ -22,12 +21,8 @@ import java.nio.file.Paths;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.Arrays;
-import java.util.HashSet;
import java.util.List;
import java.util.Objects;
-import java.util.Optional;
-import java.util.Set;
-import java.util.stream.Collectors;

import static org.qortal.controller.arbitrary.ArbitraryDataStorageManager.DELETION_THRESHOLD;

@@ -76,25 +71,11 @@ public class ArbitraryDataCleanupManager extends Thread {
@Override
public void run() {
Thread.currentThread().setName("Arbitrary Data Cleanup Manager");
-Thread.currentThread().setPriority(NORM_PRIORITY);

// Paginate queries when fetching arbitrary transactions
final int limit = 100;
int offset = 0;

-List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
-
-try (final Repository repository = RepositoryManager.getRepository()) {
-allArbitraryTransactionsInDescendingOrder
-= repository.getArbitraryRepository()
-.getLatestArbitraryTransactions();
-} catch( Exception e) {
-LOGGER.error(e.getMessage(), e);
-allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
-}
-
-Set<ArbitraryTransactionData> processedTransactions = new HashSet<>();

try {
while (!isStopping) {
Thread.sleep(30000);
@@ -125,31 +106,27 @@ public class ArbitraryDataCleanupManager extends Thread {

// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
-List<ArbitraryTransactionData> transactions = allArbitraryTransactionsInDescendingOrder.stream().skip(offset).limit(limit).collect(Collectors.toList());
+List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, null, null, ConfirmationStatus.BOTH, limit, offset, true);
+// LOGGER.info("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
if (isStopping) {
return;
}

-if (transactions == null || transactions.isEmpty()) {
+if (signatures == null || signatures.isEmpty()) {
offset = 0;
-allArbitraryTransactionsInDescendingOrder
-= repository.getArbitraryRepository()
-.getLatestArbitraryTransactions();
-transactions = allArbitraryTransactionsInDescendingOrder.stream().limit(limit).collect(Collectors.toList());
-processedTransactions.clear();
+continue;
}

offset += limit;
now = NTP.getTime();

// Loop through the signatures in this batch
-for (int i=0; i<transactions.size(); i++) {
+for (int i=0; i<signatures.size(); i++) {
if (isStopping) {
return;
}

-ArbitraryTransactionData arbitraryTransactionData = transactions.get(i);
-if (arbitraryTransactionData == null) {
+byte[] signature = signatures.get(i);
+if (signature == null) {
continue;
}

@@ -158,7 +135,9 @@ public class ArbitraryDataCleanupManager extends Thread {
Thread.sleep(5000);
}

-if (arbitraryTransactionData.getService() == null) {
+// Fetch the transaction data
+ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
+if (arbitraryTransactionData == null || arbitraryTransactionData.getService() == null) {
continue;
}

@@ -167,8 +146,6 @@ public class ArbitraryDataCleanupManager extends Thread {
continue;
}

-boolean mostRecentTransaction = processedTransactions.add(arbitraryTransactionData);
-
// Check if we have the complete file
boolean completeFileExists = ArbitraryTransactionUtils.completeFileExists(arbitraryTransactionData);

@@ -189,54 +166,20 @@ public class ArbitraryDataCleanupManager extends Thread {
LOGGER.info("Deleting transaction {} because we can't host its data",
Base58.encode(arbitraryTransactionData.getSignature()));
ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);

-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"can't store data, deleting",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
continue;
}

// Check to see if we have had a more recent PUT
-if (!mostRecentTransaction) {
+boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
+if (hasMoreRecentPutTransaction) {
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed.
LOGGER.info(String.format("Newer PUT found for %s %s since transaction %s. " +
"Deleting all files associated with the earlier transaction.", arbitraryTransactionData.getService(),
-arbitraryTransactionData.getName(), Base58.encode(arbitraryTransactionData.getSignature())));
+arbitraryTransactionData.getName(), Base58.encode(signature)));

ArbitraryTransactionUtils.deleteCompleteFileAndChunks(arbitraryTransactionData);

-Optional<ArbitraryTransactionData> moreRecentPutTransaction
-= processedTransactions.stream()
-.filter(data -> data.equals(arbitraryTransactionData))
-.findAny();
-
-if( moreRecentPutTransaction.isPresent() ) {
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"deleting data due to replacement",
-arbitraryTransactionData.getTimestamp(),
-moreRecentPutTransaction.get().getTimestamp()
-)
-);
-}
-else {
-LOGGER.warn("Something went wrong with the most recent put transaction determination!");
-}
-
continue;
}

@@ -255,21 +198,7 @@ public class ArbitraryDataCleanupManager extends Thread {
LOGGER.debug(String.format("Transaction %s has complete file and all chunks",
Base58.encode(arbitraryTransactionData.getSignature())));

-boolean wasDeleted = ArbitraryTransactionUtils.deleteCompleteFile(arbitraryTransactionData, now, STALE_FILE_TIMEOUT);
-
-if( wasDeleted ) {
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"deleting file, retaining chunks",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
-}
+ArbitraryTransactionUtils.deleteCompleteFile(arbitraryTransactionData, now, STALE_FILE_TIMEOUT);
continue;
}

@@ -307,6 +236,17 @@ public class ArbitraryDataCleanupManager extends Thread {
this.storageLimitReached(repository);
}

+// Delete random data associated with name if we're over our storage limit for this name
+// Use the DELETION_THRESHOLD, for the same reasons as above
+for (String followedName : ListUtils.followedNames()) {
+if (isStopping) {
+return;
+}
+if (!storageManager.isStorageSpaceAvailableForName(repository, followedName, DELETION_THRESHOLD)) {
+this.storageLimitReachedForName(repository, followedName);
+}
+}
+
} catch (DataException e) {
LOGGER.error("Repository issue when cleaning up arbitrary transaction data", e);
}
@@ -385,6 +325,25 @@ public class ArbitraryDataCleanupManager extends Thread {
// FUTURE: consider reducing the expiry time of the reader cache
}

+public void storageLimitReachedForName(Repository repository, String name) throws InterruptedException {
+// We think that the storage limit has been reached for supplied name - but we should double check
+if (ArbitraryDataStorageManager.getInstance().isStorageSpaceAvailableForName(repository, name, DELETION_THRESHOLD)) {
+// We have space available for this name, so don't delete anything
+return;
+}
+
+// Delete a batch of random chunks associated with this name
+// This reduces the chance of too many nodes deleting the same chunk
+// when they reach their storage limit
+Path dataPath = Paths.get(Settings.getInstance().getDataPath());
+for (int i=0; i<CHUNK_DELETION_BATCH_SIZE; i++) {
+if (isStopping) {
+return;
+}
+this.deleteRandomFile(repository, dataPath.toFile(), name);
+}
+}
+
/**
* Iteratively walk through given directory and delete a single random file
*
@@ -463,7 +422,6 @@ public class ArbitraryDataCleanupManager extends Thread {
}

LOGGER.info("Deleting random file {} because we have reached max storage capacity...", randomItem.toString());
-fireRandomItemDeletionNotification(randomItem, repository, "Deleting random file, because we have reached max storage capacity");
boolean success = randomItem.delete();
if (success) {
try {
@@ -478,35 +436,6 @@ public class ArbitraryDataCleanupManager extends Thread {
return false;
}

-private void fireRandomItemDeletionNotification(File randomItem, Repository repository, String reason) {
-try {
-Path parentFileNamePath = randomItem.toPath().toAbsolutePath().getParent().getFileName();
-if (parentFileNamePath != null) {
-String signature58 = parentFileNamePath.toString();
-byte[] signature = Base58.decode(signature58);
-TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
-if (transactionData != null && transactionData.getType() == Transaction.TransactionType.ARBITRARY) {
-ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) transactionData;
-
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-reason,
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
-}
-}
-
-} catch (Exception e) {
-LOGGER.error(e.getMessage(), e);
-}
-}
-
private void cleanupTempDirectory(String folder, long now, long minAge) {
String baseDir = Settings.getInstance().getTempDataPath();
Path tempDir = Paths.get(baseDir, folder);

@@ -1,21 +0,0 @@
-package org.qortal.controller.arbitrary;
-
-public class ArbitraryDataExamination {
-
-private boolean pass;
-
-private String notes;
-
-public ArbitraryDataExamination(boolean pass, String notes) {
-this.pass = pass;
-this.notes = notes;
-}
-
-public boolean isPass() {
-return pass;
-}
-
-public String getNotes() {
-return notes;
-}
-}

@@ -5,8 +5,6 @@ import org.apache.logging.log4j.Logger;
import org.qortal.controller.Controller;
import org.qortal.data.arbitrary.ArbitraryFileListResponseInfo;
import org.qortal.data.transaction.ArbitraryTransactionData;
-import org.qortal.event.DataMonitorEvent;
-import org.qortal.event.EventBus;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
import org.qortal.repository.Repository;
@@ -19,8 +17,6 @@ import java.util.Arrays;
import java.util.Comparator;
import java.util.Iterator;

-import static java.lang.Thread.NORM_PRIORITY;
-
public class ArbitraryDataFileRequestThread implements Runnable {

private static final Logger LOGGER = LogManager.getLogger(ArbitraryDataFileRequestThread.class);
@@ -32,7 +28,6 @@ public class ArbitraryDataFileRequestThread implements Runnable {
@Override
public void run() {
Thread.currentThread().setName("Arbitrary Data File Request Thread");
-Thread.currentThread().setPriority(NORM_PRIORITY);

try {
while (!Controller.isStopping()) {

@@ -10,8 +10,6 @@ import org.qortal.arbitrary.misc.Service;
import org.qortal.controller.Controller;
import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.data.transaction.TransactionData;
-import org.qortal.event.DataMonitorEvent;
-import org.qortal.event.EventBus;
import org.qortal.network.Network;
import org.qortal.network.Peer;
import org.qortal.repository.DataException;
@@ -30,7 +28,6 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.*;
-import java.util.stream.Collectors;

public class ArbitraryDataManager extends Thread {

@@ -94,7 +91,6 @@ public class ArbitraryDataManager extends Thread {
@Override
public void run() {
Thread.currentThread().setName("Arbitrary Data Manager");
-Thread.currentThread().setPriority(NORM_PRIORITY);

// Create data directory in case it doesn't exist yet
this.createDataDirectory();
@@ -198,35 +194,13 @@ public class ArbitraryDataManager extends Thread {
final int limit = 100;
int offset = 0;

-List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
-
-try (final Repository repository = RepositoryManager.getRepository()) {
-
-if( name == null ) {
-allArbitraryTransactionsInDescendingOrder
-= repository.getArbitraryRepository()
-.getLatestArbitraryTransactions();
-}
-else {
-allArbitraryTransactionsInDescendingOrder
-= repository.getArbitraryRepository()
-.getLatestArbitraryTransactionsByName(name);
-}
-} catch( Exception e) {
-LOGGER.error(e.getMessage(), e);
-allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
-}
-
-// collect processed transactions in a set to ensure outdated data transactions do not get fetched
-Set<ArbitraryTransactionDataHashWrapper> processedTransactions = new HashSet<>();
-
while (!isStopping) {
Thread.sleep(1000L);

// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
-List<byte[]> signatures = processTransactionsForSignatures(limit, offset, allArbitraryTransactionsInDescendingOrder, processedTransactions);
+List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, name, null, ConfirmationStatus.BOTH, limit, offset, true);
+// LOGGER.trace("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
if (signatures == null || signatures.isEmpty()) {
offset = 0;
break;
@@ -248,38 +222,14 @@ public class ArbitraryDataManager extends Thread {
ArbitraryTransactionData arbitraryTransactionData = (ArbitraryTransactionData) arbitraryTransaction.getTransactionData();

// Skip transactions that we don't need to proactively store data for
-ArbitraryDataExamination arbitraryDataExamination = storageManager.shouldPreFetchData(repository, arbitraryTransactionData);
-if (!arbitraryDataExamination.isPass()) {
+if (!storageManager.shouldPreFetchData(repository, arbitraryTransactionData)) {
iterator.remove();
-
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-arbitraryDataExamination.getNotes(),
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
continue;
}

// Remove transactions that we already have local data for
if (hasLocalData(arbitraryTransaction)) {
iterator.remove();
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"already have local data, skipping",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
}
}

@@ -297,21 +247,8 @@ public class ArbitraryDataManager extends Thread {

// Check to see if we have had a more recent PUT
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
-Optional<ArbitraryTransactionData> moreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
-
-if (moreRecentPutTransaction.isPresent()) {
-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"not fetching old data",
-arbitraryTransactionData.getTimestamp(),
-moreRecentPutTransaction.get().getTimestamp()
-)
-);
+boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
+if (hasMoreRecentPutTransaction) {
// There is a more recent PUT transaction than the one we are currently processing.
// When a PUT is issued, it replaces any layers that would have been there before.
// Therefore any data relating to this older transaction is no longer needed and we
@@ -319,34 +256,10 @@ public class ArbitraryDataManager extends Thread {
continue;
}

-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"fetching data",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
-
// Ask our connected peers if they have files for this signature
// This process automatically then fetches the files themselves if a peer is found
fetchData(arbitraryTransactionData);

-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"fetched data",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
-
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data", e);
}
@@ -360,20 +273,6 @@ public class ArbitraryDataManager extends Thread {
final int limit = 100;
int offset = 0;

-List<ArbitraryTransactionData> allArbitraryTransactionsInDescendingOrder;
-
-try (final Repository repository = RepositoryManager.getRepository()) {
-allArbitraryTransactionsInDescendingOrder
-= repository.getArbitraryRepository()
-.getLatestArbitraryTransactions();
-} catch( Exception e) {
-LOGGER.error(e.getMessage(), e);
-allArbitraryTransactionsInDescendingOrder = new ArrayList<>(0);
-}
-
-// collect processed transactions in a set to ensure outdated data transactions do not get fetched
-Set<ArbitraryTransactionDataHashWrapper> processedTransactions = new HashSet<>();
-
while (!isStopping) {
final int minSeconds = 3;
final int maxSeconds = 10;
@@ -382,8 +281,8 @@ public class ArbitraryDataManager extends Thread {

// Any arbitrary transactions we want to fetch data for?
try (final Repository repository = RepositoryManager.getRepository()) {
-List<byte[]> signatures = processTransactionsForSignatures(limit, offset, allArbitraryTransactionsInDescendingOrder, processedTransactions);
+List<byte[]> signatures = repository.getTransactionRepository().getSignaturesMatchingCriteria(null, null, null, ARBITRARY_TX_TYPE, null, null, null, ConfirmationStatus.BOTH, limit, offset, true);
+// LOGGER.trace("Found {} arbitrary transactions at offset: {}, limit: {}", signatures.size(), offset, limit);
if (signatures == null || signatures.isEmpty()) {
offset = 0;
break;
@@ -428,74 +327,26 @@ public class ArbitraryDataManager extends Thread {
continue;
}

-// No longer need to see if we have had a more recent PUT since we compared the transactions to process
-// to the transactions previously processed, so we can fetch the transactiondata, notify the event bus,
-// fetch the metadata and notify the event bus again
+// Check to see if we have had a more recent PUT
ArbitraryTransactionData arbitraryTransactionData = ArbitraryTransactionUtils.fetchTransactionData(repository, signature);
+boolean hasMoreRecentPutTransaction = ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, arbitraryTransactionData);
+if (hasMoreRecentPutTransaction) {
+// There is a more recent PUT transaction than the one we are currently processing.
+// When a PUT is issued, it replaces any layers that would have been there before.
+// Therefore any data relating to this older transaction is no longer needed and we
+// shouldn't fetch it from the network.
+continue;
+}

// Ask our connected peers if they have metadata for this signature
fetchMetadata(arbitraryTransactionData);

-EventBus.INSTANCE.notify(
-new DataMonitorEvent(
-System.currentTimeMillis(),
-arbitraryTransactionData.getIdentifier(),
-arbitraryTransactionData.getName(),
-arbitraryTransactionData.getService().name(),
-"fetched metadata",
-arbitraryTransactionData.getTimestamp(),
-arbitraryTransactionData.getTimestamp()
-)
-);
} catch (DataException e) {
LOGGER.error("Repository issue when fetching arbitrary transaction data", e);
-} catch (Exception e) {
-LOGGER.error(e.getMessage(), e);
}
}
}

-private static List<byte[]> processTransactionsForSignatures(
-int limit,
-int offset,
-List<ArbitraryTransactionData> transactionsInDescendingOrder,
-Set<ArbitraryTransactionDataHashWrapper> processedTransactions) {
-// these transactions are in descending order, latest transactions come first
-List<ArbitraryTransactionData> transactions
-= transactionsInDescendingOrder.stream()
-.skip(offset)
-.limit(limit)
-.collect(Collectors.toList());
-
-// wrap the transactions, so they can be used for hashing and comparing
-// Class ArbitraryTransactionDataHashWrapper supports hashCode() and equals(...) for this purpose
-List<ArbitraryTransactionDataHashWrapper> wrappedTransactions
-= transactions.stream()
-.map(transaction -> new ArbitraryTransactionDataHashWrapper(transaction))
-.collect(Collectors.toList());
-
-// create a set of wrappers and populate it first to last, so that all outdated transactions get rejected
-Set<ArbitraryTransactionDataHashWrapper> transactionsToProcess = new HashSet<>(wrappedTransactions.size());
-for(ArbitraryTransactionDataHashWrapper wrappedTransaction : wrappedTransactions) {
-transactionsToProcess.add(wrappedTransaction);
-}
-
-// remove the matches for previously processed transactions,
-// because these transactions have had updates that have already been processed
-transactionsToProcess.removeAll(processedTransactions);
-
-// add to processed transactions to compare and remove matches from future processing iterations
-processedTransactions.addAll(transactionsToProcess);
-
-List<byte[]> signatures
-= transactionsToProcess.stream()
-.map(transactionToProcess -> transactionToProcess.getData()
-.getSignature())
-.collect(Collectors.toList());
-
-return signatures;
-}
-
private ArbitraryTransaction fetchTransaction(final Repository repository, byte[] signature) {
try {
TransactionData transactionData = repository.getTransactionRepository().fromSignature(signature);
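The processTransactionsForSignatures() helper removed in the ArbitraryDataManager hunks above relies on ArbitraryTransactionDataHashWrapper (its removed source appears further below) implementing equals() and hashCode() over (service, name, identifier), so that when descending-order batches are poured into a HashSet only the newest transaction per resource survives. A stripped-down sketch of that dedup idea follows; the Key class and sample values are illustrative stand-ins for the real wrapper and transaction data, not Qortal types.

import java.util.LinkedHashSet;
import java.util.List;
import java.util.Objects;
import java.util.Set;

public class DedupSketch {
    // Mirrors the wrapper's equality contract: service + name + identifier
    static class Key {
        final int service; final String name; final String identifier;
        Key(int service, String name, String identifier) {
            this.service = service; this.name = name; this.identifier = identifier;
        }
        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Key)) return false;
            Key that = (Key) o;
            return service == that.service && name.equals(that.name) && Objects.equals(identifier, that.identifier);
        }
        @Override public int hashCode() { return Objects.hash(service, name, identifier); }
    }

    public static void main(String[] args) {
        // Newest first, as a latest-transactions query would return them (sample values only)
        List<Key> newestFirst = List.of(
                new Key(777, "Alice", "blog"),  // latest update for this resource
                new Key(777, "Alice", "blog"),  // older update of the same resource -> rejected by the set
                new Key(777, "Bob", "site"));

        Set<Key> latestOnly = new LinkedHashSet<>();
        for (Key key : newestFirst)
            latestOnly.add(key); // add() returns false for the older duplicate

        System.out.println(latestOnly.size()); // prints 2
    }
}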
@ -36,7 +36,6 @@ public class ArbitraryDataRenderManager extends Thread {
|
|||||||
@Override
|
@Override
|
||||||
public void run() {
|
public void run() {
|
||||||
Thread.currentThread().setName("Arbitrary Data Render Manager");
|
Thread.currentThread().setName("Arbitrary Data Render Manager");
|
||||||
Thread.currentThread().setPriority(NORM_PRIORITY);
|
|
||||||
|
|
||||||
try {
|
try {
|
||||||
while (!isStopping) {
|
while (!isStopping) {
|
||||||
|
@ -72,8 +72,6 @@ public class ArbitraryDataStorageManager extends Thread {
|
|||||||
@Override
|
@Override
|
||||||
public void run() {
|
public void run() {
|
||||||
Thread.currentThread().setName("Arbitrary Data Storage Manager");
|
Thread.currentThread().setName("Arbitrary Data Storage Manager");
|
||||||
Thread.currentThread().setPriority(NORM_PRIORITY);
|
|
||||||
|
|
||||||
try {
|
try {
|
||||||
while (!isStopping) {
|
while (!isStopping) {
|
||||||
Thread.sleep(1000);
|
Thread.sleep(1000);
|
||||||
@ -155,24 +153,31 @@ public class ArbitraryDataStorageManager extends Thread {
|
|||||||
* @param arbitraryTransactionData - the transaction
|
* @param arbitraryTransactionData - the transaction
|
||||||
* @return boolean - whether to prefetch or not
|
* @return boolean - whether to prefetch or not
|
||||||
*/
|
*/
|
||||||
public ArbitraryDataExamination shouldPreFetchData(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
|
public boolean shouldPreFetchData(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
|
||||||
String name = arbitraryTransactionData.getName();
|
String name = arbitraryTransactionData.getName();
|
||||||
|
|
||||||
// Only fetch data associated with hashes, as we already have RAW_DATA
|
// Only fetch data associated with hashes, as we already have RAW_DATA
|
||||||
if (arbitraryTransactionData.getDataType() != ArbitraryTransactionData.DataType.DATA_HASH) {
|
if (arbitraryTransactionData.getDataType() != ArbitraryTransactionData.DataType.DATA_HASH) {
|
||||||
return new ArbitraryDataExamination(false, "Only fetch data associated with hashes");
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Don't fetch anything more if we're (nearly) out of space
|
// Don't fetch anything more if we're (nearly) out of space
|
||||||
// Make sure to keep STORAGE_FULL_THRESHOLD considerably less than 1, to
|
// Make sure to keep STORAGE_FULL_THRESHOLD considerably less than 1, to
|
||||||
// avoid a fetch/delete loop
|
// avoid a fetch/delete loop
|
||||||
if (!this.isStorageSpaceAvailable(STORAGE_FULL_THRESHOLD)) {
|
if (!this.isStorageSpaceAvailable(STORAGE_FULL_THRESHOLD)) {
|
||||||
return new ArbitraryDataExamination(false,"Don't fetch anything more if we're (nearly) out of space");
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Don't fetch anything if we're (nearly) out of space for this name
|
||||||
|
// Again, make sure to keep STORAGE_FULL_THRESHOLD considerably less than 1, to
|
||||||
|
// avoid a fetch/delete loop
|
||||||
|
if (!this.isStorageSpaceAvailableForName(repository, arbitraryTransactionData.getName(), STORAGE_FULL_THRESHOLD)) {
|
||||||
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Don't store data unless it's an allowed type (public/private)
|
// Don't store data unless it's an allowed type (public/private)
|
||||||
if (!this.isDataTypeAllowed(arbitraryTransactionData)) {
|
if (!this.isDataTypeAllowed(arbitraryTransactionData)) {
|
||||||
return new ArbitraryDataExamination(false, "Don't store data unless it's an allowed type (public/private)");
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Handle transactions without names differently
|
// Handle transactions without names differently
|
||||||
@ -182,21 +187,21 @@ public class ArbitraryDataStorageManager extends Thread {
|
|||||||
|
|
||||||
// Never fetch data from blocked names, even if they are followed
|
// Never fetch data from blocked names, even if they are followed
|
||||||
if (ListUtils.isNameBlocked(name)) {
|
if (ListUtils.isNameBlocked(name)) {
|
||||||
return new ArbitraryDataExamination(false, "blocked name");
|
return false;
|
||||||
}
|
}
|
||||||
|
|
||||||
switch (Settings.getInstance().getStoragePolicy()) {
|
switch (Settings.getInstance().getStoragePolicy()) {
|
||||||
case FOLLOWED:
|
case FOLLOWED:
|
||||||
case FOLLOWED_OR_VIEWED:
|
case FOLLOWED_OR_VIEWED:
|
||||||
return new ArbitraryDataExamination(ListUtils.isFollowingName(name), Settings.getInstance().getStoragePolicy().name());
|
return ListUtils.isFollowingName(name);
|
||||||
|
|
||||||
case ALL:
|
case ALL:
|
||||||
return new ArbitraryDataExamination(true, Settings.getInstance().getStoragePolicy().name());
|
return true;
|
||||||
|
|
||||||
case NONE:
|
case NONE:
|
||||||
case VIEWED:
|
case VIEWED:
|
||||||
default:
|
default:
|
||||||
return new ArbitraryDataExamination(false, Settings.getInstance().getStoragePolicy().name());
|
return false;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -207,17 +212,17 @@ public class ArbitraryDataStorageManager extends Thread {
|
|||||||
*
|
*
|
||||||
* @return boolean - whether the storage policy allows for unnamed data
|
* @return boolean - whether the storage policy allows for unnamed data
|
||||||
*/
|
*/
|
||||||
private ArbitraryDataExamination shouldPreFetchDataWithoutName() {
|
private boolean shouldPreFetchDataWithoutName() {
|
||||||
switch (Settings.getInstance().getStoragePolicy()) {
|
switch (Settings.getInstance().getStoragePolicy()) {
|
||||||
case ALL:
|
case ALL:
|
||||||
return new ArbitraryDataExamination(true, "Fetching all data");
|
return true;
|
||||||
|
|
||||||
case NONE:
|
case NONE:
|
||||||
case VIEWED:
|
case VIEWED:
|
||||||
case FOLLOWED:
|
case FOLLOWED:
|
||||||
case FOLLOWED_OR_VIEWED:
|
case FOLLOWED_OR_VIEWED:
|
||||||
default:
|
default:
|
||||||
return new ArbitraryDataExamination(false, Settings.getInstance().getStoragePolicy().name());
|
return false;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -477,6 +482,51 @@ public class ArbitraryDataStorageManager extends Thread {
|
|||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
public boolean isStorageSpaceAvailableForName(Repository repository, String name, double threshold) {
|
||||||
|
if (!this.isStorageSpaceAvailable(threshold)) {
|
||||||
|
// No storage space available at all, so no need to check this name
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (Settings.getInstance().getStoragePolicy() == StoragePolicy.ALL) {
|
||||||
|
// Using storage policy ALL, so don't limit anything per name
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (name == null) {
|
||||||
|
// This transaction doesn't have a name, so fall back to total space limitations
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
int followedNamesCount = ListUtils.followedNamesCount();
|
||||||
|
if (followedNamesCount == 0) {
|
||||||
|
// Not following any names, so we have space
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
long totalSizeForName = 0;
|
||||||
|
long maxStoragePerName = this.storageCapacityPerName(threshold);
|
||||||
|
|
||||||
|
// Fetch all hosted transactions
|
||||||
|
List<ArbitraryTransactionData> hostedTransactions = this.listAllHostedTransactions(repository, null, null);
|
||||||
|
for (ArbitraryTransactionData transactionData : hostedTransactions) {
|
||||||
|
String transactionName = transactionData.getName();
|
||||||
|
if (!Objects.equals(name, transactionName)) {
|
||||||
|
// Transaction relates to a different name
|
||||||
|
continue;
|
||||||
|
}
|
||||||
|
|
||||||
|
totalSizeForName += transactionData.getSize();
|
||||||
|
}
|
||||||
|
|
||||||
|
// Have we reached the limit for this name?
|
||||||
|
if (totalSizeForName > maxStoragePerName) {
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
public long storageCapacityPerName(double threshold) {
|
public long storageCapacityPerName(double threshold) {
|
||||||
int followedNamesCount = ListUtils.followedNamesCount();
|
int followedNamesCount = ListUtils.followedNamesCount();
|
||||||
if (followedNamesCount == 0) {
|
if (followedNamesCount == 0) {
|
||||||
|
@ -1,48 +0,0 @@
|
|||||||
package org.qortal.controller.arbitrary;
|
|
||||||
|
|
||||||
import org.qortal.arbitrary.misc.Service;
|
|
||||||
import org.qortal.data.transaction.ArbitraryTransactionData;
|
|
||||||
|
|
||||||
import java.util.Objects;
|
|
||||||
|
|
||||||
public class ArbitraryTransactionDataHashWrapper {
|
|
||||||
|
|
||||||
private ArbitraryTransactionData data;
|
|
||||||
|
|
||||||
private int service;
|
|
||||||
|
|
||||||
private String name;
|
|
||||||
|
|
||||||
private String identifier;
|
|
||||||
|
|
||||||
public ArbitraryTransactionDataHashWrapper(ArbitraryTransactionData data) {
|
|
||||||
this.data = data;
|
|
||||||
|
|
||||||
this.service = data.getService().value;
|
|
||||||
this.name = data.getName();
|
|
||||||
this.identifier = data.getIdentifier();
|
|
||||||
}
|
|
||||||
|
|
||||||
public ArbitraryTransactionDataHashWrapper(int service, String name, String identifier) {
|
|
||||||
this.service = service;
|
|
||||||
this.name = name;
|
|
||||||
this.identifier = identifier;
|
|
||||||
}
|
|
||||||
|
|
||||||
public ArbitraryTransactionData getData() {
|
|
||||||
return data;
|
|
||||||
}
|
|
||||||
|
|
||||||
@Override
|
|
||||||
public boolean equals(Object o) {
|
|
||||||
if (this == o) return true;
|
|
||||||
if (o == null || getClass() != o.getClass()) return false;
|
|
||||||
ArbitraryTransactionDataHashWrapper that = (ArbitraryTransactionDataHashWrapper) o;
|
|
||||||
return service == that.service && name.equals(that.name) && Objects.equals(identifier, that.identifier);
|
|
||||||
}
|
|
||||||
|
|
||||||
@Override
|
|
||||||
public int hashCode() {
|
|
||||||
return Objects.hash(service, name, identifier);
|
|
||||||
}
|
|
||||||
}
|
|
@ -1,33 +0,0 @@
|
|||||||
package org.qortal.controller.arbitrary;
|
|
||||||
|
|
||||||
import org.apache.logging.log4j.LogManager;
|
|
||||||
import org.apache.logging.log4j.Logger;
|
|
||||||
import org.qortal.repository.DataException;
|
|
||||||
import org.qortal.repository.Repository;
|
|
||||||
import org.qortal.repository.RepositoryManager;
|
|
||||||
|
|
||||||
import java.util.TimerTask;
|
|
||||||
|
|
||||||
public class RebuildArbitraryResourceCacheTask extends TimerTask {
|
|
||||||
|
|
||||||
private static final Logger LOGGER = LogManager.getLogger(RebuildArbitraryResourceCacheTask.class);
|
|
||||||
|
|
||||||
public static final long MILLIS_IN_HOUR = 60 * 60 * 1000;
|
|
||||||
|
|
||||||
public static final long MILLIS_IN_MINUTE = 60 * 1000;
|
|
||||||
|
|
||||||
private static final String REBUILD_ARBITRARY_RESOURCE_CACHE_TASK = "Rebuild Arbitrary Resource Cache Task";
|
|
||||||
|
|
||||||
@Override
|
|
||||||
public void run() {
|
|
||||||
|
|
||||||
Thread.currentThread().setName(REBUILD_ARBITRARY_RESOURCE_CACHE_TASK);
|
|
||||||
|
|
||||||
try (final Repository repository = RepositoryManager.getRepository()) {
|
|
||||||
ArbitraryDataCacheManager.getInstance().buildArbitraryResourcesCache(repository, true);
|
|
||||||
}
|
|
||||||
catch( DataException e ) {
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
@ -1,139 +0,0 @@
|
|||||||
package org.qortal.controller.hsqldb;
|
|
||||||
|
|
||||||
import org.apache.logging.log4j.LogManager;
|
|
||||||
import org.apache.logging.log4j.Logger;
|
|
||||||
import org.apache.logging.log4j.util.PropertySource;
|
|
||||||
import org.qortal.data.account.AccountBalanceData;
|
|
||||||
import org.qortal.data.account.BlockHeightRange;
|
|
||||||
import org.qortal.data.account.BlockHeightRangeAddressAmounts;
|
|
||||||
import org.qortal.repository.hsqldb.HSQLDBCacheUtils;
|
|
||||||
import org.qortal.settings.Settings;
|
|
||||||
import org.qortal.utils.BalanceRecorderUtils;
|
|
||||||
|
|
||||||
import java.util.Comparator;
|
|
||||||
import java.util.List;
|
|
||||||
import java.util.Optional;
|
|
||||||
import java.util.concurrent.ConcurrentHashMap;
|
|
||||||
import java.util.concurrent.CopyOnWriteArrayList;
|
|
||||||
import java.util.stream.Collectors;
|
|
||||||
|
|
||||||
public class HSQLDBBalanceRecorder extends Thread{
|
|
||||||
|
|
||||||
-	private static final Logger LOGGER = LogManager.getLogger(HSQLDBBalanceRecorder.class);
-
-	private static HSQLDBBalanceRecorder SINGLETON = null;
-
-	private ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight = new ConcurrentHashMap<>();
-	private ConcurrentHashMap<String, List<AccountBalanceData>> balancesByAddress = new ConcurrentHashMap<>();
-	private CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics = new CopyOnWriteArrayList<>();
-
-	private int priorityRequested;
-	private int frequency;
-	private int capacity;
-
-	private HSQLDBBalanceRecorder( int priorityRequested, int frequency, int capacity) {
-		super("Balance Recorder");
-
-		this.priorityRequested = priorityRequested;
-		this.frequency = frequency;
-		this.capacity = capacity;
-	}
-
-	public static Optional<HSQLDBBalanceRecorder> getInstance() {
-		if( SINGLETON == null ) {
-			SINGLETON
-				= new HSQLDBBalanceRecorder(
-					Settings.getInstance().getBalanceRecorderPriority(),
-					Settings.getInstance().getBalanceRecorderFrequency(),
-					Settings.getInstance().getBalanceRecorderCapacity()
-			);
-		}
-		else if( SINGLETON == null ) {
-			return Optional.empty();
-		}
-
-		return Optional.of(SINGLETON);
-	}
-
-	@Override
-	public void run() {
-		Thread.currentThread().setName("Balance Recorder");
-
-		HSQLDBCacheUtils.startRecordingBalances(this.balancesByHeight, this.balanceDynamics, this.priorityRequested, this.frequency, this.capacity);
-	}
-
-	public List<BlockHeightRangeAddressAmounts> getLatestDynamics(int limit, long offset) {
-		List<BlockHeightRangeAddressAmounts> latest = this.balanceDynamics.stream()
-			.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR.reversed())
-			.skip(offset)
-			.limit(limit)
-			.collect(Collectors.toList());
-
-		return latest;
-	}
-
-	public List<BlockHeightRange> getRanges(Integer offset, Integer limit, Boolean reverse) {
-		if( reverse ) {
-			return this.balanceDynamics.stream()
-				.map(BlockHeightRangeAddressAmounts::getRange)
-				.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_COMPARATOR.reversed())
-				.skip(offset)
-				.limit(limit)
-				.collect(Collectors.toList());
-		}
-		else {
-			return this.balanceDynamics.stream()
-				.map(BlockHeightRangeAddressAmounts::getRange)
-				.sorted(BalanceRecorderUtils.BLOCK_HEIGHT_RANGE_COMPARATOR)
-				.skip(offset)
-				.limit(limit)
-				.collect(Collectors.toList());
-		}
-	}
-
-	public Optional<BlockHeightRangeAddressAmounts> getAddressAmounts(BlockHeightRange range) {
-		return this.balanceDynamics.stream()
-			.filter( dynamic -> dynamic.getRange().equals(range))
-			.findAny();
-	}
-
-	public Optional<BlockHeightRange> getRange( int height ) {
-		return this.balanceDynamics.stream()
-			.map(BlockHeightRangeAddressAmounts::getRange)
-			.filter( range -> range.getBegin() < height && range.getEnd() >= height )
-			.findAny();
-	}
-
-	private Optional<Integer> getLastHeight() {
-		return this.balancesByHeight.keySet().stream().sorted(Comparator.reverseOrder()).findFirst();
-	}
-
-	public List<Integer> getBlocksRecorded() {
-		return this.balancesByHeight.keySet().stream().collect(Collectors.toList());
-	}
-
-	public List<AccountBalanceData> getAccountBalanceRecordings(String address) {
-		return this.balancesByAddress.get(address);
-	}
-
-	@Override
-	public String toString() {
-		return "HSQLDBBalanceRecorder{" +
-				"priorityRequested=" + priorityRequested +
-				", frequency=" + frequency +
-				", capacity=" + capacity +
-				'}';
-	}
-}
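Note: the balance-recorder class above exists only on the master side of this compare. As a rough usage sketch (not taken from the repository, and assuming the class lives alongside HSQLDBDataCacheManager in org.qortal.controller.hsqldb and has already been started elsewhere), a caller might page through the recorded dynamics like this:

```java
// Hedged sketch only: anything not shown in the diff above is an assumption.
Optional<HSQLDBBalanceRecorder> recorder = HSQLDBBalanceRecorder.getInstance();
recorder.ifPresent(r -> {
	// Ten most recent block-height ranges and their per-address amount changes.
	List<BlockHeightRangeAddressAmounts> latest = r.getLatestDynamics(10, 0);
	for (BlockHeightRangeAddressAmounts dynamics : latest)
		System.out.println(dynamics.getRange() + " -> " + dynamics.getAmounts());
});
```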
@@ -1,22 +0,0 @@
-package org.qortal.controller.hsqldb;
-
-import org.qortal.data.arbitrary.ArbitraryResourceCache;
-import org.qortal.repository.RepositoryManager;
-import org.qortal.repository.hsqldb.HSQLDBCacheUtils;
-import org.qortal.repository.hsqldb.HSQLDBRepository;
-import org.qortal.settings.Settings;
-
-public class HSQLDBDataCacheManager extends Thread{
-
-	public HSQLDBDataCacheManager() {}
-
-	@Override
-	public void run() {
-		Thread.currentThread().setName("HSQLDB Data Cache Manager");
-
-		HSQLDBCacheUtils.startCaching(
-				Settings.getInstance().getDbCacheThreadPriority(),
-				Settings.getInstance().getDbCacheFrequency()
-		);
-	}
-}
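The cache manager above extends Thread, so starting it is a one-liner; exactly where master wires this up (likely the Controller) is not shown in this compare, so treat the following as an illustrative sketch only:

```java
// Hedged sketch: launch the HSQLDB data cache manager as a daemon thread.
HSQLDBDataCacheManager dataCacheManager = new HSQLDBDataCacheManager();
dataCacheManager.setDaemon(true); // assumption: cache refresh should not block node shutdown
dataCacheManager.start();         // run() hands off to HSQLDBCacheUtils.startCaching(...)
```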
@@ -11,8 +11,6 @@ import org.qortal.repository.RepositoryManager;
 import org.qortal.settings.Settings;
 import org.qortal.utils.NTP;
 
-import static java.lang.Thread.MIN_PRIORITY;
-
 public class AtStatesPruner implements Runnable {
 
 	private static final Logger LOGGER = LogManager.getLogger(AtStatesPruner.class);
@@ -39,25 +37,15 @@ public class AtStatesPruner implements Runnable {
 			}
 		}
 
-		int pruneStartHeight;
-		int maxLatestAtStatesHeight;
-
 		try (final Repository repository = RepositoryManager.getRepository()) {
-			pruneStartHeight = repository.getATRepository().getAtPruneHeight();
-			maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
+			int pruneStartHeight = repository.getATRepository().getAtPruneHeight();
+			int maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
 
 			repository.discardChanges();
 			repository.getATRepository().rebuildLatestAtStates(maxLatestAtStatesHeight);
 			repository.saveChanges();
-		} catch (Exception e) {
-			LOGGER.error("AT States Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			return;
-		}
-
 		while (!Controller.isStopping()) {
-			try (final Repository repository = RepositoryManager.getRepository()) {
-
-				try {
+			try {
 					repository.discardChanges();
 
 					Thread.sleep(Settings.getInstance().getAtStatesPruneInterval());
@@ -87,7 +75,7 @@ public class AtStatesPruner implements Runnable {
 					if (pruneStartHeight >= upperPruneHeight)
 						continue;
 
-					LOGGER.info(String.format("Pruning AT states between blocks %d and %d...", pruneStartHeight, upperPruneHeight));
+					LOGGER.debug(String.format("Pruning AT states between blocks %d and %d...", pruneStartHeight, upperPruneHeight));
 
 					int numAtStatesPruned = repository.getATRepository().pruneAtStates(pruneStartHeight, upperPruneHeight);
 					repository.saveChanges();
@@ -97,7 +85,7 @@ public class AtStatesPruner implements Runnable {
 					if (numAtStatesPruned > 0 || numAtStateDataRowsTrimmed > 0) {
 						final int finalPruneStartHeight = pruneStartHeight;
-						LOGGER.info(() -> String.format("Pruned %d AT state%s between blocks %d and %d",
+						LOGGER.debug(() -> String.format("Pruned %d AT state%s between blocks %d and %d",
 							numAtStatesPruned, (numAtStatesPruned != 1 ? "s" : ""),
 							finalPruneStartHeight, upperPruneHeight));
 					} else {
@@ -110,26 +98,21 @@ public class AtStatesPruner implements Runnable {
 						repository.saveChanges();
 
 						final int finalPruneStartHeight = pruneStartHeight;
-						LOGGER.info(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
-					} else {
+						LOGGER.debug(() -> String.format("Bumping AT state base prune height to %d", finalPruneStartHeight));
+					}
+					else {
 						// We've pruned up to the upper prunable height
 						// Back off for a while to save CPU for syncing
 						repository.discardChanges();
-						Thread.sleep(5 * 60 * 1000L);
+						Thread.sleep(5*60*1000L);
 					}
 				}
 			}
+			}
+		} catch (DataException e) {
+			LOGGER.warn(String.format("Repository issue trying to prune AT states: %s", e.getMessage()));
 		} catch (InterruptedException e) {
-			if (Controller.isStopping()) {
-				LOGGER.info("AT States Pruning Shutting Down");
-			} else {
-				LOGGER.warn("AT States Pruning interrupted. Trying again. Report this error immediately to the developers.", e);
-			}
-		} catch (Exception e) {
-			LOGGER.warn("AT States Pruning stopped working. Trying again. Report this error immediately to the developers.", e);
-		}
-		} catch(Exception e){
-			LOGGER.error("AT States Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-		}
+			// Time to exit
 		}
 	}
 
 }
@@ -11,8 +11,6 @@ import org.qortal.repository.RepositoryManager;
 import org.qortal.settings.Settings;
 import org.qortal.utils.NTP;
 
-import static java.lang.Thread.MIN_PRIORITY;
-
 public class AtStatesTrimmer implements Runnable {
 
 	private static final Logger LOGGER = LogManager.getLogger(AtStatesTrimmer.class);
@@ -26,24 +24,15 @@ public class AtStatesTrimmer implements Runnable {
 			return;
 		}
 
-		int trimStartHeight;
-		int maxLatestAtStatesHeight;
-
 		try (final Repository repository = RepositoryManager.getRepository()) {
-			trimStartHeight = repository.getATRepository().getAtTrimHeight();
-			maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
+			int trimStartHeight = repository.getATRepository().getAtTrimHeight();
+			int maxLatestAtStatesHeight = PruneManager.getMaxHeightForLatestAtStates(repository);
 
 			repository.discardChanges();
 			repository.getATRepository().rebuildLatestAtStates(maxLatestAtStatesHeight);
 			repository.saveChanges();
-		} catch (Exception e) {
-			LOGGER.error("AT States Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			return;
-		}
-
 		while (!Controller.isStopping()) {
-			try (final Repository repository = RepositoryManager.getRepository()) {
-				try {
+			try {
 					repository.discardChanges();
 
 					Thread.sleep(Settings.getInstance().getAtStatesTrimInterval());
@@ -74,7 +63,7 @@ public class AtStatesTrimmer implements Runnable {
 					if (numAtStatesTrimmed > 0) {
 						final int finalTrimStartHeight = trimStartHeight;
-						LOGGER.info(() -> String.format("Trimmed %d AT state%s between blocks %d and %d",
+						LOGGER.debug(() -> String.format("Trimmed %d AT state%s between blocks %d and %d",
 							numAtStatesTrimmed, (numAtStatesTrimmed != 1 ? "s" : ""),
 							finalTrimStartHeight, upperTrimHeight));
 					} else {
@@ -87,22 +76,14 @@ public class AtStatesTrimmer implements Runnable {
 						repository.saveChanges();
 
 						final int finalTrimStartHeight = trimStartHeight;
-						LOGGER.info(() -> String.format("Bumping AT state base trim height to %d", finalTrimStartHeight));
+						LOGGER.debug(() -> String.format("Bumping AT state base trim height to %d", finalTrimStartHeight));
 					}
 				}
+			}
+		} catch (DataException e) {
+			LOGGER.warn(String.format("Repository issue trying to trim AT states: %s", e.getMessage()));
 		} catch (InterruptedException e) {
-			if(Controller.isStopping()) {
-				LOGGER.info("AT States Trimming Shutting Down");
-			}
-			else {
-				LOGGER.warn("AT States Trimming interrupted. Trying again. Report this error immediately to the developers.", e);
-			}
-		} catch (Exception e) {
-			LOGGER.warn("AT States Trimming stopped working. Trying again. Report this error immediately to the developers.", e);
-		}
-		} catch (Exception e) {
-			LOGGER.error("AT States Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-		}
+			// Time to exit
 		}
 	}
 
 }
@@ -15,13 +15,11 @@ import org.qortal.utils.NTP;
 
 import java.io.IOException;
 
-import static java.lang.Thread.NORM_PRIORITY;
-
 public class BlockArchiver implements Runnable {
 
 	private static final Logger LOGGER = LogManager.getLogger(BlockArchiver.class);
 
-	private static final long INITIAL_SLEEP_PERIOD = 15 * 60 * 1000L; // ms
+	private static final long INITIAL_SLEEP_PERIOD = 5 * 60 * 1000L + 1234L; // ms
 
 	public void run() {
 		Thread.currentThread().setName("Block archiver");
@@ -30,13 +28,11 @@ public class BlockArchiver implements Runnable {
 			return;
 		}
 
-		int startHeight;
-
 		try (final Repository repository = RepositoryManager.getRepository()) {
 			// Don't even start building until initial rush has ended
 			Thread.sleep(INITIAL_SLEEP_PERIOD);
 
-			startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
+			int startHeight = repository.getBlockArchiveRepository().getBlockArchiveHeight();
 
 			// Don't attempt to archive if we have no ATStatesHeightIndex, as it will be too slow
 			boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
@@ -45,17 +41,10 @@ public class BlockArchiver implements Runnable {
 				repository.discardChanges();
 				return;
 			}
-		} catch (Exception e) {
-			LOGGER.error("Block Archiving is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			return;
-		}
-
 		LOGGER.info("Starting block archiver from height {}...", startHeight);
 
 		while (!Controller.isStopping()) {
-			try (final Repository repository = RepositoryManager.getRepository()) {
-
-				try {
+			try {
 					repository.discardChanges();
 
 					Thread.sleep(Settings.getInstance().getArchiveInterval());
@@ -76,6 +65,7 @@ public class BlockArchiver implements Runnable {
 					continue;
 				}
 
 				// Build cache of blocks
 				try {
 					final int maximumArchiveHeight = BlockArchiveWriter.getMaxArchiveHeight(repository);
@@ -98,7 +88,7 @@ public class BlockArchiver implements Runnable {
 					// We didn't reach our file size target, so that must mean that we don't have enough blocks
 					// yet or something went wrong. Sleep for a while and then try again.
 					repository.discardChanges();
-					Thread.sleep(2 * 60 * 60 * 1000L); // 1 hour
+					Thread.sleep(60 * 60 * 1000L); // 1 hour
 					break;
 
 				case BLOCK_NOT_FOUND:
@@ -107,25 +97,21 @@ public class BlockArchiver implements Runnable {
 					LOGGER.info("Error: block not found when building archive. If this error persists, " +
 							"a bootstrap or re-sync may be needed.");
 					repository.discardChanges();
-					Thread.sleep(60 * 1000L); // 1 minute
+					Thread.sleep( 60 * 1000L); // 1 minute
 					break;
 				}
 
 			} catch (IOException | TransformationException e) {
 				LOGGER.info("Caught exception when creating block cache", e);
 			}
 
+			}
+		} catch (DataException e) {
+			LOGGER.info("Caught exception when creating block cache", e);
 		} catch (InterruptedException e) {
-			if (Controller.isStopping()) {
-				LOGGER.info("Block Archiving Shutting Down");
-			} else {
-				LOGGER.warn("Block Archiving interrupted. Trying again. Report this error immediately to the developers.", e);
-			}
-		} catch (Exception e) {
-			LOGGER.warn("Block Archiving stopped working. Trying again. Report this error immediately to the developers.", e);
-		}
-		} catch(Exception e){
-			LOGGER.error("Block Archiving is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-		}
+			// Do nothing
 		}
 
 	}
 
 }
@@ -11,8 +11,6 @@ import org.qortal.repository.RepositoryManager;
 import org.qortal.settings.Settings;
 import org.qortal.utils.NTP;
 
-import static java.lang.Thread.NORM_PRIORITY;
-
 public class BlockPruner implements Runnable {
 
 	private static final Logger LOGGER = LogManager.getLogger(BlockPruner.class);
@@ -39,10 +37,8 @@ public class BlockPruner implements Runnable {
 			}
 		}
 
-		int pruneStartHeight;
-
 		try (final Repository repository = RepositoryManager.getRepository()) {
-			pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
+			int pruneStartHeight = repository.getBlockRepository().getBlockPruneHeight();
 
 			// Don't attempt to prune if we have no ATStatesHeightIndex, as it will be too slow
 			boolean hasAtStatesHeightIndex = repository.getATRepository().hasAtStatesHeightIndex();
@@ -50,16 +46,8 @@ public class BlockPruner implements Runnable {
 				LOGGER.info("Unable to start block pruner due to missing ATStatesHeightIndex. Bootstrapping is recommended.");
 				return;
 			}
-		} catch (Exception e) {
-			LOGGER.error("Block Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			return;
-		}
-
 		while (!Controller.isStopping()) {
-
-			try (final Repository repository = RepositoryManager.getRepository()) {
-
-				try {
+			try {
 					repository.discardChanges();
 
 					Thread.sleep(Settings.getInstance().getBlockPruneInterval());
@@ -95,20 +83,20 @@ public class BlockPruner implements Runnable {
 						continue;
 					}
 
-					LOGGER.info(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
+					LOGGER.debug(String.format("Pruning blocks between %d and %d...", pruneStartHeight, upperPruneHeight));
 
 					int numBlocksPruned = repository.getBlockRepository().pruneBlocks(pruneStartHeight, upperPruneHeight);
 					repository.saveChanges();
 
 					if (numBlocksPruned > 0) {
-						LOGGER.info(String.format("Pruned %d block%s between %d and %d",
+						LOGGER.debug(String.format("Pruned %d block%s between %d and %d",
 							numBlocksPruned, (numBlocksPruned != 1 ? "s" : ""),
 							pruneStartHeight, upperPruneHeight));
 					} else {
 						final int nextPruneHeight = upperPruneHeight + 1;
 						repository.getBlockRepository().setBlockPruneHeight(nextPruneHeight);
 						repository.saveChanges();
-						LOGGER.info(String.format("Bumping block base prune height to %d", pruneStartHeight));
+						LOGGER.debug(String.format("Bumping block base prune height to %d", pruneStartHeight));
 
 						// Can we move onto next batch?
 						if (upperPrunableHeight > nextPruneHeight) {
@@ -121,19 +109,12 @@ public class BlockPruner implements Runnable {
 							Thread.sleep(10*60*1000L);
 						}
 					}
+				}
+			} catch (DataException e) {
+				LOGGER.warn(String.format("Repository issue trying to prune blocks: %s", e.getMessage()));
 			} catch (InterruptedException e) {
-				if(Controller.isStopping()) {
-					LOGGER.info("Block Pruning Shutting Down");
-				}
-				else {
-					LOGGER.warn("Block Pruning interrupted. Trying again. Report this error immediately to the developers.", e);
-				}
-			} catch (Exception e) {
-				LOGGER.warn("Block Pruning stopped working. Trying again. Report this error immediately to the developers.", e);
-			}
-			} catch(Exception e){
-				LOGGER.error("Block Pruning is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			}
+				// Time to exit
 			}
 		}
 
 	}
@@ -12,8 +12,6 @@ import org.qortal.repository.RepositoryManager;
 import org.qortal.settings.Settings;
 import org.qortal.utils.NTP;
 
-import static java.lang.Thread.NORM_PRIORITY;
-
 public class OnlineAccountsSignaturesTrimmer implements Runnable {
 
 	private static final Logger LOGGER = LogManager.getLogger(OnlineAccountsSignaturesTrimmer.class);
@@ -28,22 +26,13 @@ public class OnlineAccountsSignaturesTrimmer implements Runnable {
 			return;
 		}
 
-		int trimStartHeight;
-
 		try (final Repository repository = RepositoryManager.getRepository()) {
 			// Don't even start trimming until initial rush has ended
 			Thread.sleep(INITIAL_SLEEP_PERIOD);
 
-			trimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
-		} catch (Exception e) {
-			LOGGER.error("Online Accounts Signatures Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-			return;
-		}
-
+			int trimStartHeight = repository.getBlockRepository().getOnlineAccountsSignaturesTrimHeight();
 
 		while (!Controller.isStopping()) {
-			try (final Repository repository = RepositoryManager.getRepository()) {
-
-				try {
+			try {
 					repository.discardChanges();
 
 					Thread.sleep(Settings.getInstance().getOnlineSignaturesTrimInterval());
@@ -71,7 +60,7 @@ public class OnlineAccountsSignaturesTrimmer implements Runnable {
 					if (numSigsTrimmed > 0) {
 						final int finalTrimStartHeight = trimStartHeight;
-						LOGGER.info(() -> String.format("Trimmed %d online accounts signature%s between blocks %d and %d",
+						LOGGER.debug(() -> String.format("Trimmed %d online accounts signature%s between blocks %d and %d",
 							numSigsTrimmed, (numSigsTrimmed != 1 ? "s" : ""),
 							finalTrimStartHeight, upperTrimHeight));
 					} else {
@@ -83,22 +72,15 @@ public class OnlineAccountsSignaturesTrimmer implements Runnable {
 						repository.saveChanges();
 
 						final int finalTrimStartHeight = trimStartHeight;
-						LOGGER.info(() -> String.format("Bumping online accounts signatures base trim height to %d", finalTrimStartHeight));
+						LOGGER.debug(() -> String.format("Bumping online accounts signatures base trim height to %d", finalTrimStartHeight));
 					}
 				}
+			}
+		} catch (DataException e) {
+			LOGGER.warn(String.format("Repository issue trying to trim online accounts signatures: %s", e.getMessage()));
 		} catch (InterruptedException e) {
-			if(Controller.isStopping()) {
-				LOGGER.info("Online Accounts Signatures Trimming Shutting Down");
-			}
-			else {
-				LOGGER.warn("Online Accounts Signatures Trimming interrupted. Trying again. Report this error immediately to the developers.", e);
-			}
-		} catch (Exception e) {
-			LOGGER.warn("Online Accounts Signatures Trimming stopped working. Trying again. Report this error immediately to the developers.", e);
-		}
-		} catch (Exception e) {
-			LOGGER.error("Online Accounts Signatures Trimming is not working! Not trying again. Restart ASAP. Report this error immediately to the developers.", e);
-		}
+			// Time to exit
 		}
 	}
 
 }
@@ -40,7 +40,7 @@ public class PruneManager {
 	}
 
 	public void start() {
-		this.executorService = Executors.newCachedThreadPool(new DaemonThreadFactory(Settings.getInstance().getPruningThreadPriority()));
+		this.executorService = Executors.newCachedThreadPool(new DaemonThreadFactory());
 
 		if (Settings.getInstance().isTopOnly()) {
 			// Top-only-sync
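The only change in this hunk is that master passes a configurable priority into DaemonThreadFactory. The project's own factory implementation is not part of this compare, so the following is just a sketch of what a priority-aware daemon factory generally looks like; all names here are illustrative, not the repository's code:

```java
import java.util.concurrent.ThreadFactory;

// Illustrative only; not the repository's DaemonThreadFactory.
public class PriorityDaemonThreadFactory implements ThreadFactory {
	private final int priority;

	public PriorityDaemonThreadFactory(int priority) {
		this.priority = priority;
	}

	@Override
	public Thread newThread(Runnable runnable) {
		Thread thread = new Thread(runnable);
		thread.setDaemon(true);
		// Clamp so an out-of-range settings value cannot throw IllegalArgumentException.
		thread.setPriority(Math.max(Thread.MIN_PRIORITY, Math.min(Thread.MAX_PRIORITY, priority)));
		return thread;
	}
}
```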
@@ -83,7 +83,6 @@ public abstract class Bitcoiny implements ForeignBlockchain {
 		return this.bitcoinjContext;
 	}
 
-	@Override
 	public String getCurrencyCode() {
 		return this.currencyCode;
 	}
@@ -2,8 +2,6 @@ package org.qortal.crosschain;
 
 public interface ForeignBlockchain {
 
-	public String getCurrencyCode();
-
 	public boolean isValidAddress(String address);
 
 	public boolean isValidWalletKey(String walletKey);
@@ -1,54 +0,0 @@
-package org.qortal.data.account;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
-import java.util.Objects;
-
-// All properties to be converted to JSON via JAXB
-@XmlAccessorType(XmlAccessType.FIELD)
-public class AddressAmountData {
-
-	private String address;
-
-	@XmlJavaTypeAdapter(value = org.qortal.api.AmountTypeAdapter.class)
-	private long amount;
-
-	public AddressAmountData() {
-	}
-
-	public AddressAmountData(String address, long amount) {
-		this.address = address;
-		this.amount = amount;
-	}
-
-	public String getAddress() {
-		return address;
-	}
-
-	public long getAmount() {
-		return amount;
-	}
-
-	@Override
-	public boolean equals(Object o) {
-		if (this == o) return true;
-		if (o == null || getClass() != o.getClass()) return false;
-		AddressAmountData that = (AddressAmountData) o;
-		return amount == that.amount && Objects.equals(address, that.address);
-	}
-
-	@Override
-	public int hashCode() {
-		return Objects.hash(address, amount);
-	}
-
-	@Override
-	public String toString() {
-		return "AddressAmountData{" +
-				"address='" + address + '\'' +
-				", amount=" + amount +
-				'}';
-	}
-}
@@ -33,10 +33,9 @@ public class AddressLevelPairing {
 	public int getLevel() {
 		return level;
 	}
 
 	@Override
 	public String toString() {
-		return "AddressLevelPairing{" +
+		return "SponsorshipReport{" +
 				"address='" + address + '\'' +
 				", level=" + level +
 				'}';
@@ -1,59 +0,0 @@
-package org.qortal.data.account;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import java.util.Objects;
-
-// All properties to be converted to JSON via JAXB
-@XmlAccessorType(XmlAccessType.FIELD)
-public class BlockHeightRange {
-
-	private int begin;
-
-	private int end;
-
-	private boolean isRewardDistribution;
-
-	public BlockHeightRange() {
-	}
-
-	public BlockHeightRange(int begin, int end, boolean isRewardDistribution) {
-		this.begin = begin;
-		this.end = end;
-		this.isRewardDistribution = isRewardDistribution;
-	}
-
-	public int getBegin() {
-		return begin;
-	}
-
-	public int getEnd() {
-		return end;
-	}
-
-	public boolean isRewardDistribution() {
-		return isRewardDistribution;
-	}
-
-	@Override
-	public boolean equals(Object o) {
-		if (this == o) return true;
-		if (o == null || getClass() != o.getClass()) return false;
-		BlockHeightRange that = (BlockHeightRange) o;
-		return begin == that.begin && end == that.end;
-	}
-
-	@Override
-	public int hashCode() {
-		return Objects.hash(begin, end);
-	}
-
-	@Override
-	public String toString() {
-		return "BlockHeightRange{" +
-				"begin=" + begin +
-				", end=" + end +
-				", isRewardDistribution=" + isRewardDistribution +
-				'}';
-	}
-}
@@ -1,52 +0,0 @@
-package org.qortal.data.account;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import java.util.List;
-import java.util.Objects;
-
-// All properties to be converted to JSON via JAXB
-@XmlAccessorType(XmlAccessType.FIELD)
-public class BlockHeightRangeAddressAmounts {
-
-	private BlockHeightRange range;
-
-	private List<AddressAmountData> amounts;
-
-	public BlockHeightRangeAddressAmounts() {
-	}
-
-	public BlockHeightRangeAddressAmounts(BlockHeightRange range, List<AddressAmountData> amounts) {
-		this.range = range;
-		this.amounts = amounts;
-	}
-
-	public BlockHeightRange getRange() {
-		return range;
-	}
-
-	public List<AddressAmountData> getAmounts() {
-		return amounts;
-	}
-
-	@Override
-	public boolean equals(Object o) {
-		if (this == o) return true;
-		if (o == null || getClass() != o.getClass()) return false;
-		BlockHeightRangeAddressAmounts that = (BlockHeightRangeAddressAmounts) o;
-		return Objects.equals(range, that.range) && Objects.equals(amounts, that.amounts);
-	}
-
-	@Override
-	public int hashCode() {
-		return Objects.hash(range, amounts);
-	}
-
-	@Override
-	public String toString() {
-		return "BlockHeightRangeAddressAmounts{" +
-				"range=" + range +
-				", amounts=" + amounts +
-				'}';
-	}
-}
@@ -1,34 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import org.qortal.arbitrary.misc.Service;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class ArbitraryDataIndex {
-
-	public String t;
-	public String n;
-	public int c;
-	public String l;
-
-	public ArbitraryDataIndex() {}
-
-	public ArbitraryDataIndex(String t, String n, int c, String l) {
-		this.t = t;
-		this.n = n;
-		this.c = c;
-		this.l = l;
-	}
-
-	@Override
-	public String toString() {
-		return "ArbitraryDataIndex{" +
-				"t='" + t + '\'' +
-				", n='" + n + '\'' +
-				", c=" + c +
-				", l='" + l + '\'' +
-				'}';
-	}
-}
@@ -1,41 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class ArbitraryDataIndexDetail {
-
-	public String issuer;
-	public int rank;
-	public String term;
-	public String name;
-	public int category;
-	public String link;
-	public String indexIdentifer;
-
-	public ArbitraryDataIndexDetail() {}
-
-	public ArbitraryDataIndexDetail(String issuer, int rank, ArbitraryDataIndex index, String indexIdentifer) {
-		this.issuer = issuer;
-		this.rank = rank;
-		this.term = index.t;
-		this.name = index.n;
-		this.category = index.c;
-		this.link = index.l;
-		this.indexIdentifer = indexIdentifer;
-	}
-
-	@Override
-	public String toString() {
-		return "ArbitraryDataIndexDetail{" +
-				"issuer='" + issuer + '\'' +
-				", rank=" + rank +
-				", term='" + term + '\'' +
-				", name='" + name + '\'' +
-				", category=" + category +
-				", link='" + link + '\'' +
-				", indexIdentifer='" + indexIdentifer + '\'' +
-				'}';
-	}
-}
@@ -1,38 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import org.qortal.arbitrary.misc.Service;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import java.util.Objects;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class ArbitraryDataIndexScoreKey {
-
-	public String name;
-	public int category;
-	public String link;
-
-	public ArbitraryDataIndexScoreKey() {}
-
-	public ArbitraryDataIndexScoreKey(String name, int category, String link) {
-		this.name = name;
-		this.category = category;
-		this.link = link;
-	}
-
-	@Override
-	public boolean equals(Object o) {
-		if (this == o) return true;
-		if (o == null || getClass() != o.getClass()) return false;
-		ArbitraryDataIndexScoreKey that = (ArbitraryDataIndexScoreKey) o;
-		return category == that.category && Objects.equals(name, that.name) && Objects.equals(link, that.link);
-	}
-
-	@Override
-	public int hashCode() {
-		return Objects.hash(name, category, link);
-	}
-
-}
@@ -1,38 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import org.qortal.arbitrary.misc.Service;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class ArbitraryDataIndexScorecard {
-
-	public double score;
-	public String name;
-	public int category;
-	public String link;
-
-	public ArbitraryDataIndexScorecard() {}
-
-	public ArbitraryDataIndexScorecard(double score, String name, int category, String link) {
-		this.score = score;
-		this.name = name;
-		this.category = category;
-		this.link = link;
-	}
-
-	public double getScore() {
-		return score;
-	}
-
-	@Override
-	public String toString() {
-		return "ArbitraryDataIndexScorecard{" +
-				"score=" + score +
-				", name='" + name + '\'' +
-				", category=" + category +
-				", link='" + link + '\'' +
-				'}';
-	}
-}
@@ -1,26 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import java.util.List;
-import java.util.Optional;
-import java.util.concurrent.ConcurrentHashMap;
-
-public class ArbitraryResourceCache {
-	private ConcurrentHashMap<Integer, List<ArbitraryResourceData>> dataByService = new ConcurrentHashMap<>();
-	private ConcurrentHashMap<String, Integer> levelByName = new ConcurrentHashMap<>();
-
-	private ArbitraryResourceCache() {}
-
-	private static ArbitraryResourceCache SINGLETON = new ArbitraryResourceCache();
-
-	public static ArbitraryResourceCache getInstance(){
-		return SINGLETON;
-	}
-
-	public ConcurrentHashMap<String, Integer> getLevelByName() {
-		return levelByName;
-	}
-
-	public ConcurrentHashMap<Integer, List<ArbitraryResourceData>> getDataByService() {
-		return this.dataByService;
-	}
-}
@@ -1,57 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class DataMonitorInfo {
-	private long timestamp;
-	private String identifier;
-	private String name;
-	private String service;
-	private String description;
-	private long transactionTimestamp;
-	private long latestPutTimestamp;
-
-	public DataMonitorInfo() {
-	}
-
-	public DataMonitorInfo(long timestamp, String identifier, String name, String service, String description, long transactionTimestamp, long latestPutTimestamp) {
-		this.timestamp = timestamp;
-		this.identifier = identifier;
-		this.name = name;
-		this.service = service;
-		this.description = description;
-		this.transactionTimestamp = transactionTimestamp;
-		this.latestPutTimestamp = latestPutTimestamp;
-	}
-
-	public long getTimestamp() {
-		return timestamp;
-	}
-
-	public String getIdentifier() {
-		return identifier;
-	}
-
-	public String getName() {
-		return name;
-	}
-
-	public String getService() {
-		return service;
-	}
-
-	public String getDescription() {
-		return description;
-	}
-
-	public long getTransactionTimestamp() {
-		return transactionTimestamp;
-	}
-
-	public long getLatestPutTimestamp() {
-		return latestPutTimestamp;
-	}
-}
@@ -1,23 +0,0 @@
-package org.qortal.data.arbitrary;
-
-import java.util.List;
-import java.util.concurrent.ConcurrentHashMap;
-
-public class IndexCache {
-
-	public static final IndexCache SINGLETON = new IndexCache();
-	private ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> indicesByTerm = new ConcurrentHashMap<>();
-	private ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> indicesByIssuer = new ConcurrentHashMap<>();
-
-	public static IndexCache getInstance() {
-		return SINGLETON;
-	}
-
-	public ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> getIndicesByTerm() {
-		return indicesByTerm;
-	}
-
-	public ConcurrentHashMap<String, List<ArbitraryDataIndexDetail>> getIndicesByIssuer() {
-		return indicesByIssuer;
-	}
-}
@@ -1,11 +1,8 @@
 package org.qortal.data.block;
 
 import com.google.common.primitives.Bytes;
-import org.qortal.account.Account;
 import org.qortal.block.BlockChain;
-import org.qortal.repository.DataException;
-import org.qortal.repository.Repository;
-import org.qortal.repository.RepositoryManager;
+import org.qortal.crypto.Crypto;
 import org.qortal.settings.Settings;
 import org.qortal.utils.NTP;
 
@@ -235,31 +232,11 @@ public class BlockData implements Serializable {
 		return blockTimestamp < onlineAccountSignaturesTrimmedTimestamp && blockTimestamp < currentTrimmableTimestamp;
 	}
 
-	public String getMinterAddressFromPublicKey() {
-		try (final Repository repository = RepositoryManager.getRepository()) {
-			return Account.getRewardShareMintingAddress(repository, this.minterPublicKey);
-		} catch (DataException e) {
-			return "Unknown";
-		}
-	}
-
-	public int getMinterLevelFromPublicKey() {
-		try (final Repository repository = RepositoryManager.getRepository()) {
-			return Account.getRewardShareEffectiveMintingLevel(repository, this.minterPublicKey);
-		} catch (DataException e) {
-			return 0;
-		}
-	}
-
 	// JAXB special
 
 	@XmlElement(name = "minterAddress")
 	protected String getMinterAddress() {
-		return getMinterAddressFromPublicKey();
+		return Crypto.toAddress(this.minterPublicKey);
 	}
 
-	@XmlElement(name = "minterLevel")
-	protected int getMinterLevel() {
-		return getMinterLevelFromPublicKey();
-	}
-
 }
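The minterAddress hunk above swaps a repository-backed reward-share lookup for a plain key-to-address conversion. A hedged restatement of the two behaviours, using only calls visible in this diff and assuming the usual BlockData getter for the minter public key:

```java
// v4.6.0 behaviour: derive an address directly from the reward-share public key.
String directAddress = Crypto.toAddress(blockData.getMinterPublicKey());

// master behaviour: resolve the underlying minting account via the repository,
// falling back to "Unknown" when the lookup fails.
String minterAddress;
try (final Repository repository = RepositoryManager.getRepository()) {
	minterAddress = Account.getRewardShareMintingAddress(repository, blockData.getMinterPublicKey());
} catch (DataException e) {
	minterAddress = "Unknown";
}
```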
@@ -1,85 +0,0 @@
-package org.qortal.data.block;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-import java.util.Objects;
-
-// All properties to be converted to JSON via JAX-RS
-@XmlAccessorType(XmlAccessType.FIELD)
-public class DecodedOnlineAccountData {
-
-	private long onlineTimestamp;
-	private String minter;
-	private String recipient;
-	private int sharePercent;
-	private boolean minterGroupMember;
-	private String name;
-	private int level;
-
-	public DecodedOnlineAccountData() {
-	}
-
-	public DecodedOnlineAccountData(long onlineTimestamp, String minter, String recipient, int sharePercent, boolean minterGroupMember, String name, int level) {
-		this.onlineTimestamp = onlineTimestamp;
-		this.minter = minter;
-		this.recipient = recipient;
-		this.sharePercent = sharePercent;
-		this.minterGroupMember = minterGroupMember;
-		this.name = name;
-		this.level = level;
-	}
-
-	public long getOnlineTimestamp() {
-		return onlineTimestamp;
-	}
-
-	public String getMinter() {
-		return minter;
-	}
-
-	public String getRecipient() {
-		return recipient;
-	}
-
-	public int getSharePercent() {
-		return sharePercent;
-	}
-
-	public boolean isMinterGroupMember() {
-		return minterGroupMember;
-	}
-
-	public String getName() {
-		return name;
-	}
-
-	public int getLevel() {
-		return level;
-	}
-
-	@Override
-	public boolean equals(Object o) {
-		if (this == o) return true;
-		if (o == null || getClass() != o.getClass()) return false;
-		DecodedOnlineAccountData that = (DecodedOnlineAccountData) o;
-		return onlineTimestamp == that.onlineTimestamp && sharePercent == that.sharePercent && minterGroupMember == that.minterGroupMember && level == that.level && Objects.equals(minter, that.minter) && Objects.equals(recipient, that.recipient) && Objects.equals(name, that.name);
-	}
-
-	@Override
-	public int hashCode() {
-		return Objects.hash(onlineTimestamp, minter, recipient, sharePercent, minterGroupMember, name, level);
-	}
-
-	@Override
-	public String toString() {
-		return "DecodedOnlineAccountData{" +
-				"onlineTimestamp=" + onlineTimestamp +
-				", minter='" + minter + '\'' +
-				", recipient='" + recipient + '\'' +
-				", sharePercent=" + sharePercent +
-				", minterGroupMember=" + minterGroupMember +
-				", name='" + name + '\'' +
-				", level=" + level +
-				'}';
-	}
-}
@@ -1,35 +0,0 @@
-package org.qortal.data.system;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class DbConnectionInfo {
-
-	private long updated;
-
-	private String owner;
-
-	private String state;
-
-	public DbConnectionInfo() {
-	}
-
-	public DbConnectionInfo(long timeOpened, String owner, String state) {
-		this.updated = timeOpened;
-		this.owner = owner;
-		this.state = state;
-	}
-
-	public long getUpdated() {
-		return updated;
-	}
-
-	public String getOwner() {
-		return owner;
-	}
-
-	public String getState() {
-		return state;
-	}
-}
@@ -1,49 +0,0 @@
-package org.qortal.data.system;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class SystemInfo {
-
-	private long freeMemory;
-
-	private long memoryInUse;
-
-	private long totalMemory;
-
-	private long maxMemory;
-
-	private int availableProcessors;
-
-	public SystemInfo() {
-	}
-
-	public SystemInfo(long freeMemory, long memoryInUse, long totalMemory, long maxMemory, int availableProcessors) {
-		this.freeMemory = freeMemory;
-		this.memoryInUse = memoryInUse;
-		this.totalMemory = totalMemory;
-		this.maxMemory = maxMemory;
-		this.availableProcessors = availableProcessors;
-	}
-
-	public long getFreeMemory() {
-		return freeMemory;
-	}
-
-	public long getMemoryInUse() {
-		return memoryInUse;
-	}
-
-	public long getTotalMemory() {
-		return totalMemory;
-	}
-
-	public long getMaxMemory() {
-		return maxMemory;
-	}
-
-	public int getAvailableProcessors() {
-		return availableProcessors;
-	}
-}
@@ -200,26 +200,4 @@ public class ArbitraryTransactionData extends TransactionData {
 		return this.payments;
 	}
 
-	@Override
-	public String toString() {
-		return "ArbitraryTransactionData{" +
-				"version=" + version +
-				", service=" + service +
-				", nonce=" + nonce +
-				", size=" + size +
-				", name='" + name + '\'' +
-				", identifier='" + identifier + '\'' +
-				", method=" + method +
-				", compression=" + compression +
-				", dataType=" + dataType +
-				", type=" + type +
-				", timestamp=" + timestamp +
-				", fee=" + fee +
-				", txGroupId=" + txGroupId +
-				", blockHeight=" + blockHeight +
-				", blockSequence=" + blockSequence +
-				", approvalStatus=" + approvalStatus +
-				", approvalHeight=" + approvalHeight +
-				'}';
-	}
 }
@@ -1,57 +0,0 @@
-package org.qortal.event;
-
-import javax.xml.bind.annotation.XmlAccessType;
-import javax.xml.bind.annotation.XmlAccessorType;
-
-@XmlAccessorType(XmlAccessType.FIELD)
-public class DataMonitorEvent implements Event{
-	private long timestamp;
-	private String identifier;
-	private String name;
-	private String service;
-	private String description;
-	private long transactionTimestamp;
-	private long latestPutTimestamp;
-
-	public DataMonitorEvent() {
-	}
-
-	public DataMonitorEvent(long timestamp, String identifier, String name, String service, String description, long transactionTimestamp, long latestPutTimestamp) {
-		this.timestamp = timestamp;
-		this.identifier = identifier;
-		this.name = name;
-		this.service = service;
-		this.description = description;
-		this.transactionTimestamp = transactionTimestamp;
-		this.latestPutTimestamp = latestPutTimestamp;
-	}
-
-	public long getTimestamp() {
-		return timestamp;
-	}
-
-	public String getIdentifier() {
-		return identifier;
-	}
-
-	public String getName() {
-		return name;
-	}
-
-	public String getService() {
-		return service;
-	}
-
-	public String getDescription() {
-		return description;
-	}
-
-	public long getTransactionTimestamp() {
-		return transactionTimestamp;
-	}
-
-	public long getLatestPutTimestamp() {
-		return latestPutTimestamp;
-	}
-}
@@ -2,7 +2,6 @@ package org.qortal.group;
 
 import org.qortal.account.Account;
 import org.qortal.account.PublicKeyAccount;
-import org.qortal.block.BlockChain;
 import org.qortal.controller.Controller;
 import org.qortal.crypto.Crypto;
 import org.qortal.data.group.*;
@@ -151,13 +150,8 @@ public class Group {
 	// Adminship
 
 	private GroupAdminData getAdmin(String admin) throws DataException {
-		if( repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getAdminQueryFixHeight()) {
-			return groupRepository.getAdminFaulty(this.groupData.getGroupId(), admin);
-		}
-		else {
-			return groupRepository.getAdmin(this.groupData.getGroupId(), admin);
-		}
+		return groupRepository.getAdmin(this.groupData.getGroupId(), admin);
 	}
 
 	private boolean adminExists(String admin) throws DataException {
 		return groupRepository.adminExists(this.groupData.getGroupId(), admin);
@@ -674,8 +668,8 @@ public class Group {
 	public void uninvite(GroupInviteTransactionData groupInviteTransactionData) throws DataException {
 		String invitee = groupInviteTransactionData.getInvitee();
 
-		// If member exists and the join request is present then they were added when invite matched join request
-		if (this.memberExists(invitee) && groupInviteTransactionData.getJoinReference() != null) {
+		// If member exists then they were added when invite matched join request
+		if (this.memberExists(invitee)) {
 			// Rebuild join request using cached reference to transaction that created join request.
 			this.rebuildJoinRequest(invitee, groupInviteTransactionData.getJoinReference());
 
@@ -269,7 +269,7 @@ public enum Handshake {
 	private static final int POW_DIFFICULTY_POST_131 = 2; // leading zero bits
 
 
-	private static final ExecutorService responseExecutor = Executors.newFixedThreadPool(Settings.getInstance().getNetworkPoWComputePoolSize(), new DaemonThreadFactory("Network-PoW", Settings.getInstance().getHandshakeThreadPriority()));
+	private static final ExecutorService responseExecutor = Executors.newFixedThreadPool(Settings.getInstance().getNetworkPoWComputePoolSize(), new DaemonThreadFactory("Network-PoW"));
 
 	private static final byte[] ZERO_CHALLENGE = new byte[ChallengeMessage.CHALLENGE_LENGTH];
 
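On master the handshake PoW pool also receives a configurable thread priority via `DaemonThreadFactory("Network-PoW", Settings.getInstance().getHandshakeThreadPriority())`. The factory itself is not part of this hunk; the following is only a sketch of what a `(name, priority)` daemon thread factory typically looks like, not the project's actual implementation:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a ThreadFactory producing named daemon threads with a fixed priority,
// matching the (name, priority) constructor shape used in the hunk above.
public class DaemonThreadFactory implements ThreadFactory {
    private final String name;
    private final int priority;
    private final AtomicInteger counter = new AtomicInteger(1);

    public DaemonThreadFactory(String name, int priority) {
        this.name = name;
        this.priority = priority;
    }

    @Override
    public Thread newThread(Runnable runnable) {
        Thread thread = new Thread(runnable, this.name + "-" + counter.getAndIncrement());
        thread.setDaemon(true);            // pool threads must not keep the JVM alive
        thread.setPriority(this.priority); // e.g. the handshake thread priority from settings on master
        return thread;
    }
}
```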
@@ -53,7 +53,7 @@ public class Network {
 	/**
 	 * How long between informational broadcasts to all connected peers, in milliseconds.
 	 */
-	private static final long BROADCAST_INTERVAL = 30 * 1000L; // ms
+	private static final long BROADCAST_INTERVAL = 60 * 1000L; // ms
 	/**
 	 * Maximum time since last successful connection for peer info to be propagated, in milliseconds.
 	 */
@@ -83,12 +83,12 @@ public class Network {
 			"node6.qortalnodes.live", "node7.qortalnodes.live", "node8.qortalnodes.live"
 	};
 
-	private static final long NETWORK_EPC_KEEPALIVE = 5L; // seconds
+	private static final long NETWORK_EPC_KEEPALIVE = 10L; // seconds
 
 	public static final int MAX_SIGNATURES_PER_REPLY = 500;
 	public static final int MAX_BLOCK_SUMMARIES_PER_REPLY = 500;
 
-	private static final long DISCONNECTION_CHECK_INTERVAL = 20 * 1000L; // milliseconds
+	private static final long DISCONNECTION_CHECK_INTERVAL = 10 * 1000L; // milliseconds
 
 	private static final int BROADCAST_CHAIN_TIP_DEPTH = 7; // Just enough to fill a SINGLE TCP packet (~1440 bytes)
 
@@ -164,11 +164,11 @@ public class Network {
 		maxPeers = Settings.getInstance().getMaxPeers();
 
 		// We'll use a cached thread pool but with more aggressive timeout.
-		ExecutorService networkExecutor = new ThreadPoolExecutor(2,
+		ExecutorService networkExecutor = new ThreadPoolExecutor(1,
 				Settings.getInstance().getMaxNetworkThreadPoolSize(),
 				NETWORK_EPC_KEEPALIVE, TimeUnit.SECONDS,
 				new SynchronousQueue<Runnable>(),
-				new NamedThreadFactory("Network-EPC", Settings.getInstance().getNetworkThreadPriority()));
+				new NamedThreadFactory("Network-EPC"));
 		networkEPC = new NetworkProcessor(networkExecutor);
 	}
 
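The "cached thread pool but with more aggressive timeout" comment in the `@@ -164,11 +164,11 @@` hunk refers to pairing a `SynchronousQueue` (no task buffering) with a bounded maximum pool size and a short keep-alive, so idle Network-EPC threads are reclaimed quickly instead of lingering for the 60-second default of `Executors.newCachedThreadPool()`. A standalone sketch of that configuration, with illustrative sizes rather than Qortal's settings values:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AggressiveCachedPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // SynchronousQueue hands each task straight to a thread (spawning one up to the maximum),
        // and the short keep-alive reclaims idle non-core threads quickly - the same shape as Network-EPC.
        ExecutorService pool = new ThreadPoolExecutor(
                2,                          // core threads kept warm (illustrative)
                32,                         // hard cap on concurrent tasks (illustrative)
                10L, TimeUnit.SECONDS,      // aggressive idle timeout
                new SynchronousQueue<>());  // no queueing: submit either runs now or spawns a thread

        pool.execute(() -> System.out.println("ran on " + Thread.currentThread().getName()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```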
@@ -4,15 +4,10 @@ import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
 import org.qortal.network.Network;
 import org.qortal.network.Peer;
-import org.qortal.utils.DaemonThreadFactory;
 import org.qortal.utils.ExecuteProduceConsume.Task;
 
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-
 public class PeerConnectTask implements Task {
 	private static final Logger LOGGER = LogManager.getLogger(PeerConnectTask.class);
-	private static final ExecutorService connectionExecutor = Executors.newCachedThreadPool(new DaemonThreadFactory(8));
 
 	private final Peer peer;
 	private final String name;
@@ -29,24 +24,6 @@ public class PeerConnectTask {
 
 	@Override
 	public void perform() throws InterruptedException {
-		// Submit connection task to a dedicated thread pool for non-blocking I/O
-		connectionExecutor.submit(() -> {
-			try {
-				connectPeerAsync(peer);
-			} catch (InterruptedException e) {
-				LOGGER.error("Connection attempt interrupted for peer {}", peer, e);
-				Thread.currentThread().interrupt(); // Reset interrupt flag
-			}
-		});
-	}
-
-	private void connectPeerAsync(Peer peer) throws InterruptedException {
-		// Perform peer connection in a separate thread to avoid blocking main task execution
-		try {
-			Network.getInstance().connectPeer(peer);
-			LOGGER.trace("Successfully connected to peer {}", peer);
-		} catch (Exception e) {
-			LOGGER.error("Error connecting to peer {}", peer, e);
-		}
+		Network.getInstance().connectPeer(peer);
 	}
 }
@@ -76,7 +76,7 @@ public interface ATRepository {
 	 * Although <tt>expectedValue</tt>, if provided, is natively an unsigned long,
 	 * the data segment comparison is done via unsigned hex string.
 	 */
-	public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, byte[] buyerPublicKey, byte[] sellerPublicKey, Boolean isFinished,
+	public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, Boolean isFinished,
 			Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
 			Integer limit, Integer offset, Boolean reverse) throws DataException;
 
@@ -27,10 +27,6 @@ public interface ArbitraryRepository {
 
 	public List<ArbitraryTransactionData> getArbitraryTransactions(String name, Service service, String identifier, long since) throws DataException;
 
-	List<ArbitraryTransactionData> getLatestArbitraryTransactions() throws DataException;
-
-	List<ArbitraryTransactionData> getLatestArbitraryTransactionsByName(String name) throws DataException;
-
 	public ArbitraryTransactionData getInitialTransaction(String name, Service service, Method method, String identifier) throws DataException;
 
 	public ArbitraryTransactionData getLatestTransaction(String name, Service service, Method method, String identifier) throws DataException;
@@ -46,7 +42,7 @@ public interface ArbitraryRepository {
 
 	public List<ArbitraryResourceData> getArbitraryResources(Service service, String identifier, List<String> names, boolean defaultResource, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Integer limit, Integer offset, Boolean reverse) throws DataException;
 
-	public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, List<String> keywords, boolean prefixOnly, List<String> namesFilter, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException;
+	public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, boolean prefixOnly, List<String> namesFilter, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked, Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException;
 
 	List<ArbitraryResourceData> searchArbitraryResourcesSimple(
 			Service service,
@@ -153,16 +153,13 @@ public class BlockArchiveWriter {
 		int i = 0;
 		while (headerBytes.size() + bytes.size() < this.fileSizeTarget) {
 
-			// pause, since this can be a long process and other processes need to execute
-			Thread.sleep(Settings.getInstance().getArchivingPause());
-
 			if (Controller.isStopping()) {
 				return BlockArchiveWriteResult.STOPPING;
 			}
-
-			// wait until the Synchronizer stops
-			if( Synchronizer.getInstance().isSynchronizing() )
+			if (Synchronizer.getInstance().isSynchronizing()) {
+				// wait until the Synchronizer stops
+				Thread.sleep(1000L);
 				continue;
+			}
 
 			int currentHeight = startHeight + i;
 			if (currentHeight > endHeight) {
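The archiving loop differs between the two sides above: v4.6.0 sleeps a fixed second only while the Synchronizer is busy, whereas master pauses every pass for a configurable `getArchivingPause()` and simply continues when synchronization is in progress. A self-contained sketch of that master-side throttling shape, with the suppliers standing in for `Controller`/`Synchronizer`/`Settings` (illustrative names, not the real API):

```java
import java.util.function.BooleanSupplier;

// Sketch of the master-branch throttling shape: pause every pass, stop on shutdown,
// and yield to the synchronizer instead of blocking on it.
public final class ArchiveThrottleSketch {
    public static void archive(BooleanSupplier isStopping, BooleanSupplier isSynchronizing,
                               long pauseMillis, Runnable archiveOneBlock, int blockCount) throws InterruptedException {
        for (int archived = 0; archived < blockCount; ) {
            Thread.sleep(pauseMillis);           // cooperative pause so other subsystems get CPU and DB time

            if (isStopping.getAsBoolean())
                return;                          // shutting down: bail out promptly

            if (isSynchronizing.getAsBoolean())
                continue;                        // defer to the synchronizer, re-check on the next pass

            archiveOneBlock.run();
            archived++;
        }
    }
}
```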
@@ -22,6 +22,6 @@ public interface ChatRepository {
 
 	public ChatMessage toChatMessage(ChatTransactionData chatTransactionData, Encoding encoding) throws DataException;
 
-	public ActiveChats getActiveChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException;
+	public ActiveChats getActiveChats(String address, Encoding encoding) throws DataException;
 
 }
@@ -48,8 +48,6 @@ public interface GroupRepository {
 
 	// Group Admins
 
-	public GroupAdminData getAdminFaulty(int groupId, String address) throws DataException;
-
 	public GroupAdminData getAdmin(int groupId, String address) throws DataException;
 
 	public boolean adminExists(int groupId, String address) throws DataException;
@@ -1,11 +1,9 @@
 package org.qortal.repository.hsqldb;
 
 import com.google.common.primitives.Longs;
-
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
 import org.qortal.controller.Controller;
-import org.qortal.crypto.Crypto;
 import org.qortal.data.at.ATData;
 import org.qortal.data.at.ATStateData;
 import org.qortal.repository.ATRepository;
@@ -18,8 +16,6 @@ import java.util.ArrayList;
 import java.util.List;
 import java.util.Set;
 
-import org.qortal.data.account.AccountData;
-
 public class HSQLDBATRepository implements ATRepository {
 
 	private static final Logger LOGGER = LogManager.getLogger(HSQLDBATRepository.class);
@@ -404,7 +400,7 @@ public class HSQLDBATRepository implements ATRepository {
 	}
 
 	@Override
-	public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, byte[] buyerPublicKey, byte[] sellerPublicKey, Boolean isFinished,
+	public List<ATStateData> getMatchingFinalATStates(byte[] codeHash, Boolean isFinished,
 			Integer dataByteOffset, Long expectedValue, Integer minimumFinalHeight,
 			Integer limit, Integer offset, Boolean reverse) throws DataException {
 		StringBuilder sql = new StringBuilder(1024);
@@ -425,14 +421,10 @@ public class HSQLDBATRepository implements ATRepository {
 
 		// Order by AT_address and height to use compound primary key as index
 		// Both must be the same direction (DESC) also
-		sql.append("ORDER BY ATStates.height DESC LIMIT 1) AS FinalATStates ");
-
-		// Optional JOIN with ATTRANSACTIONS for buyerAddress
-		if (buyerPublicKey != null && buyerPublicKey.length > 0) {
-			sql.append("JOIN ATTRANSACTIONS tx ON tx.at_address = ATs.AT_address ");
-		}
-
-		sql.append("WHERE ATs.code_hash = ? ");
+		sql.append("ORDER BY ATStates.AT_address DESC, ATStates.height DESC "
+				+ "LIMIT 1 "
+				+ ") AS FinalATStates "
+				+ "WHERE code_hash = ? ");
 		bindParams.add(codeHash);
 
 		if (isFinished != null) {
@@ -451,20 +443,6 @@ public class HSQLDBATRepository implements ATRepository {
 			bindParams.add(rawExpectedValue);
 		}
 
-		if (buyerPublicKey != null && buyerPublicKey.length > 0 ) {
-			// the buyer must be the recipient of the transaction and not the creator of the AT
-			sql.append("AND tx.recipient = ? AND ATs.creator != ? ");
-
-			bindParams.add(Crypto.toAddress(buyerPublicKey));
-			bindParams.add(buyerPublicKey);
-		}
-
-
-		if (sellerPublicKey != null && sellerPublicKey.length > 0) {
-			sql.append("AND ATs.creator = ? ");
-			bindParams.add(sellerPublicKey);
-		}
-
 		sql.append(" ORDER BY FinalATStates.height ");
 		if (reverse != null && reverse)
 			sql.append("DESC");
@@ -505,7 +483,7 @@ public class HSQLDBATRepository implements ATRepository {
 			Integer dataByteOffset, Long expectedValue,
 			int minimumCount, int maximumCount, long minimumPeriod) throws DataException {
 		// We need most recent entry first so we can use its timestamp to slice further results
-		List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, null, null, isFinished,
+		List<ATStateData> mostRecentStates = this.getMatchingFinalATStates(codeHash, isFinished,
 				dataByteOffset, expectedValue, null,
 				1, 0, true);
 
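On the master side of these hunks, `getMatchingFinalATStates()` gains optional `buyerPublicKey` and `sellerPublicKey` filters; callers that do not need them pass `null`, as the `@@ -505,7 +483,7 @@` hunk itself shows. A usage sketch of the master-branch signature (the `repository` and `codeHash` variables and the `getATRepository()` accessor are assumptions for this illustration, not part of the hunks):

```java
// Usage sketch only: nulls for buyer/seller reproduce the old v4.6.0 behaviour,
// while supplying keys narrows the SQL with the JOIN/WHERE clauses shown above.
List<ATStateData> mostRecent = repository.getATRepository().getMatchingFinalATStates(
        codeHash,
        null,              // buyerPublicKey: no buyer filter
        null,              // sellerPublicKey: no seller filter
        Boolean.TRUE,      // isFinished
        null, null, null,  // dataByteOffset, expectedValue, minimumFinalHeight
        1, 0, true);       // limit, offset, reverse: newest first
```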
@@ -1215,7 +1215,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		sponseeSql.append(")");
 
 		// Create a new array to hold both
-		Object[] combinedArray = new Object[realRewardShareRecipients.length + 1];
+		String[] combinedArray = new String[realRewardShareRecipients.length + 1];
 
 		// Add the single string to the first position
 		combinedArray[0] = account;
@@ -1439,7 +1439,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		sql.append(String.join(", ", Collections.nCopies(addressCount, "?")));
 		sql.append(") ");
 		sql.append("AND a.account = tx.recipient AND a.public_key != ats.creator AND asset_id = 0 ");
-		Object[] sponsees = addresses.toArray(new Object[addressCount]);
+		String[] sponsees = addresses.toArray(new String[addressCount]);
 		ResultSet buySellResultSet = this.repository.checkedExecute(sql.toString(), sponsees);
 
 		return buySellResultSet;
@@ -1456,7 +1456,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		sql.append(String.join(", ", Collections.nCopies(addressCount, "?")));
 		sql.append(") ");
 		sql.append("AND a.account != tx.recipient AND asset_id = 0 ");
-		Object[] sponsees = addresses.toArray(new Object[addressCount]);
+		String[] sponsees = addresses.toArray(new String[addressCount]);
 
 		return this.repository.checkedExecute(sql.toString(), sponsees);
 	}
@@ -1490,7 +1490,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		txTypeTotalsSql.append(") and type in (10, 12, 40) ");
 		txTypeTotalsSql.append("group by type order by type");
 
-		Object[] sponsees = sponseeAddresses.toArray(new Object[sponseeCount]);
+		String[] sponsees = sponseeAddresses.toArray(new String[sponseeCount]);
 		ResultSet txTypeResultSet = this.repository.checkedExecute(txTypeTotalsSql.toString(), sponsees);
 		return txTypeResultSet;
 	}
@@ -1502,7 +1502,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		avgBalanceSql.append(String.join(", ", Collections.nCopies(sponseeCount, "?")));
 		avgBalanceSql.append(") and ASSET_ID = 0");
 
-		Object[] sponsees = sponseeAddresses.toArray(new Object[sponseeCount]);
+		String[] sponsees = sponseeAddresses.toArray(new String[sponseeCount]);
 		return this.repository.checkedExecute(avgBalanceSql.toString(), sponsees);
 	}
 
@@ -1538,7 +1538,7 @@ public class HSQLDBAccountRepository implements AccountRepository {
 		namesSql.append(String.join(", ", Collections.nCopies(sponseeCount, "?")));
 		namesSql.append(")");
 
-		Object[] sponsees = sponseeAddresses.toArray(new Object[sponseeCount]);
+		String[] sponsees = sponseeAddresses.toArray(new String[sponseeCount]);
 		ResultSet namesResultSet = this.repository.checkedExecute(namesSql.toString(), sponsees);
 		return namesResultSet;
 	}
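The repeated one-line changes above widen the JDBC bind arrays from `String[]` (v4.6.0) to `Object[]` (master). A toy illustration of why that is the more general choice when a query needs to bind non-string values alongside addresses; the values here are placeholders, not real data:

```java
import java.util.Arrays;
import java.util.List;

public class BindParamsExample {
    public static void main(String[] args) {
        List<String> addresses = List.of("QexampleAddress1", "QexampleAddress2"); // placeholder addresses

        // An Object[] can carry strings and non-strings in one bind array; a String[] cannot.
        Object[] bindParams = new Object[addresses.size() + 1];
        for (int i = 0; i < addresses.size(); i++)
            bindParams[i] = addresses.get(i);
        bindParams[addresses.size()] = 0L; // e.g. an asset ID appended to the same parameter list

        System.out.println(Arrays.toString(bindParams));
        // repository.checkedExecute(sql, bindParams); // as in the hunks above (sketch, not real scope)
    }
}
```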
@@ -7,7 +7,6 @@ import org.qortal.arbitrary.ArbitraryDataFile;
 import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
 import org.qortal.arbitrary.misc.Category;
 import org.qortal.arbitrary.misc.Service;
-import org.qortal.data.arbitrary.ArbitraryResourceCache;
 import org.qortal.data.arbitrary.ArbitraryResourceData;
 import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
 import org.qortal.data.arbitrary.ArbitraryResourceStatus;
@@ -19,7 +18,6 @@ import org.qortal.data.transaction.BaseTransactionData;
 import org.qortal.data.transaction.TransactionData;
 import org.qortal.repository.ArbitraryRepository;
 import org.qortal.repository.DataException;
-import org.qortal.settings.Settings;
 import org.qortal.transaction.ArbitraryTransaction;
 import org.qortal.transaction.Transaction.ApprovalStatus;
 import org.qortal.utils.Base58;
@@ -28,10 +26,8 @@ import org.qortal.utils.ListUtils;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.List;
 import java.util.Objects;
-import java.util.Optional;
 
 public class HSQLDBArbitraryRepository implements ArbitraryRepository {
 
@@ -227,144 +223,6 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
 		}
 	}
 
-	@Override
-	public List<ArbitraryTransactionData> getLatestArbitraryTransactions() throws DataException {
-		String sql = "SELECT type, reference, signature, creator, created_when, fee, " +
-				"tx_group_id, block_height, approval_status, approval_height, " +
-				"version, nonce, service, size, is_data_raw, data, metadata_hash, " +
-				"name, identifier, update_method, secret, compression FROM ArbitraryTransactions " +
-				"JOIN Transactions USING (signature) " +
-				"WHERE name IS NOT NULL " +
-				"ORDER BY created_when DESC";
-		List<ArbitraryTransactionData> arbitraryTransactionData = new ArrayList<>();
-
-		try (ResultSet resultSet = this.repository.checkedExecute(sql)) {
-			if (resultSet == null)
-				return new ArrayList<>(0);
-
-			do {
-				byte[] reference = resultSet.getBytes(2);
-				byte[] signature = resultSet.getBytes(3);
-				byte[] creatorPublicKey = resultSet.getBytes(4);
-				long timestamp = resultSet.getLong(5);
-
-				Long fee = resultSet.getLong(6);
-				if (fee == 0 && resultSet.wasNull())
-					fee = null;
-
-				int txGroupId = resultSet.getInt(7);
-
-				Integer blockHeight = resultSet.getInt(8);
-				if (blockHeight == 0 && resultSet.wasNull())
-					blockHeight = null;
-
-				ApprovalStatus approvalStatus = ApprovalStatus.valueOf(resultSet.getInt(9));
-				Integer approvalHeight = resultSet.getInt(10);
-				if (approvalHeight == 0 && resultSet.wasNull())
-					approvalHeight = null;
-
-				BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, approvalStatus, blockHeight, approvalHeight, signature);
-
-				int version = resultSet.getInt(11);
-				int nonce = resultSet.getInt(12);
-				int serviceInt = resultSet.getInt(13);
-				int size = resultSet.getInt(14);
-				boolean isDataRaw = resultSet.getBoolean(15); // NOT NULL, so no null to false
-				DataType dataType = isDataRaw ? DataType.RAW_DATA : DataType.DATA_HASH;
-				byte[] data = resultSet.getBytes(16);
-				byte[] metadataHash = resultSet.getBytes(17);
-				String nameResult = resultSet.getString(18);
-				String identifierResult = resultSet.getString(19);
-				Method method = Method.valueOf(resultSet.getInt(20));
-				byte[] secret = resultSet.getBytes(21);
-				Compression compression = Compression.valueOf(resultSet.getInt(22));
-				// FUTURE: get payments from signature if needed. Avoiding for now to reduce database calls.
-
-				ArbitraryTransactionData transactionData = new ArbitraryTransactionData(baseTransactionData,
-						version, serviceInt, nonce, size, nameResult, identifierResult, method, secret,
-						compression, data, dataType, metadataHash, null);
-
-				arbitraryTransactionData.add(transactionData);
-			} while (resultSet.next());
-
-			return arbitraryTransactionData;
-		} catch (SQLException e) {
-			throw new DataException("Unable to fetch arbitrary transactions from repository", e);
-		} catch (Exception e) {
-			LOGGER.error(e.getMessage(), e);
-			return new ArrayList<>(0);
-		}
-	}
-
-	@Override
-	public List<ArbitraryTransactionData> getLatestArbitraryTransactionsByName( String name ) throws DataException {
-		String sql = "SELECT type, reference, signature, creator, created_when, fee, " +
-				"tx_group_id, block_height, approval_status, approval_height, " +
-				"version, nonce, service, size, is_data_raw, data, metadata_hash, " +
-				"name, identifier, update_method, secret, compression FROM ArbitraryTransactions " +
-				"JOIN Transactions USING (signature) " +
-				"WHERE name = ? " +
-				"ORDER BY created_when DESC";
-		List<ArbitraryTransactionData> arbitraryTransactionData = new ArrayList<>();
-
-		try (ResultSet resultSet = this.repository.checkedExecute(sql, name)) {
-			if (resultSet == null)
-				return new ArrayList<>(0);
-
-			do {
-				byte[] reference = resultSet.getBytes(2);
-				byte[] signature = resultSet.getBytes(3);
-				byte[] creatorPublicKey = resultSet.getBytes(4);
-				long timestamp = resultSet.getLong(5);
-
-				Long fee = resultSet.getLong(6);
-				if (fee == 0 && resultSet.wasNull())
-					fee = null;
-
-				int txGroupId = resultSet.getInt(7);
-
-				Integer blockHeight = resultSet.getInt(8);
-				if (blockHeight == 0 && resultSet.wasNull())
-					blockHeight = null;
-
-				ApprovalStatus approvalStatus = ApprovalStatus.valueOf(resultSet.getInt(9));
-				Integer approvalHeight = resultSet.getInt(10);
-				if (approvalHeight == 0 && resultSet.wasNull())
-					approvalHeight = null;
-
-				BaseTransactionData baseTransactionData = new BaseTransactionData(timestamp, txGroupId, reference, creatorPublicKey, fee, approvalStatus, blockHeight, approvalHeight, signature);
-
-				int version = resultSet.getInt(11);
-				int nonce = resultSet.getInt(12);
-				int serviceInt = resultSet.getInt(13);
-				int size = resultSet.getInt(14);
-				boolean isDataRaw = resultSet.getBoolean(15); // NOT NULL, so no null to false
-				DataType dataType = isDataRaw ? DataType.RAW_DATA : DataType.DATA_HASH;
-				byte[] data = resultSet.getBytes(16);
-				byte[] metadataHash = resultSet.getBytes(17);
-				String nameResult = resultSet.getString(18);
-				String identifierResult = resultSet.getString(19);
-				Method method = Method.valueOf(resultSet.getInt(20));
-				byte[] secret = resultSet.getBytes(21);
-				Compression compression = Compression.valueOf(resultSet.getInt(22));
-				// FUTURE: get payments from signature if needed. Avoiding for now to reduce database calls.
-
-				ArbitraryTransactionData transactionData = new ArbitraryTransactionData(baseTransactionData,
-						version, serviceInt, nonce, size, nameResult, identifierResult, method, secret,
-						compression, data, dataType, metadataHash, null);
-
-				arbitraryTransactionData.add(transactionData);
-			} while (resultSet.next());
-
-			return arbitraryTransactionData;
-		} catch (SQLException e) {
-			throw new DataException("Unable to fetch arbitrary transactions from repository", e);
-		} catch (Exception e) {
-			LOGGER.error(e.getMessage(), e);
-			return new ArrayList<>(0);
-		}
-	}
-
 	private ArbitraryTransactionData getSingleTransaction(String name, Service service, Method method, String identifier, boolean firstNotLast) throws DataException {
 		if (name == null || service == null) {
 			// Required fields
@@ -862,54 +720,9 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
 	}
 
 	@Override
-	public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, List<String> keywords, boolean prefixOnly,
+	public List<ArbitraryResourceData> searchArbitraryResources(Service service, String query, String identifier, List<String> names, String title, String description, boolean prefixOnly,
 			List<String> exactMatchNames, boolean defaultResource, SearchMode mode, Integer minLevel, Boolean followedOnly, Boolean excludeBlocked,
 			Boolean includeMetadata, Boolean includeStatus, Long before, Long after, Integer limit, Integer offset, Boolean reverse) throws DataException {
 
-		if(Settings.getInstance().isDbCacheEnabled()) {
-			List<ArbitraryResourceData> list
-				= HSQLDBCacheUtils.callCache(
-					ArbitraryResourceCache.getInstance(),
-					service, query, identifier, names, title, description, prefixOnly, exactMatchNames,
-					defaultResource, mode, minLevel, followedOnly, excludeBlocked, includeMetadata, includeStatus,
-					before, after, limit, offset, reverse);
-
-			if( !list.isEmpty() ) {
-				List<ArbitraryResourceData> results
-					= HSQLDBCacheUtils.filterList(
-						list,
-						ArbitraryResourceCache.getInstance().getLevelByName(),
-						Optional.ofNullable(mode),
-						Optional.ofNullable(service),
-						Optional.ofNullable(query),
-						Optional.ofNullable(identifier),
-						Optional.ofNullable(names),
-						Optional.ofNullable(title),
-						Optional.ofNullable(description),
-						prefixOnly,
-						Optional.ofNullable(exactMatchNames),
-						Optional.ofNullable(keywords),
-						defaultResource,
-						Optional.ofNullable(minLevel),
-						Optional.ofNullable(() -> ListUtils.followedNames()),
-						Optional.ofNullable(ListUtils::blockedNames),
-						Optional.ofNullable(includeMetadata),
-						Optional.ofNullable(includeStatus),
-						Optional.ofNullable(before),
-						Optional.ofNullable(after),
-						Optional.ofNullable(limit),
-						Optional.ofNullable(offset),
-						Optional.ofNullable(reverse)
-				);
-
-				return results;
-			}
-			else {
-				LOGGER.info("Db Enabled Cache has zero candidates.");
-			}
-		}
-
 		StringBuilder sql = new StringBuilder(512);
 		List<Object> bindParams = new ArrayList<>();
 
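The block removed above is master's cache-first path: when the DB cache is enabled and has candidates, results are filtered in memory via `HSQLDBCacheUtils.callCache`/`filterList`, and only otherwise does the method fall through to the SQL built below. A generic, self-contained sketch of that pattern (the class and names here are illustrative, not the project's API):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.stream.Collectors;

// Cache-first search: serve filtered results from an in-memory cache when it has candidates,
// otherwise fall back to the authoritative (slower) SQL query.
public final class CacheFirstSearch<T> {
    private final Supplier<List<T>> cacheLookup; // cheap, possibly empty
    private final Supplier<List<T>> sqlQuery;    // authoritative fallback

    public CacheFirstSearch(Supplier<List<T>> cacheLookup, Supplier<List<T>> sqlQuery) {
        this.cacheLookup = cacheLookup;
        this.sqlQuery = sqlQuery;
    }

    public List<T> search(Predicate<T> filter) {
        List<T> candidates = cacheLookup.get();
        if (!candidates.isEmpty())
            return candidates.stream().filter(filter).collect(Collectors.toList()); // cache hit: filter in memory

        return sqlQuery.get(); // cache empty: go to the database
    }
}
```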
@@ -996,26 +809,6 @@ public class HSQLDBArbitraryRepository implements ArbitraryRepository {
 			bindParams.add(queryWildcard);
 		}
 
-		if (keywords != null && !keywords.isEmpty()) {
-			List<String> searchKeywords = new ArrayList<>(keywords);
-
-			List<String> conditions = new ArrayList<>();
-			List<String> bindValues = new ArrayList<>();
-
-			for (int i = 0; i < searchKeywords.size(); i++) {
-				conditions.add("LOWER(description) LIKE ?");
-				bindValues.add("%" + searchKeywords.get(i).trim().toLowerCase() + "%");
-			}
-
-			String finalCondition = String.join(" OR ", conditions);
-			sql.append(" AND (").append(finalCondition).append(")");
-
-			bindParams.addAll(bindValues);
-		}
-
-
-
 		// Handle name searches
 		if (names != null && !names.isEmpty()) {
 			sql.append(" AND (");
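The keyword handling removed above appends one case-insensitive `LIKE` per keyword, OR'd together, with the patterns passed as bind parameters rather than concatenated into the SQL. A standalone sketch that prints the clause and bind values the same approach produces for two example keywords:

```java
import java.util.ArrayList;
import java.util.List;

public final class KeywordClauseSketch {
    public static void main(String[] args) {
        List<String> keywords = List.of("music", "video"); // example input only

        List<String> conditions = new ArrayList<>();
        List<Object> bindParams = new ArrayList<>();
        for (String keyword : keywords) {
            conditions.add("LOWER(description) LIKE ?");
            bindParams.add("%" + keyword.trim().toLowerCase() + "%");
        }

        String clause = " AND (" + String.join(" OR ", conditions) + ")";
        System.out.println(clause);     //  AND (LOWER(description) LIKE ? OR LOWER(description) LIKE ?)
        System.out.println(bindParams); // [%music%, %video%]
    }
}
```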
@@ -1,863 +0,0 @@
-package org.qortal.repository.hsqldb;
-
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.qortal.api.SearchMode;
-import org.qortal.api.resource.TransactionsResource;
-import org.qortal.arbitrary.misc.Category;
-import org.qortal.arbitrary.misc.Service;
-import org.qortal.controller.Controller;
-import org.qortal.data.account.AccountBalanceData;
-import org.qortal.data.account.AddressAmountData;
-import org.qortal.data.account.BlockHeightRange;
-import org.qortal.data.account.BlockHeightRangeAddressAmounts;
-import org.qortal.data.arbitrary.ArbitraryResourceCache;
-import org.qortal.data.arbitrary.ArbitraryResourceData;
-import org.qortal.data.arbitrary.ArbitraryResourceMetadata;
-import org.qortal.data.arbitrary.ArbitraryResourceStatus;
-import org.qortal.data.transaction.TransactionData;
-import org.qortal.repository.DataException;
-import org.qortal.repository.Repository;
-import org.qortal.repository.RepositoryManager;
-import org.qortal.settings.Settings;
-import org.qortal.utils.BalanceRecorderUtils;
-
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.SQLNonTransientConnectionException;
-import java.sql.Statement;
-import java.time.format.DateTimeFormatter;
-import java.util.AbstractMap;
-import java.util.ArrayList;
-import java.util.Comparator;
-import java.util.List;
-import java.util.Map;
-import java.util.Objects;
-import java.util.Optional;
-import java.util.Timer;
-import java.util.TimerTask;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.CopyOnWriteArrayList;
-import java.util.function.Function;
-import java.util.function.Predicate;
-import java.util.function.Supplier;
-import java.util.stream.Collectors;
-import java.util.stream.Stream;
-
-import static org.qortal.api.SearchMode.LATEST;
-
-public class HSQLDBCacheUtils {
-
-	private static final Logger LOGGER = LogManager.getLogger(HSQLDBCacheUtils.class);
-	private static final DateTimeFormatter TIME_FORMATTER = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm");
-	private static final Comparator<? super ArbitraryResourceData> CREATED_WHEN_COMPARATOR = new Comparator<ArbitraryResourceData>() {
-		@Override
-		public int compare(ArbitraryResourceData data1, ArbitraryResourceData data2) {
-
-			Long a = data1.created;
-			Long b = data2.created;
-
-			return Long.compare(a != null ? a : Long.MIN_VALUE, b != null ? b : Long.MIN_VALUE);
-		}
-	};
-	private static final String DEFAULT_IDENTIFIER = "default";
-	private static final int ZERO = 0;
-	public static final String DB_CACHE_TIMER = "DB Cache Timer";
-	public static final String DB_CACHE_TIMER_TASK = "DB Cache Timer Task";
-	public static final String BALANCE_RECORDER_TIMER = "Balance Recorder Timer";
-	public static final String BALANCE_RECORDER_TIMER_TASK = "Balance Recorder Timer Task";
-
-	/**
-	 *
-	 * @param cache
-	 * @param service the service to filter
-	 * @param query query for name, identifier, title or description match
-	 * @param identifier the identifier to match
-	 * @param names the names to match, ignored if there are exact names
-	 * @param title the title to match for
-	 * @param description the description to match for
-	 * @param prefixOnly true to match on prefix only, false for match anywhere in string
-	 * @param exactMatchNames names to match exactly, overrides names
-	 * @param defaultResource true to query filter identifier on the default identifier and use the query terms to match candidates names only
-	 * @param mode LATEST or ALL
-	 * @param minLevel the minimum account level for resource creators
-	 * @param includeOnly names to retain, exclude all others
-	 * @param exclude names to exclude, retain all others
-	 * @param includeMetadata true to include resource metadata in the results, false to exclude metadata
-	 * @param includeStatus true to include resource status in the results, false to exclude status
-	 * @param before the latest creation timestamp for any candidate
-	 * @param after the earliest creation timestamp for any candidate
-	 * @param limit the maximum number of resource results to return
-	 * @param offset the number of resource results to skip after the results have been retained, filtered and sorted
-	 * @param reverse true to reverse the sort order, false to order in chronological order
-	 *
-	 * @return the resource results
-	 */
-	public static List<ArbitraryResourceData> callCache(
-			ArbitraryResourceCache cache,
-			Service service,
-			String query,
-			String identifier,
-			List<String> names,
-			String title,
-			String description,
-			boolean prefixOnly,
-			List<String> exactMatchNames,
-			boolean defaultResource,
-			SearchMode mode,
-			Integer minLevel,
-			Boolean followedOnly,
-			Boolean excludeBlocked,
-			Boolean includeMetadata,
-			Boolean includeStatus,
-			Long before,
-			Long after,
-			Integer limit,
-			Integer offset,
-			Boolean reverse) {
-
-		List<ArbitraryResourceData> candidates = new ArrayList<>();
-
-		// cache all results for requested service
-		if( service != null ) {
-			candidates.addAll(cache.getDataByService().getOrDefault(service.value, new ArrayList<>(0)));
-		}
-		// if no requested, then empty cache
-
-		return candidates;
-	}
-
-	/**
-	 * Filter candidates
-	 *
-	 * @param candidates the candidates, they may be preprocessed
-	 * @param levelByName name -> level map
-	 * @param mode LATEST or ALL
-	 * @param service the service to filter
-	 * @param query query for name, identifier, title or description match
-	 * @param identifier the identifier to match
-	 * @param names the names to match, ignored if there are exact names
-	 * @param title the title to match for
-	 * @param description the description to match for
-	 * @param prefixOnly true to match on prefix only, false for match anywhere in string
-	 * @param exactMatchNames names to match exactly, overrides names
-	 * @param defaultResource true to query filter identifier on the default identifier and use the query terms to match candidates names only
-	 * @param minLevel the minimum account level for resource creators
-	 * @param includeOnly names to retain, exclude all others
-	 * @param exclude names to exclude, retain all others
-	 * @param includeMetadata true to include resource metadata in the results, false to exclude metadata
-	 * @param includeStatus true to include resource status in the results, false to exclude status
-	 * @param before the latest creation timestamp for any candidate
-	 * @param after the earliest creation timestamp for any candidate
-	 * @param limit the maximum number of resource results to return
-	 * @param offset the number of resource results to skip after the results have been retained, filtered and sorted
-	 * @param reverse true to reverse the sort order, false to order in chronological order
-	 *
-	 * @return the resource results
-	 */
-	public static List<ArbitraryResourceData> filterList(
-			List<ArbitraryResourceData> candidates,
-			Map<String, Integer> levelByName,
-			Optional<SearchMode> mode,
-			Optional<Service> service,
-			Optional<String> query,
-			Optional<String> identifier,
-			Optional<List<String>> names,
-			Optional<String> title,
-			Optional<String> description,
-			boolean prefixOnly,
-			Optional<List<String>> exactMatchNames,
-			Optional<List<String>> keywords,
-			boolean defaultResource,
-			Optional<Integer> minLevel,
-			Optional<Supplier<List<String>>> includeOnly,
-			Optional<Supplier<List<String>>> exclude,
-			Optional<Boolean> includeMetadata,
-			Optional<Boolean> includeStatus,
-			Optional<Long> before,
-			Optional<Long> after,
-			Optional<Integer> limit,
-			Optional<Integer> offset,
-			Optional<Boolean> reverse) {
-
-		// retain only candidates with names
-		Stream<ArbitraryResourceData> stream = candidates.stream().filter(candidate -> candidate.name != null );
-
-		if(after.isPresent()) {
-			stream = stream.filter( candidate -> candidate.created > after.get().longValue() );
-		}
-
-		if(before.isPresent()) {
-			stream = stream.filter( candidate -> candidate.created < before.get().longValue() );
-		}
-
-		if(exclude.isPresent())
-			stream = stream.filter( candidate -> !exclude.get().get().contains( candidate.name ));
-
-		// filter by service
-		if( service.isPresent() )
-			stream = stream.filter(candidate -> candidate.service.equals(service.get()));
-
-		// filter by query (either identifier, name, title or description)
-		if (query.isPresent()) {
-
-			Predicate<String> predicate
-				= prefixOnly ? getPrefixPredicate(query.get()) : getContainsPredicate(query.get());
-
-			if (defaultResource) {
-				stream = stream.filter( candidate -> DEFAULT_IDENTIFIER.equals( candidate.identifier ) && predicate.test(candidate.name));
-			} else {
-				stream = stream.filter( candidate -> passQuery(predicate, candidate));
-			}
-		}
-
-		// filter for identifier, title and description
-		stream = filterTerm(identifier, data -> data.identifier, prefixOnly, stream);
-		stream = filterTerm(title, data -> data.metadata != null ? data.metadata.getTitle() : null, prefixOnly, stream);
-		stream = filterTerm(description, data -> data.metadata != null ? data.metadata.getDescription() : null, prefixOnly, stream);
-
-		// New: Filter by keywords if provided
-		if (keywords.isPresent() && !keywords.get().isEmpty()) {
-			List<String> searchKeywords = keywords.get().stream()
-					.map(String::toLowerCase)
-					.collect(Collectors.toList());
-
-			stream = stream.filter(candidate -> {
-
-				if (candidate.metadata != null && candidate.metadata.getDescription() != null) {
-					String descriptionLower = candidate.metadata.getDescription().toLowerCase();
-					return searchKeywords.stream().anyMatch(descriptionLower::contains);
-				}
-				return false;
-			});
-		}
-
-		if (keywords.isPresent() && !keywords.get().isEmpty()) {
-			List<String> searchKeywords = keywords.get().stream()
-					.map(String::toLowerCase)
-					.collect(Collectors.toList());
-
-			stream = stream.filter(candidate -> {
-				if (candidate.metadata != null && candidate.metadata.getDescription() != null) {
-					String descriptionLower = candidate.metadata.getDescription().toLowerCase();
-					return searchKeywords.stream().anyMatch(descriptionLower::contains);
-				}
-				return false;
-			});
-		}
-
-		// if exact names is set, retain resources with exact names
-		if( exactMatchNames.isPresent() && !exactMatchNames.get().isEmpty()) {
-
-			// key the data by lower case name
-			Map<String, List<ArbitraryResourceData>> dataByName
-				= stream.collect(Collectors.groupingBy(data -> data.name.toLowerCase()));
-
-			// lower the case of the exact names
-			// retain the lower case names of the data above
-			List<String> exactNamesToSearch
-				= exactMatchNames.get().stream()
-					.map(String::toLowerCase)
-					.collect(Collectors.toList());
-			exactNamesToSearch.retainAll(dataByName.keySet());
-
-			// get the data for the names retained and
-			// set them to the stream
-			stream
-				= dataByName.entrySet().stream()
-					.filter(entry -> exactNamesToSearch.contains(entry.getKey())).flatMap(entry -> entry.getValue().stream());
-		}
-		// if exact names is not set, retain resources that match
-		else if( names.isPresent() && !names.get().isEmpty() ) {
-
-			stream = retainTerms(names.get(), data -> data.name, prefixOnly, stream);
-		}
-
-		// filter for minimum account level
-		if(minLevel.isPresent())
-			stream = stream.filter( candidate -> levelByName.getOrDefault(candidate.name, 0) >= minLevel.get() );
-
-		// if latest mode or empty
-		if( LATEST.equals( mode.orElse( LATEST ) ) ) {
-
-			// Include latest item only for a name/service combination
-			stream
-				= stream.filter(candidate -> candidate.service != null && candidate.created != null ).collect(
-					Collectors.groupingBy(
-						data -> new AbstractMap.SimpleEntry<>(data.name, data.service), // name, service combination
-						Collectors.maxBy(Comparator.comparingLong(data -> data.created)) // latest data item
-					)).values().stream().filter(Optional::isPresent).map(Optional::get); // if there is a value for the group, then retain it
-		}
-
-		// sort
-		if( reverse.isPresent() && reverse.get())
-			stream = stream.sorted(CREATED_WHEN_COMPARATOR.reversed());
-		else
-			stream = stream.sorted(CREATED_WHEN_COMPARATOR);
-
-		// skip to offset
-		if( offset.isPresent() ) stream = stream.skip(offset.get());
-
-		// truncate to limit
-		if( limit.isPresent() && limit.get() > 0 ) stream = stream.limit(limit.get());
-
-		List<ArbitraryResourceData> listCopy1 = stream.collect(Collectors.toList());
-
-		List<ArbitraryResourceData> listCopy2 = new ArrayList<>(listCopy1.size());
-
-		// remove metadata from the first copy
-		if( includeMetadata.isEmpty() || !includeMetadata.get() ) {
-			for( ArbitraryResourceData data : listCopy1 ) {
-				ArbitraryResourceData copy = new ArbitraryResourceData();
-				copy.name = data.name;
-				copy.service = data.service;
-				copy.identifier = data.identifier;
-				copy.status = data.status;
-				copy.metadata = null;
-
-				copy.size = data.size;
-				copy.created = data.created;
-				copy.updated = data.updated;
-
-				listCopy2.add(copy);
-			}
-		}
-		// put the list copy 1 into the second copy
-		else {
-			listCopy2.addAll(listCopy1);
-		}
-
-		// remove status from final copy
-		if( includeStatus.isEmpty() || !includeStatus.get() ) {
-
-			List<ArbitraryResourceData> finalCopy = new ArrayList<>(listCopy2.size());
-
-			for( ArbitraryResourceData data : listCopy2 ) {
-				ArbitraryResourceData copy = new ArbitraryResourceData();
-				copy.name = data.name;
-				copy.service = data.service;
-				copy.identifier = data.identifier;
-				copy.status = null;
-				copy.metadata = data.metadata;
-
-				copy.size = data.size;
-				copy.created = data.created;
-				copy.updated = data.updated;
-
-				finalCopy.add(copy);
-			}
-
-			return finalCopy;
-		}
-		// keep status included by returning the second copy
-		else {
-			return listCopy2;
-		}
-	}
-
-	/**
-	 * Filter Terms
-	 *
-	 * @param term the term to filter
-	 * @param stringSupplier the string of interest from the resource candidates
-	 * @param prefixOnly true if prexif only, false for contains
-	 * @param stream the stream of candidates
-	 *
-	 * @return the stream that filtered the term
-	 */
-	private static Stream<ArbitraryResourceData> filterTerm(
-			Optional<String> term,
-			Function<ArbitraryResourceData,String> stringSupplier,
-			boolean prefixOnly,
-			Stream<ArbitraryResourceData> stream) {
-
-		if(term.isPresent()){
-			Predicate<String> predicate
-				= prefixOnly ? getPrefixPredicate(term.get()): getContainsPredicate(term.get());
-			stream = stream.filter(candidate -> predicate.test(stringSupplier.apply(candidate)));
-		}
-
-		return stream;
-	}
-
-	/**
-	 * Retain Terms
-	 *
-	 * Retain resources that satisfy terms given.
-	 *
-	 * @param terms the terms to retain
-	 * @param stringSupplier the string of interest from the resource candidates
-	 * @param prefixOnly true if prexif only, false for contains
-	 * @param stream the stream of candidates
-	 *
-	 * @return the stream that retained the terms
-	 */
-	private static Stream<ArbitraryResourceData> retainTerms(
-			List<String> terms,
-			Function<ArbitraryResourceData,String> stringSupplier,
-			boolean prefixOnly,
-			Stream<ArbitraryResourceData> stream) {
-
-		// collect the data to process, start the data to retain
-		List<ArbitraryResourceData> toProcess = stream.collect(Collectors.toList());
-		List<ArbitraryResourceData> toRetain = new ArrayList<>();
-
-		// for each term, get the predicate, get a new stream process and
-		// apply the predicate to each data item in the stream
-		for( String term : terms ) {
-			Predicate<String> predicate
-				= prefixOnly ? getPrefixPredicate(term) : getContainsPredicate(term);
-			toRetain.addAll(
-				toProcess.stream()
-					.filter(candidate -> predicate.test(stringSupplier.apply(candidate)))
-					.collect(Collectors.toList())
-			);
-		}
-
-		return toRetain.stream();
-	}
-
-	private static Predicate<String> getContainsPredicate(String term) {
-		return value -> value != null && value.toLowerCase().contains(term.toLowerCase());
-	}
-
-	private static Predicate<String> getPrefixPredicate(String term) {
-		return value -> value != null && value.toLowerCase().startsWith(term.toLowerCase());
-	}
-
-	/**
-	 * Pass Query
-	 *
-	 * Compare name, identifier, title and description
-	 *
-	 * @param predicate the string comparison predicate
-	 * @param candidate the candiddte to compare
-	 *
-	 * @return true if there is a match, otherwise false
-	 */
-	private static boolean passQuery(Predicate<String> predicate, ArbitraryResourceData candidate) {
-
-		if( predicate.test(candidate.name) ) return true;
-
-		if( predicate.test(candidate.identifier) ) return true;
-
-		if( candidate.metadata != null ) {
-
-			if( predicate.test(candidate.metadata.getTitle() )) return true;
-			if( predicate.test(candidate.metadata.getDescription())) return true;
-		}
-
-		return false;
-	}
-
-	/**
-	 * Start Caching
-	 *
-	 * @param priorityRequested the thread priority to fill cache in
-	 * @param frequency the frequency to fill the cache (in seconds)
-	 *
-	 * @return the data cache
-	 */
-	public static void startCaching(int priorityRequested, int frequency) {
-
-		Timer timer = buildTimer(DB_CACHE_TIMER, priorityRequested);
-
-		TimerTask task = new TimerTask() {
-			@Override
-			public void run() {
-
-				Thread.currentThread().setName(DB_CACHE_TIMER_TASK);
-
-				try (final HSQLDBRepository respository = (HSQLDBRepository) Controller.REPOSITORY_FACTORY.getRepository()) {
-					fillCache(ArbitraryResourceCache.getInstance(), respository);
-				}
-				catch( DataException e ) {
-					LOGGER.error(e.getMessage(), e);
-				}
-			}
-		};
-
-		// delay 1 second
-		timer.scheduleAtFixedRate(task, 1000, frequency * 1000);
-	}
-
-	/**
-	 * Start Recording Balances
-	 *
-	 * @param balancesByHeight height -> account balances
-	 * @param balanceDynamics every balance dynamic
-	 * @param priorityRequested the requested thread priority
-	 * @param frequency the recording frequencies, in minutes
-	 * @param capacity the maximum size of balanceDynamics
-	 */
-	public static void startRecordingBalances(
-			final ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight,
-			CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics,
-			int priorityRequested,
-			int frequency,
-			int capacity) {
-
-		Timer timer = buildTimer(BALANCE_RECORDER_TIMER, priorityRequested);
-
-		TimerTask task = new TimerTask() {
-			@Override
-			public void run() {
-
-				Thread.currentThread().setName(BALANCE_RECORDER_TIMER_TASK);
-
-				int currentHeight = recordCurrentBalances(balancesByHeight);
-
-				LOGGER.debug("recorded balances: height = " + currentHeight);
-
-				// remove invalidated recordings, recording after current height
-				BalanceRecorderUtils.removeRecordingsAboveHeight(currentHeight, balancesByHeight);
-
-				// remove invalidated dynamics, on or after current height
-				BalanceRecorderUtils.removeDynamicsOnOrAboveHeight(currentHeight, balanceDynamics);
-
-				// if there are 2 or more recordings, then produce balance dynamics for the first 2 recordings
-				if( balancesByHeight.size() > 1 ) {
-
-					Optional<Integer> priorHeight = BalanceRecorderUtils.getPriorHeight(currentHeight, balancesByHeight);
-
-					// if there is a prior height
-					if(priorHeight.isPresent()) {
-
-						boolean isRewardDistribution = BalanceRecorderUtils.isRewardDistributionRange(priorHeight.get(), currentHeight);
-
-						// if this range has a reward recording block or if other blocks are enabled for recording
-						if( isRewardDistribution || !Settings.getInstance().isRewardRecordingOnly() ) {
-							produceBalanceDynamics(currentHeight, priorHeight, isRewardDistribution, balancesByHeight, balanceDynamics, capacity);
-						}
-					}
-					else {
-						LOGGER.warn("Expecting prior height and nothing was discovered, current height = " + currentHeight);
-					}
-				}
-				// else this should be the first recording
-				else {
-					LOGGER.info("first balance recording completed");
-				}
-			}
-		};
-
-		// wait 5 minutes
-		timer.scheduleAtFixedRate(task, 300_000, frequency * 60_000);
-	}
-
-	private static void produceBalanceDynamics(int currentHeight, Optional<Integer> priorHeight, boolean isRewardDistribution, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight, CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics, int capacity) {
-		BlockHeightRange blockHeightRange = new BlockHeightRange(priorHeight.get(), currentHeight, isRewardDistribution);
|
|
||||||
LOGGER.debug("building dynamics for block heights: range = " + blockHeightRange);
|
|
||||||
|
|
||||||
List<AccountBalanceData> currentBalances = balancesByHeight.get(currentHeight);
|
|
||||||
|
|
||||||
ArrayList<TransactionData> transactions = getTransactionDataForBlocks(blockHeightRange);
|
|
||||||
|
|
||||||
LOGGER.info("transactions counted for balance adjustments: count = " + transactions.size());
|
|
||||||
List<AddressAmountData> currentDynamics
|
|
||||||
= BalanceRecorderUtils.buildBalanceDynamics(
|
|
||||||
currentBalances,
|
|
||||||
balancesByHeight.get(priorHeight.get()),
|
|
||||||
Settings.getInstance().getMinimumBalanceRecording(),
|
|
||||||
transactions);
|
|
||||||
|
|
||||||
LOGGER.debug("dynamics built: count = " + currentDynamics.size());
|
|
||||||
|
|
||||||
if(LOGGER.isDebugEnabled())
|
|
||||||
currentDynamics.stream()
|
|
||||||
.sorted(Comparator.comparingLong(AddressAmountData::getAmount).reversed())
|
|
||||||
.limit(Settings.getInstance().getTopBalanceLoggingLimit())
|
|
||||||
.forEach(top5Dynamic -> LOGGER.debug("Top Dynamics = " + top5Dynamic));
|
|
||||||
|
|
||||||
BlockHeightRangeAddressAmounts amounts
|
|
||||||
= new BlockHeightRangeAddressAmounts( blockHeightRange, currentDynamics );
|
|
||||||
|
|
||||||
balanceDynamics.add(amounts);
|
|
||||||
|
|
||||||
BalanceRecorderUtils.removeRecordingsBelowHeight(currentHeight - Settings.getInstance().getBalanceRecorderRollbackAllowance(), balancesByHeight);
|
|
||||||
|
|
||||||
while(balanceDynamics.size() > capacity) {
|
|
||||||
BlockHeightRangeAddressAmounts oldestDynamics = BalanceRecorderUtils.removeOldestDynamics(balanceDynamics);
|
|
||||||
|
|
||||||
LOGGER.debug("removing oldest dynamics: range " + oldestDynamics.getRange());
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
private static ArrayList<TransactionData> getTransactionDataForBlocks(BlockHeightRange blockHeightRange) {
|
|
||||||
ArrayList<TransactionData> transactions;
|
|
||||||
|
|
||||||
try (final Repository repository = RepositoryManager.getRepository()) {
|
|
||||||
List<byte[]> signatures
|
|
||||||
= repository.getTransactionRepository().getSignaturesMatchingCriteria(
|
|
||||||
blockHeightRange.getBegin() + 1, blockHeightRange.getEnd() - blockHeightRange.getBegin(),
|
|
||||||
null, null,null, null, null,
|
|
||||||
TransactionsResource.ConfirmationStatus.CONFIRMED,
|
|
||||||
null, null, null);
|
|
||||||
|
|
||||||
transactions = new ArrayList<>(signatures.size());
|
|
||||||
for (byte[] signature : signatures) {
|
|
||||||
transactions.add(repository.getTransactionRepository().fromSignature(signature));
|
|
||||||
}
|
|
||||||
|
|
||||||
LOGGER.debug(String.format("Found %s transactions for " + blockHeightRange, transactions.size()));
|
|
||||||
} catch (Exception e) {
|
|
||||||
transactions = new ArrayList<>(0);
|
|
||||||
LOGGER.warn("Problems getting transactions for balance recording: " + e.getMessage());
|
|
||||||
}
|
|
||||||
return transactions;
|
|
||||||
}
|
|
||||||
|
|
||||||
private static int recordCurrentBalances(ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
|
|
||||||
int currentHeight;
|
|
||||||
|
|
||||||
try (final HSQLDBRepository repository = (HSQLDBRepository) Controller.REPOSITORY_FACTORY.getRepository()) {
|
|
||||||
|
|
||||||
// get current balances
|
|
||||||
List<AccountBalanceData> accountBalances = getAccountBalances(repository);
|
|
||||||
|
|
||||||
// get anyone of the balances
|
|
||||||
Optional<AccountBalanceData> data = accountBalances.stream().findAny();
|
|
||||||
|
|
||||||
// if there are any balances, then record them
|
|
||||||
if (data.isPresent()) {
|
|
||||||
// map all new balances to the current height
|
|
||||||
balancesByHeight.put(data.get().getHeight(), accountBalances);
|
|
||||||
|
|
||||||
currentHeight = data.get().getHeight();
|
|
||||||
}
|
|
||||||
else {
|
|
||||||
currentHeight = Integer.MAX_VALUE;
|
|
||||||
}
|
|
||||||
} catch (DataException e) {
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
currentHeight = Integer.MAX_VALUE;
|
|
||||||
}
|
|
||||||
|
|
||||||
return currentHeight;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Build Timer
|
|
||||||
*
|
|
||||||
* Build a timer for scheduling a timer task.
|
|
||||||
*
|
|
||||||
* @param name the name for the thread running the timer task
|
|
||||||
* @param priorityRequested the priority for the thread running the timer task
|
|
||||||
*
|
|
||||||
* @return a timer for scheduling a timer task
|
|
||||||
*/
|
|
||||||
private static Timer buildTimer( final String name, int priorityRequested) {
|
|
||||||
// ensure priority is in between 1-10
|
|
||||||
final int priority = Math.max(0, Math.min(10, priorityRequested));
|
|
||||||
|
|
||||||
// Create a custom Timer with updated priority threads
|
|
||||||
Timer timer = new Timer(true) { // 'true' to make the Timer daemon
|
|
||||||
@Override
|
|
||||||
public void schedule(TimerTask task, long delay) {
|
|
||||||
Thread thread = new Thread(task, name) {
|
|
||||||
@Override
|
|
||||||
public void run() {
|
|
||||||
this.setPriority(priority);
|
|
||||||
super.run();
|
|
||||||
}
|
|
||||||
};
|
|
||||||
thread.setPriority(priority);
|
|
||||||
thread.start();
|
|
||||||
}
|
|
||||||
};
|
|
||||||
return timer;
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Fill Cache
|
|
||||||
*
|
|
||||||
* @param cache the cache to fill
|
|
||||||
* @param repository the data source to fill the cache with
|
|
||||||
*/
|
|
||||||
public static void fillCache(ArbitraryResourceCache cache, HSQLDBRepository repository) {
|
|
||||||
|
|
||||||
try {
|
|
||||||
// ensure all data is committed in, before we query it
|
|
||||||
repository.saveChanges();
|
|
||||||
|
|
||||||
List<ArbitraryResourceData> resources = getResources(repository);
|
|
||||||
|
|
||||||
Map<Integer, List<ArbitraryResourceData>> dataByService
|
|
||||||
= resources.stream()
|
|
||||||
.collect(Collectors.groupingBy(data -> data.service.value));
|
|
||||||
|
|
||||||
// lock, clear and refill
|
|
||||||
synchronized (cache.getDataByService()) {
|
|
||||||
cache.getDataByService().clear();
|
|
||||||
cache.getDataByService().putAll(dataByService);
|
|
||||||
}
|
|
||||||
|
|
||||||
fillNamepMap(cache.getLevelByName(), repository);
|
|
||||||
}
|
|
||||||
catch (SQLNonTransientConnectionException e ) {
|
|
||||||
LOGGER.warn("Connection problems. Retry later.");
|
|
||||||
}
|
|
||||||
catch (Exception e) {
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Fill Name Map
|
|
||||||
*
|
|
||||||
* Name -> Level
|
|
||||||
*
|
|
||||||
* @param levelByName the map to fill
|
|
||||||
* @param repository the data source
|
|
||||||
*
|
|
||||||
* @throws SQLException
|
|
||||||
*/
|
|
||||||
private static void fillNamepMap(ConcurrentHashMap<String, Integer> levelByName, HSQLDBRepository repository ) throws SQLException {
|
|
||||||
|
|
||||||
StringBuilder sql = new StringBuilder(512);
|
|
||||||
|
|
||||||
sql.append("SELECT name, level ");
|
|
||||||
sql.append("FROM NAMES ");
|
|
||||||
sql.append("INNER JOIN ACCOUNTS on owner = account ");
|
|
||||||
|
|
||||||
Statement statement = repository.connection.createStatement();
|
|
||||||
|
|
||||||
ResultSet resultSet = statement.executeQuery(sql.toString());
|
|
||||||
|
|
||||||
if (resultSet == null)
|
|
||||||
return;
|
|
||||||
|
|
||||||
if (!resultSet.next())
|
|
||||||
return;
|
|
||||||
|
|
||||||
do {
|
|
||||||
levelByName.put(resultSet.getString(1), resultSet.getInt(2));
|
|
||||||
} while(resultSet.next());
|
|
||||||
}
|
|
||||||
|
|
||||||
/**
|
|
||||||
* Get Resource
|
|
||||||
*
|
|
||||||
* @param repository source data
|
|
||||||
*
|
|
||||||
* @return the resources
|
|
||||||
* @throws SQLException
|
|
||||||
*/
|
|
||||||
private static List<ArbitraryResourceData> getResources( HSQLDBRepository repository) throws SQLException {
|
|
||||||
|
|
||||||
List<ArbitraryResourceData> resources = new ArrayList<>();
|
|
||||||
|
|
||||||
StringBuilder sql = new StringBuilder(512);
|
|
||||||
|
|
||||||
sql.append("SELECT name, service, identifier, size, status, created_when, updated_when, ");
|
|
||||||
sql.append("title, description, category, tag1, tag2, tag3, tag4, tag5 ");
|
|
||||||
sql.append("FROM ArbitraryResourcesCache ");
|
|
||||||
sql.append("LEFT JOIN ArbitraryMetadataCache USING (service, name, identifier) WHERE name IS NOT NULL");
|
|
||||||
|
|
||||||
List<ArbitraryResourceData> arbitraryResources = new ArrayList<>();
|
|
||||||
Statement statement = repository.connection.createStatement();
|
|
||||||
|
|
||||||
ResultSet resultSet = statement.executeQuery(sql.toString());
|
|
||||||
|
|
||||||
if (resultSet == null)
|
|
||||||
return resources;
|
|
||||||
|
|
||||||
if (!resultSet.next())
|
|
||||||
return resources;
|
|
||||||
|
|
||||||
do {
|
|
||||||
String nameResult = resultSet.getString(1);
|
|
||||||
int serviceResult = resultSet.getInt(2);
|
|
||||||
String identifierResult = resultSet.getString(3);
|
|
||||||
Integer sizeResult = resultSet.getInt(4);
|
|
||||||
Integer status = resultSet.getInt(5);
|
|
||||||
Long created = resultSet.getLong(6);
|
|
||||||
Long updated = resultSet.getLong(7);
|
|
||||||
|
|
||||||
String titleResult = resultSet.getString(8);
|
|
||||||
String descriptionResult = resultSet.getString(9);
|
|
||||||
String category = resultSet.getString(10);
|
|
||||||
String tag1 = resultSet.getString(11);
|
|
||||||
String tag2 = resultSet.getString(12);
|
|
||||||
String tag3 = resultSet.getString(13);
|
|
||||||
String tag4 = resultSet.getString(14);
|
|
||||||
String tag5 = resultSet.getString(15);
|
|
||||||
|
|
||||||
if (Objects.equals(identifierResult, "default")) {
|
|
||||||
// Map "default" back to null. This is optional but probably less confusing than returning "default".
|
|
||||||
identifierResult = null;
|
|
||||||
}
|
|
||||||
|
|
||||||
ArbitraryResourceData arbitraryResourceData = new ArbitraryResourceData();
|
|
||||||
arbitraryResourceData.name = nameResult;
|
|
||||||
arbitraryResourceData.service = Service.valueOf(serviceResult);
|
|
||||||
arbitraryResourceData.identifier = identifierResult;
|
|
||||||
arbitraryResourceData.size = sizeResult;
|
|
||||||
arbitraryResourceData.created = created;
|
|
||||||
arbitraryResourceData.updated = (updated == 0) ? null : updated;
|
|
||||||
|
|
||||||
arbitraryResourceData.setStatus(ArbitraryResourceStatus.Status.valueOf(status));
|
|
||||||
|
|
||||||
ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
|
|
||||||
metadata.setTitle(titleResult);
|
|
||||||
metadata.setDescription(descriptionResult);
|
|
||||||
metadata.setCategory(Category.uncategorizedValueOf(category));
|
|
||||||
|
|
||||||
List<String> tags = new ArrayList<>();
|
|
||||||
if (tag1 != null) tags.add(tag1);
|
|
||||||
if (tag2 != null) tags.add(tag2);
|
|
||||||
if (tag3 != null) tags.add(tag3);
|
|
||||||
if (tag4 != null) tags.add(tag4);
|
|
||||||
if (tag5 != null) tags.add(tag5);
|
|
||||||
metadata.setTags(!tags.isEmpty() ? tags : null);
|
|
||||||
|
|
||||||
if (metadata.hasMetadata()) {
|
|
||||||
arbitraryResourceData.metadata = metadata;
|
|
||||||
}
|
|
||||||
|
|
||||||
resources.add( arbitraryResourceData );
|
|
||||||
} while (resultSet.next());
|
|
||||||
|
|
||||||
return resources;
|
|
||||||
}
|
|
||||||
|
|
||||||
public static List<AccountBalanceData> getAccountBalances(HSQLDBRepository repository) {
|
|
||||||
|
|
||||||
StringBuilder sql = new StringBuilder();
|
|
||||||
|
|
||||||
sql.append("SELECT account, balance, height ");
|
|
||||||
sql.append("FROM ACCOUNTBALANCES as balances ");
|
|
||||||
sql.append("JOIN (SELECT height FROM BLOCKS ORDER BY height DESC LIMIT 1) AS max_height ON true ");
|
|
||||||
sql.append("WHERE asset_id=0");
|
|
||||||
|
|
||||||
List<AccountBalanceData> data = new ArrayList<>();
|
|
||||||
|
|
||||||
LOGGER.info( "Getting account balances ...");
|
|
||||||
|
|
||||||
try {
|
|
||||||
Statement statement = repository.connection.createStatement();
|
|
||||||
|
|
||||||
ResultSet resultSet = statement.executeQuery(sql.toString());
|
|
||||||
|
|
||||||
if (resultSet == null || !resultSet.next())
|
|
||||||
return new ArrayList<>(0);
|
|
||||||
|
|
||||||
do {
|
|
||||||
String account = resultSet.getString(1);
|
|
||||||
long balance = resultSet.getLong(2);
|
|
||||||
int height = resultSet.getInt(3);
|
|
||||||
|
|
||||||
data.add(new AccountBalanceData(account, ZERO, balance, height));
|
|
||||||
} while (resultSet.next());
|
|
||||||
} catch (SQLException e) {
|
|
||||||
LOGGER.warn(e.getMessage());
|
|
||||||
} catch (Exception e) {
|
|
||||||
LOGGER.error(e.getMessage(), e);
|
|
||||||
}
|
|
||||||
|
|
||||||
LOGGER.info("Retrieved account balances: count = " + data.size());
|
|
||||||
|
|
||||||
return data;
|
|
||||||
}
|
|
||||||
}
|
|
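The prefix/contains matching used by retainTerms() above can be exercised on its own. The following is a minimal, hypothetical sketch — the class name and sample data are not from the repository; only the predicate logic mirrors the helpers above.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Minimal sketch of the case-insensitive prefix/contains matching used by retainTerms().
public class TermFilterSketch {

    static Predicate<String> containsPredicate(String term) {
        return value -> value != null && value.toLowerCase().contains(term.toLowerCase());
    }

    static Predicate<String> prefixPredicate(String term) {
        return value -> value != null && value.toLowerCase().startsWith(term.toLowerCase());
    }

    public static void main(String[] args) {
        List<String> names = List.of("QortalDemo", "demo-app", "MyBlog", "qortal-docs");

        // prefixOnly = true keeps only names starting with the term
        List<String> prefixMatches = names.stream()
                .filter(prefixPredicate("qortal"))
                .collect(Collectors.toList());

        // prefixOnly = false keeps any name containing the term
        List<String> containsMatches = names.stream()
                .filter(containsPredicate("demo"))
                .collect(Collectors.toList());

        System.out.println("prefix matches: " + prefixMatches);     // [QortalDemo, qortal-docs]
        System.out.println("contains matches: " + containsMatches); // [QortalDemo, demo-app]
    }
}
```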
@@ -176,14 +176,14 @@ public class HSQLDBChatRepository implements ChatRepository {
 	}
 
 	@Override
-	public ActiveChats getActiveChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException {
-		List<GroupChat> groupChats = getActiveGroupChats(address, encoding, hasChatReference);
-		List<DirectChat> directChats = getActiveDirectChats(address, hasChatReference);
+	public ActiveChats getActiveChats(String address, Encoding encoding) throws DataException {
+		List<GroupChat> groupChats = getActiveGroupChats(address, encoding);
+		List<DirectChat> directChats = getActiveDirectChats(address);
 
 		return new ActiveChats(groupChats, directChats);
 	}
 
-	private List<GroupChat> getActiveGroupChats(String address, Encoding encoding, Boolean hasChatReference) throws DataException {
+	private List<GroupChat> getActiveGroupChats(String address, Encoding encoding) throws DataException {
 		// Find groups where address is a member and potential latest message details
 		String groupsSql = "SELECT group_id, group_name, latest_timestamp, sender, sender_name, signature, data "
 				+ "FROM GroupMembers "
@@ -194,16 +194,8 @@ public class HSQLDBChatRepository implements ChatRepository {
 				+ "JOIN Transactions USING (signature) "
 				+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
 				// NOTE: We need to qualify "Groups.group_id" here to avoid "General error" bug in HSQLDB v2.5.0
-				+ "WHERE tx_group_id = Groups.group_id AND type = " + TransactionType.CHAT.value + " ";
-
-		if (hasChatReference != null) {
-			if (hasChatReference) {
-				groupsSql += "AND chat_reference IS NOT NULL ";
-			} else {
-				groupsSql += "AND chat_reference IS NULL ";
-			}
-		}
-		groupsSql += "ORDER BY created_when DESC "
+				+ "WHERE tx_group_id = Groups.group_id AND type = " + TransactionType.CHAT.value + " "
+				+ "ORDER BY created_when DESC "
 				+ "LIMIT 1"
 				+ ") AS LatestMessages ON TRUE "
 				+ "WHERE address = ?";
@@ -238,16 +230,8 @@ public class HSQLDBChatRepository implements ChatRepository {
 				+ "JOIN Transactions USING (signature) "
 				+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
 				+ "WHERE tx_group_id = 0 "
-				+ "AND recipient IS NULL ";
-
-		if (hasChatReference != null) {
-			if (hasChatReference) {
-				grouplessSql += "AND chat_reference IS NOT NULL ";
-			} else {
-				grouplessSql += "AND chat_reference IS NULL ";
-			}
-		}
-		grouplessSql += "ORDER BY created_when DESC "
+				+ "AND recipient IS NULL "
+				+ "ORDER BY created_when DESC "
 				+ "LIMIT 1";
 
 		try (ResultSet resultSet = this.repository.checkedExecute(grouplessSql)) {
@@ -275,7 +259,7 @@ public class HSQLDBChatRepository implements ChatRepository {
 		return groupChats;
 	}
 
-	private List<DirectChat> getActiveDirectChats(String address, Boolean hasChatReference) throws DataException {
+	private List<DirectChat> getActiveDirectChats(String address) throws DataException {
 		// Find chat messages involving address
 		String directSql = "SELECT other_address, name, latest_timestamp, sender, sender_name "
 				+ "FROM ("
@@ -291,18 +275,8 @@ public class HSQLDBChatRepository implements ChatRepository {
 				+ "NATURAL JOIN Transactions "
 				+ "LEFT OUTER JOIN Names AS SenderNames ON SenderNames.owner = sender "
 				+ "WHERE (sender = other_address AND recipient = ?) "
-				+ "OR (sender = ? AND recipient = other_address) ";
-
-		// Apply hasChatReference filter
-		if (hasChatReference != null) {
-			if (hasChatReference) {
-				directSql += "AND chat_reference IS NOT NULL ";
-			} else {
-				directSql += "AND chat_reference IS NULL ";
-			}
-		}
-
-		directSql += "ORDER BY created_when DESC "
+				+ "OR (sender = ? AND recipient = other_address) "
+				+ "ORDER BY created_when DESC "
 				+ "LIMIT 1"
 				+ ") AS LatestMessages "
 				+ "LEFT OUTER JOIN Names ON owner = other_address";
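The hasChatReference handling that only the left-hand version carries is a three-state filter: a Boolean that is null (no filter), true (only messages with a chat_reference) or false (only messages without one). Below is a standalone sketch of that pattern; the base query, table and column names are illustrative only.

```java
// Sketch of an optional three-state filter appended to a SQL string,
// mirroring the hasChatReference handling shown in the diff above.
public class ChatReferenceFilterSketch {

    static String withChatReferenceFilter(String baseSql, Boolean hasChatReference) {
        String sql = baseSql;
        if (hasChatReference != null) {
            if (hasChatReference) {
                sql += "AND chat_reference IS NOT NULL ";
            } else {
                sql += "AND chat_reference IS NULL ";
            }
        }
        return sql + "ORDER BY created_when DESC LIMIT 1";
    }

    public static void main(String[] args) {
        String base = "SELECT signature FROM ChatMessages WHERE tx_group_id = 0 "; // illustrative base query

        System.out.println(withChatReferenceFilter(base, null));  // no chat_reference clause at all
        System.out.println(withChatReferenceFilter(base, true));  // adds: AND chat_reference IS NOT NULL
        System.out.println(withChatReferenceFilter(base, false)); // adds: AND chat_reference IS NULL
    }
}
```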
@@ -454,41 +454,40 @@ public class HSQLDBDatabaseUpdates {
 
 			case 12:
 				// Groups
-				// NOTE: We need to set Groups to `GROUPS` here to avoid SQL Standard Keywords in HSQLDB v2.7.4
-				stmt.execute("CREATE TABLE `GROUPS` (group_id GroupID, owner QortalAddress NOT NULL, group_name GroupName NOT NULL, "
+				stmt.execute("CREATE TABLE Groups (group_id GroupID, owner QortalAddress NOT NULL, group_name GroupName NOT NULL, "
 						+ "created_when EpochMillis NOT NULL, updated_when EpochMillis, is_open BOOLEAN NOT NULL, "
 						+ "approval_threshold TINYINT NOT NULL, min_block_delay INTEGER NOT NULL, max_block_delay INTEGER NOT NULL, "
 						+ "reference Signature, creation_group_id GroupID, reduced_group_name GroupName NOT NULL, "
 						+ "description GenericDescription NOT NULL, PRIMARY KEY (group_id))");
 				// For finding groups by name
-				stmt.execute("CREATE INDEX GroupNameIndex on `GROUPS` (group_name)");
+				stmt.execute("CREATE INDEX GroupNameIndex on Groups (group_name)");
 				// For finding groups by reduced name
-				stmt.execute("CREATE INDEX GroupReducedNameIndex on `GROUPS` (reduced_group_name)");
+				stmt.execute("CREATE INDEX GroupReducedNameIndex on Groups (reduced_group_name)");
 				// For finding groups by owner
-				stmt.execute("CREATE INDEX GroupOwnerIndex ON `GROUPS` (owner)");
+				stmt.execute("CREATE INDEX GroupOwnerIndex ON Groups (owner)");
 
 				// We need a corresponding trigger to make sure new group_id values are assigned sequentially starting from 1
-				stmt.execute("CREATE TRIGGER Group_ID_Trigger BEFORE INSERT ON `GROUPS` "
+				stmt.execute("CREATE TRIGGER Group_ID_Trigger BEFORE INSERT ON Groups "
 						+ "REFERENCING NEW ROW AS new_row FOR EACH ROW WHEN (new_row.group_id IS NULL) "
-						+ "SET new_row.group_id = (SELECT IFNULL(MAX(group_id) + 1, 1) FROM `GROUPS`)");
+						+ "SET new_row.group_id = (SELECT IFNULL(MAX(group_id) + 1, 1) FROM Groups)");
 
 				// Admins
 				stmt.execute("CREATE TABLE GroupAdmins (group_id GroupID, admin QortalAddress, reference Signature NOT NULL, "
-						+ "PRIMARY KEY (group_id, admin), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
+						+ "PRIMARY KEY (group_id, admin), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
 				// For finding groups by admin address
 				stmt.execute("CREATE INDEX GroupAdminIndex ON GroupAdmins (admin)");
 
 				// Members
 				stmt.execute("CREATE TABLE GroupMembers (group_id GroupID, address QortalAddress, "
 						+ "joined_when EpochMillis NOT NULL, reference Signature NOT NULL, "
-						+ "PRIMARY KEY (group_id, address), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
+						+ "PRIMARY KEY (group_id, address), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
 				// For finding groups by member address
 				stmt.execute("CREATE INDEX GroupMemberIndex ON GroupMembers (address)");
 
 				// Invites
 				stmt.execute("CREATE TABLE GroupInvites (group_id GroupID, inviter QortalAddress, invitee QortalAddress, "
 						+ "expires_when EpochMillis, reference Signature, "
-						+ "PRIMARY KEY (group_id, invitee), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
+						+ "PRIMARY KEY (group_id, invitee), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
 				// For finding invites sent by inviter
 				stmt.execute("CREATE INDEX GroupInviteInviterIndex ON GroupInvites (inviter)");
 				// For finding invites by group
@@ -504,7 +503,7 @@ public class HSQLDBDatabaseUpdates {
 				// NULL expires_when means does not expire!
 				stmt.execute("CREATE TABLE GroupBans (group_id GroupID, offender QortalAddress, admin QortalAddress NOT NULL, "
 						+ "banned_when EpochMillis NOT NULL, reason GenericDescription NOT NULL, expires_when EpochMillis, reference Signature NOT NULL, "
-						+ "PRIMARY KEY (group_id, offender), FOREIGN KEY (group_id) REFERENCES `GROUPS` (group_id) ON DELETE CASCADE)");
+						+ "PRIMARY KEY (group_id, offender), FOREIGN KEY (group_id) REFERENCES Groups (group_id) ON DELETE CASCADE)");
 				// For expiry maintenance
 				stmt.execute("CREATE INDEX GroupBanExpiryIndex ON GroupBans (expires_when)");
 				break;
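The left-hand side backtick-quotes the table name because, per the NOTE in the diff, GROUPS collides with an SQL standard keyword in HSQLDB v2.7.4. One way to keep that quoting consistent is sketched below; the constant and helper are hypothetical, and only the quoting itself comes from the diff.

```java
// Sketch only: centralising the quoted table name used in the DDL above,
// so the keyword workaround appears in exactly one place.
public class GroupsTableNameSketch {

    // Backtick-quoted to avoid the GROUPS keyword clash described in the NOTE above.
    private static final String GROUPS_TABLE = "`GROUPS`";

    static String createGroupNameIndexSql() {
        return "CREATE INDEX GroupNameIndex ON " + GROUPS_TABLE + " (group_name)";
    }

    public static void main(String[] args) {
        System.out.println(createGroupNameIndexSql());
        // CREATE INDEX GroupNameIndex ON `GROUPS` (group_name)
    }
}
```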
@@ -350,24 +350,9 @@ public class HSQLDBGroupRepository implements GroupRepository {
 
 	// Group Admins
 
-	@Override
-	public GroupAdminData getAdminFaulty(int groupId, String address) throws DataException {
-		try (ResultSet resultSet = this.repository.checkedExecute("SELECT admin, reference FROM GroupAdmins WHERE group_id = ?", groupId)) {
-			if (resultSet == null)
-				return null;
-
-			String admin = resultSet.getString(1);
-			byte[] reference = resultSet.getBytes(2);
-
-			return new GroupAdminData(groupId, admin, reference);
-		} catch (SQLException e) {
-			throw new DataException("Unable to fetch group admin from repository", e);
-		}
-	}
-
 	@Override
 	public GroupAdminData getAdmin(int groupId, String address) throws DataException {
-		try (ResultSet resultSet = this.repository.checkedExecute("SELECT admin, reference FROM GroupAdmins WHERE group_id = ? AND admin = ?", groupId, address)) {
+		try (ResultSet resultSet = this.repository.checkedExecute("SELECT admin, reference FROM GroupAdmins WHERE group_id = ?", groupId)) {
 			if (resultSet == null)
 				return null;
 
@@ -5,8 +5,6 @@ import org.apache.logging.log4j.Logger;
 import org.hsqldb.HsqlException;
 import org.hsqldb.error.ErrorCode;
 import org.hsqldb.jdbc.HSQLDBPool;
-import org.hsqldb.jdbc.HSQLDBPoolMonitored;
-import org.qortal.data.system.DbConnectionInfo;
 import org.qortal.repository.DataException;
 import org.qortal.repository.Repository;
 import org.qortal.repository.RepositoryFactory;
@@ -16,8 +14,6 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.sql.Statement;
-import java.util.ArrayList;
-import java.util.List;
 import java.util.Properties;
 
 public class HSQLDBRepositoryFactory implements RepositoryFactory {
@@ -61,13 +57,7 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
 			HSQLDBRepository.attemptRecovery(connectionUrl, "backup");
 		}
 
-		if (Settings.getInstance().isConnectionPoolMonitorEnabled()) {
-			this.connectionPool = new HSQLDBPoolMonitored(Settings.getInstance().getRepositoryConnectionPoolSize());
-		}
-		else {
-			this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
-		}
+		this.connectionPool = new HSQLDBPool(Settings.getInstance().getRepositoryConnectionPoolSize());
 
 		this.connectionPool.setUrl(this.connectionUrl);
 
 		Properties properties = new Properties();
@@ -163,19 +153,4 @@ public class HSQLDBRepositoryFactory implements RepositoryFactory {
 		return HSQLDBRepository.isDeadlockException(e);
 	}
 
-	/**
-	 * Get Connection States
-	 *
-	 * Get the database connection states, if database connection pool monitoring is enabled.
-	 *
-	 * @return the connection states if enabled, otherwise an empty list
-	 */
-	public List<DbConnectionInfo> getDbConnectionsStates() {
-		if (Settings.getInstance().isConnectionPoolMonitorEnabled()) {
-			return ((HSQLDBPoolMonitored) this.connectionPool).getDbConnectionsStates();
-		}
-		else {
-			return new ArrayList<>(0);
-		}
-	}
 }
@@ -114,8 +114,6 @@ public class Settings {
 
 	/** Whether we check, fetch and install auto-updates */
 	private boolean autoUpdateEnabled = true;
-	/** Whether we check and restart the node when it has no connected peers */
-	private boolean autoRestartEnabled = false;
 	/** How long between repository backups (ms), or 0 if disabled. */
 	private long repositoryBackupInterval = 0; // ms
 	/** Whether to show a notification when we backup repository. */
@@ -199,32 +197,32 @@ public class Settings {
 	/** Target number of outbound connections to peers we should make. */
 	private int minOutboundPeers = 32;
 	/** Maximum number of peer connections we allow. */
-	private int maxPeers = 64;
+	private int maxPeers = 60;
 	/** Number of slots to reserve for short-lived QDN data transfers */
 	private int maxDataPeers = 5;
 	/** Maximum number of threads for network engine. */
-	private int maxNetworkThreadPoolSize = 512;
+	private int maxNetworkThreadPoolSize = 620;
 	/** Maximum number of threads for network proof-of-work compute, used during handshaking. */
-	private int networkPoWComputePoolSize = 4;
+	private int networkPoWComputePoolSize = 2;
 	/** Maximum number of retry attempts if a peer fails to respond with the requested data */
-	private int maxRetries = 3;
+	private int maxRetries = 2;
 
 	/** The number of seconds of no activity before recovery mode begins */
 	public long recoveryModeTimeout = 9999999999999L;
 
 	/** Minimum peer version number required in order to sync with them */
-	private String minPeerVersion = "4.6.5";
+	private String minPeerVersion = "4.5.2";
 	/** Whether to allow connections with peers below minPeerVersion
 	 * If true, we won't sync with them but they can still sync with us, and will show in the peers list
 	 * If false, sync will be blocked both ways, and they will not appear in the peers list */
 	private boolean allowConnectionsWithOlderPeerVersions = true;
 
 	/** Minimum time (in seconds) that we should attempt to remain connected to a peer for */
-	private int minPeerConnectionTime = 2 * 60 * 60; // seconds
+	private int minPeerConnectionTime = 60 * 60; // seconds
 	/** Maximum time (in seconds) that we should attempt to remain connected to a peer for */
-	private int maxPeerConnectionTime = 6 * 60 * 60; // seconds
+	private int maxPeerConnectionTime = 4 * 60 * 60; // seconds
 	/** Maximum time (in seconds) that a peer should remain connected when requesting QDN data */
-	private int maxDataPeerConnectionTime = 30 * 60; // seconds
+	private int maxDataPeerConnectionTime = 2 * 60; // seconds
 
 	/** Whether to sync multiple blocks at once in normal operation */
 	private boolean fastSyncEnabled = true;
@@ -281,10 +279,7 @@ public class Settings {
 	// Auto-update sources
 	private String[] autoUpdateRepos = new String[] {
 		"https://github.com/Qortal/qortal/raw/%s/qortal.update",
-		"https://raw.githubusercontent.com@151.101.16.133/Qortal/qortal/%s/qortal.update",
-		"https://qortal.link/Auto-Update/%s/qortal.update",
-		"https://qortal.name/Auto-Update/%s/qortal.update",
-		"https://update.qortal.org/Auto-Update/%s/qortal.update"
+		"https://raw.githubusercontent.com@151.101.16.133/Qortal/qortal/%s/qortal.update"
 	};
 
 	// Lists
@@ -383,167 +378,6 @@ public class Settings {
 	 * Exclude from settings.json to disable this warning. */
 	private Integer threadCountPerMessageTypeWarningThreshold = null;
 
-	/**
-	 * DB Cache Enabled?
-	 */
-	private boolean dbCacheEnabled = true;
-
-	/**
-	 * DB Cache Thread Priority
-	 *
-	 * If DB Cache is disabled, then this is ignored. If the value is lower than 1, then 1 is used.
-	 * If the value is higher than 10, then 10 is used.
-	 */
-	private int dbCacheThreadPriority = 1;
-
-	/**
-	 * DB Cache Frequency
-	 *
-	 * The number of seconds in between DB cache updates. If DB Cache is disabled, then this is ignored.
-	 */
-	private int dbCacheFrequency = 120;
-
-	/**
-	 * Network Thread Priority
-	 *
-	 * The thread priority (1 is lowest, 10 is highest) of the threads used for network peer connections. This is the
-	 * main thread connecting to a peer in the network.
-	 */
-	private int networkThreadPriority = 7;
-
-	/**
-	 * The Handshake Thread Priority
-	 *
-	 * The thread priority (1 is lowest, 10 is highest) of the threads used for peer handshake messaging. This is a
-	 * secondary thread to exchange status messaging with a peer in the network.
-	 */
-	private int handshakeThreadPriority = 7;
-
-	/**
-	 * Pruning Thread Priority
-	 *
-	 * The thread priority (1 is lowest, 10 is highest) of the threads used for database pruning and trimming.
-	 */
-	private int pruningThreadPriority = 2;
-
-	/**
-	 * Synchronizer Thread Priority
-	 *
-	 * The thread priority (1 is lowest, 10 is highest) of the threads used for synchronizing with other peers.
-	 */
-	private int synchronizerThreadPriority = 10;
-
-	/**
-	 * Archiving Pause
-	 *
-	 * In milliseconds
-	 *
-	 * The pause in between archiving blocks, to allow other processes to execute.
-	 */
-	private long archivingPause = 3000;
-
-	/**
-	 * Enable Balance Recorder?
-	 *
-	 * True for balance recording, otherwise false.
-	 */
-	private boolean balanceRecorderEnabled = false;
-
-	/**
-	 * Balance Recorder Priority
-	 *
-	 * The thread priority (1 is lowest, 10 is highest) of the balance recorder thread, if enabled.
-	 */
-	private int balanceRecorderPriority = 1;
-
-	/**
-	 * Balance Recorder Frequency
-	 *
-	 * How often the balances will be recorded, if enabled, measured in minutes.
-	 */
-	private int balanceRecorderFrequency = 20;
-
-	/**
-	 * Balance Recorder Capacity
-	 *
-	 * The number of balance recorder ranges that will be held in memory.
-	 */
-	private int balanceRecorderCapacity = 1000;
-
-	/**
-	 * Minimum Balance Recording
-	 *
-	 * The minimum recorded balance change, in Qortoshis (1/100000000 QORT).
-	 */
-	private long minimumBalanceRecording = 100000000;
-
-	/**
-	 * Top Balance Logging Limit
-	 *
-	 * The maximum number of top balance changes to show in the logs for any given block range.
-	 */
-	private long topBalanceLoggingLimit = 100;
-
-	/**
-	 * Balance Recorder Rollback Allowance
-	 *
-	 * If the balance recorder is enabled, it must protect its prior balances by this number of blocks in case of
-	 * a blockchain rollback and reorganization.
-	 */
-	private int balanceRecorderRollbackAllowance = 100;
-
-	/**
-	 * Is Reward Recording Only
-	 *
-	 * Set true to only retain the recordings that cover reward distributions, otherwise set false.
-	 */
-	private boolean rewardRecordingOnly = true;
-
-	/**
-	 * Is The Connection Pool Monitored?
-	 *
-	 * Is the database connection pool monitored?
-	 */
-	private boolean connectionPoolMonitorEnabled = false;
-
-	/**
-	 * Build Arbitrary Resources Batch Size
-	 *
-	 * The number of resources to batch per iteration when rebuilding.
-	 */
-	private int buildArbitraryResourcesBatchSize = 200;
-
-	/**
-	 * Arbitrary Indexing Priority
-	 *
-	 * The thread priority when indexing arbitrary resources.
-	 */
-	private int arbitraryIndexingPriority = 5;
-
-	/**
-	 * Arbitrary Indexing Frequency (In Minutes)
-	 *
-	 * The frequency at which the arbitrary indices are cached.
-	 */
-	private int arbitraryIndexingFrequency = 10;
-
-	private boolean rebuildArbitraryResourceCacheTaskEnabled = false;
-
-	/**
-	 * Rebuild Arbitrary Resource Cache Task Delay (In Minutes)
-	 *
-	 * Waiting period before the first rebuild task is started.
-	 */
-	private int rebuildArbitraryResourceCacheTaskDelay = 300;
-
-	/**
-	 * Rebuild Arbitrary Resource Cache Task Period (In Hours)
-	 *
-	 * The frequency at which the arbitrary resource cache is rebuilt.
-	 */
-	private int rebuildArbitraryResourceCacheTaskPeriod = 24;
-
 	// Domain mapping
 	public static class ThreadLimit {
@@ -1079,10 +913,6 @@ public class Settings {
 		return this.autoUpdateEnabled;
 	}
 
-	public boolean isAutoRestartEnabled() {
-		return this.autoRestartEnabled;
-	}
-
 	public String[] getAutoUpdateRepos() {
 		return this.autoUpdateRepos;
 	}
@@ -1302,96 +1132,4 @@ public class Settings {
 	public Integer getThreadCountPerMessageTypeWarningThreshold() {
 		return this.threadCountPerMessageTypeWarningThreshold;
 	}
-
-	public boolean isDbCacheEnabled() {
-		return dbCacheEnabled;
-	}
-
-	public int getDbCacheThreadPriority() {
-		return dbCacheThreadPriority;
-	}
-
-	public int getDbCacheFrequency() {
-		return dbCacheFrequency;
-	}
-
-	public int getNetworkThreadPriority() {
-		return networkThreadPriority;
-	}
-
-	public int getHandshakeThreadPriority() {
-		return handshakeThreadPriority;
-	}
-
-	public int getPruningThreadPriority() {
-		return pruningThreadPriority;
-	}
-
-	public int getSynchronizerThreadPriority() {
-		return synchronizerThreadPriority;
-	}
-
-	public long getArchivingPause() {
-		return archivingPause;
-	}
-
-	public int getBalanceRecorderPriority() {
-		return balanceRecorderPriority;
-	}
-
-	public int getBalanceRecorderFrequency() {
-		return balanceRecorderFrequency;
-	}
-
-	public int getBalanceRecorderCapacity() {
-		return balanceRecorderCapacity;
-	}
-
-	public boolean isBalanceRecorderEnabled() {
-		return balanceRecorderEnabled;
-	}
-
-	public long getMinimumBalanceRecording() {
-		return minimumBalanceRecording;
-	}
-
-	public long getTopBalanceLoggingLimit() {
-		return topBalanceLoggingLimit;
-	}
-
-	public int getBalanceRecorderRollbackAllowance() {
-		return balanceRecorderRollbackAllowance;
-	}
-
-	public boolean isRewardRecordingOnly() {
-		return rewardRecordingOnly;
-	}
-
-	public boolean isConnectionPoolMonitorEnabled() {
-		return connectionPoolMonitorEnabled;
-	}
-
-	public int getBuildArbitraryResourcesBatchSize() {
-		return buildArbitraryResourcesBatchSize;
-	}
-
-	public int getArbitraryIndexingPriority() {
-		return arbitraryIndexingPriority;
-	}
-
-	public int getArbitraryIndexingFrequency() {
-		return arbitraryIndexingFrequency;
-	}
-
-	public boolean isRebuildArbitraryResourceCacheTaskEnabled() {
-		return rebuildArbitraryResourceCacheTaskEnabled;
-	}
-
-	public int getRebuildArbitraryResourceCacheTaskDelay() {
-		return rebuildArbitraryResourceCacheTaskDelay;
-	}
-
-	public int getRebuildArbitraryResourceCacheTaskPeriod() {
-		return rebuildArbitraryResourceCacheTaskPeriod;
-	}
 }
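The DB cache and balance recorder settings added on the left are consumed by the startCaching and startRecordingBalances helpers shown earlier, which turn the configured frequencies into java.util.Timer periods (seconds for the cache, minutes for the recorder). A minimal sketch of that unit conversion, using the default values from the diff as stand-ins for a live Settings instance:

```java
// Sketch of how the configured frequencies map onto java.util.Timer periods,
// mirroring startCaching (seconds) and startRecordingBalances (minutes) above.
// The literal values are the defaults shown in the diff; the real ones come from Settings.
public class RecorderScheduleSketch {

    public static void main(String[] args) {
        int dbCacheFrequencySeconds = 120;        // dbCacheFrequency default from the diff
        int balanceRecorderFrequencyMinutes = 20; // balanceRecorderFrequency default from the diff

        long cachePeriodMs = dbCacheFrequencySeconds * 1000L;              // scheduleAtFixedRate period for the cache task
        long recorderPeriodMs = balanceRecorderFrequencyMinutes * 60_000L; // period for the balance recorder task

        System.out.println("cache refill every " + cachePeriodMs + " ms");          // 120000 ms
        System.out.println("balance recording every " + recorderPeriodMs + " ms");  // 1200000 ms
    }
}
```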
@@ -9,7 +9,6 @@ import org.qortal.arbitrary.metadata.ArbitraryDataTransactionMetadata;
 import org.qortal.arbitrary.misc.Service;
 import org.qortal.block.BlockChain;
 import org.qortal.controller.arbitrary.ArbitraryDataManager;
-import org.qortal.controller.arbitrary.ArbitraryTransactionDataHashWrapper;
 import org.qortal.controller.repository.NamesDatabaseIntegrityCheck;
 import org.qortal.crypto.Crypto;
 import org.qortal.crypto.MemoryPoW;
@@ -32,12 +31,8 @@ import org.qortal.utils.ArbitraryTransactionUtils;
 import org.qortal.utils.NTP;
 
 import java.io.IOException;
-import java.util.HashMap;
-import java.util.HashSet;
 import java.util.List;
-import java.util.Map;
 import java.util.Objects;
-import java.util.Set;
 import java.util.stream.Collectors;
 
 public class ArbitraryTransaction extends Transaction {
@@ -308,13 +303,8 @@ public class ArbitraryTransaction extends Transaction {
 		// Add/update arbitrary resource caches, but don't update the status as this involves time-consuming
 		// disk reads, and is more prone to failure. The status will be updated on metadata retrieval, or when
 		// accessing the resource.
-		// Also, must add this transaction as a latest transaction, since it has not been saved to the
-		// repository yet.
-		this.updateArbitraryResourceCacheIncludingMetadata(
-				repository,
-				Set.of(new ArbitraryTransactionDataHashWrapper(arbitraryTransactionData)),
-				new HashMap<>(0)
-		);
+		this.updateArbitraryResourceCache(repository);
+		this.updateArbitraryMetadataCache(repository);
 
 		repository.saveChanges();
 
@@ -370,10 +360,7 @@ public class ArbitraryTransaction extends Transaction {
 	 *
 	 * @throws DataException
 	 */
-	public void updateArbitraryResourceCacheIncludingMetadata(
-			Repository repository,
-			Set<ArbitraryTransactionDataHashWrapper> latestTransactionWrappers,
-			Map<ArbitraryTransactionDataHashWrapper, ArbitraryResourceData> resourceByWrapper) throws DataException {
+	public void updateArbitraryResourceCache(Repository repository) throws DataException {
 		// Don't cache resources without a name (such as auto updates)
 		if (arbitraryTransactionData.getName() == null) {
 			return;
@@ -398,33 +385,17 @@ public class ArbitraryTransaction extends Transaction {
 		arbitraryResourceData.name = name;
 		arbitraryResourceData.identifier = identifier;
 
-		final ArbitraryTransactionDataHashWrapper wrapper = new ArbitraryTransactionDataHashWrapper(arbitraryTransactionData);
-
-		ArbitraryTransactionData latestTransactionData;
-		if (latestTransactionWrappers.contains(wrapper)) {
-			latestTransactionData
-					= latestTransactionWrappers.stream()
-						.filter(latestWrapper -> latestWrapper.equals(wrapper))
-						.findAny().get()
-						.getData();
-		}
-		else {
-			// Get the latest transaction
-			latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
-			if (latestTransactionData == null) {
-				LOGGER.info("We don't have a latest transaction, so delete from cache: arbitraryResourceData = " + arbitraryResourceData);
-				// We don't have a latest transaction, so delete from cache
-				repository.getArbitraryRepository().delete(arbitraryResourceData);
-				return;
-			}
-		}
-
-		ArbitraryResourceData existingArbitraryResourceData = resourceByWrapper.get(wrapper);
-
-		if (existingArbitraryResourceData == null) {
-			// Get existing cached entry if it exists
-			existingArbitraryResourceData = repository.getArbitraryRepository()
-					.getArbitraryResource(service, name, identifier);
-		}
+		// Get the latest transaction
+		ArbitraryTransactionData latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
+		if (latestTransactionData == null) {
+			// We don't have a latest transaction, so delete from cache
+			repository.getArbitraryRepository().delete(arbitraryResourceData);
+			return;
+		}
+
+		// Get existing cached entry if it exists
+		ArbitraryResourceData existingArbitraryResourceData = repository.getArbitraryRepository()
+				.getArbitraryResource(service, name, identifier);
 
 		// Check for existing cached data
 		if (existingArbitraryResourceData == null) {
@@ -433,7 +404,6 @@ public class ArbitraryTransaction extends Transaction {
 			arbitraryResourceData.updated = null;
 		}
 		else {
-			resourceByWrapper.put(wrapper, existingArbitraryResourceData);
 			// An entry already exists - update created time from current transaction if this is older
 			arbitraryResourceData.created = Math.min(existingArbitraryResourceData.created, arbitraryTransactionData.getTimestamp());
 
@@ -451,34 +421,6 @@ public class ArbitraryTransaction extends Transaction {
 
 		// Save
 		repository.getArbitraryRepository().save(arbitraryResourceData);
-
-		// Update metadata for latest transaction if it is local
-		if (latestTransactionData.getMetadataHash() != null) {
-			ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(latestTransactionData.getMetadataHash(), latestTransactionData.getSignature());
-			if (metadataFile.exists()) {
-				ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
-				try {
-					transactionMetadata.read();
-
-					ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
-					metadata.setArbitraryResourceData(arbitraryResourceData);
-					metadata.setTitle(transactionMetadata.getTitle());
-					metadata.setDescription(transactionMetadata.getDescription());
-					metadata.setCategory(transactionMetadata.getCategory());
-					metadata.setTags(transactionMetadata.getTags());
-					repository.getArbitraryRepository().save(metadata);
-
-				} catch (IOException e) {
-					// Ignore, as we can add it again later
-				}
-			} else {
-				// We don't have a local copy of this metadata file, so delete it from the cache
-				// It will be re-added if the file later arrives via the network
-				ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
-				metadata.setArbitraryResourceData(arbitraryResourceData);
-				repository.getArbitraryRepository().delete(metadata);
-			}
-		}
 	}
 
 	public void updateArbitraryResourceStatus(Repository repository) throws DataException {
@@ -513,4 +455,60 @@ public class ArbitraryTransaction extends Transaction {
 		repository.getArbitraryRepository().setStatus(arbitraryResourceData, status);
 	}
 
+	public void updateArbitraryMetadataCache(Repository repository) throws DataException {
+		// Get the latest transaction
+		ArbitraryTransactionData latestTransactionData = repository.getArbitraryRepository().getLatestTransaction(arbitraryTransactionData.getName(), arbitraryTransactionData.getService(), null, arbitraryTransactionData.getIdentifier());
+		if (latestTransactionData == null) {
+			// We don't have a latest transaction, so give up
+			return;
+		}
+
+		Service service = latestTransactionData.getService();
+		String name = latestTransactionData.getName();
+		String identifier = latestTransactionData.getIdentifier();
+
+		if (service == null) {
+			// Unsupported service - ignore this resource
+			return;
+		}
+
+		// In the cache we store null identifiers as "default", as it is part of the primary key
+		if (identifier == null) {
+			identifier = "default";
+		}
+
+		ArbitraryResourceData arbitraryResourceData = new ArbitraryResourceData();
+		arbitraryResourceData.service = service;
+		arbitraryResourceData.name = name;
+		arbitraryResourceData.identifier = identifier;
+
+		// Update metadata for latest transaction if it is local
+		if (latestTransactionData.getMetadataHash() != null) {
+			ArbitraryDataFile metadataFile = ArbitraryDataFile.fromHash(latestTransactionData.getMetadataHash(), latestTransactionData.getSignature());
+			if (metadataFile.exists()) {
+				ArbitraryDataTransactionMetadata transactionMetadata = new ArbitraryDataTransactionMetadata(metadataFile.getFilePath());
+				try {
+					transactionMetadata.read();
+
+					ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
+					metadata.setArbitraryResourceData(arbitraryResourceData);
+					metadata.setTitle(transactionMetadata.getTitle());
+					metadata.setDescription(transactionMetadata.getDescription());
+					metadata.setCategory(transactionMetadata.getCategory());
+					metadata.setTags(transactionMetadata.getTags());
+					repository.getArbitraryRepository().save(metadata);
+
+				} catch (IOException e) {
+					// Ignore, as we can add it again later
+				}
+			} else {
+				// We don't have a local copy of this metadata file, so delete it from the cache
+				// It will be re-added if the file later arrives via the network
+				ArbitraryResourceMetadata metadata = new ArbitraryResourceMetadata();
+				metadata.setArbitraryResourceData(arbitraryResourceData);
+				repository.getArbitraryRepository().delete(metadata);
+			}
+		}
+	}
+
 }
CancelGroupBanTransaction.java
@@ -2,7 +2,6 @@ package org.qortal.transaction;

import org.qortal.account.Account;
import org.qortal.asset.Asset;
-import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.CancelGroupBanTransactionData;
@@ -13,7 +12,6 @@ import org.qortal.repository.Repository;

import java.util.Collections;
import java.util.List;
-import java.util.Objects;

public class CancelGroupBanTransaction extends Transaction {

@@ -72,26 +70,9 @@ public class CancelGroupBanTransaction extends Transaction {
		if (!this.repository.getGroupRepository().adminExists(groupId, admin.getAddress()))
			return ValidationResult.NOT_GROUP_ADMIN;

-		if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
-			// Can't cancel ban if not group's current owner
+		// Can't unban if not group's current owner
		if (!admin.getAddress().equals(groupData.getOwner()))
			return ValidationResult.INVALID_GROUP_OWNER;
-		}
-		// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
-		else {
-			String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
-			boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
-
-			// if null ownership group, then check for admin approval
-			if(groupOwnedByNullAccount ) {
-				// Require approval if transaction relates to a group owned by the null account
-				if (!this.needsGroupApproval())
-					return ValidationResult.GROUP_APPROVAL_REQUIRED;
-			}
-			// Can't cancel ban if not group's current owner
-			else if (!admin.getAddress().equals(groupData.getOwner()))
-				return ValidationResult.INVALID_GROUP_OWNER;
-		}

		Account member = getMember();
CancelGroupInviteTransaction.java
@@ -2,7 +2,6 @@ package org.qortal.transaction;

import org.qortal.account.Account;
import org.qortal.asset.Asset;
-import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.CancelGroupInviteTransactionData;
@@ -13,7 +12,6 @@ import org.qortal.repository.Repository;

import java.util.Collections;
import java.util.List;
-import java.util.Objects;

public class CancelGroupInviteTransaction extends Transaction {

@@ -82,16 +80,6 @@ public class CancelGroupInviteTransaction extends Transaction {
		if (admin.getConfirmedBalance(Asset.QORT) < this.cancelGroupInviteTransactionData.getFee())
			return ValidationResult.NO_BALANCE;

-		// if null ownership group, then check for admin approval
-		if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() ) {
-			String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
-			boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
-
-			// Require approval if transaction relates to a group owned by the null account
-			if (groupOwnedByNullAccount && !this.needsGroupApproval())
-				return ValidationResult.GROUP_APPROVAL_REQUIRED;
-		}

		return ValidationResult.OK;
	}
GroupBanTransaction.java
@@ -2,7 +2,6 @@ package org.qortal.transaction;

import org.qortal.account.Account;
import org.qortal.asset.Asset;
-import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.GroupBanTransactionData;
@@ -13,7 +12,6 @@ import org.qortal.repository.Repository;

import java.util.Collections;
import java.util.List;
-import java.util.Objects;

public class GroupBanTransaction extends Transaction {

@@ -72,25 +70,9 @@ public class GroupBanTransaction extends Transaction {
		if (!this.repository.getGroupRepository().adminExists(groupId, admin.getAddress()))
			return ValidationResult.NOT_GROUP_ADMIN;

-		if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
		// Can't ban if not group's current owner
		if (!admin.getAddress().equals(groupData.getOwner()))
			return ValidationResult.INVALID_GROUP_OWNER;
-		}
-		// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
-		else {
-			String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
-			boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
-
-			// if null ownership group, then check for admin approval
-			if(groupOwnedByNullAccount ) {
-				// Require approval if transaction relates to a group owned by the null account
-				if (!this.needsGroupApproval())
-					return ValidationResult.GROUP_APPROVAL_REQUIRED;
-			}
-			else if (!admin.getAddress().equals(groupData.getOwner()))
-				return ValidationResult.INVALID_GROUP_OWNER;
-		}

		Account offender = getOffender();
GroupInviteTransaction.java
@@ -2,7 +2,6 @@ package org.qortal.transaction;

import org.qortal.account.Account;
import org.qortal.asset.Asset;
-import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.transaction.GroupInviteTransactionData;
import org.qortal.data.transaction.TransactionData;
@@ -12,7 +11,6 @@ import org.qortal.repository.Repository;

import java.util.Collections;
import java.util.List;
-import java.util.Objects;

public class GroupInviteTransaction extends Transaction {

@@ -87,16 +85,6 @@ public class GroupInviteTransaction extends Transaction {
		if (admin.getConfirmedBalance(Asset.QORT) < this.groupInviteTransactionData.getFee())
			return ValidationResult.NO_BALANCE;

-		// if null ownership group, then check for admin approval
-		if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() ) {
-			String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
-			boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
-
-			// Require approval if transaction relates to a group owned by the null account
-			if (groupOwnedByNullAccount && !this.needsGroupApproval())
-				return ValidationResult.GROUP_APPROVAL_REQUIRED;
-		}

		return ValidationResult.OK;
	}
GroupKickTransaction.java
@@ -3,7 +3,6 @@ package org.qortal.transaction;
import org.qortal.account.Account;
import org.qortal.account.PublicKeyAccount;
import org.qortal.asset.Asset;
-import org.qortal.block.BlockChain;
import org.qortal.crypto.Crypto;
import org.qortal.data.group.GroupData;
import org.qortal.data.transaction.GroupKickTransactionData;
@@ -15,7 +14,6 @@ import org.qortal.repository.Repository;

import java.util.Collections;
import java.util.List;
-import java.util.Objects;

public class GroupKickTransaction extends Transaction {

@@ -84,26 +82,9 @@ public class GroupKickTransaction extends Transaction {
		if (!admin.getAddress().equals(groupData.getOwner()) && groupRepository.adminExists(groupId, member.getAddress()))
			return ValidationResult.INVALID_GROUP_OWNER;

-		if( this.repository.getBlockRepository().getBlockchainHeight() < BlockChain.getInstance().getNullGroupMembershipHeight() ) {
		// Can't kick if not group's current owner
		if (!admin.getAddress().equals(groupData.getOwner()))
			return ValidationResult.INVALID_GROUP_OWNER;
-		}
-		// if( this.repository.getBlockRepository().getBlockchainHeight() >= BlockChain.getInstance().getNullGroupMembershipHeight() )
-		else {
-			String groupOwner = this.repository.getGroupRepository().getOwner(groupId);
-			boolean groupOwnedByNullAccount = Objects.equals(groupOwner, Group.NULL_OWNER_ADDRESS);
-
-			// if null ownership group, then check for admin approval
-			if(groupOwnedByNullAccount ) {
-				// Require approval if transaction relates to a group owned by the null account
-				if (!this.needsGroupApproval())
-					return ValidationResult.GROUP_APPROVAL_REQUIRED;
-			}
-			// Can't kick if not group's current owner
-			else if (!admin.getAddress().equals(groupData.getOwner()))
-				return ValidationResult.INVALID_GROUP_OWNER;
-		}

		// Check creator has enough funds
		if (admin.getConfirmedBalance(Asset.QORT) < this.groupKickTransactionData.getFee())
RewardShareTransaction.java
@@ -123,7 +123,7 @@ public class RewardShareTransaction extends Transaction {
		final boolean isCancellingSharePercent = this.rewardShareTransactionData.getSharePercent() < 0;

		// Creator themselves needs to be allowed to mint (unless cancelling)
-		if (!isCancellingSharePercent && !creator.canMint(false))
+		if (!isCancellingSharePercent && !creator.canMint())
			return ValidationResult.NOT_MINTING_ACCOUNT;

		// Qortal: special rules in play depending whether recipient is also minter
Transaction.java
@@ -65,11 +65,11 @@ public abstract class Transaction {
		UPDATE_GROUP(23, true),
		ADD_GROUP_ADMIN(24, true),
		REMOVE_GROUP_ADMIN(25, true),
-		GROUP_BAN(26, true),
-		CANCEL_GROUP_BAN(27, true),
-		GROUP_KICK(28, true),
-		GROUP_INVITE(29, true),
-		CANCEL_GROUP_INVITE(30, true),
+		GROUP_BAN(26, false),
+		CANCEL_GROUP_BAN(27, false),
+		GROUP_KICK(28, false),
+		GROUP_INVITE(29, false),
+		CANCEL_GROUP_INVITE(30, false),
		JOIN_GROUP(31, false),
		LEAVE_GROUP(32, false),
		GROUP_APPROVAL(33, false),
ArbitraryIndexUtils.java (entire file present on master only)
@@ -1,250 +0,0 @@
-package org.qortal.utils;
-
-import com.fasterxml.jackson.core.type.TypeReference;
-import com.fasterxml.jackson.databind.ObjectMapper;
-import com.fasterxml.jackson.databind.exc.InvalidFormatException;
-import com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException;
-import org.apache.commons.lang3.ArrayUtils;
-import org.apache.logging.log4j.LogManager;
-import org.apache.logging.log4j.Logger;
-import org.qortal.api.SearchMode;
-import org.qortal.arbitrary.ArbitraryDataFile;
-import org.qortal.arbitrary.ArbitraryDataReader;
-import org.qortal.arbitrary.exception.MissingDataException;
-import org.qortal.arbitrary.misc.Service;
-import org.qortal.controller.Controller;
-import org.qortal.data.arbitrary.ArbitraryDataIndex;
-import org.qortal.data.arbitrary.ArbitraryDataIndexDetail;
-import org.qortal.data.arbitrary.ArbitraryResourceData;
-import org.qortal.data.arbitrary.IndexCache;
-import org.qortal.repository.DataException;
-import org.qortal.repository.Repository;
-import org.qortal.repository.RepositoryManager;
-
-import java.io.IOException;
-import java.nio.file.Files;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map;
-import java.util.Timer;
-import java.util.TimerTask;
-import java.util.stream.Collectors;
-import java.util.stream.Stream;
-
-public class ArbitraryIndexUtils {
-
-	public static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
-	private static final Logger LOGGER = LogManager.getLogger(ArbitraryIndexUtils.class);
-
-	public static final String INDEX_CACHE_TIMER = "Arbitrary Index Cache Timer";
-	public static final String INDEX_CACHE_TIMER_TASK = "Arbitrary Index Cache Timer Task";
-
-	public static void startCaching(int priorityRequested, int frequency) {
-
-		Timer timer = buildTimer(INDEX_CACHE_TIMER, priorityRequested);
-
-		TimerTask task = new TimerTask() {
-			@Override
-			public void run() {
-
-				Thread.currentThread().setName(INDEX_CACHE_TIMER_TASK);
-
-				try {
-					fillCache(IndexCache.getInstance());
-				} catch (IOException | DataException e) {
-					LOGGER.error(e.getMessage(), e);
-				}
-			}
-		};
-
-		// delay 1 second
-		timer.scheduleAtFixedRate(task, 1_000, frequency * 60_000);
-	}
-
-	private static void fillCache(IndexCache instance) throws DataException, IOException {
-
-		try (final Repository repository = RepositoryManager.getRepository()) {
-
-			List<ArbitraryResourceData> indexResources
-				= repository.getArbitraryRepository().searchArbitraryResources(
-					Service.JSON,
-					null,
-					"idx-",
-					null,
-					null,
-					null,
-					null,
-					true,
-					null,
-					false,
-					SearchMode.ALL,
-					0,
-					null,
-					null,
-					null,
-					null,
-					null,
-					null,
-					null,
-					null,
-					true);
-
-			List<ArbitraryDataIndexDetail> indexDetails = new ArrayList<>();
-
-			LOGGER.debug("processing index resource data: count = " + indexResources.size());
-
-			// process all index resources
-			for( ArbitraryResourceData indexResource : indexResources ) {
-
-				try {
-					LOGGER.debug("processing index resource: name = " + indexResource.name + ", identifier = " + indexResource.identifier);
-					String json = ArbitraryIndexUtils.getJson(indexResource.name, indexResource.identifier);
-
-					// map the JSON string to a list of Java objects
-					List<ArbitraryDataIndex> indices = OBJECT_MAPPER.readValue(json, new TypeReference<List<ArbitraryDataIndex>>() {});
-
-					LOGGER.debug("processed indices = " + indices);
-
-					// rank and create index detail for each index in this index resource
-					for( int rank = 1; rank <= indices.size(); rank++ ) {
-
-						indexDetails.add( new ArbitraryDataIndexDetail(indexResource.name, rank, indices.get(rank - 1), indexResource.identifier ));
-					}
-				} catch (InvalidFormatException e) {
-					LOGGER.debug("invalid format, skipping: " + indexResource);
-				} catch (UnrecognizedPropertyException e) {
-					LOGGER.debug("unrecognized property, skipping " + indexResource);
-				}
-			}
-
-			LOGGER.debug("processing indices by term ...");
-			Map<String, List<ArbitraryDataIndexDetail>> indicesByTerm
-				= indexDetails.stream().collect(
-					Collectors.toMap(
-						detail -> detail.term, // map by term
-						detail -> List.of(detail), // create list for term
-						(list1, list2) // merge lists for same term
-							-> Stream.of(list1, list2)
-								.flatMap(List::stream)
-								.collect(Collectors.toList())
-					)
-				);
-
-			LOGGER.info("processed indices by term: count = " + indicesByTerm.size());
-
-			// lock, clear old, load new
-			synchronized( IndexCache.getInstance().getIndicesByTerm() ) {
-				IndexCache.getInstance().getIndicesByTerm().clear();
-				IndexCache.getInstance().getIndicesByTerm().putAll(indicesByTerm);
-			}
-
-			LOGGER.info("loaded indices by term");
-
-			LOGGER.debug("processing indices by issuer ...");
-			Map<String, List<ArbitraryDataIndexDetail>> indicesByIssuer
-				= indexDetails.stream().collect(
-					Collectors.toMap(
-						detail -> detail.issuer, // map by issuer
-						detail -> List.of(detail), // create list for issuer
-						(list1, list2) // merge lists for same issuer
-							-> Stream.of(list1, list2)
-								.flatMap(List::stream)
-								.collect(Collectors.toList())
-					)
-				);
-
-			LOGGER.info("processed indices by issuer: count = " + indicesByIssuer.size());
-
-			// lock, clear old, load new
-			synchronized( IndexCache.getInstance().getIndicesByIssuer() ) {
-				IndexCache.getInstance().getIndicesByIssuer().clear();
-				IndexCache.getInstance().getIndicesByIssuer().putAll(indicesByIssuer);
-			}
-
-			LOGGER.info("loaded indices by issuer");
-		}
-	}
-
-	private static Timer buildTimer( final String name, int priorityRequested) {
-		// ensure priority is in between 1-10
-		final int priority = Math.max(0, Math.min(10, priorityRequested));
-
-		// Create a custom Timer with updated priority threads
-		Timer timer = new Timer(true) { // 'true' to make the Timer daemon
-			@Override
-			public void schedule(TimerTask task, long delay) {
-				Thread thread = new Thread(task, name) {
-					@Override
-					public void run() {
-						this.setPriority(priority);
-						super.run();
-					}
-				};
-				thread.setPriority(priority);
-				thread.start();
-			}
-		};
-		return timer;
-	}
-
-
-	public static String getJsonWithExceptionHandling( String name, String identifier ) {
-		try {
-			return getJson(name, identifier);
-		}
-		catch( Exception e ) {
-			LOGGER.error(e.getMessage(), e);
-			return e.getMessage();
-		}
-	}
-
-	public static String getJson(String name, String identifier) throws IOException {
-
-		try {
-			ArbitraryDataReader arbitraryDataReader
-				= new ArbitraryDataReader(name, ArbitraryDataFile.ResourceIdType.NAME, Service.JSON, identifier);
-
-			int attempts = 0;
-			Integer maxAttempts = 5;
-
-			while (!Controller.isStopping()) {
-				attempts++;
-				if (!arbitraryDataReader.isBuilding()) {
-					try {
-						arbitraryDataReader.loadSynchronously(false);
-						break;
-					} catch (MissingDataException e) {
-						if (attempts > maxAttempts) {
-							// Give up after 5 attempts
-							throw new IOException("Data unavailable. Please try again later.");
-						}
-					}
-				}
-				Thread.sleep(3000L);
-			}
-
-			java.nio.file.Path outputPath = arbitraryDataReader.getFilePath();
-			if (outputPath == null) {
-				// Assume the resource doesn't exist
-				throw new IOException( "File not found");
-			}
-
-			// No file path supplied - so check if this is a single file resource
-			String[] files = ArrayUtils.removeElement(outputPath.toFile().list(), ".qortal");
-			String filepath = files[0];
-
-			java.nio.file.Path path = Paths.get(outputPath.toString(), filepath);
-			if (!Files.exists(path)) {
-				String message = String.format("No file exists at filepath: %s", filepath);
-				throw new IOException( message );
-			}
-
-			String data = Files.readString(path);
-
-			return data;
-		} catch (Exception e) {
-			throw new IOException(String.format("Unable to load %s %s: %s", Service.JSON, name, e.getMessage()));
-		}
-	}
-}
ArbitraryTransactionUtils.java
@@ -24,7 +24,6 @@ import java.nio.file.attribute.BasicFileAttributes;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
-import java.util.Optional;
import java.util.stream.Collectors;

import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;
@@ -73,23 +72,23 @@ public class ArbitraryTransactionUtils {
		return latestPut;
	}

-	public static Optional<ArbitraryTransactionData> hasMoreRecentPutTransaction(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
+	public static boolean hasMoreRecentPutTransaction(Repository repository, ArbitraryTransactionData arbitraryTransactionData) {
		byte[] signature = arbitraryTransactionData.getSignature();
		if (signature == null) {
			// We can't make a sensible decision without a signature
			// so it's best to assume there is nothing newer
-			return Optional.empty();
+			return false;
		}

		ArbitraryTransactionData latestPut = ArbitraryTransactionUtils.fetchLatestPut(repository, arbitraryTransactionData);
		if (latestPut == null) {
-			return Optional.empty();
+			return false;
		}

		// If the latest PUT transaction has a newer timestamp, it will override the existing transaction
		// Any data relating to the older transaction is no longer needed
		boolean hasNewerPut = (latestPut.getTimestamp() > arbitraryTransactionData.getTimestamp());
-		return hasNewerPut ? Optional.of(latestPut) : Optional.empty();
+		return hasNewerPut;
	}

	public static boolean completeFileExists(ArbitraryTransactionData transactionData) throws DataException {
@@ -209,15 +208,7 @@ public class ArbitraryTransactionUtils {
		return ArbitraryTransactionUtils.isFileRecent(filePath, now, cleanupAfter);
	}

-	/**
-	 *
-	 * @param arbitraryTransactionData
-	 * @param now
-	 * @param cleanupAfter
-	 * @return true if file is deleted, otherwise return false
-	 * @throws DataException
-	 */
-	public static boolean deleteCompleteFile(ArbitraryTransactionData arbitraryTransactionData, long now, long cleanupAfter) throws DataException {
+	public static void deleteCompleteFile(ArbitraryTransactionData arbitraryTransactionData, long now, long cleanupAfter) throws DataException {
		byte[] completeHash = arbitraryTransactionData.getData();
		byte[] signature = arbitraryTransactionData.getSignature();

@@ -228,11 +219,6 @@ public class ArbitraryTransactionUtils {
				"if needed", Base58.encode(completeHash));

			arbitraryDataFile.delete();

-			return true;
-		}
-		else {
-			return false;
-		}
		}
	}
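A quick illustration of the hasMoreRecentPutTransaction change above: master returns Optional<ArbitraryTransactionData> (carrying the newer PUT, if any) where v4.6.0 returned a plain boolean. The sketch below is hypothetical caller code, not part of this comparison; only the method signature and getTimestamp() are taken from the hunk, and the package names in the imports are assumed from the other files shown in this diff.

```java
import java.util.Optional;

import org.qortal.data.transaction.ArbitraryTransactionData;
import org.qortal.repository.Repository;
import org.qortal.utils.ArbitraryTransactionUtils;

// Hypothetical caller, for illustration only.
public class NewerPutCheckExample {

	// Returns true when a newer PUT supersedes the given transaction's data.
	static boolean isSuperseded(Repository repository, ArbitraryTransactionData transactionData) {
		Optional<ArbitraryTransactionData> newerPut =
				ArbitraryTransactionUtils.hasMoreRecentPutTransaction(repository, transactionData);

		// With the Optional form the caller also learns which PUT is newer;
		// the boolean form on v4.6.0 only says that one exists.
		newerPut.ifPresent(latestPut ->
				System.out.println("Superseded by PUT at timestamp " + latestPut.getTimestamp()));

		return newerPut.isPresent();
	}
}
```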
BalanceRecorderUtils.java (entire file present on master only)
@@ -1,319 +0,0 @@
-package org.qortal.utils;
-
-import org.qortal.block.Block;
-import org.qortal.crypto.Crypto;
-import org.qortal.data.PaymentData;
-import org.qortal.data.account.AccountBalanceData;
-import org.qortal.data.account.AddressAmountData;
-import org.qortal.data.account.BlockHeightRange;
-import org.qortal.data.account.BlockHeightRangeAddressAmounts;
-import org.qortal.data.transaction.ATTransactionData;
-import org.qortal.data.transaction.BaseTransactionData;
-import org.qortal.data.transaction.BuyNameTransactionData;
-import org.qortal.data.transaction.CreateAssetOrderTransactionData;
-import org.qortal.data.transaction.DeployAtTransactionData;
-import org.qortal.data.transaction.MultiPaymentTransactionData;
-import org.qortal.data.transaction.PaymentTransactionData;
-import org.qortal.data.transaction.TransactionData;
-import org.qortal.data.transaction.TransferAssetTransactionData;
-
-import java.util.Comparator;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-import java.util.Optional;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.CopyOnWriteArrayList;
-import java.util.function.Predicate;
-import java.util.stream.Collectors;
-
-public class BalanceRecorderUtils {
-
-	public static final Predicate<AddressAmountData> ADDRESS_AMOUNT_DATA_NOT_ZERO = addressAmount -> addressAmount.getAmount() != 0;
-	public static final Comparator<BlockHeightRangeAddressAmounts> BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR = new Comparator<BlockHeightRangeAddressAmounts>() {
-		@Override
-		public int compare(BlockHeightRangeAddressAmounts amounts1, BlockHeightRangeAddressAmounts amounts2) {
-			return amounts1.getRange().getEnd() - amounts2.getRange().getEnd();
-		}
-	};
-
-	public static final Comparator<AddressAmountData> ADDRESS_AMOUNT_DATA_COMPARATOR = new Comparator<AddressAmountData>() {
-		@Override
-		public int compare(AddressAmountData addressAmountData, AddressAmountData t1) {
-			if( addressAmountData.getAmount() > t1.getAmount() ) {
-				return 1;
-			}
-			else if( addressAmountData.getAmount() < t1.getAmount() ) {
-				return -1;
-			}
-			else {
-				return 0;
-			}
-		}
-	};
-
-	public static final Comparator<BlockHeightRange> BLOCK_HEIGHT_RANGE_COMPARATOR = new Comparator<BlockHeightRange>() {
-		@Override
-		public int compare(BlockHeightRange range1, BlockHeightRange range2) {
-			return range1.getEnd() - range2.getEnd();
-		}
-	};
-
-	/**
-	 * Build Balance Dynmaics For Account
-	 *
-	 * @param priorBalances the balances prior to the current height, assuming only one balance per address
-	 * @param accountBalance the current balance
-	 *
-	 * @return the difference between the current balance and the prior balance for the current balance address
-	 */
-	public static AddressAmountData buildBalanceDynamicsForAccount(List<AccountBalanceData> priorBalances, AccountBalanceData accountBalance) {
-		Optional<AccountBalanceData> matchingAccountPriorBalance
-			= priorBalances.stream()
-				.filter(priorBalance -> accountBalance.getAddress().equals(priorBalance.getAddress()))
-				.findFirst();
-		if(matchingAccountPriorBalance.isPresent()) {
-			return new AddressAmountData(accountBalance.getAddress(), accountBalance.getBalance() - matchingAccountPriorBalance.get().getBalance());
-		}
-		else {
-			return new AddressAmountData(accountBalance.getAddress(), accountBalance.getBalance());
-		}
-	}
-
-	public static List<AddressAmountData> buildBalanceDynamics(
-			final List<AccountBalanceData> balances,
-			final List<AccountBalanceData> priorBalances,
-			long minimum,
-			List<TransactionData> transactions) {
-
-		Map<String, Long> amountsByAddress = new HashMap<>(transactions.size());
-
-		for( TransactionData transactionData : transactions ) {
-
-			mapBalanceModificationsForTransaction(amountsByAddress, transactionData);
-		}
-
-		List<AddressAmountData> addressAmounts
-			= balances.stream()
-				.map(balance -> buildBalanceDynamicsForAccount(priorBalances, balance))
-				.map( data -> adjustAddressAmount(amountsByAddress.getOrDefault(data.getAddress(), 0L), data))
-				.filter(ADDRESS_AMOUNT_DATA_NOT_ZERO)
-				.filter(data -> data.getAmount() >= minimum)
-				.collect(Collectors.toList());
-
-		return addressAmounts;
-	}
-
-	public static AddressAmountData adjustAddressAmount(long adjustment, AddressAmountData data) {
-
-		return new AddressAmountData(data.getAddress(), data.getAmount() - adjustment);
-	}
-
-	public static void mapBalanceModificationsForTransaction(Map<String, Long> amountsByAddress, TransactionData transactionData) {
-		String creatorAddress;
-
-		// AT Transaction
-		if( transactionData instanceof ATTransactionData) {
-			creatorAddress = mapBalanceModificationsForAtTransaction(amountsByAddress, (ATTransactionData) transactionData);
-		}
-		// Buy Name Transaction
-		else if( transactionData instanceof BuyNameTransactionData) {
-			creatorAddress = mapBalanceModificationsForBuyNameTransaction(amountsByAddress, (BuyNameTransactionData) transactionData);
-		}
-		// Create Asset Order Transaction
-		else if( transactionData instanceof CreateAssetOrderTransactionData) {
-			//TODO I'm not sure how to handle this one. This hasn't been used at this point in the blockchain.
-
-			creatorAddress = Crypto.toAddress(transactionData.getCreatorPublicKey());
-		}
-		// Deploy AT Transaction
-		else if( transactionData instanceof DeployAtTransactionData ) {
-			creatorAddress = mapBalanceModificationsForDeployAtTransaction(amountsByAddress, (DeployAtTransactionData) transactionData);
-		}
-		// Multi Payment Transaction
-		else if( transactionData instanceof MultiPaymentTransactionData) {
-			creatorAddress = mapBalanceModificationsForMultiPaymentTransaction(amountsByAddress, (MultiPaymentTransactionData) transactionData);
-		}
-		// Payment Transaction
-		else if( transactionData instanceof PaymentTransactionData ) {
-			creatorAddress = mapBalanceModicationsForPaymentTransaction(amountsByAddress, (PaymentTransactionData) transactionData);
-		}
-		// Transfer Asset Transaction
-		else if( transactionData instanceof TransferAssetTransactionData) {
-			creatorAddress = mapBalanceModificationsForTransferAssetTransaction(amountsByAddress, (TransferAssetTransactionData) transactionData);
-		}
-		// Other Transactions
-		else {
-			creatorAddress = Crypto.toAddress(transactionData.getCreatorPublicKey());
-		}
-
-		// all transactions modify the balance for fees
-		mapBalanceModifications(amountsByAddress, transactionData.getFee(), creatorAddress, Optional.empty());
-	}
-
-	public static String mapBalanceModificationsForTransferAssetTransaction(Map<String, Long> amountsByAddress, TransferAssetTransactionData transferAssetData) {
-		String creatorAddress = Crypto.toAddress(transferAssetData.getSenderPublicKey());
-
-		if( transferAssetData.getAssetId() == 0) {
-			mapBalanceModifications(
-				amountsByAddress,
-				transferAssetData.getAmount(),
-				creatorAddress,
-				Optional.of(transferAssetData.getRecipient())
-			);
-		}
-		return creatorAddress;
-	}
-
-	public static String mapBalanceModicationsForPaymentTransaction(Map<String, Long> amountsByAddress, PaymentTransactionData paymentData) {
-		String creatorAddress = Crypto.toAddress(paymentData.getCreatorPublicKey());
-
-		mapBalanceModifications(amountsByAddress,
-			paymentData.getAmount(),
-			creatorAddress,
-			Optional.of(paymentData.getRecipient())
-		);
-		return creatorAddress;
-	}
-
-	public static String mapBalanceModificationsForMultiPaymentTransaction(Map<String, Long> amountsByAddress, MultiPaymentTransactionData multiPaymentData) {
-		String creatorAddress = Crypto.toAddress(multiPaymentData.getCreatorPublicKey());
-
-		for(PaymentData payment : multiPaymentData.getPayments() ) {
-			mapBalanceModificationsForTransaction(
-				amountsByAddress,
-				getPaymentTransactionData(multiPaymentData, payment)
-			);
-		}
-		return creatorAddress;
-	}
-
-	public static String mapBalanceModificationsForDeployAtTransaction(Map<String, Long> amountsByAddress, DeployAtTransactionData transactionData) {
-		String creatorAddress;
-		DeployAtTransactionData deployAtData = transactionData;
-
-		creatorAddress = Crypto.toAddress(deployAtData.getCreatorPublicKey());
-
-		if( deployAtData.getAssetId() == 0 ) {
-			mapBalanceModifications(
-				amountsByAddress,
-				deployAtData.getAmount(),
-				creatorAddress,
-				Optional.of(deployAtData.getAtAddress())
-			);
-		}
-		return creatorAddress;
-	}
-
-	public static String mapBalanceModificationsForBuyNameTransaction(Map<String, Long> amountsByAddress, BuyNameTransactionData transactionData) {
-		String creatorAddress;
-		BuyNameTransactionData buyNameData = transactionData;
-
-		creatorAddress = Crypto.toAddress(buyNameData.getCreatorPublicKey());
-
-		mapBalanceModifications(
-			amountsByAddress,
-			buyNameData.getAmount(),
-			creatorAddress,
-			Optional.of(buyNameData.getSeller())
-		);
-		return creatorAddress;
-	}
-
-	public static String mapBalanceModificationsForAtTransaction(Map<String, Long> amountsByAddress, ATTransactionData transactionData) {
-		String creatorAddress;
-		ATTransactionData atData = transactionData;
-		creatorAddress = atData.getATAddress();
-
-		if( atData.getAssetId() != null && atData.getAssetId() == 0) {
-			mapBalanceModifications(
-				amountsByAddress,
-				atData.getAmount(),
-				creatorAddress,
-				Optional.of(atData.getRecipient())
-			);
-		}
-		return creatorAddress;
-	}
-
-	public static PaymentTransactionData getPaymentTransactionData(MultiPaymentTransactionData multiPaymentData, PaymentData payment) {
-		return new PaymentTransactionData(
-			new BaseTransactionData(
-				multiPaymentData.getTimestamp(),
-				multiPaymentData.getTxGroupId(),
-				multiPaymentData.getReference(),
-				multiPaymentData.getCreatorPublicKey(),
-				0L,
-				multiPaymentData.getSignature()
-			),
-			payment.getRecipient(),
-			payment.getAmount()
-		);
-	}
-
-	public static void mapBalanceModifications(Map<String, Long> amountsByAddress, Long amount, String sender, Optional<String> recipient) {
-		amountsByAddress.put(
-			sender,
-			amountsByAddress.getOrDefault(sender, 0L) - amount
-		);
-
-		if( recipient.isPresent() )
-			amountsByAddress.put(
-				recipient.get(),
-				amountsByAddress.getOrDefault(recipient.get(), 0L) + amount
-			);
-	}
-
-	public static void removeRecordingsAboveHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
-		balancesByHeight.entrySet().stream()
-			.filter(heightWithBalances -> heightWithBalances.getKey() > currentHeight)
-			.forEach(heightWithBalances -> balancesByHeight.remove(heightWithBalances.getKey()));
-	}
-
-	public static void removeRecordingsBelowHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
-		balancesByHeight.entrySet().stream()
-			.filter(heightWithBalances -> heightWithBalances.getKey() < currentHeight)
-			.forEach(heightWithBalances -> balancesByHeight.remove(heightWithBalances.getKey()));
-	}
-
-	public static void removeDynamicsOnOrAboveHeight(int currentHeight, CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics) {
-		balanceDynamics.stream()
-			.filter(addressAmounts -> addressAmounts.getRange().getEnd() >= currentHeight)
-			.forEach(addressAmounts -> balanceDynamics.remove(addressAmounts));
-	}
-
-	public static BlockHeightRangeAddressAmounts removeOldestDynamics(CopyOnWriteArrayList<BlockHeightRangeAddressAmounts> balanceDynamics) {
-		BlockHeightRangeAddressAmounts oldestDynamics
-			= balanceDynamics.stream().sorted(BLOCK_HEIGHT_RANGE_ADDRESS_AMOUNTS_COMPARATOR).findFirst().get();
-
-		balanceDynamics.remove(oldestDynamics);
-		return oldestDynamics;
-	}
-
-	public static Optional<Integer> getPriorHeight(int currentHeight, ConcurrentHashMap<Integer, List<AccountBalanceData>> balancesByHeight) {
-		Optional<Integer> priorHeight
-			= balancesByHeight.keySet().stream()
-				.filter(height -> height < currentHeight)
-				.sorted(Comparator.reverseOrder()).findFirst();
-		return priorHeight;
-	}
-
-	/**
-	 * Is Reward Distribution Range?
-	 *
-	 * @param start start height, exclusive
-	 * @param end end height, inclusive
-	 *
-	 * @return true there is a reward distribution block within this block range
-	 */
-	public static boolean isRewardDistributionRange(int start, int end) {
-
-		// iterate through the block height until a reward distribution block or the end of the range
-		for( int i = start + 1; i <= end; i++) {
-			if( Block.isRewardDistributionBlock(i) ) return true;
-		}
-
-		// no reward distribution blocks found within range
-		return false;
-	}
-}
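For orientation on the removed BalanceRecorderUtils above: a minimal sketch of how its balance-dynamics helpers fit together, using only signatures that appear in the deleted file. The wrapper class, method, and parameter names below are hypothetical and are not part of this comparison.

```java
import java.util.List;

import org.qortal.data.account.AccountBalanceData;
import org.qortal.data.account.AddressAmountData;
import org.qortal.data.transaction.TransactionData;
import org.qortal.utils.BalanceRecorderUtils;

// Hypothetical wrapper, for illustration only.
public class BalanceDynamicsExample {

	// Per-address balance changes between two recorded snapshots: the removed utility
	// subtracts amounts explained by the supplied transactions, drops zero changes,
	// and keeps only changes of at least 'minimum'.
	static List<AddressAmountData> changesSince(
			List<AccountBalanceData> currentBalances,
			List<AccountBalanceData> priorBalances,
			List<TransactionData> transactionsInRange,
			long minimum) {
		return BalanceRecorderUtils.buildBalanceDynamics(
				currentBalances, priorBalances, minimum, transactionsInRange);
	}
}
```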
Some files were not shown because too many files have changed in this diff.