Release branch into testnet #525

Merged (38 commits) on Dec 12, 2017
Commits:
f8f7e0d  Reduce the max-block-size that is used in new networks by default (xeroc, Oct 13, 2017)
ebe0755  Market his: properly remove old buckets. #472 (abitmore, Nov 12, 2017)
af11a5b  Market his: cap volume at max value of int64. #467 (abitmore, Nov 12, 2017)
5ee5642  Market his: prune old order matching data. #454 (abitmore, Nov 13, 2017)
07ed6f1  Market his: change by_id index to ordered_unique (theoreticalbts, Feb 29, 2016)
cecc926  Change get_trade_history API to use by_market_time (abitmore, Nov 13, 2017)
33f7552  Update new suppored boost versions to README (oxarbitrage, Nov 15, 2017)
1a264ac  Merge pull request #486 from bitshares/oxarbitrage-patch-6 (oxarbitrage, Nov 15, 2017)
8cd335f  [docker] Create the symlink in the launch script, not the Dockerfile (xeroc, Nov 16, 2017)
c39fa7c  Merge pull request #488 from bitshares/docker (oxarbitrage, Nov 16, 2017)
c881a81  Fix error message referring to the old p2p-port option (knaperek, Nov 18, 2017)
a9a55d7  Merge pull request #489 from knaperek/fix-docs-p2p-port (oxarbitrage, Nov 18, 2017)
e8da8e0  Merge pull request #478 from abitmore/472-market-his-size (oxarbitrage, Nov 20, 2017)
02b2fd7  Ensure receive_blind is only called after a successfull receive_from_… (btcinshares, Nov 22, 2017)
93d72c1  Fix small typo. (btcinshares, Nov 22, 2017)
bae4f8f  Merge pull request #494 from btcinshares/insufficent_balance_typo_fix (oxarbitrage, Nov 22, 2017)
eb152a5  Unit tests for get_potential_signatures API. #496 (abitmore, Nov 22, 2017)
eb19ec7  get_potential_signatures API returns owner keys (abitmore, Nov 22, 2017)
4263c13  Merge pull request #493 from btcinshares/blind_history_order_fix (oxarbitrage, Nov 22, 2017)
28397c6  [docker] Add cURL lib #476 and environmental variables for configuration (xeroc, Nov 23, 2017)
01c198f  Merge pull request #498 from bitshares/docker (oxarbitrage, Nov 23, 2017)
7272b69  Merge pull request #497 from abitmore/496-fix-get-potential-sigs (oxarbitrage, Nov 23, 2017)
6d2dadc  Merge pull request #1 from bitshares/develop (btcinshares, Nov 23, 2017)
4d8e6b6  Add sign_memo and read_memo wallet APIs. (btcinshares, Nov 27, 2017)
3ac7940  remove invalid field from fork database issue #66 (oxarbitrage, Nov 27, 2017)
db14a50  Merge pull request #508 from oxarbitrage/issue66 (oxarbitrage, Nov 28, 2017)
ef374cd  Merge pull request #507 from btcinshares/sign_memo_api (oxarbitrage, Nov 29, 2017)
ed11ede  Market his plugin: precalculate ticker data. #509 (abitmore, Nov 30, 2017)
43a8d04  get_ticker API now use precalculated data. #509 (abitmore, Nov 30, 2017)
19a421f  Change default max block size to even smaller (abitmore, Dec 11, 2017)
b860946  Merge pull request #419 from bitshares/reduce-genesis-max-block-size (abitmore, Dec 11, 2017)
ffeebe5  remove the not required logging (crazybits, Dec 8, 2017)
b373ad4  remove the log lines instead of comment (crazybits, Dec 8, 2017)
c13780c  Merge pull request #522 from bitshares/516-cli-logging (abitmore, Dec 11, 2017)
ee06ec1  Merge pull request #513 from abitmore/509-prepared-ticker (abitmore, Dec 11, 2017)
282f995  add version command to node (oxarbitrage, Dec 11, 2017)
9c3e4f1  add version to cli wallet + shortcuts (oxarbitrage, Dec 12, 2017)
b6d95fd  Merge pull request #524 from oxarbitrage/issue521 (oxarbitrage, Dec 12, 2017)
6 changes: 2 additions & 4 deletions Dockerfile
@@ -10,12 +10,13 @@ RUN \
cmake \
git \
libbz2-dev \
libreadline6-dev \
libreadline-dev \
libboost-all-dev \
libcurl4-openssl-dev \
libssl-dev \
libncurses-dev \
doxygen \
libcurl4-openssl-dev \
&& \
apt-get update -y && \
apt-get install -y fish && \
@@ -49,9 +50,6 @@ RUN chown bitshares:bitshares -R /var/lib/bitshares
# Volume
VOLUME ["/var/lib/bitshares", "/etc/bitshares"]

# default settings
RUN ln -f -s /etc/bitshares/config.ini /var/lib/bitshares

# rpc service:
EXPOSE 8090
# p2p service:
4 changes: 2 additions & 2 deletions README.md
@@ -43,8 +43,8 @@ To build after all dependencies are installed:

**NOTE:** BitShares requires an [OpenSSL](https://www.openssl.org/) version in the 1.0.x series. OpenSSL 1.1.0 and newer are NOT supported. If your system OpenSSL version is newer, then you will need to manually provide an older version of OpenSSL and specify it to CMake using `-DOPENSSL_INCLUDE_DIR`, `-DOPENSSL_SSL_LIBRARY`, and `-DOPENSSL_CRYPTO_LIBRARY`.

**NOTE:** BitShares requires a [Boost](http://www.boost.org/) version in the range [1.57, 1.60]. Versions earlier than
1.57 or newer than 1.60 are NOT supported. If your system Boost version is newer, then you will need to manually build
**NOTE:** BitShares requires a [Boost](http://www.boost.org/) version in the range [1.57, 1.63]. Versions earlier than
1.57 or newer than 1.63 are NOT supported. If your system Boost version is newer, then you will need to manually build
an older version of Boost and specify it to CMake using `DBOOST_ROOT`.

After building, the witness node can be launched with:
96 changes: 73 additions & 23 deletions docker/bitsharesentry.sh
@@ -4,32 +4,82 @@ BITSHARESD="/usr/local/bin/witness_node"
# For blockchain download
VERSION=`cat /etc/bitshares/version`

## seed nodes come from doc/seednodes.txt which is
## installed by docker into /etc/bitsharesd/seednodes.txt
# SEED_NODES="$(cat /etc/bitsharesd/seednodes.txt | awk -F' ' '{print $1}')"

## if user did not pass in any desired
## seed nodes, use the ones above:
#if [[ -z "$BITSHARESD_SEED_NODES" ]]; then
# for NODE in $SEED_NODES ; do
# ARGS+=" --seed-node=$NODE"
# done
#fi
## Supported Environmental Variables
#
# * $BITSHARESD_SEED_NODES
# * $BITSHARESD_RPC_ENDPOINT
# * $BITSHARESD_PLUGINS
# * $BITSHARESD_REPLAY
# * $BITSHARESD_RESYNC
# * $BITSHARESD_P2P_ENDPOINT
# * $BITSHARESD_WITNESS_ID
# * $BITSHARESD_PRIVATE_KEY
# * $BITSHARESD_TRACK_ACCOUNTS
# * $BITSHARESD_PARTIAL_OPERATIONS
# * $BITSHARESD_MAX_OPS_PER_ACCOUNT
# * $BITSHARESD_ES_NODE_URL
# * $BITSHARESD_TRUSTED_NODE
#

## Link the bitshares config file into home
## This link has been created in Dockerfile, already
#ln -f -s /etc/bitshares/config.ini /var/lib/bitshares
ARGS=""
# Translate environmental variables
if [[ ! -z "$BITSHARESD_SEED_NODES" ]]; then
for NODE in $BITSHARESD_SEED_NODES ; do
ARGS+=" --seed-node=$NODE"
done
fi
if [[ ! -z "$BITSHARESD_RPC_ENDPOINT" ]]; then
ARGS+=" --rpc-endpoint=${BITSHARESD_RPC_ENDPOINT}"
fi

if [[ ! -z "$BITSHARESD_PLUGINS" ]]; then
ARGS+=" --plugins=\"${BITSHARESD_PLUGINS}\""
fi

if [[ ! -z "$BITSHARESD_REPLAY" ]]; then
ARGS+=" --replay-blockchain"
fi

if [[ ! -z "$BITSHARESD_RESYNC" ]]; then
ARGS+=" --resync-blockchain"
fi

if [[ ! -z "$BITSHARESD_P2P_ENDPOINT" ]]; then
ARGS+=" --p2p-endpoint=${BITSHARESD_P2P_ENDPOINT}"
fi

if [[ ! -z "$BITSHARESD_WITNESS_ID" ]]; then
ARGS+=" --witness-id=$BITSHARESD_WITNESS_ID"
fi

## get blockchain state from an S3 bucket
# echo bitsharesd: beginning download and decompress of s3://$S3_BUCKET/blockchain-$VERSION-latest.tar.bz2
if [[ ! -z "$BITSHARESD_PRIVATE_KEY" ]]; then
ARGS+=" --private-key=$BITSHARESD_PRIVATE_KEY"
fi

## get blockchain state from an S3 bucket
#s3cmd get s3://$S3_BUCKET/blockchain-$VERSION-latest.tar.bz2 - | pbzip2 -m2000dc | tar x
#if [[ $? -ne 0 ]]; then
# echo unable to pull blockchain state from S3 - exiting
# exit 1
#fi
if [[ ! -z "$BITSHARESD_TRACK_ACCOUNTS" ]]; then
for ACCOUNT in $BITSHARESD_TRACK_ACCOUNTS ; do
ARGS+=" --track-account=$ACCOUNT"
done
fi

## Deploy Healthcheck daemon
if [[ ! -z "$BITSHARESD_PARTIAL_OPERATIONS" ]]; then
ARGS+=" --partial-operations=${BITSHARESD_PARTIAL_OPERATIONS}"
fi

if [[ ! -z "$BITSHARESD_MAX_OPS_PER_ACCOUNT" ]]; then
ARGS+=" --max-ops-per-account=${BITSHARESD_MAX_OPS_PER_ACCOUNT}"
fi

if [[ ! -z "$BITSHARESD_ES_NODE_URL" ]]; then
ARGS+=" --elasticsearch-node-url=${BITSHARESD_ES_NODE_URL}"
fi

if [[ ! -z "$BITSHARESD_TRUSTED_NODE" ]]; then
ARGS+=" --trusted-node=${BITSHARESD_TRUSTED_NODE}"
fi

## Link the bitshares config file into home
## This link has been created in Dockerfile, already
ln -f -s /etc/bitshares/config.ini /var/lib/bitshares

$BITSHARESD --data-dir ${HOME} ${ARGS} ${BITSHARESD_ARGS}
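The environment-variable translation above follows three patterns: repeatable options (seed nodes, tracked accounts) emit one flag per whitespace-separated item, scalar options emit a single `--flag=value`, and switches are enabled by any non-empty value. A minimal Python sketch of the same mapping (variable and flag names are taken from the script; the helper function itself is hypothetical):

```python
# Scalar options: one env var -> one "--flag=value" argument.
SCALAR_FLAGS = {
    "BITSHARESD_RPC_ENDPOINT": "--rpc-endpoint",
    "BITSHARESD_PLUGINS": "--plugins",
    "BITSHARESD_P2P_ENDPOINT": "--p2p-endpoint",
    "BITSHARESD_WITNESS_ID": "--witness-id",
    "BITSHARESD_PRIVATE_KEY": "--private-key",
    "BITSHARESD_PARTIAL_OPERATIONS": "--partial-operations",
    "BITSHARESD_MAX_OPS_PER_ACCOUNT": "--max-ops-per-account",
    "BITSHARESD_ES_NODE_URL": "--elasticsearch-node-url",
    "BITSHARESD_TRUSTED_NODE": "--trusted-node",
}

# Repeatable options: a whitespace-separated env var -> one flag per item.
LIST_FLAGS = {
    "BITSHARESD_SEED_NODES": "--seed-node",
    "BITSHARESD_TRACK_ACCOUNTS": "--track-account",
}

# Boolean switches: any non-empty value enables the flag.
SWITCH_FLAGS = {
    "BITSHARESD_REPLAY": "--replay-blockchain",
    "BITSHARESD_RESYNC": "--resync-blockchain",
}

def build_args(env):
    """Translate a dict of environment variables into witness_node arguments."""
    args = []
    for var, flag in LIST_FLAGS.items():
        for item in env.get(var, "").split():
            args.append(f"{flag}={item}")
    for var, flag in SCALAR_FLAGS.items():
        if env.get(var):
            args.append(f"{flag}={env[var]}")
    for var, flag in SWITCH_FLAGS.items():
        if env.get(var):
            args.append(flag)
    return args
```

For example, setting BITSHARESD_SEED_NODES to two addresses and BITSHARESD_REPLAY to any value yields two `--seed-node=` arguments followed by `--replay-blockchain`, matching what the shell loops produce.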
11 changes: 11 additions & 0 deletions libraries/app/application.cpp
@@ -58,6 +58,8 @@

#include <boost/range/adaptor/reversed.hpp>

#include <graphene/utilities/git_revision.hpp>

namespace graphene { namespace app {
using net::item_hash_t;
using net::item_id;
@@ -943,6 +945,7 @@ void application::set_program_options(boost::program_options::options_descriptio
("resync-blockchain", "Delete all blocks and re-sync with network from scratch")
("force-validate", "Force validation of all transactions")
("genesis-timestamp", bpo::value<uint32_t>(), "Replace timestamp from genesis.json with current time plus this many seconds (experts only!)")
("version,v", "Display version information")
;
command_line_options.add(_cli_options);
configuration_file_options.add(_cfg_options);
@@ -953,6 +956,14 @@ void application::initialize(const fc::path& data_dir, const boost::program_opti
my->_data_dir = data_dir;
my->_options = &options;

if( options.count("version") )
{
std::cout << "Version: " << graphene::utilities::git_revision_description << "\n";
std::cout << "SHA: " << graphene::utilities::git_revision_sha << "\n";
std::cout << "Timestamp: " << fc::get_approximate_relative_time_string(fc::time_point_sec(graphene::utilities::git_revision_unix_timestamp)) << "\n";
std::exit(EXIT_SUCCESS);
}

if( options.count("create-genesis-json") )
{
fc::path genesis_out = options.at("create-genesis-json").as<boost::filesystem::path>();
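The new `--version` option prints the embedded git revision information and exits before any expensive node initialization. A small Python sketch of that early-exit pattern (the version strings below are made-up stand-ins for the graphene::utilities::git_revision_* symbols, and `run` is a hypothetical helper):

```python
import argparse

# Hypothetical stand-ins for the git_revision_* constants baked in at build time.
GIT_REVISION_DESCRIPTION = "test-build"
GIT_REVISION_SHA = "b6d95fd"

def run(argv):
    # Parse only the flag we care about; unknown node options pass through.
    parser = argparse.ArgumentParser(add_help=False)
    parser.add_argument("--version", "-v", action="store_true")
    args, remaining = parser.parse_known_args(argv)
    if args.version:
        # Report and bail out before any node startup work, mirroring the
        # early std::exit(EXIT_SUCCESS) added to application::initialize().
        print(f"Version: {GIT_REVISION_DESCRIPTION}")
        print(f"SHA: {GIT_REVISION_SHA}")
        return 0
    # ...normal node startup would continue here with `remaining`...
    return len(remaining)
```

The companion commit wires the same flag (with the `-v` shortcut) into the cli_wallet.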
93 changes: 39 additions & 54 deletions libraries/app/database_api.cpp
@@ -111,7 +111,7 @@ class database_api_impl : public std::enable_shared_from_this<database_api_impl>
vector<collateral_bid_object> get_collateral_bids(const asset_id_type asset, uint32_t limit, uint32_t start)const;
void subscribe_to_market(std::function<void(const variant&)> callback, asset_id_type a, asset_id_type b);
void unsubscribe_from_market(asset_id_type a, asset_id_type b);
market_ticker get_ticker( const string& base, const string& quote )const;
market_ticker get_ticker( const string& base, const string& quote, bool skip_order_book = false )const;
market_volume get_24_volume( const string& base, const string& quote )const;
order_book get_order_book( const string& base, const string& quote, unsigned limit = 50 )const;
vector<market_trade> get_trade_history( const string& base, const string& quote, fc::time_point_sec start, fc::time_point_sec stop, unsigned limit = 100 )const;
@@ -1154,7 +1154,7 @@ market_ticker database_api::get_ticker( const string& base, const string& quote
return my->get_ticker( base, quote );
}

market_ticker database_api_impl::get_ticker( const string& base, const string& quote )const
market_ticker database_api_impl::get_ticker( const string& base, const string& quote, bool skip_order_book )const
{
const auto assets = lookup_asset_symbols( {base, quote} );
FC_ASSERT( assets[0], "Invalid base asset symbol: ${s}", ("s",base) );
@@ -1178,11 +1178,6 @@ market_ticker database_api_impl::get_ticker( const string& base, const string& q
auto quote_id = assets[1]->id;
if( base_id > quote_id ) std::swap( base_id, quote_id );

history_key hkey;
hkey.base = base_id;
hkey.quote = quote_id;
hkey.sequence = std::numeric_limits<int64_t>::min();

// TODO: move following duplicate code out
// TODO: using pow is a bit inefficient here, optimization is possible
auto asset_to_real = [&]( const asset& a, int p ) { return double(a.amount.value)/pow( 10, p ); };
@@ -1194,44 +1189,31 @@ market_ticker database_api_impl::get_ticker( const string& base, const string& q
return asset_to_real( p.quote, assets[0]->precision ) / asset_to_real( p.base, assets[1]->precision );
};

const auto& history_idx = _db.get_index_type<graphene::market_history::history_index>().indices().get<by_key>();
auto itr = history_idx.lower_bound( hkey );

bool is_latest = true;
price latest_price;
fc::uint128 base_volume;
fc::uint128 quote_volume;
while( itr != history_idx.end() && itr->key.base == base_id && itr->key.quote == quote_id )

const auto& ticker_idx = _db.get_index_type<graphene::market_history::market_ticker_index>().indices().get<by_market>();
auto itr = ticker_idx.find( std::make_tuple( base_id, quote_id ) );
if( itr != ticker_idx.end() )
{
if( is_latest )
price latest_price = asset( itr->latest_base, itr->base ) / asset( itr->latest_quote, itr->quote );
result.latest = price_to_real( latest_price );
if( itr->last_day_base != 0 && itr->last_day_quote != 0 // has trade data before 24 hours
&& ( itr->last_day_base != itr->latest_base || itr->last_day_quote != itr->latest_quote ) ) // price changed
{
is_latest = false;
latest_price = itr->op.fill_price;
result.latest = price_to_real( latest_price );
price last_day_price = asset( itr->last_day_base, itr->base ) / asset( itr->last_day_quote, itr->quote );
result.percent_change = ( result.latest / price_to_real( last_day_price ) - 1 ) * 100;
}

if( itr->time < yesterday )
if( assets[0]->id == itr->base )
{
if( itr->op.fill_price != latest_price )
result.percent_change = ( result.latest / price_to_real( itr->op.fill_price ) - 1 ) * 100;
break;
base_volume = itr->base_volume;
quote_volume = itr->quote_volume;
}

if( itr->op.is_maker )
else
{
if( assets[0]->id == itr->op.receives.asset_id )
{
base_volume += itr->op.receives.amount.value;
quote_volume += itr->op.pays.amount.value;
}
else
{
base_volume += itr->op.pays.amount.value;
quote_volume += itr->op.receives.amount.value;
}
base_volume = itr->quote_volume;
quote_volume = itr->base_volume;
}

++itr;
}

auto uint128_to_double = []( const fc::uint128& n )
@@ -1242,9 +1224,12 @@ market_ticker database_api_impl::get_ticker( const string& base, const string& q
result.base_volume = uint128_to_double( base_volume ) / pow( 10, assets[0]->precision );
result.quote_volume = uint128_to_double( quote_volume ) / pow( 10, assets[1]->precision );

const auto orders = get_order_book( base, quote, 1 );
if( !orders.asks.empty() ) result.lowest_ask = orders.asks[0].price;
if( !orders.bids.empty() ) result.highest_bid = orders.bids[0].price;
if( !skip_order_book )
{
const auto orders = get_order_book( base, quote, 1 );
if( !orders.asks.empty() ) result.lowest_ask = orders.asks[0].price;
if( !orders.bids.empty() ) result.highest_bid = orders.bids[0].price;
}

return result;
}
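With the precalculated market_ticker_object, get_ticker becomes a single index lookup plus a little arithmetic instead of a scan over the fill history. A simplified Python sketch of that arithmetic (asset precisions and the C++ price-object machinery are deliberately omitted; the record layout is an illustrative stand-in):

```python
def ticker_from_record(rec, base_is_first):
    """Derive ticker fields from one precalculated per-market record.

    `rec` stands in for market_ticker_object: each price is a
    (base_amount, quote_amount) pair, volumes are raw integer amounts.
    """
    latest_base, latest_quote = rec["latest"]
    day_base, day_quote = rec["last_day"]

    latest = latest_quote / latest_base  # price expressed as quote per base
    percent_change = 0.0
    # Only report a change when there is trade data from before the 24h
    # window and the price actually moved, as in the checks above.
    if day_base != 0 and day_quote != 0 and (day_base, day_quote) != (latest_base, latest_quote):
        last_day = day_quote / day_base
        percent_change = (latest / last_day - 1) * 100

    # The stored record is keyed with the lower asset id as "base"; swap
    # the volumes when the caller asked for the opposite orientation.
    if base_is_first:
        base_volume, quote_volume = rec["base_volume"], rec["quote_volume"]
    else:
        base_volume, quote_volume = rec["quote_volume"], rec["base_volume"]
    return latest, percent_change, base_volume, quote_volume
```

The order-book lookup for lowest ask and highest bid stays separate, which is why the new `skip_order_book` parameter lets get_24_volume avoid it entirely.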
@@ -1256,7 +1241,7 @@ market_volume database_api::get_24_volume( const string& base, const string& quo

market_volume database_api_impl::get_24_volume( const string& base, const string& quote )const
{
const auto& ticker = get_ticker( base, quote );
const auto& ticker = get_ticker( base, quote, true );

market_volume result;
result.time = ticker.time;
@@ -1348,11 +1333,6 @@ vector<market_trade> database_api_impl::get_trade_history( const string& base,
auto quote_id = assets[1]->id;

if( base_id > quote_id ) std::swap( base_id, quote_id );
const auto& history_idx = _db.get_index_type<graphene::market_history::history_index>().indices().get<by_key>();
history_key hkey;
hkey.base = base_id;
hkey.quote = quote_id;
hkey.sequence = std::numeric_limits<int64_t>::min();

auto asset_to_real = [&]( const asset& a, int p ) { return double( a.amount.value ) / pow( 10, p ); };
auto price_to_real = [&]( const price& p )
@@ -1367,13 +1347,12 @@
start = fc::time_point_sec( fc::time_point::now() );

uint32_t count = 0;
uint32_t skipped = 0;
auto itr = history_idx.lower_bound( hkey );
const auto& history_idx = _db.get_index_type<graphene::market_history::history_index>().indices().get<by_market_time>();
auto itr = history_idx.lower_bound( std::make_tuple( base_id, quote_id, start ) );
vector<market_trade> result;

while( itr != history_idx.end() && count < limit && !( itr->key.base != base_id || itr->key.quote != quote_id || itr->time < stop ) )
{
if( itr->time < start )
{
market_trade trade;

@@ -1418,12 +1397,6 @@
result.push_back( trade );
++count;
}
else // should skip
{
// TODO refuse to execute if need to skip too many entries
// ++skipped;
// FC_ASSERT( skipped <= 200 );
}

++itr;
}
@@ -1867,6 +1840,9 @@ set<public_key_type> database_api_impl::get_potential_signatures( const signed_t
const auto& auth = id(_db).active;
for( const auto& k : auth.get_keys() )
result.insert(k);
// Also insert owner keys since owner can authorize a trx that requires active only
for( const auto& k : id(_db).owner.get_keys() )
result.insert(k);
return &auth;
},
[&]( account_id_type id )
@@ -1879,6 +1855,15 @@
_db.get_global_properties().parameters.max_authority_depth
);

// Insert keys in required "other" authories
flat_set<account_id_type> required_active;
flat_set<account_id_type> required_owner;
vector<authority> other;
trx.get_required_authorities( required_active, required_owner, other );
for( const auto& auth : other )
for( const auto& key : auth.get_keys() )
result.insert( key );

wdump((result));
return result;
}
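The two fixes in this hunk both widen the candidate key set: owner keys are now returned for accounts whose active authority is required (owner can authorize anything active can), and keys listed directly in the transaction's "other" authorities are included as well. A simplified Python sketch (the account and authority structures are illustrative stand-ins, not the graphene types):

```python
def potential_signatures(active_ids, owner_ids, other_auths, accounts):
    """Collect every key that could usefully sign a transaction.

    `accounts` maps account id -> {"active": [keys], "owner": [keys]};
    `other_auths` is a list of key lists, standing in for the "other"
    authorities from trx.get_required_authorities().
    """
    result = set()
    for acct in active_ids:
        result.update(accounts[acct]["active"])
        # Fix in this PR: owner keys can satisfy an active-only requirement.
        result.update(accounts[acct]["owner"])
    for acct in owner_ids:
        result.update(accounts[acct]["owner"])
    # Fix in this PR: keys named directly in "other" authorities count too.
    for auth in other_auths:
        result.update(auth)
    return result
```

Returning a superset is safe here: callers intersect the result with the keys they actually control before signing.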
1 change: 0 additions & 1 deletion libraries/chain/fork_database.cpp
@@ -85,7 +85,6 @@ void fork_database::_push_block(const item_ptr& item)
auto& index = _index.get<block_id>();
auto itr = index.find(item->previous_id());
GRAPHENE_ASSERT(itr != index.end(), unlinkable_block_exception, "block does not link to known chain");
FC_ASSERT(!(*itr)->invalid);
item->prev = *itr;
}

2 changes: 1 addition & 1 deletion libraries/chain/include/graphene/chain/config.hpp
@@ -45,7 +45,7 @@

#define GRAPHENE_DEFAULT_BLOCK_INTERVAL 5 /* seconds */
#define GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE 2048
#define GRAPHENE_DEFAULT_MAX_BLOCK_SIZE (GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE*GRAPHENE_DEFAULT_BLOCK_INTERVAL*200000)
#define GRAPHENE_DEFAULT_MAX_BLOCK_SIZE (2*1000*1000) /* < 2 MiB (less than MAX_MESSAGE_SIZE in graphene/net/config.hpp) */
#define GRAPHENE_DEFAULT_MAX_TIME_UNTIL_EXPIRATION (60*60*24) // seconds, aka: 1 day
#define GRAPHENE_DEFAULT_MAINTENANCE_INTERVAL (60*60*24) // seconds, aka: 1 day
#define GRAPHENE_DEFAULT_MAINTENANCE_SKIP_SLOTS 3 // number of slots to skip for maintenance interval
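The old default was derived from the transaction size cap and multiplied out to roughly 2 GB per block, far above the p2p layer's message size limit; the new value is an explicit 2 MB that stays under that limit. A quick check of the arithmetic:

```python
# Constants from config.hpp
GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE = 2048  # bytes
GRAPHENE_DEFAULT_BLOCK_INTERVAL = 5           # seconds

# Old formula: transaction size * block interval * 200000
old_max_block_size = (GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE
                      * GRAPHENE_DEFAULT_BLOCK_INTERVAL * 200000)
# New explicit constant, chosen to sit just under 2 MiB
new_max_block_size = 2 * 1000 * 1000

print(old_max_block_size)  # 2048000000 bytes, roughly 2 GB
print(new_max_block_size)  # 2000000 bytes, i.e. 2 MB
```

2,000,000 bytes is below 2 MiB (2,097,152 bytes), matching the comment's claim that the new cap is less than MAX_MESSAGE_SIZE in graphene/net/config.hpp.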
5 changes: 0 additions & 5 deletions libraries/chain/include/graphene/chain/fork_database.hpp
@@ -44,11 +44,6 @@ namespace graphene { namespace chain {

weak_ptr< fork_item > prev;
uint32_t num; // initialized in ctor
/**
* Used to flag a block as invalid and prevent other blocks from
* building on top of it.
*/
bool invalid = false;
block_id_type id;
signed_block data;
};
2 changes: 1 addition & 1 deletion libraries/net/node.cpp
@@ -4564,7 +4564,7 @@ namespace graphene { namespace net { namespace detail {
error_message_stream << "Unable to listen for connections on port " << listen_endpoint.port()
<< ", retrying in a few seconds\n";
error_message_stream << "You can wait for it to become available, or restart this program using\n";
error_message_stream << "the --p2p-port option to specify another port\n";
error_message_stream << "the --p2p-endpoint option to specify another port\n";
first = false;
}
else