CypherPunk Movement

THE CYPHERPUNK MOVEMENT

Let’s make a journey back in time to see where blockchain technology and cryptocurrencies came from. It will take us back to the Cypherpunk movement, starting in the 1970s.

Cryptography for the People

Encryption was primarily used for military purposes before the 1970s. People at that time were living in an analog world. Few had computers and even fewer could imagine a technology that would connect almost every human being on the planet – the internet.

Two publications brought cryptography into the open, namely the “Data Encryption Standard” published by the US Government, and a paper called “New Directions in Cryptography” by Dr. Whitfield Diffie and Dr. Martin Hellman, published in 1976.

Dr. David Chaum started writing on topics such as anonymous digital cash and pseudonymous reputation systems in the 1980s, such as the ones described in “Security without Identification: Transaction Systems to make Big Brother Obsolete”. This was the first step toward the digital currencies we see today.

The Cypherpunks

We stand on the shoulders of Giants!
Hughes, May, Back, Finney, Gilmore, Szabo

It wasn’t until 1992 that a group of cryptographers in the San Francisco Bay area started meeting up on a regular basis to discuss their work and related ideas. They built a basis for years of cryptographic research to come.

Besides their regular meetings, they also started the Cypherpunk mailing list in which they discussed many ideas including those which led to the birth of Bitcoin.

In March 1993 Eric Hughes, one of the first cypherpunks, wrote “A Cypherpunk’s Manifesto”, laying out the ideals and vision of the movement.

Note: We encourage you to read A Cypherpunk’s Manifesto. The Manifesto is just as relevant today as it was in 1993. This short read takes only a few minutes of your time. It’s astonishing to see how much foresight the early members had at a time when most people had barely begun to think about computers.


A Cypherpunk’s Manifesto

An excerpt from the Manifesto:

“Privacy is necessary for an open society in the electronic age.

Privacy is not secrecy.

A private matter is something one doesn’t want the whole world to know, but a secret matter is something one doesn’t want anybody to know.

Privacy is the power to selectively reveal oneself to the world.”

“Privacy in an open society also requires cryptography.

If I say something, I want it heard only by those for whom I intend it.

If the content of my speech is available to the world, I have no privacy.

To encrypt is to indicate the desire for privacy, and to encrypt with weak cryptography is to indicate not too much desire for privacy.”

“We must defend our own privacy if we expect to have any.

We must come together and create systems which allow anonymous transactions to take place.

People have been defending their own privacy for centuries with whispers, darkness, envelopes, closed doors, secret handshakes, and couriers.

The technologies of the past did not allow for strong privacy, but electronic technologies do.”

“We the Cypherpunks are dedicated to building anonymous systems.

We are defending our privacy with cryptography, with anonymous mail forwarding systems, with digital signatures, and with electronic money.”


Electronic Cash

Although you might have just heard about this movement for the first time, you have most definitely benefitted from the efforts of some of their members in building Tor, BitTorrent, SSL, and PGP encryption. It should not surprise you that many concepts and ideas that originated from this group led to the emergence of cryptocurrencies.

In 1997, Dr. Adam Back created HashCash, which he proposed as a measure against spam. A little later, in 1998, Wei Dai published his idea for b-money and conceived the ideas of Proof-of-Work and Proof-of-Stake to achieve consensus across a distributed network. In 2005 Nick Szabo published a proposal for Bit Gold. There was no cap on the maximum supply but he introduced the idea to value each unit of Bit Gold by the amount of computational work that went into producing it. Although this is not how cryptocurrencies are valued, the price of production (comprised of hardware and electricity cost) plays a role in the pricing of these digital assets.

In 2008, Satoshi Nakamoto released the Bitcoin white paper, citing and building upon HashCash and b-money. Citations from his early communications and parts of his white paper, such as the following on privacy, suggest Nakamoto was close to the cypherpunk movement.

“The traditional banking model achieves a level of privacy by limiting access to information to the parties involved and the trusted third party. The necessity to announce all transactions publicly precludes this method, but privacy can still be maintained by breaking the flow of information in another place: by keeping public keys anonymous. The public can see that someone is sending an amount to someone else, but without information linking the transaction to anyone. This is similar to the level of information released by stock exchanges, where the time and size of individual trades, the ‘tape’, is made public, but without telling who the parties were.”

Technology did not enable strong privacy prior to the 20th century, but neither did it enable affordable mass surveillance. We believe in the human right to privacy and work towards enabling anyone who wishes to claim his or her privacy to do so. We see a cryptocurrency with selective privacy as a good step in the right direction of reclaiming our privacy.





Smart Contracts by Nick Szabo-1994


Nick Szabo

A smart contract is a computerized transaction protocol that executes the terms of a contract. The general objectives of smart contract design are to satisfy common contractual conditions (such as payment terms, liens, confidentiality, and even enforcement), minimize exceptions both malicious and accidental, and minimize the need for trusted intermediaries. Related economic goals include lowering fraud loss, arbitration and enforcement costs, and other transaction costs[1].

Some technologies that exist today can be considered as crude smart contracts, for example POS terminals and cards, EDI, and agoric allocation of public network bandwidth.

Digital cash protocols[2,3] are fine examples of smart contracts. They enable online payment while honoring the characteristics desired of paper cash: unforgeability, confidentiality, and divisibility.

When we take a second glance at digital cash protocols, considering them in the wider context of smart contract design, we see that these protocols can be used to implement a wide variety of electronic bearer securities, not just cash.

We also see that to implement a full customer-vendor transaction, we need more than just the digital cash protocol; we need a protocol that guarantees that product will be delivered if payment is made, and vice versa.

Current commercial systems use a wide variety of techniques to accomplish this, such as certified mail, face to face exchange, reliance on credit history and collection agencies to extend credit, etc.

Smart contracts have the potential to greatly reduce the fraud and enforcement costs of many commercial transactions. Digital cash protocols use several of the rich new building blocks coming out of the fields of cryptography and computer science.

Most of these components have not yet been widely exploited to facilitate contractual arrangements, but the potential is vast. These subprotocols include Byzantine agreement, symmetric and asymmetric encryption, digital signatures, blind signatures, cut & choose, bit commitment, multiparty secure computations, secret sharing, oblivious transfer, and multiparty secure computation. All of these except the first are described in [2,3].

The consequences of smart contract design on contract law and economics, and on strategic contract drafting, (and vice versa), have been little explored. As well, I suspect the possibilities for greatly reducing the transaction costs of executing some kinds of contracts, and the opportunities for creating new kinds of businesses and social institutions based on smart contracts, are vast but little explored.

The “cypherpunks”[4] have explored the political impact of some of the new protocol building blocks. The field of Electronic Data Interchange (EDI), in which elements of traditional business transactions (invoices, receipts, etc.) are exchanged electronically, sometimes including encryption and digital signature capabilities, can be viewed as a primitive forerunner to smart contracts. Indeed those business forms can provide good starting points and channel markers for smart contract designers.

One important task of smart contracts, that has been largely overlooked by traditional EDI, is communicating the semantics of the transaction to the parties involved.

There is ample opportunity in smart contracts for “smart fine print”: actions taken by the software hidden from a party to the transaction.

For example, grocery store POS machines don’t tell customers whether or not their names are being linked to their purchases in a database. The clerks don’t even know, and they’ve processed thousands of such transactions under their noses.

Thus, via hidden action of the software, the customer is giving away information they might consider valuable or confidential, but the contract has been drafted, and transaction has been designed, in such a way as to hide those important parts of that transaction from the customer.

To communicate transaction semantics well, we need good visual metaphors for the elements of the contract. These would hide the details of the protocol without surrendering control over the knowledge and execution of contract terms.

A primitive but good example is provided by the Secure Mosaic software from CommerceNet. Encryption is shown by putting the document in an envelope, and a digital signature by affixing a seal onto the document or envelope. On the other hand, Mosaic servers log connections, and sometimes even transactions, without warning users — classic hidden actions.

Another area that might be considered in smart contract terms is synthetic assets[5]. These new securities are formed by combining securities (such as bonds) and derivatives (options and futures) in a wide variety of ways.

Very complex term structures for payments (ie, what payments get made when, the rate of interest, etc.) can now be built into standardized contracts and traded with low transaction costs, due to computerized analysis of these complex term structures.

Synthetic assets allow us to arbitrage the different term structures desired by different customers, and they allow us to construct contracts that mimic other contracts, minus certain liabilities.

As an example of the latter, synthetic assets have been constructed that mimic the returns of stocks in German companies, without requiring payment of the tax foreigners must pay to the German government for capital gains in German stocks.

It’s important to note that these synthetics do _not_ confer voting rights as do the originals. It might be possible to add smart contract protocols to transfer voting rights to the synthetic.

Of course, these protocols might have to be quite secure to withstand attacks from the third party jurisdiction, whose transaction cost (the tax) is being arbitraged away by the synthetic asset.

Finally, we can extend the concept of smart contracts to property. Smart property might be created by embedding smart contracts in physical objects. These embedded protocols would automatically give control of the keys for operating the property to the agent who rightfully owns that property, based on the terms of the contract.

For example, a car might be rendered inoperable unless the proper challenge-response protocol is completed with its rightful owner, preventing theft. If a loan was taken out to buy that car, and the owner failed to make payments, the smart contract could automatically invoke a lien, which returns control of the car keys to the bank. This smart lien might be much cheaper and more effective than a repo man.

Also needed is a protocol to provably remove the lien when the loan has been paid off, as well as hardship and operational exceptions. For example, it would be rude to revoke operation of the car while it’s doing 75 down the freeway.

Smart property may be a ways off, but digital cash and synthetic assets are here today, and more smart contract mechanisms are being designed. So far the design criteria important for automating contract execution have come from disparate fields like economics and cryptography, with little cross-communication: little awareness of the technology on the one hand, and little awareness of its best business uses on the other.

The idea of smart contracts is to recognize that these efforts are striving after common objectives, which converge on the concept of smart contracts.

Copyright (c) 1994 by Nick Szabo
permission to redistribute without alteration hereby granted

Redistributed with respect & admiration from:

https://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html

Nick Szabo is so deeply ingrained in the modern digital currency landscape that one millionth (1/1,000,000th) of an Ether is called a “szabo”.





Running bitcoin – Hal Finney


Wonder In Peace Bright Mind

Join Honorary Chair Fran Finney and the Running Bitcoin Challenge Committee as we honor legendary cypherpunk Hal Finney.

This is THE EVENT that combines Hal Finney’s love of running and Bitcoin and is raising funds and awareness to help defeat ALS, which ultimately claimed his life in 2014.

You are challenged to run (or walk, roll, or hike) the equivalent of a half marathon — cumulatively or all at once — by the end of January 10, 2023.

From wherever you are, spread the word about Bitcoin, participate in a healthy activity, feel good about doing your part to defeat ALS, and start the year off right.


Hal Finney, one of the earliest bitcoin contributors, died eight years ago from complications of the nervous system disease amyotrophic lateral sclerosis (ALS).

His spouse, Fran Finney, is now organizing a half marathon to raise funds for ALS research via bitcoin.



The “Running Bitcoin Challenge” is set to take place between Jan. 1 and Jan. 10. The timing of the occasion leads up to the anniversary of Hal Finney’s “Running bitcoin” tweet, in which Finney famously disclosed he was deploying a Bitcoin node.

There is no set location — participants can choose to join anywhere they wish. Players are encouraged to either run, walk, roll or hike the equivalent of a half marathon (Hal’s favorite distance) either in one go or over the entire 10-day period.

Donors contributing at least $100 will receive an official shirt with the half marathon’s logo, while the event’s top 25 fundraisers will get a Hal Finney collectible signed by his wife.

As of Wednesday morning, the event has already managed to secure nearly $10,000 in bitcoin donations.

An advocate of cryptography and digital privacy, Finney was the recipient of the first-ever bitcoin transfer from the network’s pseudonymous creator Satoshi Nakamoto.

The bitcoin community often suspected Finney was Nakamoto, a claim he consistently denied. He reportedly found out about his condition in 2009 and decided to move away from the project.

Hal’s name is high in the Bitcoin pantheon as one of the first people to voice support for Satoshi Nakamoto’s invention and for being the first person to receive a Bitcoin transaction from Satoshi.

He was, for a time, considered one of the top contenders on the list of potential Satoshis himself (many in blockchain who reject Dr. Craig Wright’s statements still falsely believe Finney to be Bitcoin’s real creator).

Hal, who referred to himself as a “cypherpunk,” was a cryptographic activist who went from developing video games to working on the Pretty Good Privacy (PGP) project in the 1990s. He described his PGP work as “dedicated to the goal of making Big Brother obsolete.”

PGP creator Phil Zimmermann hired Hal as his first employee when PGP became PGP Corporation in the early 2000s. He described Hal as a “gregarious man” who loved skiing and long-distance running.

Despite gradual paralysis that eventually forced him to stop working, Hal continued to code software and follow the Bitcoin project.

Almost as famous as his 2009 tweet is his “Bitcoin and me” post on BitcoinTalk.org in March 2013, the last he’d ever make.

It’s a long post, and Hal was “essentially paralyzed” at the time, using an eye tracker to type. Forum stats show the post has been read over 278,000 times.

“When Satoshi announced the first release of the software, I grabbed it right away,” he wrote. “I think I was the first person besides Satoshi to run bitcoin. I mined block 70-something, and I was the recipient of the first bitcoin transaction when Satoshi sent ten coins to me as a test.

I carried on an email conversation with Satoshi over the next few days, mostly me reporting bugs and him fixing them.”

Hal himself always denied being Satoshi Nakamoto, adding later that he’d sold most of the Bitcoins he mined (at pre-2014 prices) to pay for his treatments. He also mentioned putting some in a safe deposit box for his children.

“And, of course, the price gyrations of bitcoins are entertaining to me.

I have skin in the game.

But I came by my bitcoins through luck, with little credit to me.

I lived through the crash of 2011.

So I’ve seen it before.

Easy come, easy go.”

Hal Finney

www.runningbitcoin.us

Admiration and great Respect


With 🧡

Trilemma of International Finance


The relative value of any two currencies—the exchange rate—is determined through their sale and purchase on the global foreign exchange market. If government policy interferes with this market by changing the relative supply or demand of currencies, the exchange rate is managed.

The trilemma of international finance is a restriction on government policy that follows immediately from the interaction of exchange rates, monetary policy and international capital flows.


Trilemma of International Finance

The trilemma states that any country can have only two of the following:

  • (1) Unrestricted international capital markets.
  • (2) A managed exchange rate.
  • (3) An independent monetary policy.

If the government wants a managed exchange rate but does not want to interfere with international capital flows, it must use monetary policy to accommodate changes in the demand for its currency in order to stabilize the exchange rate.

In the extreme, this would take the form of a currency board arrangement, where the domestic currency is fully backed by a foreign currency (as in the case of Hong Kong).

In such a situation, monetary policy can no longer be used for domestic purposes (it is no longer independent).

If a country wishes to maintain control over monetary policy to reduce domestic unemployment or inflation, for example, it must limit trades of its currency in the international capital market (it no longer has free international capital markets).

A country that chooses to have both unrestricted international capital flows and an independent monetary policy can no longer influence its exchange rate and, therefore, cannot have a managed exchange rate.



As shown in Pieters and Vivanco (2016), government attempts to regulate the globally accessible bitcoin markets are generally unsuccessful, and, as shown in Pieters (2016), bitcoin exchange rates tend to reflect the market, not official exchange rates.

Should the flows allowed by bitcoin become big enough, all countries will have, by default, unrestricted international capital markets.

Thus, with bitcoin, (1) unrestricted international capital markets is chosen by default.

Therefore, the only remaining policy choice is between (2) managed exchange rates or (3) independent monetary policy.

If the country chooses (1) and (2), it must use reactive monetary policy to achieve the managed exchange rate.

If the country chooses (1) and (3), it must have a floating exchange rate because it has no remaining tools with which to maintain a managed exchange rate.

Ali et al. (2014), the European Central Bank (2015) and the Bank for International Settlements (2015) all concur that cryptocurrencies may eventually undermine monetary policy.





With 💚

Au – 💲 – ₿



Gold is a chemical element with the symbol Au (from Latin: aurum) and atomic number 79, making it one of the higher atomic number elements that occur naturally.

It is a bright, slightly orange-yellow, dense, soft, malleable, and ductile metal in a pure form.

Chemically, gold is a transition metal and a group 11 element. It is one of the least reactive chemical elements and is solid under standard conditions.

Gold often occurs in free elemental (native) form, as nuggets or grains, in rocks, veins, and alluvial deposits. It occurs in a solid solution series with the native element  silver (as electrum), naturally alloyed with other metals like copper and palladium, and mineral inclusions such as within pyrite.

Less commonly, it occurs in minerals as gold compounds, often with tellurium (gold tellurides).

A relatively rare element, gold is a precious metal that has been used for coinage,  jewelry, and other arts throughout recorded history.

In the past, a gold standard was often implemented as a monetary policy.

Still, gold coins ceased to be minted as a circulating currency in the 1930s, and the world gold standard was abandoned for a fiat currency system after 1971.

As of 2017, the world’s largest gold producer by far was China, with 440 tonnes per year.

A total of around 201,296 tonnes of gold exists above ground, as of 2020. This is equal to a cube with each side measuring roughly 21.7 meters (71 ft).

Gold’s high malleability, ductility, resistance to corrosion and most other chemical reactions, and conductivity of electricity have led to its continued use in corrosion-resistant electrical connectors in all types of computerized devices (its chief industrial use).

The world consumption of new gold produced is about 50% in jewelry, 40% in investments and 10% in industry.

Gold is also used in infrared shielding,  colored-glass production, gold leafing, and tooth restoration. Certain gold salts are still used as anti-inflammatories in medicine.



F I A T


Fiat money (from Latin fiat, “let it be done”) is a type of money that is not backed by any commodity such as gold or silver, and is typically declared by government decree to be legal tender.

Throughout history, fiat money was sometimes issued by local banks and other institutions. In modern times, fiat money is generally established by government regulation.

Yuan dynasty banknotes are a medieval form of fiat money

Fiat money does not have intrinsic value  and does not have use value. It has value only because the people who use it as a medium of exchange agree on its value. They trust that it will be accepted by merchants and other people.

Fiat money is an alternative to commodity money, which is a currency that has intrinsic value because it contains a precious metal such as gold or silver which is embedded in the coin.

Fiat also differs from representative money, which is money that has intrinsic value because it is backed by and can be converted into a precious metal or another commodity.

Fiat money can look similar to representative money (such as paper bills), but the former has no backing, while the latter represents a claim on a commodity (which can be redeemed to a greater or lesser extent).

Government-issued fiat money banknotes were first used during the 11th century in China.

Fiat money started to predominate during the 20th century.

Since President Richard Nixon‘s decision to suspend the convertibility of the US dollar into gold in 1971, a system of national fiat currencies has been used globally.

Fiat money can be:

  • Any money that is not backed by a commodity.
  • Money declared by a person, institution or government to be legal tender, meaning that it must be accepted in payment of a debt in specific circumstances.
  • State-issued money which is neither convertible through a central bank to anything else nor fixed in value in terms of any objective standard.
  • Money used because of government decree.
  • An otherwise non-valuable object that serves as a medium of exchange (also known as fiduciary money.)

The term fiat derives from the Latin word  fiat, meaning “let it be done” used in the sense of an order, decree or resolution.


Bitcoin – Digital Gold

The most common, and best, way to think about bitcoin is as “digital gold”.

Like gold, bitcoin doesn’t rely on a central issuer, can’t have its supply manipulated by any authority, and has fundamental properties long considered important for a monetary good and store of value.

Unlike gold, bitcoin is extremely easy and cheap to “transport”, and its authenticity is trivial to verify.

Bitcoin is also “programmable”. This means custody of bitcoin can be extremely flexible. It can be split amongst a set of people (“key holders”), backed up and encrypted, or even frozen-in-place until a certain date in the future. This is all done without a central authority managing the process.

You can walk across a national border with bitcoin “stored” in your head by memorizing a key.
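To make that idea concrete, here is a minimal Python sketch (purely illustrative, and not how production wallets work) of turning a memorized passphrase into a candidate 256-bit private key. Real wallets use BIP-39 mnemonic phrases with key-stretching and hierarchical derivation; human-chosen “brain wallet” phrases like the hypothetical one below are easily brute-forced and should never hold funds.

```python
import hashlib

def key_from_passphrase(passphrase: str) -> bytes:
    # Hash the memorized phrase down to 32 bytes; the same phrase always
    # yields the same key, so nothing needs to cross a border but the words.
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

# Hypothetical passphrase for illustration only -- never use a guessable phrase.
candidate_key = key_from_passphrase("correct horse battery staple")
print(candidate_key.hex())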

The similarities to gold, plus the unique features possible because bitcoin is purely digital, give it the “digital gold” moniker.

Sharing fundamental properties with gold means it shares use-cases with gold, such as hedging inflation and political uncertainty.

But being digital, bitcoin adds capabilities that are especially relevant in our modern electronic times.

The world does indeed need a digital version of gold.


People’s Money



With 💚

Bitcoin Mining – Where the Profitable Future Lies



The Times – January 3, 2009

Bitcoin Genesis Block
Mined 03 January 2009

Cypherpunks Write Code

CODE IS LAW
THE SOONER HUMANKIND ACCEPTS IT,
THE SOONER IT CAN BUILD AROUND IT

Yeah.. I wonder Why 😂


Bitcoin made easy

How a Bitcoin transaction works

A humble Miner


How Bitcoin Mining Works

Mining Difficulty

Bitcoin Halving

Bitcoin Previous Halvings

Pools

Bitcoin Wallets

Bitcoin Stakeholders

Bitcoin Facts

Power to the People

Totalitarian Governments can kiss my 256-bit key

Bitcoin – People’s Money

Bitcoin cannot be Shut Down


The power of the long tail…



Central Bank’s 3 Strategies

F**k them, Enough !!!



Upcoming Smart Contracts Networks

Bitcoin Yearly Candles

Bitcoin Price History – Log Scale

Bitcoin Mining Ecosystem Map

Defi Ecosystem in Ethereum

DeFi Stack: Product & Application View

Syscoin Ecosystem


Syscoin

BSC Ecosystem

Popular Cryptocurrency

Crypto Ecosystem

Public Companies that own Bitcoin

Top Banks investing in Crypto

Bitcoin Inflation vs. Time

When you’re Ready…



Choose Wisely

Make bitcoin thrive, let fiat become humus…



Veritas non Auctoritas
Facit Legem
(Truth, not Authority, makes the Law)

Most people misunderstand what bitcoin miners actually do, and as a result they don’t fully grasp the level of security provided by bitcoin’s hashrate.

In this article, we’ll explain proof of work in a non-technical way so that you’ll be able to counter the misinformation about supercomputers and quantum computers attacking the Bitcoin network in the future. 

Simply put, mining is a lottery to create new blocks in the Bitcoin blockchain. There are two main purposes for mining:

  1. To permanently add transactions to the blockchain without the permission of any entity.
  2. To fairly distribute the 21 million bitcoin supply by rewarding new coins to miners who spend real world resources (i.e. electricity) to secure the network.

To understand what is actually happening in this lottery system, let’s look at a simple analogy where every Bitcoin hash is equivalent to a dice roll.


Luck, Gambling, and SHA-256


Imagine that the miners in the Bitcoin network are all individuals gambling at a casino. In this example, each of these gamblers has a 1000-sided die. They roll their dice as quickly as possible, trying to get a number less than 10. Statistically, this may take a very long time, but as more gamblers join the game, the time it takes for someone to hit a number less than 10 is reduced. In short, more gamblers equals quicker rounds.

Once somebody successfully rolls a number less than 10, all gamblers at the table can look down and verify the number. This lucky gambler takes the prize money and the next round begins.

Ultimately, the process of mining bitcoin is very similar. All miners on the network are using Application Specific Integrated Circuits (ASICs), which are specialized computers designed to compute hashes as quickly as possible.

To “compute a hash” simply means plugging any random input into a mathematical function and producing an output.

More hashes per second (i.e. higher hashrate) is equivalent to more dice rolls per second, and thus a greater probability of success.

Miners propose a potential Bitcoin block of transactions and use this as the input. The block is plugged into the SHA-256 hash function, which yields a fixed-size output known as a hash. A single hash can be computed in less than a millisecond, as it involves no complex math.

If the hash value is lower than the target set by the Bitcoin network difficulty, then the miner who proposed the block wins. If not, the miner keeps trying by computing more hashes.

The successful miner’s block is then added to the blockchain, the miner is rewarded with newly issued bitcoin for their work, and the “next round” begins.
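The whole “dice roll” loop above can be sketched in a few lines of Python. This is a toy model under simplified assumptions: the “block” is just a string, the target is set absurdly easy so it finishes quickly, and real miners hash an 80-byte block header on specialized ASICs, but the structure of the lottery is the same.

```python
import hashlib

def mine(block_data: str, target: int):
    """Roll the dice: hash the candidate block with an incrementing nonce
    until the double-SHA-256 value, read as an integer, falls below the target."""
    nonce = 0
    while True:
        header = f"{block_data}|nonce={nonce}".encode()
        digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest          # the "winning roll"
        nonce += 1

# Toy target: roughly a 1-in-65,536 chance per hash. A lower target means
# higher difficulty, exactly like needing a smaller number on the die.
toy_target = 2 ** (256 - 16)
nonce, winning_hash = mine("prev_hash|merkle_root|timestamp", toy_target)
print(f"found nonce {nonce}: {winning_hash}")
```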


Sources :

https://wikipedia.com/

https://braiins.com/

https://blockdata.com/

https://coin98analytics.com/

https://scoopwhoop.com/

https://stakingrewards.com/

https://syscoin.org/

https://galaxydigitalresearch.com/

https://surveycrest.com/

The Times

The Economist

"Internet of Money" - Andreas Antonopoulus

Hal Finney Quotes

Timothy C. May Quote

Free Spirit Digital Art

!°! If I forgot someone, sorry ! Do tell and I'll add you as a source of inspiration on the list !!! Thanks for understanding !!!


Questions, opinions, critiques and requests are always welcomed and, as time allows, will be accommodated !!! 🤓 🙂 😉


Did you find this article helpful?

If so, please consider a donation to help the evolution and development of more helpful articles in the future, and show your support for alternative articles.

Your generosity is 💚 ly appreciated

You can donate in any crypto your 💚 desires 😊

Thank you all for your time !!!

✌ & 💚


Bitcoin (BTC) :

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


LiteCoin(LTC) :

LYAdiSpsTJ36EWCJ5HF9EGy9iWGCwoLhed


Ethereum(ETH) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


EthereumClassic(ETC) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


Cardano(ADA) :

addr1q88c5cccnrqy6xesszzvf7rd4tcz87klt0m0h6uvltywqe8txwmsrrqdnpq27594tyn9vz59zv0n8367lvyc2atvrzvqlvdm9d


BinanceCoin(BNB) :

bnb1wwfnkzs34knsrv2g026t458l0mwp5a3tykeylx


BitcoinCash (BCH)

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


Bitcoin SV (BSV)

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


ZCash(ZEC) :

t1fSSQX4gEhove9ngcvFafQaMPq5dtNNsNF


Dash(DASH) :

XcWmbFw1VmxEPxvF9CWdjzKXwPyDTrbMwj


Shiba(SHIB) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


Tron(TRX) :

TCsJJkqt9xk1QZWQ8HqZHnqexR15TEowk8


Stellar(XLM) :

GBL4UKPHP2SXZ6Y3PRF3VRI5TLBL6XFUABZCZC7S7KWNSBKCIBGQ2Y54


A world where anything is possible…
The choice is yours People !!!


With 💚

The other 6 Billion

A Design For An Efficient Coordinated Financial Computing Platform


Jag Sidhu

Feb 25, 2021

Abstract

Bitcoin was the first to attempt a practical solution to the Byzantine Generals’ Dilemma using crypto-economic rationale and incentives. Ethereum was the first to abstract the concept of Turing completeness within similar frameworks to those assumed by Bitcoin.

What Syscoin presents is a combination of both Bitcoin and Ethereum with intuitions built on top to achieve a more efficient financial computing platform which leverages coordination to achieve consensus using Crypto Economic rationale and incentives.

We propose a four-layer tech stack using Syscoin as the base (host) layer, which provides an efficient (ie, low gas cost per transaction) platform.

Some of the main advantages include the ability to build scalable decentralized applications and the introduction of a decentralized cost model for Ethereum gas fees.

This new model proposes state-less parallelized execution and verification models while taking advantage of the security offered by the Bitcoin protocol. We may also refer to this as Web 3.0.

Table Of Contents

  • Abstract
  • Introduction
  • Syscoin Platform
  • Masternode Configuration
  • Chain Locks
  • Blockchain as a Computational Court
  • Scalability and Security
  • Efficiency
  • State Liveness and State Safety
  • Avoiding Re-execution of Transactions
  • Validity Proof Systems Overtop Proof-of-Work Systems
  • Quantum Resistance:
  • A Design Proposal for Web 3.0
  • Optimistic vs ZkRollup
  • Decentralized Cost Model
  • State-less Layer 1 Design
  • Related Works
  • Commercial Interests
  • Functional Overview
  • Give Me The Goods
  • Blockchain Foundry
  • Acknowledgements
  • References

Introduction

Syscoin is a cryptocurrency that borrows the security and trust models of Bitcoin, with services on top that make it conducive for businesses to build distributed applications through tokenization capabilities.

Syscoin has evolved since its introduction in 2013, when it offered a unique set of services through a coloured-coin implementation on top of Bitcoin.

These services included aliases (identity), assets (tokens), offers (marketplace), escrow (multisig payments between aliases and marketplaces), and certificates (digital credentials).

In its current iteration, it has evolved to serve the availability of consensus data rather than data storage itself, which requires liveness guarantees better suited to systems like Filecoin and IPFS.

The recent iteration of Syscoin, version 4.0, streamlined the on-chain footprint to exclusively serve assets, a service which requires on-chain data availability for double-spend protection.

Ultimately, the only data that belongs on the blockchain are proofs that executions occurred (eg, coin transfers, smart contract executions, etc.) and information required to validate those proofs.

We introduced high-throughput payment rails for our asset infrastructure through an innovation we called Z-DAG [1]. This innovation offered real-time probabilistic guarantees of double-spend protection and ledger settlement for real-time point-of-sale. As a result, the token platform is one step closer to mass adoption by providing scalable infrastructure and speed that met or exceeded what was necessary to transact with digital tokens in real-life scenarios.

In addition, we introduced a two-way bridge to interoperate trustlessly with Ethereum. This enables Ethereum users to benefit from fast, cheap and secure transactions on Syscoin, and Syscoin users to leverage the Turing-complete contract capabilities and ecosystem of Ethereum, all without custodians or third parties.

Every decision we’ve made has been with security in mind. We believe that one of the biggest advantages of Syscoin is that it is merge-mined with Bitcoin.

Rather than expend more energy, Syscoin recycles the same energy spent by Bitcoin miners in order to solve blocks while being secured by the most powerful cryptocurrency mining network available.

With this energy efficiency we were able to reduce the subsidy to miners and increase subsidy to masternodes without raising the overall inflation; see Fig 1 for configuration.

Unlike Dashpay’s, these masternodes are not what you might expect: their specific job is to run full nodes.

Fig 1: Masternode setup

Syscoin Platform

Today, Syscoin offers an asset protocol and deterministic validators as an enhancement on top of Bitcoin, as summarized below:

  • UTXO Assets
  • Compliance through Notary
  • Fungible and Non-Fungible tokens (Generic Asset infrastructure named SPT — Syscoin Platform Tokens)
  • Z-DAG for fast probabilistic onchain payments, working alongside payment channel systems like Lightning Networks
  • Deterministic validators (Masternodes) which run as Long-Living Quorums for distributed consensus decisions such as Chain Locks
  • Decentralized Governance, 10% of block subsidy is saved to pay out in a governance mechanism through a network wide vote via masternodes
  • Merged-mined with Bitcoin for shared work alongside Bitcoin miners

Masternode Configuration

With 2400+ masternodes running fullnodes, Z-DAG becomes much more dependable, as does the propagation of blocks and potential forks.

The masternodes are bonded through a loss-less strategy of putting 100,000 SYS in an output and running full nodes in exchange for block rewards.

A seniority model incentivizes the masternodes to share long-term growth by paying them more for the longer period of service. Half of the transaction fees are also shared between the PoW miners and masternodes to ensure long term alignment once subsidy becomes negligible.

The coins are not locked at any point, and there is no slashing condition; if masternodes decide to move their coins, the rewards to those masternodes simply stop.
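As a rough worked example (not Syscoin’s exact schedule), the reward flow described above can be sketched as follows. Only the 10% governance allocation and the 50/50 fee split are taken from this document; the base subsidy split between miner and masternode and the seniority bonus are hypothetical placeholders.

```python
def split_block_reward(subsidy: float, tx_fees: float,
                       masternode_share: float = 0.5,   # hypothetical base split
                       seniority_bonus: float = 0.0):   # hypothetical, grows with service time
    """Toy model: 10% of the subsidy goes to governance, the remainder is split
    between miner and masternode, fees are shared 50/50, and long-serving
    masternodes earn a bonus on their portion."""
    governance = 0.10 * subsidy
    distributable = subsidy - governance
    mn_subsidy = distributable * masternode_share * (1 + seniority_bonus)
    miner_subsidy = distributable - mn_subsidy
    return {
        "governance": governance,
        "masternode": mn_subsidy + tx_fees / 2,
        "miner": miner_subsidy + tx_fees / 2,
    }

# Hypothetical numbers purely for illustration.
print(split_block_reward(subsidy=100.0, tx_fees=2.0, seniority_bonus=0.35))
```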

Sharing Bitcoin’s compact block design, it consumes very little bandwidth to propagate blocks assuming the memory pool of all these nodes is roughly synchronized [2].

The traffic on the network primarily consists of propagating the missing transactions to validate these blocks. Having a baseline for a large number of full-nodes that are paid to be running allows us to create a very secure environment for users.

It imposes higher costs on would-be attackers, who either have to attempt a 51% attack on Syscoin (effectively also attacking the Bitcoin network) or try to game the mesh network by propagating bad information, which is made more difficult by incentivized full nodes.

The health of a decentralized network consists of the following:

(a) the mining component or consensus to produce blocks, and

(b) the network topology to disseminate information in a timely manner in conditions where adversaries might be lurking.

Other attacks related to race conditions in networking or consensus code are mostly negligible as Syscoin follows a rigorous and thorough continuous development process.

This includes deterministic builds, Fuzz tests, ASAN/MSAN/TSAN, functional/unit tests, multiple clients and adequate code coverage.

Syscoin and Bitcoin protocol code bases are merged daily such that the build/signing/test processes are all identical, allowing us to leverage the massive developer base of Bitcoin.

The quality of code is reflective of taking worst case situations into account. The most critical engineers and IT specialists need confidence that value is secure should they decide to move their business to that infrastructure.

It’s true that there are numerous new ideas, new consensus protocols and mechanisms for achieving synchronization among users in a system through light/full node implementations.

However, in our experience in the blockchain industry over the last 8 years, we understand that it takes years, sometimes generations to bring those functionalities to production level quality useful for commercial applications.

Chain Locks

With a subset of nodes offering sybil resistance through the requirement of bonding 100,000 SYS to become active, plus the upcoming deterministic masternode feature in Syscoin 4.2, we have enabled Chain Locks, which attempt to solve a long-standing security problem in Bitcoin [3]. Dashcore was the first project to implement this idea [4], and the industry has since widely accepted it as a viable solution [5].

Our implementation is an optimized version of this, in that we do not implement Instant Send or Private Send transactions and thus Syscoin’s Chain Lock implementation is much simpler.

Because of merged-mining functionality with Bitcoin, we believe our chain coupled with Chain Locks becomes the most secure via solving Bitcoin’s most vulnerable attack vector, selfish mining.

These Chain Locks are made part of Long-Living Quorums (LLMQ) which leverage aggregatable Boneh–Lynn–Shacham (BLS) signatures that have the property of being able to combine multiple signers in a Distributed Key Generation (DKG) event to sign on decisions. In this setup, a signature can be signed on a group of parties under threshold constraints without any one of those parties holding the private key associated with that signature. In our case, the signed messages would be a ChainLock Signature (CLSIG) which represent claims on what the block hashes represent of the canonical chain [4].

This model requires a very efficient threshold-signature design to be able to quickly come to consensus across the masternode layer, deciding on chain tips and locking the chain to prevent selfish-mining attacks. See [6] to understand the qualities of BLS signatures in the context of multi-sig use cases.

Ethereum 2.0 design centers around the use of BLS signatures through adding precompile opcodes in the Ethereum Virtual Machine (EVM) for the BLS12–381 curve [7] which Syscoin has adopted.

This curve was first introduced to the ZCash protocol in 2017 by Bowe [8]. Masternodes on Syscoin use this curve, with a BLS key associated with each validator. A performance comparison to ECDSA (secp256k1) [9] shows its usefulness in contrast to what Bitcoin and Syscoin natively use for signature verification.

Blockchain as a Computational Court

A computational court is a way of enforcing code execution on the blockchain’s state. This was first introduced by de la Rouvier [10].

Since the inception of  Syscoin  and  Blockchain Foundry we have subscribed to the idea that the blockchain should be used as a court system rather than a transaction processor.

This debate has stemmed from the block size debate in the Bitcoin community [11]. However, with recent revelations in cryptography surrounding Zero-Knowledge Proofs (ZKP) [12] and particularly Zero-Knowledge Scalable Transparent Arguments of Knowledge (zk-STARKs) [13], we propose a secure ledger strategy using the Bitcoin protocol as a court (ie, host layer), an EVM or eWASM (ie, operating system layer), computational scaling through ZKP (ie, SDK layer) and business verticals (ie, application layer); see Fig 2.

Fig 2: Four-layer tech stack

Scalability and Security

Scalability in blockchain environments is typically measured by total transactions per second (TPS) while maintaining full trustlessness, decentralization and liveness properties, as evidenced by something like Bitcoin.

If trade-offs are made to achieve higher scale it means another property is affected.

A full node is one that creates blocks and/or fully validates every block of transactions.

For the purpose of this discussion, we will refrain from expounding on designs where light clients are used to give the semblance of higher throughput, etc.

However, if two nodes are running the same hardware and doing the same work, the one that provides more TPS performance than the other is considered more scalable. This is not to be confused with throughput which is the measure of output that can be increased by simply adding more hardware resources. Hence, more throughput does not mean more scalable.

Some blockchains require the producers of blocks to run on higher specifications, offering higher throughput but not necessarily more scale.

However, there are projects which employ parallel processing to try to achieve higher scale whilst also enforcing more capable hardware to provide a more efficient overall system [33].

As a logical experiment, the throughput of a system divided by the scalability of the system is what we define as efficiency.

In the following sections, we will outline our proposal for improved efficiency.

Efficiency

The holy grail of blockchain design resides in the ability to have a ledger that can claim to be sublinear while retaining consistency, fault tolerance and full availability (ie, CAP Theorem).

This means there are roughly constant costs for an arbitrary amount of computation performed and being secured by that ledger.

This has always been thought of as impossible and it mostly is unless acceptable trade-offs appear in application designs and they are easy to understand and work around.

Most experts make the assumption that an O(1) ledger is simply impossible and thus design blockchains and force applications to work in certain ways as a result.

We will remove such assumptions and let business processes dictate how they work by giving the ability to achieve O(log^k n) for some constant k (ie, polylogarithmic) efficiency with trade-offs.

A polylogarithmic design would give the ability for almost infinite scaling over time for all intents and purposes.

The only bottlenecks would be how fast information can be propagated across the network which would improve over time as telecom infrastructure naturally evolves and increases in both capability and affordability.

Put in context, even the Lightning Network qualifies as a form of sublinear scaling on a transactional basis, but not per user, as users must necessarily enter the main chain first before entering a payment channel.

It requires the state of the blockchain to include the users joining the system.

This state (the UTXO balances) is the single biggest factor of efficiency degradation in Bitcoin.

Users need to first start on the main chain and then move into the payment channel system to receive money, meaning that scale is at best O(N), where N is the number of users.

There are some solutions to this problem of state storage on Bitcoin, reducing it via an alternative accumulator strategy at the cost of increased bandwidth [14].

This approach would make the chain state-less, however the validation costs would remain linear to the number of transactions being done. When combined with payment channels, only the costs to get in/out are factored into the validation and this offers an interesting design for payments themselves while providing for on-chain availability.

We consider this as a good path for futuristic scalable payments.

However, it is not possible to employ that strategy for general computations. With this design, we are still left with the question of how to do general computations with higher efficiency.

What we present is the ability to have a polylogarithmic chain at the cost of availability for both payments and general computations where business processes dictate availability policies, and users fully understand these limitations when using such systems.

Users may also be provided the ability to ensure availability for themselves and others at their discretion. This will be expounded upon in the following sections.

State Liveness and State Safety

While many compelling arguments can be made for migrating to a state-less design [15], it is not possible to achieve sublinear efficiency without sacrificing some other desired component that we outlined above.

To achieve polylogarithmic efficiency it’s necessary to have a mix of state and stateless nodes working together in harmony on a shared ledger [15].

This should be accomplished in such a way that business processes can dictate direction, and users can choose to pay a little more for security either by using a stateful yet very scalable ledgering mechanism or by paying to ensure their own data availability amortized over the life of that user on such systems.

Presenting the ability for users to make these choices allows us to separate the consensus of such systems and reduce overall complexity.

However, in whatever solution we adopt, we need to ensure that the final implementation allows for both the liveness and safety of that state, which are defined as follows:

State Liveness — Transferring coins in a timely manner

State Safety — Private custody

It is important to adhere to these concepts; if one cannot move one’s coins, then it is as if the coins were burned. Hence, if we had third-party custody in place, this would give rise to custodial solutions and lose the decentralized and trustless aspects of the solution, which again is not desired.

The options as described would allow users to decide their state liveness at their own discretion, while state safety is a required constraint throughout any system design we provide. The doorway to possibilities of sublinear design is opened by giving users the ability to decide.

Avoiding Re-execution of Transactions

In order to scale arbitrarily, independent of the number of transactions — a desired property of increasing throughput — one requires a mechanism to avoid re-executing transactions.

Further, ideally it would be able to batch these transactions together for a two-fold scaling proposition.

There are a few mechanisms in the literature that have attempted to avoid re-execution:

(a) TrueBit; (b) Plasma; and (c) Arbitrum.

Unfortunately, they require challenge response systems to ensure security, which leads to intricate attack vectors of unbounded risk/reward scenarios.

Multi-Party Computation (MPC) is a mechanism to have parties act under a threshold to decide on actions such as computational integrity of a smart contract. MPC is used in Syscoin for BLS threshold signatures for Chain Locks and Proof-of-Service in quorums of validators deterministically chosen using Fiat-Shamir heuristics on recent block hashes.

The problem with this approach is that validators may become corrupt, hence need to be wrapped in a consensus system along with DKG and random deterministic selection. This was an interesting topic of discovery for the Syscoin team early-on as a way to potentially scale smart contract execution but was ultimately discarded due to the incentive for risk/reward scenarios to favour attacks as the value of the transactions increases.
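A minimal sketch of the deterministic-selection idea mentioned above (conceptual only, not Syscoin’s actual LLMQ/DKG procedure): using a recent block hash as a public random seed, every node can rank masternodes by hashing the seed together with each node’s identifier and independently arrive at the same quorum.

```python
import hashlib

def select_quorum(masternode_ids: list[str], block_hash: str, quorum_size: int) -> list[str]:
    """Score each masternode by SHA-256(block_hash || id) and take the lowest
    scores; the result is deterministic, so no coordination round is needed."""
    def score(node_id: str) -> int:
        return int(hashlib.sha256(f"{block_hash}:{node_id}".encode()).hexdigest(), 16)
    return sorted(masternode_ids, key=score)[:quorum_size]

nodes = [f"mn{i:04d}" for i in range(2400)]   # ~2400 masternodes, as noted earlier
quorum = select_quorum(nodes, block_hash="00000a1b2c3d4e5f", quorum_size=7)
print(quorum)
```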

Hardware enclaves (eg, Intel SGX through remote attestation) were also of particular interest to the Syscoin team as a way to offload execution and avoid re-execution costs.

However, there are a myriad of attack vectors and censorship concerns on the Intel platform. We should also note that the Antarctica model was interesting but required a firmware update from Intel to support such a feature, which raises concerns over censorship long term.

The theme amongst all of these approaches is that, although re-execution is avoided, the communication complexity is still largely linear with the number of transactions on the main chain. The security and trust models are also different from the layer-1 assumptions, which was not desired.

Lacking viable solutions to avoid re-execution and enable sublinear overall complexity, we were led — in the development of Syscoin 4.0 — to build a trust-minimized two-way bridge between Syscoin and the Ethereum mainchain, offloading the concerns around smart contracts to Ethereum.

With the advent of such promising technology as ZKP and the optimizations happening around them, we have re-considered the possibilities and believe this will play an important role in the development of Web 3.0. This mathematical breakthrough led us to re-test our assumptions and options related to our desired design.

ZKP gives us the desired superlinear scaling trait we had been looking to achieve, but it also offers other benefits; namely, privacy is very easy to introduce and will not add detectable costs or complexity to verification on the mainchain.

With users controlling their own data, the mainchain and systems may be designed such that only balance adjustments are recorded, not transaction sets (we will explain the case with full data availability below). In this scenario there is no advantage for a miner to collude with users to launch attacks on systems such as Decentralized Finance (DeFi) pools and the provenance of transactions.

The flexibility has to be there though for application developers that need experiences consistent with those we have today with Bitcoin/Syscoin/Ethereum, and to enable the privacy use-cases without requiring extra work, knowledge or costs.

Fig 3: Host and EVM layer

Validity Proof Systems Overtop Proof-of-Work Systems

Prior to the use of Proof Systems, the only option for “Validity Proofs” in a permissionless system involved naive replay, and as such greatly limited scalability; in essence this replay is what is still done today in Layer-1 blockchain (L1) solutions, with the known penalty to scalability.

Proof Systems offer a very appealing trait known as succinctness: in order to validate a state transition, one needs to only verify a proof, and this is done at a cost that is effectively independent of the size of the state transition (ie, polylogarithmic in the size of the state transition).
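Succinctness is the key property here. A full validity-proof system is far beyond a short example, but the flavor can be shown with a much simpler succinct proof: a Merkle inclusion proof, where checking that one transaction belongs to a committed batch costs O(log n) hashes instead of re-processing all n transactions. The sketch below is illustrative only and is not tied to any specific rollup implementation.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Return the O(log n) sibling hashes needed to re-derive the root."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling is on the left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root

txs = [f"tx{i}".encode() for i in range(1024)]
root = merkle_root(txs)
proof = merkle_proof(txs, 5)                    # only 10 hashes for 1024 transactions
assert verify(txs[5], proof, root)
```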

For maximal financial security, the amount of value being stored should depend on the amount of security provided on the settlement side of the ledger.

Proof-of-Work offers the highest amount of security guarantees. Our next generation financial systems begin with optimal ledgering security and add proof systems on top for scaling. Block times are not as important in a world where most users and activity are on Layer-2 blockchain (L2) validity proof based systems.

This liberates engineers who are focused on scalability to define blocks better; safe block times plus the maximal amount of data bandwidth that can be safely propagated in a time sensitive manner across full nodes in the network.

In Syscoin there are incentivized full nodes (ie, deterministic masternodes), so again we can maximize the bandwidth of ledgering capabilities while retaining Bitcoin Proof-of-Work (PoW) security through merged-mining.

Quantum Resistance:

Table 1: Estimates of quantum resilience for current cryptosystems (see [20])

As seen in Table 1, hashing with the SHA-256 algorithm is regarded as quantum safe because attacking it requires Grover’s algorithm, which offers at best a quadratic speedup, effectively halving the bits of security rather than breaking the hash outright.

On the other hand, where Shor’s algorithm applies, any cryptosystem based on public/private key pairs could be broken in a matter of hours.

For L2, we propose to implement ZKP in the SDK Layer (see Fig 2); namely Non-Interactive Zero Knowledge Proofs (NIZKP).

Popular implementations of NIZKP include Zero-Knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARKS) and Zero-Knowledge Scalable Transparent ARguments of Knowledge (zk-STARKS).

There are some zk-STARK/zk-SNARK-friendly ciphers employed in zkRollup designs, such as MiMC and Pedersen hashes, for which we lack certainty on classical security; we are nevertheless hopeful, as they would offer quantum resistance within ZKPs.

It is important to note that Bitcoin was developed with change addresses in mind: exposing only the hash of a public key means a quantum computer must use Grover’s algorithm to attempt to steal that Bitcoin. Each time a Bitcoin Unspent Transaction Output (UTXO) is spent, the public key is exposed, and a new change address (which does not expose the public key) receives the change.
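A small sketch of why this matters (simplified; real address encoding adds a version byte and Base58Check or bech32): a pay-to-public-key-hash output commits only to the 20-byte HASH160 of the public key, and the key itself appears on-chain only when the coins are spent. Note that hashlib’s ripemd160 depends on the underlying OpenSSL build, so this may not run everywhere.

```python
import hashlib

def hash160(pubkey: bytes) -> bytes:
    # HASH160 = RIPEMD-160(SHA-256(pubkey)); availability of ripemd160
    # depends on the local OpenSSL build.
    return hashlib.new("ripemd160", hashlib.sha256(pubkey).digest()).digest()

# Hypothetical 33-byte compressed public key, for illustration only.
pubkey = bytes.fromhex("02" + "11" * 32)

# An unspent P2PKH output reveals only this hash; Grover's algorithm is the
# best known quantum attack against it.
print("pubkey hash:", hash160(pubkey).hex())

# Only the spending transaction reveals the public key itself, which is when
# Shor's algorithm would become relevant -- hence unreused change addresses
# keep that exposure window short.
```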

With this in mind, any scalable L2 solution should be quantum resistant because otherwise we undermine Bitcoin design as the gold standard of security.

Fig 4: zkSync Rollup design

A Design Proposal for Web 3.0

The following describes the 4-layers (see Fig 2) of Syscoin’s proposed tech stack for Web 3.0:

[Host Layer] Bitcoin’s design is the gold standard for security and decentralization.

Proof-of-work and Nakamoto Consensus settlement security are widely regarded by academics as the most hardened solution for ledgering value.

It’s possible this may change, however it’s also arguable that the intricate design encompassing Game Theory, Economics, risk reward ratios for attack, and the minimal amounts of compromising attack vectors is likely not to change for the foreseeable future.

UTXOs (and payments with them) are more efficient than account-based or EVM-based models. That said, Bitcoin itself suffers from not being expressive enough to build abstractions for general computation.

[Operating System Layer]

EVM/eWASM is the gold standard for general computation because of its wide adoption in the community.

Anyone building smart contracts is likely using this model or will continue to use it as the standard for autonomous general computation with consensus.

[SDK Layer]

Zero-knowledge proofs are the gold standard for generalized computation scaling for blockchain applications. They enable one-time execution via a prover and enable aggregate proof checking instead of re-execution of complex transactions.

zk-STARKs, or zk-SNARKs built on collision-resistant hash functions, rely only on weak cryptographic assumptions and are therefore considered quantum safe.

Generalized smart contracts are not quite there yet, but we are quickly approaching the day (eg, Cairo, Zinc) when abstractions will let most Solidity code trans-compile into a natively zero-knowledge-aware compiler, similar to how the .NET runtime and C# provide an abstraction layer on top of C/C++.

[Application Layer]

Verticals or applications applying the above SDK to define business goals.

Surprisingly, these ideals represent a design that is not shared with any other project in the industry, including Bitcoin or Ethereum.

We feel these ideals, fashioned together in a singular protocol, could possibly present a grand vision for a “World Computer” blockchain infrastructure.

Syscoin has already implemented Geth + Syscoin nodes in a single application instance (ie, release 4.2), and we foresee that it will not prove too challenging to have them cooperate at the consensus level, working together to form a dual chain secured by Syscoin's PoW.

Fig 5: Proposed design

Fig 5 describes a system where nodes are running two sets of software processes, the Syscoin chain protocol and an EVM/eWASM chain protocol which are kept in sync through putting the EVM tip hash into the Syscoin block. Both have their own individual mempools and effectively the Ethereum contracts, tools and processes can directly integrate as is into the EVM chain as it stands.

Note that the two chains are processes running on the same computer together. Thus a SYS NODE and EVM NODE would be operating together on one machine instance (ie, Masternode) with ability to communicate with each other directly through Interprocess Communication (IPC).

The intersection between the two processes happens at three points:

  1. The miner of the EVM chain collects the latest block hash and places it into the Syscoin block.

  2. When validating Syscoin blocks, nodes confirm the validity of the EVM tip by consulting the EVM chain software locally.

  3. Fees for the EVM chain are to be paid in SYS. We need an asset representing SYS on the EVM chain, which will be SYSX.

We will enable this through a similar working concept that we’ve already established (SysEthereum Bridge).

We may also enable pre-compiles on the EVM chain side to extract Syscoin block hashes and merkle roots to confirm validity of SYS to SYSX burn transactions.

This design separates concerns by not complicating the PoW chain with EVM execution information, keeping the processes separate yet operating within the same node.

To further delineate point 1 (see above), a miner would mine both chains. With Syscoin being merged-mined, the work spent on Bitcoin would be shared to create a Syscoin block that includes the EVM chain within it as a ledgering event representing the latest smart contract execution state (composed of Chain Hash, State Root, Receipt Root, and Transaction Trie Root).

Since the EVM chain has no consensus of its own attached, a block can technically be created at any point in time. Creation of Syscoin and EVM blocks will be nearly simultaneous, occurring every minute on average.

Fig 6: Merge mining on Syscoin

As seen in Fig 6, work done on BTC is reused to create SYS blocks through the  merged-mining specification. Concurrently, the miner will execute smart contracts in the memory pool of the node running the EVM chain. Once a chain hash has been established post-execution, it will be put into the coinbase of the Syscoin block and published to the network. Upon receiving these blocks, every node would verify that the EVM chain which they would locally execute (ie, similar to the miner) matches the state described by the Syscoin block.

Technically, one would want to ensure both the latest and previous EVM block hashes inside of their respective Syscoin blocks are valid.

The check block->evmblock == evmblock && block->prev == evmblock->prev is all that is needed to link the chains together with the work done by Bitcoin, which is propagated to Syscoin through AUXPOW and can serve as a secure ledgering mechanism for the EVM chain.
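As a concrete sketch of that linkage check (field names are hypothetical and do not correspond to actual Syscoin data structures), a validating node would confirm that both the latest and previous EVM tip hashes committed in their respective Syscoin blocks match the locally executed EVM chain:

```python
from dataclasses import dataclass

@dataclass
class SysBlock:
    evm_tip_hash: str          # EVM tip hash committed in this Syscoin block's coinbase
    prev: "SysBlock | None" = None

@dataclass
class EvmBlock:
    hash: str
    parent_hash: str

def evm_commitment_valid(sys_block: SysBlock, local_evm_tip: EvmBlock) -> bool:
    # block->evmblock == evmblock && block->prev == evmblock->prev
    latest_ok = sys_block.evm_tip_hash == local_evm_tip.hash
    prev_ok = (sys_block.prev is None or
               sys_block.prev.evm_tip_hash == local_evm_tip.parent_hash)
    return latest_ok and prev_ok
```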

Since (a) we may use eWASM; (b) there are paid full nodes running on the network; and (c) the mining costs are shared with Bitcoin miners, we should be able to safely increase the amount of bandwidth available on the EVM chain while remaining secure against high uncle/orphan rates.

There has been much discussion as to what the safe block size should be on Ethereum. Gas limits are increasing as optimizations are made on the Ethereum network.

However, since this network would be ledgered by the Syscoin chain through PoW, there would be no concern for uncle orphaning of blocks since the blocks must adhere to the policy set inside of the Syscoin block. We should therefore be able to increase bandwidth significantly and parameterize for a system that will scale globally yet still be centered around L2 rollup designs.

A very important distinction here is that the design of Ethereum 2.0 centers around a Beacon chain and sharding served by a Casper consensus algorithm. The needs of the algorithm require a set of finality guarantees, necessitating a move towards Proof-of-Stake (PoS).

This has large security implications for which we may not have formal analysis for a long time, however we do know it comes with big risk.

We offer similar levels of scalability while retaining Nakamoto Consensus security. The simpler design, which has been market tested and academically verified to work, leads to a more efficient system as a whole with fewer unknown and undocumented attack vectors.

The only research still needed, therefore, is on the optimal parameterization of the gas limit, taking into account an L2-centric system as well as a safe number of users we expect to serve before fee-market mechanisms begin to regulate the barrier to entry for those users.

This proposed system should be scalable enough to serve the needs of global generalized computation while sticking to the core fundamentals set forth in the design ideals above. Our upcoming whitepaper will have more analysis on these numbers but we include some theoretical scaling metrics at the end of this article.

Optimistic vs ZkRollup

ZKPs are excellent for complex calculations above and beyond simple balance transfers. For payments, we feel UTXO payment channels combined with something like Z-DAG are an optimal solution.

However, we are left with rollup solutions for generalized computation involving more complex calculations requiring consensus.

Whatever solution we adopt has to be secured by L1 consensus that is considered decentralized and secure, which we achieve via merged-mining with Bitcoin.

There are two types of rollup solutions today:

(a) Optimistic rollups (OR); and (b) zkRollups. Each offers different trade-offs.

Consensus about which chain or network you’re on is a really hard problem that is solved for us by Nakamoto consensus. We build on that secure longest chain rule (supplemented by Chain Locks to prevent selfish mining) to give us the world-view of the rollup states. The executions themselves can be done once by a market of provers, never to be re-executed, only verified, meaning it becomes an almost constant cost on an arbitrarily large number of executions batched together. With OR you have the same world-view but the world-view is editable without verifying executions. The role of determining the validity of that world-view is delegated to someone watching who provides guarantees through crypto-economics. Zero-knowledge proofs remove crypto-economics on execution guarantees and replace them with cryptography.

See [26] for a comparison between fraud proofs (optimistic) and validity proofs (zk).

Key takeaways from this article are as follows:

  • Eliminate a nasty tail risk: theft of funds from OR via intricate yet viable attack vectors;
  • Reduce withdrawal times from 1–2 weeks to a few minutes;
  • Enable fast tx confirmations and exits in practically unlimited volumes;
  • Introduce privacy by default.

One point missing is interoperability. A generalized form of cross-chain bridging can be seen in Chain A locking tokens based on a preimage commitment by Chain B to create a zero-knowledge proof, followed by verification of that proof as the basis for manifesting equivalence on Chain B. Any blockchain with the functionality to verify these proofs could participate in the ecosystem.
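A highly simplified sketch of that bridging flow, with every function a hypothetical placeholder rather than a real bridge API:

```python
# Illustrative pseudo-flow of the generalized ZK bridge described above.
def bridge(chain_a, chain_b, amount, commitment):
    """commitment: preimage commitment supplied by Chain B."""
    lock_tx = chain_a.lock_tokens(amount, commitment)   # tokens locked on Chain A
    proof = chain_a.prove_lock(lock_tx)                 # ZK proof that the lock happened
    if chain_b.verify_proof(proof, commitment):         # any chain able to verify ZKPs
        chain_b.mint_equivalent(amount)                 # may manifest the equivalent asset
```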

Our vision here is described using a zkRollup centric world-view, yet it can be replaced with other technologies should they be able to serve the same purpose. As an infrastructure we are not enforcing one or the other; developers can build on what they feel best suits their needs. We believe we are close to achieving this, and that the technology is nearing the point of being ready for the vision set forth in this article.

Decentralized Cost Model

Decentralized cost models lead to exponential efficiency gains in economies of scale. We set forth a more efficient design paradigm for execution models reflective of user intent. This design uses the UTXO model to reflect simple state transitions and a ZKP system for complex computations leading to state transitions. This leads to better scalability for a system by allowing people to actively make their trade-off within the same ecosystem, driven by the same miners securing that ecosystem backed by Bitcoin itself.


Furthermore, a decentralized cost model contributes to scalability in that ZKP gates can generalize complex computation better than fee-market resources like gas or the CPU/memory markets of EOS, etc. This leads to more deterministic and efficient consumption of resources, maximizing efficiency in calculations, and gives the opportunity to scale up or down based on economic incentives without creating monopolistic opportunities, unlike ASIC mining.

In other words, the cost is dictated by what the market can offer, via the cost of compute power (as dictated by Moore’s law), rather than the constrained costs of doing business on the blockchain itself.

This model could let the computing market dictate the price for Gas instead of being managed by miners of the blockchain. The miners would essentially only dictate the costs of the verification of these proofs when they enter the chain rather than the executions themselves.

With the progress happening around ZKP, and with a decentralized cost model, it will be much easier to understand the costs of running prover services, as well as how those costs scale with the number of users and the parameters of the systems businesses would like to employ. All things considered, it will be easier to make accurate decisions on data availability policies and on the consensus systems needed to keep the system censorship resistant and secure.

Rollups will be friends: with many rollup systems each doing X TPS under the same trust model, we in effect get global rates of X*Y (where X is the TPS of each sidechain/rollup and Y is the number of sidechains and rollups that exist). X is fairly static in that the execution models of rollups do not change drastically (and if they do, the majority of rollup or sidechain designs end up switching to the most efficient execution design over time).

State-less Layer 1 Design

The single biggest limiting factor of throughput in blockchains is storage of, and access to, the global state.

More specifically, in Bitcoin it is the UTXO set, and in Ethereum it is the Account Storage and World State tries. State lookups typically require an SSD in Ethereum full nodes because real-time processing of transactions on block arrival is critical to reaching consensus, especially for newly arriving blocks (ie, every 10–15 seconds).

As state and storage costs rise, the number of fully validating nodes decreases because of the resources required to fully validate and to provide timely responses to peers. Consequently, network health suffers due to the risk of consensus centralizing among the subset of peers still running full nodes.

State-less designs are an obvious preference: they solve these problems using alternative mechanisms to validate the chain without requiring continuous updates to the global state.

In a rollup, smart contracts on L1 do not access the global state except when entering or exiting the rollup. Therefore, contracts that provide full data availability on-chain (ie, zkRollup) only require state updates for the local set of users within that L2. Under designs where data availability is kept off-chain, there is no state update on L1 at all, except on entry and exit.

The latter case classifies as purely state-less, whereas zkRollup mode can be considered partially state-less. Since these L1 contracts are state-less with respect to the global state, nodes on the network can parallelize verification of any executions that do not involve entering or exiting. This is in addition to the organic, natural parallel execution of the transactions composing the rollup's aggregated transactions posted on L1.

State-less layer 1 designs also allow for parallelizable smart contract execution verification. The parallelization of smart contracts running on L1 in the EVM model is a recent topic of research which involves defining "intent" for the execution of a contract (because nodes do not know ahead of time what global state a smart contract execution will access).

Adding the intent of a transaction as part of its commitment would allow nodes to reject it if the execution of the contract did not correspond with that intent, possibly costing the user fees for invalid commitments.

Although these designs may be flexible, they come at the cost of additional complexity through sorting, filtering and general logic that may be susceptible to intricate attacks.

In our case, the transaction can include a field understood by the EVM that denotes whether it intends to use global state in any way (for rollups this would typically be false), and we can simply reject any access to global state for those specific types of executions.

This allows nodes to execute these specific types of transactions in parallel, knowing that no access to global state is allowed during execution. If a transaction is rejected due to an incorrectly set field, the fees are still spent, preventing users from purposefully setting the field incorrectly.
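A minimal sketch of how such an intent flag could be used, assuming a hypothetical transaction format (this is not Syscoin or EVM code): transactions that declare no global-state access are verified in parallel, and a mis-declared transaction is rejected but still charged.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Tx:
    sender: str
    fee: int
    uses_global_state: bool   # declared intent; typically False for rollup batches
    payload: bytes

def verify_stateless(tx: Tx) -> bool:
    # Placeholder for verification that touches no global state
    # (e.g. checking a rollup validity proof carried in the payload).
    return len(tx.payload) > 0

def process(txs, charge_fee):
    stateless = [t for t in txs if not t.uses_global_state]
    stateful = [t for t in txs if t.uses_global_state]
    with ThreadPoolExecutor() as pool:               # safe: no shared state is touched
        for tx, ok in zip(stateless, pool.map(verify_stateless, stateless)):
            if not ok:
                charge_fee(tx)                       # rejected, but fees still spent
    for tx in stateful:                              # rollup entries/exits handled sequentially
        pass
```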

Related Works

The following organizations offer various open source third party L2 scaling solutions:

Starkware is built using a general purpose language (Cairo) with Solidity (EVM) in mind, as is Matter Labs with the Zinc language. Hermez developed custom circuits tailor-suited to fast transactions and Decentralized Exchange (DEX) like capability. These will be able to directly integrate into Syscoin without modification.

As such, the optimizations and improvements they make should directly be portable to Syscoin, hence becoming partners to our ecosystem.

Aleo uses Zero knowledge EXEcution (Zexe) for zk-SNARK proof creation through circuits built from R1CS constraints. What is interesting about Aleo is that the ledger itself is purpose-built to only verify these Zexe proofs for privacy-preserving transactability. The consensus is PoW, while the proof system is optimized around the ability to verify these proofs efficiently.

The more efficient miners become at verifying these proofs, the faster they are able to mine; the system thus provides sybil resistance by offering resources to verify Zexe proofs as a service in exchange for block creation.

However, these proof creations can be done in parallel based on the business logic developers need to create. There is no direct need for on-chain custom verification, as proofs can be verified in an EVM contract, similar to what the Cairo Generic Proving Service (GPS) verifier and Zinc Verification do.

The goal of Aleo is to incentivize miners to create specialized hardware to more efficiently mine blocks with verification proofs.

However, provers can also do this, as we have seen with Matter Labs' recent release of practical hardware for zero-knowledge proof acceleration [27]. Using PoW to achieve "world-view" consensus is a desirable property of Aleo; however, its focus is private transactions. These are typically not batched, and employ a recursive outer proof to guarantee execution of an inner proof, where the outer proof is sent to the blockchain to be verified. This proof is limited to 2-step recursion; consequently, batching arbitrary amounts of transactions is not supported.

However, as a result, the cost of proof verification is relatively constant, with the trade-off of limited recursion depth. Aleo is not meant to be a scalable aggregator of transactions; it is mainly oriented towards privacy in its zk-SNARK constructions using Zexe.

Commercial Interests

Commercial enterprises may start to create proprietary prover technologies where costs will be lower than market in an attempt to create an advantage for user adoption. This design is made possible since the code for the prover is not required for the verifier to ensure that executions are correct. The proof is succinct whether or not the code to make the proof is available.

While the barrier to entry is low in this industry, we've seen the open source model and its communities optimize hardware and software, and undergo academic peer review, using strategies that outpace privately funded corporations.

That is plausible to play out over the long term. However, an organic market will likely form on its own, forging its own path leading to mass adoption through capitalist forces.

The point here is that the privately funded vs open source nature of proving services does not change the mechanism of secure and scalable executions of calculations that are eventually rooted to decentralized and open ledgers secured by Bitcoin.

The most interesting propositions are the verticals that become possible by building infrastructure parameterized to scale into the economies where it is needed most, and where trust, security and auditability of value are concerns.

Smart cities, IoT, AI and Digital sovereignty are large markets that intersect with blockchain as a security blanket.

Although ZKPs are tremendously useful on their own, applying them to consensus systems for smart contract execution takes them to another level, due to the autonomous nature of "code-is-law" and the provable, deterministic state of logic. We believe a large majority of the next generation economy will depend on many of the ideas presented here.

Blockchain Foundry is working with commercial and enterprise adopters of blockchain technology. Our direct interaction with clients, combined with our many collective years of experience in this field, is reflected in this design.

Functional Overview

Fig 7: High-level description

For scalable simple payments, one can leverage our Syscoin Platform Token (SPT) asset infrastructure and payment channels to transact at scale.

Unique characteristics of SPTs include a generalized 8-byte asset ID field, split between the upper and lower 4 bytes; the upper 4 bytes are issued and definable (ie, for NFT use cases) and the lower 4 bytes are deterministic. This enables a generalized asset model supporting both Non-Fungible Tokens (NFT) and Fungible Tokens (FT) without much extra cost at the consensus layer: 1 extra byte is used for all tokens in the best case and 5 extra bytes for an NFT in the worst case.
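A minimal sketch of how such an 8-byte asset ID could be packed and unpacked (illustrative only, not Syscoin consensus code):

```python
def make_asset_guid(base_asset: int, nft_index: int = 0) -> int:
    """Upper 4 bytes: issuer-defined value (e.g. NFT index); lower 4 bytes: base asset ID."""
    assert 0 <= base_asset < 2**32 and 0 <= nft_index < 2**32
    return (nft_index << 32) | base_asset

def split_asset_guid(guid: int):
    return guid >> 32, guid & 0xFFFFFFFF              # (nft_index, base_asset)

ft_guid = make_asset_guid(base_asset=1234567)                  # fungible token (upper bytes zero)
nft_guid = make_asset_guid(base_asset=1234567, nft_index=7)    # NFT #7 of the same asset
```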

See [28] for more information on Syscoin's NFT platform.

This model allows multiple assets to be used as inputs and consequently as outputs, meaning that atomic swaps between different assets are possible within one transaction. This has desirable implications when using payment channels, for use cases such as paying in one currency while the merchant receives another, atomically.

A multi-asset payment channel is a desirable component so that users are not constrained to single tokens within a network. Composability of assets, as well as composability across systems (such as users moving from one L2 to another), is a core UX and convenience fundamental that needs to be built into the next generation of blockchain components we believe will enable mass adoption.

The Connext box shows how cross-chain L2 transfers could potentially be enabled, as described in [29]. This would promote seamless cross-chain L2 communication without high gas fees. Since these L2s operate under an EVM/eWASM model, there are many ways to enable this cross-communication.

An EVM layer will support general smart contracts compatible with existing Ethereum infrastructure, and L2 rollups will enable massive scale. The different types of zkRollups will allow businesses and rollup providers to offer custom fee markets (ie, paying fees in tokens other than the base layer token SYS).

In addition, it will remove costs, and thus improve the scale of systems, by offering custom data availability consensus modules. The design discussed here shares similarities with the zkPorter design, where a smart contract would sign off on data availability checks that get put into the ZKP as part of the validity of a zkBlock which goes on chain.

The overall idea of the zkPorter design is that the zkRollup system would be called a “shard”, and each shard would have a type either operating in “zkRollup” mode or operating in “normal” mode.

Taken from the zkPorter article, the essence of it is:

If a shard type is zkRollup, then any transaction that modifies an account in this shard must contain the changes in the state that must be published as L1 calldata (same as a zkRollup).

Any transaction that modifies accounts in at least two different shards must be executed in zkRollup mode.

All other transactions that operate exclusively on the accounts of a specific shard can be executed in normal shard mode (we will call them shard transactions). If a block contains some shard transactions for a shard S, then the following rules must be observed:

  1. The root hash of the subtree of the shard S must be published once, as calldata on L1. This guarantees that users of all other shards will be able to reconstruct their part of the state.
  2. The smart contract of the data availability policy of this shard must be invoked to enforce additional requirements (e.g. verify the signature of the majority of the shard consensus participants).

This means that shards can define different consensus modules for data availability (censorship resistance mechanisms) by separating the concerns of ledgering the world-view of the state (ie, the ZKP that is put on L1) from the data that represents the state. Doing so allows shards to increase scale and to offload the costs of data availability to consensus participants.

A few note-worthy examples of consensus for data availability are:

  1. Non-committee, non-fraud-proof based consensus for data availability checks, with no ⅔-online assumption; see [30].
  2. Sublinear block validation of the ZKP system: use a data availability blockchain with sub-linear full block validation as the data availability proof engine together with majority consensus; see [31].
  3. Use a combination of above, as well as masternode quorum signatures for any of the available quorums to sign a message committing to data availability checks as well as data validity. Using masternodes can provide a deterministic set of nodes to validate decisions as a service. The data can be stored elsewhere accessible to the quorums as they reach consensus that it is indeed valid and available.

Give Me The Goods

You may be wondering what a system like this can offer in terms of scale …

Simple payments: since payment channels work with UTXOs and also benefit from on-chain scaling via Z-DAG, and assuming 16MB blocks (with segwit weight), we will see somewhere around 8MB–12MB of effective capacity per minute (per block).

We foresee that this is sufficient to serve 7 billion people who may enter and exit the payment network once a year (ie, 2 transactions on chain per person per year), for a total of 14 billion transactions.

Let's conservatively assume 8MB blocks and 300 bytes per transaction. Once in a payment channel, the number of transactions is not limited by on-chain bandwidth, but by network latencies and bandwidth costs. We therefore conclude that our payment scalability will be able to serve billions of people doing 2 on-chain transactions per year, which is arguably realistic given the way we envision payments to unfold: an L2 or payment channel network will hold users, who pay through instant transaction mechanisms.
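A quick back-of-the-envelope check of those numbers, assuming ~8MB of effective block data per minute and ~300 bytes per transaction (both figures taken from the text above):

```python
block_bytes_per_min = 8_000_000
tx_size_bytes = 300

tx_per_min = block_bytes_per_min / tx_size_bytes      # ~26,667 on-chain tx/minute
tx_per_year = tx_per_min * 60 * 24 * 365              # ~14.0 billion tx/year
people_served = tx_per_year / 2                       # at 2 on-chain tx/person/year
print(f"{tx_per_year/1e9:.1f}B tx/year -> ~{people_served/1e9:.1f}B people")
```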

On-chain, we have real-time settlement via Z-DAG [1]; in those cases someone needs to transact at point-of-sale using the Syscoin chain directly. The solution for payments thus ends up looking like a hybrid of on-chain (Z-DAG) and off-chain (ie, payment channel) payments.

Complex transactions, such as smart contracts using zkRollups, require a small amount of time to verify each proof. In this case, we assume that data is hosted off-chain, with an off-chain consensus mechanism ensuring data availability for censorship resistance, so the only things that go on chain are validity proofs. We will assume 16MB blocks for the EVM chain per minute.

A proof will be about 300kB for about 300k transactions batched together; it will take about 60–80ms to verify and roughly 5 to 10 minutes to create.

These are the currently published figures using zk-STARKs, which offer quantum resistance and require no trusted setup.

After speaking with Eli Ben-Sasson, we were made aware that proving and verification metrics have already advanced beyond what is currently presented by Starkware [34].

By comparison, zk-SNARKs offer even smaller proofs and verification times, at the expense of trusted setups and stronger cryptographic assumptions (not post-quantum safe).

We foresee that these numbers will improve over time as the cryptography improves, but current estimates suggest a rough theoretical capacity of around 1 Million TPS.

Starkware was able to process 300k transactions over 8 blocks with a total cost of 94.5M gas; final throughput was 3000 TPS (see the Reddit bake-off estimates). For the following calculations, let's assume one batch-run to be 300k transactions.

Ethereum can process ~200kB of data per minute, with a cost limit of 50M gas per minute. Therefore, considering the Starkware benchmark test, and assuming a block interval of 13 seconds, we would achieve ~ 3000 TPS (ie, 300 k transactions per batch-run / (8 blocks per batch-run * 13 seconds per block))

It is estimated that Syscoin will be able to process ~16MB of data per minute on the EVM layer (ie, SYSX in Fig 3), which is ~80x gain over Ethereum; thus a cost limit of 4B gas (ie, 80*50M) per minute.

Therefore, if the Starkware benchmark test was run on Syscoin, it is estimated that Syscoin could run the equivalent of 42 batch-runs per minute (ie, 4B gas per minute / 94.5 M gas per batch-run).

That would result in an equivalent of 210 k TPS (ie, 42 batch-runs per minute * 300 k transactions per batch-run / 60 seconds per minute).

If we were to consider using Validium on the Syscoin EVM layer, we estimate that we could achieve 800 batch-runs per minute (ie, 4B gas per minute / 5M gas per batch-run). That would equate to an equivalent of 4M TPS (ie, 800 batch-runs per minute * 300k transactions per batch-run / 60 seconds per minute).
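The same throughput arithmetic, written out as a small script (all figures are the rough estimates quoted above, so treat the output as order-of-magnitude only):

```python
gas_per_min = 4_000_000_000          # ~80x Ethereum's ~50M gas/minute
txs_per_batch = 300_000              # Starkware benchmark batch size

# zkRollup mode: ~94.5M gas per batch (state transitions posted as calldata)
zkrollup_tps = (gas_per_min / 94_500_000) * txs_per_batch / 60    # ~211,000 TPS

# Validium mode: ~5M gas per batch (data availability kept off-chain)
validium_tps = (gas_per_min / 5_000_000) * txs_per_batch / 60     # 4,000,000 TPS

print(round(zkrollup_tps), round(validium_tps))
```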

Table 2: Gas costs and Total throughput

* Because all transactions are on-chain, which would include state lookups and modifications, total throughput would likely be smaller depending on the node: on average somewhere between 50–150 TPS due to state-lookup bottlenecks. These are not an issue in a rollup design, where verification can be done in a state-less way on-chain (meaning throughput is instead bounded by computational verification of the ZKPs).

** Rollups post the state transitions on-chain and Validium does not; note that the transitions on chain are account transitions, not transactions, so if some accounts interact within the same batch, only those account transitions are recorded to the chain regardless of how many actual transactions occur between them. This is the minimum TPS with full layer 1 decentralized security; the amortized cost per tx thus drops as accounts are reused within the batch, and the total TPS would subsequently rise.

Optimizations to the verification process are likely and would be required to get to those numbers, but the bandwidth would allow for such scale should those optimizations come to fruition.

For example, 800 zk-STARK verifications at roughly 80ms per zk-STARK would take around 64 seconds; however, these proofs can be verified in parallel, so with a 32-core machine the total time spent on these proofs would be ~2–3 seconds, likely decreasing further with optimizations (note that TPS includes total account adjustments).
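For the verification-time estimate, a short sanity check (ideal linear scaling assumed; real-world overhead lands in the 2–3 second range quoted above):

```python
proofs, verify_s, cores = 800, 0.08, 32
sequential_s = proofs * verify_s        # 64 seconds if verified one by one
parallel_s = sequential_s / cores       # ~2 seconds with ideal 32-way parallelism
```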

Because of the higher throughput capabilities of the baseline EVM, we may look to an adaptive gas cost mechanism [32] to thwart under-priced DoS attacks.

The aforementioned calculations retain the full state safety of the mainchain secured by Bitcoin and make no asynchronous network assumptions, the kind of assumptions that make theoretical throughput claims impractical for many other blockchains due to execution-model bottlenecks.

These results were extrapolated based on real results with constant overhead added that becomes negligible with optimizations. It is imperative to note that transactions in this strategy are not re-executable; there is little to no complexity in this model other than verifying succinct proofs. The proof creation strategy is parallelized organically using this model. The verifications on the main chain can also be parallelized as they are executed on separate shards or rollup networks. Dual parallel execution and verification gives exponentially more scalability than other architectures.

Additionally, privacy can be built into these models at minimal to no extra cost, depending on the business model. Lastly, we suggest these are sustainable throughput calculations and not burst capacity numbers which would be much higher (albeit with a marginally higher fee based on fee markets).

For example, Ethereum is operating at 15 TPS, yet there are around 150k transactions pending and the average cost is currently about 200 gwei. That fee rate corresponds to a backlog taking around 10,000 seconds to clear, assuming this many pending transactions, no new transactions, and demand to settle earlier.

Extrapolating to 4M TPS, the backlog would have to reach roughly 40B pending transactions to produce the same fee rate seen on Ethereum today, assuming node memory pools were large enough to hold that many pending transactions.
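Worked out explicitly, under the same assumptions as the paragraph above:

```python
eth_tps, eth_backlog = 15, 150_000
clear_time_s = eth_backlog / eth_tps            # ~10,000 s to clear today's backlog
equivalent_backlog = 4_000_000 * clear_time_s   # 40 billion pending tx at 4M TPS
```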

Since masternodes on Syscoin are paid to provide uptime, we can expect network bandwidth to scale up naturally to support higher throughput as demand for transaction settlement increases.

Today, the ability to transact at a much higher rate using the same hardware provides greater scale than the state-of-the-art in blockchain design, while avoiding asynchronous network assumptions.

We believe this proposed design will become the new state-of-the-art blockchain, which is made viable due to its security, flexibility and parallelizable computational capacity.

In regards to uncle rates with higher block sizes, keep in mind that we make uncle rates, and re-organizations in general, negligible through Bitcoin's PoW merged-mining Syscoin along with Chain Locks. This provides the intuition that block sizes can be increased substantially without affecting network health.

Furthermore, the gas limit can be adjusted by miners by up to 0.1% from the previous block, so a natural equilibrium can form: even if more than 4B gas is required, the limit can be established based on demand and on how well the network behaves with such increases.

There is a lot to unpack with such statements and so we will cover this in a separate technical post as it is out-of-scope for this discussion.

Blockchain Foundry

One of the main reasons for a for-profit company is to take advantage of some of the aforementioned verticals, which we expect to underpin the economies of tomorrow with infrastructure similar to what is presented here.

Since the company’s beginning in 2016, we have spent the majority of our existence designing architecture parameterized to global financial markets.

Breakthroughs in cryptography and consensus designs as described here lead us to formalize these designs to apply to market verticals, formulating new applications and solutions that would not have been possible before.

Specifically, we believe these ideas can be IP protected without requiring privatization of the entire tech stack. These value-added ideas will use existing open-source tech stacks, enabling a massive network effect of value through incentivizing commercial and enterprise adoption.

These new ideas, innovations and proprietary production-quality solutions could usher in a new wave of innovation for civilization.


References

[1] J. Sidhu, E, Scott, and A. Gabriel, Z-DAG: An interactive DAG protocol for real-time crypto payments with Nakamoto consensus security parameters, Blockchain Foundry Inc, Feb. 2018. Accessed on: Feb 2021. [Online]. Available: 

[2] Bitcoin Core FAQ, Compact Blocks FAQ Accessed on: Feb 2021. [Online]. Available: 

[3] I. Eyal and E. G. Sirer, Majority is not enough: Bitcoin mining is vulnerable, Proceedings of International Conference on Financial Cryptography and Data Security, pp. 436–454, 2014.

[4] A. Block, Mitigating 51% attacks with LLMQ-based ChainLocks. Accessed on: Feb 2021. [Online], Nov 2018. Available: 

[5] J. Valenzuela, Andreas Antonopoulos Calls Dash ChainLocks “a Smart Way of” Preventing 51% Attacks. Aug 22, 2019. Accessed on: Feb 2021. [Online]. Available: 

[6] D. Boneh, M. Drijvers, and G. Neven, BLS Multi-Signatures With Public-Key Aggregation, Mar 2018. Accessed on: Feb 2021. [Online]. Available: 

[7] J. Drake. Pragmatic signature aggregation with BLS, May 2018. Accessed on: Feb 2021. [Online]. Available: 

[8] S. Bowe, BLS12–381: New zk-SNARK Elliptic Curve Construction, Mar 2017. Accessed on: Feb 2021. [Online]. Available: 

[9] A. Block, BLS: Is it really that slow?, Jul 2018. Accessed on: Feb 2021. [Online]. Available: 

[10] S. de la Rouvier. Interplanetary Linked Computing: Separating Merkle Computing from Blockchain Computational Courts, Jan 2017. Accessed on: Feb 2021. [Online]. Available: 

[11] Anonymous Kid, Why the fuck did Satoshi implement the 1 MB blocksize limit? [Online forum comment], Jan 2018, Accessed on: Feb 2021. [Online]. Available: 

[12] Zero-Knowledge Proofs What are they, how do they work, and are they fast yet? Accessed on: Feb 2021. [Online]. Available: 

[13] E. Ben-Sasson, I. Bentov, Y. Horesh, and M. Riabzev, Scalable, transparent, and post-quantum secure computational integrity, IACR Cryptol, 2018, pp 46

[14] Dryja, T, Utreexo: A dynamic hash-based accumulator optimized for the bitcoin UTXO set, IACR Cryptol. ePrint Arch., 2019, p. 611.

[15] G.I. Hotchkiss, The 1.x Files: The State of Stateless Ethereum, Dec 2019. Accessed on: Feb 2021. [Online]. Available: 

[16] S. Bowe, A. Chiesa, M. Green, I. Miers, P. Mishra, H. Wu: Zexe: Enabling decentralized private computation. Cryptology ePrint Archive, Report 2018/962 (2018). Accessed on: Feb 2021. [Online]. Available: 

[17] A. Nilsson, P.N. Bideh, J. Brorsson, A survey of published attacks on Intel SGX. 2020, arXiv:2006.13598

[18] C. Nelson, Zero-Knowledge Proofs: Privacy-Preserving Digital Identity, Oct 2018. Feb 2021. Accessed on: [Online]. Available: 

[19] D. Boneh, Discrete Log based Zero-Knowledge Proofs, Apr 2019, Accessed on: Feb 2021 [Online]. Available: 

[20] Quantum Computing’s Implications for Cryptography (Chapter 4), National Academies of Sciences, Engineering, and Medicine: Quantum Computing: Progress and Prospects. The National Academies Press, Washington, DC, 2018.

[21] S. Naihin, Goodbye Bitcoin… Hello Quantum, Apr 2019, Accessed on: Feb 2021 [Online]. 

[22] L.T. do Nascimento, S. Kumari, and V. Ganesan, Zero Knowledge Proofs Applied to Auctions, May 2019, Accessed on: Feb 2021 [Online]. Available: 

[23] G., Proof of Stake Versus Proof of Work. Technical Report, BitFury Group, 2015. Accessed on: Feb 2021. [Online]. Available: 

[24] V. Buterin and V. Griffith, Casper the Friendly Finality Gadget. CoRR, Vol. abs/1710.09437, 2017. arxiv: 1710.09437, 

[25] M. Neuder, D.J. Moroz, R. Rao, and D.C. Parkes, Low-cost attacks on Ethereum 2.0 by sub-1/3 stakeholders, 2021. arXiv:2102.02247, 

[26] Starkware, Validity Proofs vs. Fraud Proofs, Jan 2019, Accessed on: Feb 2021, [Online]. Available: 

[27] A. Gluchowski, World’s first practical hardware for zero-knowledge proofs acceleration, Jul 2020, Accessed on: Feb 2021 [Online]. Available: 

[28] Introducing an NFT Platform Like No Other, Accessed on: Feb 2021. [Online]. Available: 

[29] A. Bhuptani, Vector 0.1.0 Mainnet Release, The beginning of a multi-chain Ethereum ecosystem, Jan 2021, Accessed on: Feb 2021. [Online]. Available: 

[30] V. Buterin, With fraud-proof-free data availability proofs, we can have scalable data chains without committees, Jan 2020, Accessed on: Feb 2021. [Online]. Available: 

[31] M. Al-Bassam, A data availability blockchain with sub-linear full block validation, Jan 2020, Accessed on: Feb 2021. [Online]. Available: 

[32] T. Chen, X. Li, Y. Wang, J. Chen, Z Li, X. Luo, M. H. Au, and X. Zhang. An adaptive gas cost mechanism for Ethereum to defend against under-priced DoS attacks. Proceedings of Information Security Practice and Experience — 13th International Conference ISPEC, 2017

[33] Y. Sompolinsky, and A. Zohar, Secure High-rate Transaction Processing in Bitcoin, Proc. 19th Int. Conf. Financial Cryptogr, Data Secur. (FC’20), Jan 2015, pp. 507–527

[34] Starkware Team, Rescue STARK Documentation — Version 1.0, Jul 2020

Shared with 💚 by Free Spirit

✌ & 💚

BitHouse with 💚

Satoshi Nakamoto Quotes


CODE IS LAW

“ It might make sense just to get some in case it catches on.

If enough people think the same way, that becomes a self fulfilling prophecy.

Once it gets bootstrapped, there are so many applications if you could effortlessly pay a few cents to a website as easily as dropping coins in a vending machine. ”

Get some in case it catches on

“ In this sense, it’s more typical of a precious metal.

Instead of the supply changing to keep the value the same, the supply is predetermined and the value changes.

As the number of users grows, the value per coin increases.

It has the potential for a positive feedback loop; as users increase, the value goes up, which could attract more users to take advantage of the increasing value. ”

Potential for a positive feedback loop

“ Maybe it could get an initial value circularly as you’ve suggested, by people foreseeing its potential usefulness for exchange. (I would definitely want some)

Maybe collectors, any random reason could spark it.

I think the traditional qualifications for money were written with the assumption that there are so many competing objects in the world that are scarce, an object with the automatic bootstrap of intrinsic value will surely win out over those without intrinsic value.

But if there were nothing in the world with intrinsic value that could be used as money, only scarce but no intrinsic value, I think people would still take up something. (I’m using the word scarce here to only mean limited potential supply) ”

“ A rational market price for something that is expected to increase in value will already reflect the present value of the expected future increases. “

Rational market price

In your head, you do a probability estimate balancing the odds that it keeps increasing. ”

Probability

“ I’m sure that in 20 years there will either be very large transaction volume or no volume. ”

In 20 Years

“ Bitcoins have no dividend or potential future dividend, therefore not like a stock.

More like a collectible or commodity.“

Collectible vs Commodity

” [Lengthy exposition of vulnerability of a systm to use-of-force monopolies ellided.]

You will not find a solution to political problems in cryptography.

Yes, but we can win a major battle in the arms race and gain a new territory of freedom for several years.

Governments are good at cutting off the heads of a centrally controlled networks like Napster, but pure P2P networks like Gnutella and Tor seem to be holding their own. “

Pure P2P networks

” It’s very attractive to the libertarian viewpoint if we can explain it properly.

I’m better with code than with words though. “

Libertarian Viewpoint

” The proof-of-work is a Hashcash style SHA-256 collision finding.

It’s a memoryless process where you do millions of hashes a second, with a small chance of finding one each time.

The 3 or 4 fastest nodes’ dominance would only be proportional to their share of the total CPU power.

Anyone’s chance of finding a solution at any time is proportional to their CPU power.

There will be transaction fees, so nodes will have an incentive to receive and include all the transactions they can.

Nodes will eventually be compensated by transaction fees alone when the total coins created hits the pre-determined ceiling. “

Transactions Fees

” Right, it’s ECC digital signatures.

A new key pair is used for every transaction.

It’s not pseudonymous in the sense of nyms identifying people, but it is at least a little pseudonymous in that the next action on a coin can be identified as being from the owner of that coin.”

Pseudonymous

Bitcoin is a new electronic cash system that uses a peer-to-peer network to prevent double-spending.

It’s completely decentralized
with no server or central authority

New electronic cash system

Total circulation will be 21,000,000 coins.

It’ll be distributed to network nodes when they make blocks, with the amount cut in half every 4 years

first 4 years: 10,500,000 coins

next 4 years: 5,250,000 coins

next 4 years: 2,625,000 coins

next 4 years: 1,312,500 coins
etc…

When that runs out, the system can support transaction fees if needed.

It’s based on open market competition, and there will probably always be nodes willing to process transactions for free.

Open Market Competition

” I would be surprised if 10 years from now we’re not using electronic currency in some way, now that we know a way to do it that won’t inevitably get dumbed down when the trusted third party gets cold feet.

It could get started in a narrow niche like reward points, donation tokens, currency for a game or micropayments for adult sites.

Initially it can be used in proof-of-work applications for services that could almost be free but not quite.

POW applications

It can already be used for pay-to-send e-mail.

The send dialog is resizeable and you can enter as long of a message as you like.

It’s sent directly when it connects.

The recipient doubleclicks on the transaction to see the full message.

If someone famous is getting more e-mail than they can read, but would still like to have a way for fans to contact them, they could set up Bitcoin and give out the IP address on their website. “

Pay-to-Send Email

“Send X bitcoins to my priority hotline at this IP and I’ll read the message personally.”

Send bitcoin

You can securely control neither your land nor your digitally centralized financial assets without the help of government. Thus the locality & importance of legal ownership in these things. You can securely control your globally seamless Bitcoin without the help of government.

Nick Szabo


From the People For the People !!! Be your Own Bank !!! REVOLUTIONARY IMMUTABLE PUBLIC COLLABORATIVE OPEN RESISTANT DECENTRALIZED

Made with 💚 by Free Spirit

✌ & 💚

Did you find this article helpful?

If so, please consider a donation to help the evolution and development of more helpful articles in the future, and show your support for alternative articles.

Your generosity is 💚 ly appreciated

You can donate in any crypto your 💚 desires 😊

Thank you all for your time !!!

✌ & 💚


Bitcoin (BTC) :

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


LiteCoin(LTC) :

LYAdiSpsTJ36EWCJ5HF9EGy9iWGCwoLhed


Ethereum(ETH) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


EthereumClassic(ETC) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


Cardano(ADA) :

addr1q88c5cccnrqy6xesszzvf7rd4tcz87klt0m0h6uvltywqe8txwmsrrqdnpq27594tyn9vz59zv0n8367lvyc2atvrzvqlvdm9d


BinanceCoin(BNB) :

bnb1wwfnkzs34knsrv2g026t458l0mwp5a3tykeylx


BitcoinCash (BCH)

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


Bitcoin SV (BSV)

1P1tTNFGRZabK65RhqQxVmcMDHQeRX9dJJ


ZCash(ZEC) :

t1fSSQX4gEhove9ngcvFafQaMPq5dtNNsNF


Dash(DASH) :

XcWmbFw1VmxEPxvF9CWdjzKXwPyDTrbMwj


Shiba(SHIB) :

0x602e8Ca3984943cef57850BBD58b5D0A6677D856


Tron(TRX) :

TCsJJkqt9xk1QZWQ8HqZHnqexR15TEowk8


Stellar(XLM) :

GBL4UKPHP2SXZ6Y3PRF3VRI5TLBL6XFUABZCZC7S7KWNSBKCIBGQ2Y54


Arise…

Timothy C. May

Arise, you have nothing to lose but your barbed wire fences!

Timothy C. May

Wonder In Peace bright mind!

Thanks for the guidance and wisdom!

The world will never know how much they owe you!

✌ & 💚


Shared with 💚 by Free Spirit
& 💚



B-Money

Wei Dai – B-Money

I am fascinated by Tim May's crypto-anarchy. 

Unlike the communities
traditionally associated with the word "anarchy", in a crypto-anarchy the
government is not temporarily destroyed but permanently forbidden and
permanently unnecessary.

It's a community where the threat of violence is
impotent because violence is impossible, and violence is impossible because its participants cannot be linked to their true names or physical locations.
 
Until now it's not clear, even theoretically, how such a community could operate.

A community is defined by the cooperation of its participants, and efficient cooperation requires a medium of exchange (money) and a way to enforce contracts.

Traditionally these services have been provided by the government or government sponsored institutions and only to legal entities.

In this article I describe a protocol by which these services can be provided to and by untraceable entities.
 
I will actually describe two protocols. The first one is impractical,because it makes heavy use of a synchronous and unjammable anonymous
broadcast channel. However it will motivate the second, more practical protocol.

In both cases I will assume the existence of an untraceable network, where senders and receivers are identified only by digital
pseudonyms (i.e. public keys) and every message is signed by its sender
and encrypted to its receiver.
 
In the first protocol, every participant maintains a (seperate) database of how much money belongs to each pseudonym. These accounts collectively define the ownership of money, and how these accounts are updated is the subject of this protocol.
 
1. The creation of money. Anyone can create money by broadcasting the
solution to a previously unsolved computational problem. The only
conditions are that it must be easy to determine how much computing effort
it took to solve the problem and the solution must otherwise have no
value, either practical or intellectual. The number of monetary units
created is equal to the cost of the computing effort in terms of a
standard basket of commodities. For example if a problem takes 100 hours
to solve on the computer that solves it most economically, and it takes 3
standard baskets to purchase 100 hours of computing time on that computer
on the open market, then upon the broadcast of the solution to that
problem everyone credits the broadcaster's account by 3 units.
 
2. The transfer of money. If Alice (owner of pseudonym K_A) wishes to
transfer X units of money to Bob (owner of pseudonym K_B), she broadcasts
the message "I give X units of money to K_B" signed by K_A.
 
Upon the broadcast of this message, everyone debits K_A's account by X units and
credits K_B's account by X units, unless this would create a negative
balance in K_A's account in which case the message is ignored.
 
3. The effecting of contracts. A valid contract must include a maximum
reparation in case of default for each participant party to it. It should
also include a party who will perform arbitration should there be a
dispute. All parties to a contract including the arbitrator must broadcast
their signatures of it before it becomes effective. Upon the broadcast of
the contract and all signatures, every participant debits the account of
each party by the amount of his maximum reparation and credits a special
account identified by a secure hash of the contract by the sum the maximum
reparations. The contract becomes effective if the debits succeed for
every party without producing a negative balance, otherwise the contract
is ignored and the accounts are rolled back. A sample contract might look
like this:
 
K_A agrees to send K_B the solution to problem P before 0:0:0 1/1/2000.
K_B agrees to pay K_A 100 MU (monetary units) before 0:0:0 1/1/2000. K_C
agrees to perform arbitration in case of dispute. K_A agrees to pay a
maximum of 1000 MU in case of default. K_B agrees to pay a maximum of 200
MU in case of default. K_C agrees to pay a maximum of 500 MU in case of
default.
 
4. The conclusion of contracts. If a contract concludes without dispute,
each party broadcasts a signed message "The contract with SHA-1 hash H
concludes without reparations." or possibly "The contract with SHA-1 hash
H concludes with the following reparations: ..." Upon the broadcast of all
signatures, every participant credits the account of each party by the
amount of his maximum reparation, removes the contract account, then
credits or debits the account of each party according to the reparation
schedule if there is one.
 
5. The enforcement of contracts. If the parties to a contract cannot agree
on an appropriate conclusion even with the help of the arbitrator, each
party broadcasts a suggested reparation/fine schedule and any arguments or
evidence in his favor. Each participant makes a determination as to the
actual reparations and/or fines, and modifies his accounts accordingly.
 
In the second protocol, the accounts of who has how much money are kept by
a subset of the participants (called servers from now on) instead of
everyone. These servers are linked by a Usenet-style broadcast channel.

The format of transaction messages broadcasted on this channel remain the
same as in the first protocol, but the affected participants of each
transaction should verify that the message has been received and
successfully processed by a randomly selected subset of the servers.
 
Since the servers must be trusted to a degree, some mechanism is needed to
keep them honest. Each server is required to deposit a certain amount of
money in a special account to be used as potential fines or rewards for
proof of misconduct. Also, each server must periodically publish and
commit to its current money creation and money ownership databases. Each
participant should verify that his own account balances are correct and
that the sum of the account balances is not greater than the total amount
of money created. This prevents the servers, even in total collusion, from
permanently and costlessly expanding the money supply. New servers can
also use the published databases to synchronize with existing servers.
 
The protocol proposed in this article allows untraceable pseudonymous
entities to cooperate with each other more efficiently, by providing them
with a medium of exchange and a method of enforcing contracts. The
protocol can probably be made more efficient and secure, but I hope this
is a step toward making crypto-anarchy a practical as well as theoretical
possibility.
 
-------
 
Appendix A: alternative b-money creation
 
One of the more problematic parts in the b-money protocol is money
creation. This part of the protocol requires that all of the account
keepers decide and agree on the cost of particular computations.
Unfortunately because computing technology tends to advance rapidly and
not always publicly, this information may be unavailable, inaccurate, or
outdated, all of which would cause serious problems for the protocol.
 
So I propose an alternative money creation subprotocol, in which account
keepers (everyone in the first protocol, or the servers in the second
protocol) instead decide and agree on the amount of b-money to be created
each period, with the cost of creating that money determined by an
auction. Each money creation period is divided up into four phases, as
follows:
 
1. Planning. The account keepers compute and negotiate with each other to
determine an optimal increase in the money supply for the next period.

Whether or not the account keepers can reach a consensus, they each
broadcast their money creation quota and any macroeconomic calculations
done to support the figures.
 
2. Bidding. Anyone who wants to create b-money broadcasts a bid in the
form of <x, y> where x is the amount of b-money he wants to create, and y
is an unsolved problem from a predetermined problem class. Each problem in
this class should have a nominal cost (in MIPS-years say) which is
publicly agreed on.
 
3. Computation. After seeing the bids, the ones who placed bids in the
bidding phase may now solve the problems in their bids and broadcast the
solutions.
 
4. Money creation. Each account keeper accepts the highest bids (among
those who actually broadcasted solutions) in terms of nominal cost per
unit of b-money created and credits the bidders' accounts accordingly

http://www.weidai.com/bmoney.txt
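As a minimal sketch of the first protocol's transfer rule (our own illustration, not part of Wei Dai's text): each participant keeps an account database and applies a broadcast transfer only if it would not drive the sender's balance negative.

```python
accounts = {"K_A": 10, "K_B": 0}        # every participant keeps such a database

def apply_transfer(sender: str, receiver: str, amount: int) -> None:
    """Apply a broadcast message 'sender gives amount to receiver' (signed by sender)."""
    if accounts.get(sender, 0) >= amount:               # otherwise the message is ignored
        accounts[sender] -= amount
        accounts[receiver] = accounts.get(receiver, 0) + amount

apply_transfer("K_A", "K_B", 3)         # accounts -> {"K_A": 7, "K_B": 3}
```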

Shared with 💚 by Free Spirit

✌ & 💚

Mining Calculators

How to Calculate Mining Profitability: Top 6 Mining Calculators

Before we can even start mining, we should use one of the many online profitability calculators, which will give us a better understanding beforehand of whether the GPU, FPGA or ASIC we choose to mine with will be profitable or not!
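Under the hood, these calculators all implement roughly the same formula: your share of the network hashrate times the daily block rewards, minus electricity. A rough sketch (plug in your own hardware and network numbers; the values below are placeholders):

```python
def daily_profit(hashrate, network_hashrate, block_reward, blocks_per_day,
                 coin_price, power_watts, electricity_per_kwh):
    revenue = (hashrate / network_hashrate) * block_reward * blocks_per_day * coin_price
    power_cost = (power_watts / 1000) * 24 * electricity_per_kwh
    return revenue - power_cost

# Placeholder example: a 100 MH/s GPU on a 1 PH/s network, 6500 blocks/day,
# 2 coins per block, coin price $2000, 250 W at $0.12/kWh.
print(daily_profit(100e6, 1e15, 2, 6500, 2000, 250, 0.12))
```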


🔹️ Online Calculators 🔹️


🔸️ WhatToMine

🔸️ Rubin Mining Calculator

🔸️ CoinWarz

🔸️ CryptoCompare

🔸️ Minerstat

🔸️ Crypto-Coinz

Before even entering sites to buy hardware, do your R&D, and do it VERY well.

If you think reading is for dorks, nerds, geeks and boring people… Well…

WELCOME TO THE REALM OF THOSE WHO ❤ TO READ !!!


Made with 💚 by Free Spirit

✌ & 💚




With 💚