Lab goal: "Distributed Trust Design" #3571

Open · synctext opened this issue Apr 6, 2018 · 18 comments
@synctext
Member

synctext commented Apr 6, 2018

Overall aim of the lab for the coming 10 years.

We now have the ability to create trust with software, as shown by eBay, Airbnb, and Uber. The key is the star-rating mechanism. We want to generalize this. Our audacious ambition is to establish a new scientific field, which we call Distributed Trust Design.

Distributed Trust Design provides a coherent framework to transform mutually non-trusting people into trustworthy online communities of arbitrary size that have a process for democratic decision-making and the ability to punish those who freeride on the community or are dishonest, in order to manage a marketplace, govern a resource sustainably, schedule activities, or achieve a common purpose in general.

Underlying issues:

@synctext
Member Author

synctext commented Apr 16, 2018

The Internet is sick. We need a formula for Trust and Truth. Our cure needs to prove itself and work at large scale. A killer application is needed to bring "distributed trust design" to a wide audience. Options to explore further:

  • YouTube-like alternative to the central platforms. This has been the Tribler focus for over a decade.
  • Facebook-like system with privacy. A distributed social network, building upon our early 2007 work, an operational system deployed in 2008, and sophisticated algorithms with privacy from 2014.
  • Wikipedia-like system with reputations. Building upon our operational system from before Wikipedia existed. However, after 19 years of work it has proven difficult to build the envisioned single code base for different distributed publicly writable databases.
  • Disrupt scientific publishing. Scientific articles can be published and read without any central gatekeeper. See our early work to parse and search .pdf articles and their citations. Shown below is a PageRank analysis of our dataset of distributed-systems articles. Each edge is a citation; obviously a familiar central node appears. This could be the future of science, together with the "reproducible science" work that Marc is doing with Jupyter.
    (figure: PageRank graph of the citation dataset)
  • Decentral web. Re-decentralise the web by supporting torrents with markdown or HTML content. See our 2007 implementation of this concept.

@ichorid
Contributor

ichorid commented Feb 5, 2019

Let me try to put your point in a little bit more of a Martin Luther way 😁 :

" I envision a world where all the knowledge produced by humanity is available to everyone for free.

Knowledge is power. No greedy corporations with their opaque algorithms, nor governments with their power-hungry politicians, decide what information they feed you. Only you decide what you get. It is your natural right, with you from birth till your last gasp: what, when, and why to watch, listen and read. Any other outcome is a covered-up aggression against you.

You decide what knowledge you share and with whom you share it, on your conditions. And by knowledge I mean everything: news, songs, movies, books, pictures, designs... Everything.
In the world I envision, creators are not confined by their producers' whim. Instead, they are directly connected to their audience.
In the world I envision, there is no "fake news" - everyone instantly knows whether a source is trustworthy.
In the world I envision, no bank decides your trustworthiness - your record is just there to share with anyone, if you're willing to.

In the world I envision, there is no price for knowledge. It is just there for everyone to use, as the air is there free for everyone to breathe and the sea is there free for everyone to swim in. And if someone tries to pollute your air, you immediately see the smoke trail and trace it back to the offender.
The word "paywall" is forgotten in my world. The only paywall left there is a demo one, to show schoolkids how frustrating it was to use the Internet at the beginning of the 21st century. It is a harmless boogeyman, like a tiny section of the Berlin Wall left as a monument for children to paint on.

There are no "social networks" in that world. The Internet is the social Network, just as it was at its dawn, and as it was envisioned by its creators. This Network constantly generates trust between its peers, facilitating democratic self-organization at all levels. People actually get onto the Network to make friends. Contrast that with how we use our current "social" networks to shun old enemies and compete in the exhausting race for social approval (who gets more "likes"?). And mind you, the Network I envision is not fragile. It is anti-fragile. Every strike against it eventually makes it more resilient. After surviving a battle, the trust between brothers-in-arms in a platoon becomes stronger.

Collective action based on mutual trust brought humanity to its glory. During the 20th century, mutual mistrust almost resulted in our species' extinction. If we want to prove ourselves truly sentient beings, if our ancestors' lives and sacrifices and hopes still mean something to us - the time to act is now.
We need to build Trust between us. All of us."

Hope something like this manifesto will be a nice fit for some hacker-ish news-site publication. This kind of thing is good at selling stuff to people, I've heard 😉

@synctext
Member Author

synctext commented Mar 7, 2019

We now have a basic design which may be implemented in 2019 and deployed to understand the fundamentals of engineering online trust: a first "coherent framework to transform mutually non-trusting people into trustworthy online communities of arbitrary size". It is based on the following principles.

  • We use the work-graph model, or ledger-based recording of interactions.
  • The impossibility result by Seuken forms our starting point.
  • We use a layered model to avoid single-report responsiveness, prevent network collapse, and defend against gradual information poisoning.
  • It should be possible to prove some performance bounds.

My first draft of a layered "incremental trust growth algorithm". It is a three-step defense against various known attacks (a rough code sketch follows the list):

  • Global peer discovery
    • goal: discover any participant in the global trust overlay
    • we conduct an unbiased random walk across all agents and all continents
    • to prevent network partitions we specifically avoid any trust or community bias
    • to prevent attacks in general we avoid storing any information in this layer
  • Probabilistic proofs-of-interaction discovery
    • Goal: discover proofs-of-interaction and audit them
    • by using global peer discovery we talk to any stranger and selectively request proofs-of-interaction
    • to prevent the Sybil attack we must avoid requesting information from random strangers
    • we use the probabilistic request function to decide if we request information
    • probabilistic request function heavily uses physical proximity (network latency)
    • global attacks are prevented by this bias, and attackers are forced to move physically close to you
    • interaction with far-away strangers is still required, with low probability, to prevent an information bubble
  • Inclusion of information decision
    • Goal: prevent pollution of your proofs-of-interaction database
    • newly discovered proofs-of-interaction from strangers need to be filtered
    • only permanently store information which has sufficient trustworthiness
    • use any suitable trustworthiness function, for instance, Monte Carlo based personalised random walk
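A rough sketch of these three layers in Python. All names (Peer, Proof, personalised_walk_scores) and all constants are hypothetical illustrations of the idea above, not the actual Tribler/TrustChain APIs:

```python
import math
import random
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class Proof:
    initiator: str
    counterparty: str
    contribution: float                      # e.g. bytes uploaded


@dataclass
class Peer:
    peer_id: str
    latency_ms: float                        # measured round-trip time
    neighbours: list = field(default_factory=list)
    proofs: list = field(default_factory=list)


def discover_random_peer(start: Peer, hops: int = 10) -> Peer:
    """Layer 1: unbiased random walk; no trust or community bias, no state stored."""
    current = start
    for _ in range(hops):
        if not current.neighbours:
            break
        current = random.choice(current.neighbours)
    return current


def request_probability(latency_ms: float, scale_ms: float = 50.0, floor: float = 0.05) -> float:
    """Layer 2: bias towards physically close peers, but keep a small floor
    probability so far-away strangers are still sampled occasionally."""
    return max(floor, math.exp(-latency_ms / scale_ms))


def maybe_request_proofs(stranger: Peer) -> list:
    """Layer 2: selectively ask a discovered stranger for its proofs-of-interaction."""
    if random.random() < request_probability(stranger.latency_ms):
        return list(stranger.proofs)         # stands in for a network request
    return []


def personalised_walk_scores(me: str, interactions: dict, walks: int = 500,
                             reset_p: float = 0.15) -> dict:
    """Layer 3 helper: one suitable trustworthiness function, a Monte Carlo
    personalised random walk over the locally known interaction graph
    (peer_id -> list of counterparty peer_ids)."""
    visits = defaultdict(int)
    for _ in range(walks):
        current = me
        while interactions.get(current) and random.random() > reset_p:
            current = random.choice(interactions[current])
            visits[current] += 1
    total = sum(visits.values()) or 1
    return {peer: count / total for peer, count in visits.items()}


def include_proof(proof: Proof, scores: dict, threshold: float = 0.01) -> bool:
    """Layer 3: only permanently store proofs whose counterparty is trustworthy enough."""
    return scores.get(proof.counterparty, 0.0) >= threshold
```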

@cuileri
Contributor

cuileri commented Mar 7, 2019

I have been reading many papers this week on trust design, as well as Seuken et al.'s paper attesting the impossibility of preventing Sybil attacks. I have doubts about accepting this impossibility result as our guideline, because:

  • Firstly, the model of the paper, called an "accounting mechanism", has a strict assumption that "work interactions are bilateral and private, i.e. no outside agent can observe or monitor an interaction". However, the main aspect of our model is that everybody can learn about and validate an interaction, which makes this proposition unsuitable for us.

  • Secondly, "single-report responsiveness" is NOT the only condition for his theorem. Theorem 3 on page 18 is as follows:

For every accounting mechanism M that
- satisfies independence of disconnected agents,
- is symmetric,
- has the single-report responsiveness property,
- and is misreport-proof,
there exists a (...) sybil attack.

Let me briefly explain them in a very rough way, for those who have not read the paper:

Independence of disconnected agents: The existence or absence of any other node at any time does not affect the trust relation between two nodes. (e.g. my trust score for TU Delft does not depend on how many universities there are in the world.)

Symmetry (they call it anonymity here): Trust scores depend only on what you have done, not on your identity. (If A and B do the same work, they will have the same trust score from me.)

Single-report responsiveness: Trust scores may be affected by a single report about an agent. (If I do not know anything about you, I will believe the first thing I hear about you.)

Misreport-proofness: If you want to consume work from me, I disregard all your reports about the others.
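In rough, ad-hoc notation (mine, not the paper's exact definitions), writing S_M(i, j, G, R) for the score that mechanism M gives agent j from agent i's point of view, given work graph G and report set R, these conditions read approximately as:

```latex
% Rough, informal formalization; see Seuken et al. for the precise definitions.
\begin{itemize}
  \item Independence of disconnected agents: if $k$ is disconnected from $i$ and $j$ in $G$,
        then $S_M(i, j, G, R) = S_M(i, j,\, G \setminus \{k\},\, R \setminus R_k)$,
        where $R_k$ are the reports involving $k$.
  \item Symmetry (anonymity): for every relabeling $\pi$ of agent identities,
        $S_M(i, j, G, R) = S_M(\pi(i), \pi(j), \pi(G), \pi(R))$.
  \item Single-report responsiveness: there exists a single report $r$ about an otherwise
        unknown agent $j$ such that $S_M(i, j, G, \{r\}) \neq S_M(i, j, G, \emptyset)$.
  \item Misreport-proofness: $S_M(i, j, G, R)$ does not depend on the reports in $R$
        that were authored by $j$ itself.
\end{itemize}
```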

Yes, only a system which satisfies all of the above assumptions is proven to be open to a Sybil attack.

In the same paper, they attack the third assumption (single-report responsiveness), claiming that "The only property that we can reasonably relax for the design of useful accounting mechanism is the single-report responsiveness property".

However, I suppose we can attack the other assumptions too, and in fact, what Johan wrote in the previous post and what we have discussed during the last weeks in the lab require the relaxation of some of these constraints.

In short, we can present a new model based on / derived from the 'accounting mechanism' (which would automatically invalidate the impossibility result) and do our own theoretical analysis. And I have a sense that we can prevent or limit the effects of Sybil attacks.

I have some immature ideas about that, but they need to be formalized further before being opened up for discussion.

@cuileri
Contributor

cuileri commented Apr 18, 2019

I will summarize my thoughts on our trust research below. However, for the actual collection of my notes, see here.

The eventual aim of our lab is to generalize the notion of trust. Due to our practical way of thinking throughout the lab, we tend to define and interpret trust according to our technical lab problem, which I'd call designing decentralized, accountable and anonymous P2P file-transfer systems. I think modelling and abstracting our problem, without completely generalizing it, would be a good initial step for us.

I present below what I have come up with from the literature.

  • Finding 1: The meaning of trust is problem dependent.
    There is no standard definition of trust yet (see a recent survey).

  • Finding 2: A problem may have different interpretations (dimensions) of trust.

  • Finding 3: Our specific problem has multiple dimensions of trust.
    Some of the dimensions are performance (e.g. bandwidth, total amount of seeded bytes), relay-ability (i.e. how well a node performs as a relay), accountability (i.e. does the node obey the TrustChain protocol and report all its transactions correctly), and informativeness (i.e. how correct is the information that a node gives me about others, how correctly does it gossip). What we have done so far for the random walk study in GUI: random walk with real-time updates #10 and Incremental update of trust levels in a dynamic blockchain graph #2805 was to deal with the performance dimension of trust.

  • Finding 4: Trust dimensions may have different accounting mechanisms.
    As an example, when we consider trust in performance, we generally sum up all the contributions of a node to others. On the other hand, when we consider reliability, we generally take the average of the opinions of others.
    As another example, a node A may share with others the performance of B (i.e. what node B has done for A), but may prefer not to share its opinion on the credibility of B. Here, trust in performance is transitive while trust in credibility (or informativeness) is not.

I suggest analyzing the dimensions of trust in our problem and detecting the interdependencies between them. Then we can come up with a complex model which outputs a node's trust 1) for each of its neighbors, and 2) in each dimension. We can then attack the misbehaviors by updating our sub-models and tuning our parameters during simulations.
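As a rough illustration of Findings 3 and 4, a toy sketch of per-dimension accounting (the class name, dimensions and numbers are made up for this example, not an existing Tribler API):

```python
from collections import defaultdict
from statistics import mean


class DimensionalTrust:
    """Keeps separate evidence per (peer, dimension) and aggregates each
    dimension with its own accounting rule."""

    def __init__(self):
        self.evidence = defaultdict(list)     # (peer_id, dimension) -> [values]

    def report(self, peer_id: str, dimension: str, value: float) -> None:
        self.evidence[(peer_id, dimension)].append(value)

    def score(self, peer_id: str, dimension: str) -> float:
        values = self.evidence[(peer_id, dimension)]
        if not values:
            return 0.0
        if dimension == "performance":        # contributions accumulate
            return sum(values)
        return mean(values)                   # opinions are averaged


trust = DimensionalTrust()
trust.report("peer_b", "performance", 5e8)       # bytes seeded for us
trust.report("peer_b", "performance", 2e8)
trust.report("peer_b", "informativeness", 0.9)   # how correct B's gossip was
trust.report("peer_b", "informativeness", 0.4)
print(trust.score("peer_b", "performance"))      # 700000000.0 (sum)
print(trust.score("peer_b", "informativeness"))  # 0.65 (average)
```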

To generalize accordingly, the steps towards designing a problem-specific trust function would be:

  • Define the system formally
  • Define the types of misbehaviors that the trust system should prevent/penalize/recover from
  • Define the dimensions of trust as well as their accounting mechanisms
  • Decide which trust dimension is supposed to resolve which kind of misbehavior
  • Investigate interdependence of dimensions of trust (e.g. when I evaluate the performance of C, the trust on informativeness of B may determine how I weigh B's opinion on the performance of C).
  • Design mathematical models (seemingly non-linear and complex)
  • Mathematical proofs -if any- for the prevention of misbehaviors.
  • Simulations for each kind of attack (misbehavior)
  • Simulations for collection of attacks (misbehaviors)

The steps above actually describe what I plan to do in the next sprints. Any comment, correction or suggestion is appreciated.

@grimadas
Contributor

grimadas commented May 1, 2019

@ichorid
Contributor

ichorid commented May 1, 2019

AFAIK, NEM uses a variant of PageRank called NCDRank.

@synctext
Member Author

synctext commented May 2, 2019

MIT is now also doing some initial work in our domain. It is still at the early idea stage, with no concrete steps towards implementation and community building. https://www.trust.mit.edu/projects

  • Open Algorithms (OPAL) principles paper (PDF)
  • A. Pentland, D. Shrier, T. Hardjono, and I. Wladawsky-Berger, "Towards an Internet of Trusted Data: Input to the Whitehouse Commission on Enhancing National Cybersecurity," in Trust::Data - A New Framework for Identity and Data Sharing, T. Hardjono, A. Pentland, and D. Shrier, Eds. Visionary Future, 2016, pp. 21–49.
  • T. Nishikata, T. Hardjono and A. Pentland, Social Capital Accounting, October 2018.

Especially the last piece of writing, on social capital, is vastly superior marketing compared to our work. With Delft+Harvard+Berkeley we only came up with "Work Accounting Mechanisms" 10 years earlier. Ai.

@synctext
Member Author

Trust is what the Internet always needed, but never had. We are trying to solve this problem directly and have little competition. Very few teams work on this because it is so hard, no solution is in sight, and profit-making models are problematic. TU Delft seems to be the only (academic) entity focused on building running code for the public infrastructure of identity & trust. Strong identities are a critical building block for trust. We aim to provide legal certainty around identities, digital signatures and multi-party signed contracts.

Trust is a difficult concept to define. We place trust in agreements. However, a business contract is nothing more than a promise enforceable in court.
The power of a traditional contract critically relies on a functioning legal system.
If power comes from higher authorities and laws, why do we need smart contracts? A naive thought is that we can digitize all business contracts and won't need arbiters.

Bitcoin never succeeded because it is based on the "closed world assumption". Smart contracts are dumb if they cannot interact with the legacy analog world and the legal system. Before we have any smartness in smart contracts we need to integrate well-known concepts such as legal certainty, counterparty risk, investment durability, shifting business models, changes in existing laws, etc. Governance is vital: who actually owns and controls everything? Maintainability is a cardinal requirement, both in the short term and for quantum-proof hashing in the far future.

@ichorid
Contributor

ichorid commented Sep 23, 2019

Genealogical identity system

One solution to the trust problem could be a "genealogical bootstrap", where users issue certificates to other users, forming lineages. Every user will have one (or two, or several) "parents" who certified the creation of his account (essentially giving him a birth certificate). This will make it easy to distinguish Sybil regions.

In this system, you will only crawl subtrees of your family tree. You can ask other people for peers with a specific chain of predecessors. We can run two bootstrap certifying instances (Adam and Eve 😉) and only give certificates to real people who ask for them. Then these people can give certificates to other people, and so on.

Example

When you want a "birth certificate", you create your private key and send it to one of your parents, say Adam. Adam then "proposes" to whomever he wants to procreate with, say Eve (or maybe the child selects both parents instead). Then Adam sends Eve the proposal to sign his block. They both sign it and put it on their TrustChains. Then the child gets the certificate. Note that Eve can reject the offer if Adam looks suspicious to her 😉.
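A minimal sketch of that co-signing flow, assuming Ed25519 keys via the Python cryptography package; the BirthCertificate structure and field names are made up for illustration and are not the actual TrustChain block format:

```python
from dataclasses import dataclass

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


def raw(public_key) -> bytes:
    """Serialize a public key to its raw 32-byte form."""
    return public_key.public_bytes(Encoding.Raw, PublicFormat.Raw)


@dataclass
class BirthCertificate:
    child: bytes        # child's public key
    parent_a: bytes     # e.g. "Adam"
    parent_b: bytes     # e.g. "Eve"
    sig_a: bytes = b""
    sig_b: bytes = b""

    def payload(self) -> bytes:
        return b"birth:" + self.child + self.parent_a + self.parent_b


# The child generates a key pair and asks two parents to co-sign its creation.
child_key = Ed25519PrivateKey.generate()
adam_key = Ed25519PrivateKey.generate()
eve_key = Ed25519PrivateKey.generate()

cert = BirthCertificate(child=raw(child_key.public_key()),
                        parent_a=raw(adam_key.public_key()),
                        parent_b=raw(eve_key.public_key()))

# Eve is free to refuse to sign if Adam (or the child) looks suspicious to her.
cert.sig_a = adam_key.sign(cert.payload())
cert.sig_b = eve_key.sign(cert.payload())

# Anyone can later verify the lineage link (raises InvalidSignature on failure).
adam_key.public_key().verify(cert.sig_a, cert.payload())
eve_key.public_key().verify(cert.sig_b, cert.payload())
```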

This system has some interesting properties. For instance, it naturally discourages inbreeding, because the progeny of a close-relative pairing has shorter paths to the most recent common ancestor (MRCA), which makes it more vulnerable in case the MRCA turns out to be corrupted.

So the system promotes diversity of connections. In a sense, #4481 is a particular case of this system, where instead of a tree we use the Internet traffic exchange graph. But #4481 looks more like a bacterial asexual reproduction system, which is more about producing clones. From this point of view, @grimadas's accountability protocol #4719 looks like horizontal gene transfer, and the double-spend detection system is analogous to CRISPR.

Applications

  • Social networks: people can naturally issue certificates for their friends and family
  • Groups of interest: political parties and interest groups can use it to detect "relatives" with similar interests
  • Finance and business: a genealogical tree is a natural way to express corporate hierarchies, ownership rights, etc. One cool feature here is that there is no circular-ownership problem in this system.

Remember that all highly complex organisms use sexual reproduction...

@qstokkink
Contributor

@ichorid Could you expand upon how the genealogical identity system is different from PGP's web of trust?

@ichorid
Contributor

ichorid commented Sep 25, 2019

> @ichorid Could you expand upon how the genealogical identity system is different from PGP's web of trust?

  • PGP is about dynamic horizontal connections. The genealogical identity system is about vertical static links. These systems complement each other.
  • PGP is not concerned with relationships between users, only with connectivity. The genealogical system, instead, is about providing strong incentives for "children" to select proper "parents", and for "parents" to only accept well-behaved "children". The closer you are to someone in the "family tree", the more you care about them behaving well.
  • There already exists a similar hierarchical higher-level system built atop PGP.

@synctext
Member Author

synctext commented Mar 31, 2020

Problem: Trust is too generic to "sell".

One of the hard things about getting adoption for distributed trust is that it is a general societal concept, not a specific, readily useful tool. In other words, distributed trust can be used for numerous different undertakings. This is a strength long-term but a bit of a detriment short-term. We need to find the killer usage that drives distributed-trust adoption now, or we won't be able to stick around long enough to see distributed trust in action for all the various use cases (YouTube alternative, decentralised Uber, self-organising Amazon, etc.) we imagine. [1]

Key usage: a global file system. See early USENIX work from nearly 30 years ago on Internet sharing; NFS was used to unify different FTP sites 30 years ago. Now the IPFS people are trying to create a unified hash space with their native ⨎ FileCoin token economy, by Q3 2020.

DEC VMS:
00README.TXT;7 6 9-APR-1991 18:14 [ANONYMOUS] (RWED,RE,RE,RE)
PUB.DIR;1 8 18-OCT-1990 07:20 [ANONYMOUS] (RWED,RE,RE,RE)
Unix:
dr-xr-xr-x 3 system 320 Jul 16 14:25 0.7alpha
-rw-rw-rw- 1 system 853 Jul 16 13:41 directory
IBM VM/CMS:
FUSION BIBLIO4 V 80 2219 45 11/25/91 7:00:07 FUSION
FUSION BIBLIO5 V 78 132 3 11/27/91 10:01:11 FUSION

@synctext
Member Author

synctext commented Jul 16, 2020

What is our 2025 or 2026 objective?
We need a single concrete objective to guide the roadmap of the entire lab of 12 people. In 2026 our current funding also runs out, so that is a good focal point. Draft ideas:

  1. Distributed trust design (current goal)
Distributed Trust Design provides a coherent framework to transform mutually non-trusting people into
trustworthy online communities of arbitrary size, that have a process for democratic decision-making and the
ability to punish those who freeride on the community or are dishonest.
  2. Large-scale cooperative systems
Humans have cooperated for causes such as building pyramids, understanding the building blocks of life, and
putting a man on the moon. We devised new mechanisms which build cooperative networks online between
millions of strangers, thereby architecting another cooperation inducer besides currency, kindness and kinship.
  3. Durable existential autonomy
Our trustworthy and leaderless organisational method relies on complete automation (e.g. a DAO). Continuous
external inputs are required for peer production (Linux, Wikipedia). We achieved existential autonomy with the
first sustainable DAO, operating by selling privacy-as-a-service and using this income for self-maintenance.
  4. Breaking monopolies with a protective "firewall"
Private companies now control much critical infrastructure. Adversarial interoperability using open source
solutions will weaken their stronghold. Our "monopoly firewall" with decentralisation immunity will shield us
from the resulting harassment, press slander, domain hijacks, DDoS, lawyer-based attacks and digital sabotage.
  5. The cure for hypercapitalism
Economic disparities have increased over the centuries and technology has played a role. Society has never been
more unequal than at present, in terms of the distribution of income and wealth. Our technology gives power to
citizens by removing power from both companies and governments.
  6. Freedom to innovate
Bitcoin pioneered permissionless finance in 2009. CloudOmate introduced permissionless computation in 2017.
Our DAO offers the basic blueprint for sustainable economic activity and gives anybody the freedom to build
upon its foundations without asking any gatekeeper for permission.
  7. Technological sovereignty for Europe
The European Commission has made technological sovereignty a priority. An alternative is needed to the American
tech giants and their dominance over everything from social media and online search to cloud computing and
e-commerce. We will demonstrate with a million users that technological sovereignty is possible, trustworthy,
and durable.
  8. Proof that we can overcome the Tragedy of the Commons at scale
Depletion of our physical resources is the main threat to humanity. By showcasing a sustainable digital commons
we provide hope for the future. This requires extraordinary proof, with at least a million people.

@grimadas
Contributor

You have to learn the rules of the game. And then you play better than anyone else. [Albert Einstein]

Rules of thumb of Trust Design

As one of the main goals of the lab is to build infrastructure that fosters cooperation, I thought it would be useful to collect insights from the social sciences. These are very useful when designing any feature or algorithm that deals with humans.

There have been many experimental and theoretical studies in behavioural game theory, which models players as boundedly rational. Although classical games (Prisoner's Dilemma, Social Dilemmas, Trust and Ultimatum games) are useful models, the Nash equilibrium is only a limited predictor of reality. Here are insights and heuristics I took from an excellent paper:

Prisoner's Dilemma

People cooperate more than predicted in PD, but why?

  • Cooperation might be more attractive because of a cooperative reputation, as it gives more chances for reciprocal altruism.
  • By default, people believe that their counterpart is altruistic, giving them a chance to build a reputation.

Social Dilemmas

Can be viewed as an extension of the PD to an n-player game. This extension has allowed the identification of a number of interesting challenges:

Ways to solve the dilemmas:

  • Address greed and fear

The main incentives for defection are greed and fear, with greed being predominant.
Typical ways to address them: guarantee money back, or design incentives such that defecting will not increase the outcome.

  • Create group identities

One way to solve the dilemma is to create a superordinate group identity.
People believe that groups have a competitive advantage over individuals.

Intergroup conflict also increases intragroup cooperation.

  • Create punishments

Altruistic punishment encourages collective cooperation.
People appreciate punishments and strong social norms, especially after they have experienced free-riding.

Relatively small punishments only decrease cooperation, and might induce revenge and decrease interpersonal trust.

  • Create rewards

Might not be as effective as punishment.

  • Positive framing

Different labels affect cooperation: take-some (harvesting) is more cooperative than give-some (donating).
Taking and claiming make people think about fairness, control and responsibility.

  • Play on guilt

Trustees felt obligated to reciprocate when trustors showed high levels of trust (for example, sending a lot of money).

@synctext
Member Author

synctext commented Dec 16, 2020

... DRAFT ....

Current status of our civilisation

Once upon a time it was believed that the Internet would lead to greater equality. Today we are less naive and see the real impact. The Internet has eroded unity, wealth, privacy and security.

Once upon a time it was believed that our time was special and that we were witnessing the end of history. We now see the continued centralisation of our economies, with monopolies that only increase in size and power. The dystopian vision that megacorporations would run the world has been replaced by Wall Street steering our global economy. Within the industrial age we see a systemic move towards less competition in an increasing number of markets.

Once upon a time monopolies were rare. Now most digital markets show huge profits with winner-takes-all dynamics: predatory pricing, monopoly formation, buying and eliminating the competition, and market failure in general. Piketty showed that income from wealth is greater than income from labor: a systemic bias towards the elite. Inter-generational wealth is on the rise. https://hbr.org/2014/04/pikettys-capital-in-a-lot-less-than-696-pages Ref1 ref2 ref3

Outlook of our civilisation

The robot economy will further amplify inequality. Climate change may be the next financial crisis, at great-depression level, and will require trillions in investments. "Climate change-induced migration and violent conflict", https://doi.org/10.1016/j.polgeo.2007.05.001

The new generation of citizens, those born this year, will face a radical transformation. Newly born citizens will see future technology beyond self-driving cars, with access to space that is currently still "astronomically expensive". The Space Economy is emerging and is expected to further amplify centralisation: a single winner within the whole raw-resource economy. https://doi.org/10.1016/j.actaastro.2019.05.009, also https://asteroidminingcorporation.co.uk/aep-1

Alternatives to extractive capitalism

An economic principle for any economic activity, scaling to any size; an alternative economic principle for the global economy.
The common good as a first concern, beyond stockholder value. We need to develop global economic primitives which are permissionless. Real-life issues:

  • What if someone sold you something they don't own?
  • What if someone takes your money and runs?
  • What if someone pretends to be someone else?
  • What is fairness within a trade and commerce context?
  • What is truthful information (or fake news)?

Steps toward the first economic transformation. The music industry, scientific publishing and the movie industry are vulnerable. These industries are expected to transform further over the coming decades into a live-experience economy. Expect more meet-the-vlogger events, live performances, theme parks and ocean cruises with a theme (e.g. Disney++).

Explore an isolated microeconomy with "existential freedom" that serves as a training ground for alternatives to capitalism by employing large-scale collaboration between individuals. An economic system with democratic governance and no central authority. No fantasy project: real governance, running code with blockchain, AI, and a democratic voting mechanism which does away with the winner-takes-all systemic bias of capitalism. Operating by selling privacy-as-a-service and using this income for self-maintenance. Feature incorruptible saints, alternatives to faltering institutions, compliance-by-design. The bronze age of our "Democratic Economy"... (Thnx @qstokkink). Introduce a Bitcoin-operated DAO economy around arts & sciences.

Early results

An operational voting system (content likes), an operational ledger, MusicDAO, etc.
A data-driven approach; see the example by Pinterest SEO.

@synctext
Member Author

synctext commented Jul 16, 2021

Lots of related work is appearing, similar to our Trustchain.

  • The Case for Byzantine Fault Detection (2006): the system does not make any attempt to hide the symptoms of Byzantine faults; rather, each node is equipped with a detector that monitors other nodes for signs of faulty behavior. With such detectors, each action is undeniably associated with the identity of the node that has performed the action, allowing the system to gather irrefutable evidence of faulty behavior.

  • Blockchains are missing support for the Skyline operator, published in 2001 by OOI: a multidimensional max/min which ruthlessly measures the efficiency of touching all data (a tiny illustration follows this list).

  • Foundational: "Byzantine Fault Detection" from 1990; theoretical achievements in fault detection, made practical.

  • Monotonic Atomic View (MAV) isolation, from "Highly Available Transactions: Virtues and Limitations" By Ion Stoica!

  • Collaboration auctions, collaboration upload slots, and freeriding prevention Two sealed-bid auction schemes are presented to efficiently and fairly allocate node reputation for single and multiple relay scenarios resulting in energy savings arising from application of the considered indirect reciprocity model. The simulation results show that the proposed reputation auction framework can achieve higher energy efficiency compared to non-cooperative schemes and only slightly increases signaling overhead.

  • All You Need is DAG We construct DAG-Rider in two layers: a communication layer and a zero-overhead ordering layer. In the communication layer, processes reliably broadcast their proposals with some metadata that help them form a Directed Acyclic Graph (DAG) of the messages they deliver

  • Van Renesse, "Byzantine Chain Replication" https://link.springer.com/chapter/10.1007/978-3-642-35476-2_24

  • Roy Friedman, "Hardening Cassandra Against Byzantine Failures" https://arxiv.org/abs/1610.02885

  • "Conventional data storage methods like SQL and NoSQL offer a huge amount of possibilities with one major disadvantage, having to use a centralized authority" https://arxiv.org/pdf/2101.05037.pdf

  • https://kevacoin.org/whitepaper.pdf

  • Anna: A KVS For Any Scale. Anna: a partitioned, multi-mastered system that achieves high performance and elasticity via wait-free execution and coordination-free consistency. Our design rests on a simple architecture of coordination-free actors that perform state update via merge of lattice-based composite data structures.

  • Real time group editors without Operational transformation. WOOT (WithOut Operational Transformation) framework that ensures intention consistency without following the OT approach. However, thanks to its new viewpoint, WOOT is drastically simpler, more efficient and does not require vector clocks or central sites. The WOOT framework is particularly adapted to very large peer-to-peer networks. Also: Dr. Victor Grishchenko work, former Tribler post-doc. ChronoFold

  • The Evolution and Design of Digital Economies In examining how the contrasting forces of scale and scope economies, together with the relative costs of transacting within firms and markets have facilitated the emergence of decentralised marketplaces, we make use of a number of core economic principles. These include the economics of transaction costs, ownership and control, the principal-agent problem, bounded rationality, information asymmetry and trust relations. plus Blockchain technology has the potential to reduce transaction costs, execution risk, and information asymmetry, and in doing so increase the speed of transaction. The central feature of mechanism design based upon blockchain technology is the use of decentralised consensus which provides the authenticity of transactions within a trustless network.

  • The Rise of the Platform Business Model and the Transformation of Twenty-First-Century Capitalism: The explosive success of aggressive winner-take-all strategies has had a broader transformative effect. The rise of the platform firm thus represents a new development in the changing nature of work, the growth of inequality, and the eroding social contract.

  • "Peer‑to‑Peer‑Based Social Networks: A Comprehensive Survey"
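The tiny illustration promised above for the Skyline operator: a skyline query keeps only the records that no other record dominates in every dimension. The data and dimensions here are made up, and "larger is better" is assumed for both:

```python
def dominates(a, b):
    """True if a is at least as good as b in every dimension and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))


def skyline(points):
    """Return the non-dominated (Pareto-optimal) points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]


# Toy records: (trust score, uptime in days); both dimensions "larger is better".
peers = [(0.9, 120), (0.6, 100), (0.95, 80), (0.4, 500)]
print(skyline(peers))   # [(0.9, 120), (0.95, 80), (0.4, 500)]; (0.6, 100) is dominated
```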
