Ideas of possible technical details for the trust system
(The description below is only a draft; it presents several possible
ideas among which a choice will have to be made.)
A first question is how users can enter the data of trust
declarations. Here is a description of a method based on the
contacts list.
Declarations of trust between users
For the following, each user can decide, for each of his pseudos,
whether it will be a "protected" pseudo or not. The meaning and use
of this is described below.
Each user has a list of contacts, that is, pseudos of people he
corresponds with. (So a "contact" means a pseudo of another user.)
One can access an information page about a contact by clicking on it
in the contacts list or, when it is not yet listed, by clicking on
it as the pseudo signing a message one reads (and in this way add it
to the contacts list).
This info page shows what one has decided about this contact, and
some information that the computer reports about it, as described
below.
One can declare to the computer, and possibly modify later (though
archives will be kept for a while), the following information:
A relation of trust, given as a function of two variables x, y,
taking values in the set of trust values below, where x is one of
the user's own pseudos and y is a contact (i.e. someone else's
pseudo that this user knows); it says: "I, under the name of x,
declare this trust to y".
To each contact, a user may optionally attribute different kinds of
trust. The possible attributes are more or less independent of each
other (except that a complaint of one kind is not compatible with
trust of another kind):
- Basic trust by a pseudo x to a contact y, with a few possible
values :
- Neutral (by default)
- Positive
- Negative, with different possible degrees : weak or strong
complaint (or maybe more, such as Antispam, which may be
treated differently...)
- Similarly, other kinds of trust (such as powers);
  possible values and ways of working may depend on the kind of
  trust
- Credit (basis of the money system): see this theory of credit
  (prototype and classical interpretation model only for now):
  each credit is measured by a quantity of money. A small quantity
  of credit amounts to almost no trust but opens the possibility of
  micropayment.
- Once the money system is implemented, a more elaborate combination
  of the trust and money systems can be developed so that they
  reinforce each other, through a notion of credit-backed trust,
  which is stronger than credit alone. A credit-backed trust
  declaration requires having made a credit declaration of a
  significant nonzero value, and carries a value in relation to this
  credit (the value of a trust must be strictly positive, otherwise
  it is no trust at all; alternatively, we can require the credit to
  be higher than a certain threshold to allow for trust). The value
  of this declaration is defined as the minimum of the credit given
  and the quantity of money the other can still afford to pay to
  oneself; to keep this last quantity nonzero, we can distinguish 2
  kinds of credit: one that can be used normally, and another
  ("trust reserved credit") reserved for giving a value to trust
  declarations, used only as a guarantee in case the person is found
  dishonest. (A minimal sketch of this is given just after this list.)
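To make the above concrete, here is a minimal sketch in Python of how a trust declaration and the value of a credit-backed trust declaration could be represented. All names (BasicTrust, Declaration, credit_backed_trust_value) and the exact interpretation of "affordability" are illustrative assumptions, not a specification.

    from dataclasses import dataclass
    from enum import Enum

    class BasicTrust(Enum):
        NEUTRAL = "neutral"              # default
        POSITIVE = "positive"
        WEAK_COMPLAINT = "weak complaint"
        STRONG_COMPLAINT = "strong complaint"
        ANTISPAM = "antispam"            # may be treated differently

    @dataclass
    class Declaration:
        """One declaration: "I, under the name of x, declare this trust to y"."""
        x: str                               # declaring pseudo
        y: str                               # target contact (someone else's pseudo)
        basic: BasicTrust = BasicTrust.NEUTRAL
        credit: float = 0.0                  # normally usable credit, in money units
        trust_reserved_credit: float = 0.0   # kept only as a guarantee backing trust

    def credit_backed_trust_value(credit_given: float,
                                  affordability_of_y_to_x: float) -> float:
        """Value of a credit-backed trust declaration from x to y: the minimum of
        the credit given and what y can still afford to pay back to x."""
        return min(credit_given, affordability_of_y_to_x)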
General ideas about trust and distrust
There are different possible definitions to be considered because
conflicts mean there are contradictions in the logical system:
- One considers oneself reliable
- If A is reliable and declares trust to B then B is
reliable
Then, the web servers will compute the transitive relation generated
by the relation of valid trust declarations, to give each user the
possibility to see (when displaying someone's info) whether he
indirectly trusts that person or not. This information need not be
updated every minute; once a day is enough. For this, it may be
useful for the servers to regularly (every day?) build a map of all
users by equivalence class, where 2 users are in the same class if
each indirectly trusts the other.
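As one possible way of building this daily map, here is a sketch in Python (assuming the whole trust graph fits in memory on one server) using a standard strongly-connected-components algorithm: two users end up in the same class exactly when each indirectly trusts the other.

    from collections import defaultdict

    def equivalence_classes(trust_edges):
        """Group users into classes of mutual indirect trust, i.e. strongly
        connected components (Kosaraju's algorithm).
        trust_edges: iterable of (a, b) meaning 'a declares trust to b'."""
        graph, rgraph, nodes = defaultdict(set), defaultdict(set), set()
        for a, b in trust_edges:
            graph[a].add(b)
            rgraph[b].add(a)
            nodes.update((a, b))

        # First pass: order nodes by DFS finishing time (iterative DFS).
        seen, order = set(), []
        for start in nodes:
            if start in seen:
                continue
            seen.add(start)
            stack = [(start, iter(graph[start]))]
            while stack:
                node, neighbours = stack[-1]
                for nxt in neighbours:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append((nxt, iter(graph[nxt])))
                        break
                else:
                    order.append(node)
                    stack.pop()

        # Second pass: collect components on the reversed graph,
        # in decreasing order of finishing time.
        assigned, classes = set(), []
        for start in reversed(order):
            if start in assigned:
                continue
            component, stack = [], [start]
            assigned.add(start)
            while stack:
                node = stack.pop()
                component.append(node)
                for nxt in rgraph[node]:
                    if nxt not in assigned:
                        assigned.add(nxt)
                        stack.append(nxt)
            classes.append(component)
        return classes

    # Example: A and B trust each other so they form one class; C is alone.
    # equivalence_classes([("A", "B"), ("B", "A"), ("A", "C")])
    # returns the two classes {A, B} and {C} (in some order).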
But the opposite information needs to be shared too:
- If A is reliable and complains against B then B is not
  reliable.
This raises the problem of how a self-contradictory logical system
can still provide answers.
One problem will be to compute, anyway, something about indirect
trust towards a given pseudo (the author of some information, or a
possible partner for a transaction) while the contradiction is not
yet resolved.
Another problem is to resolve contradictions by means of the
complaints system: when 2 users who both received indirect trust
contradict each other about a third person, then the people
involved in the chains of trust to them must be involved in the
discussion and choose which side they support. Two people
supporting different sides cannot trust each other anymore, so any
trust declaration between them is cancelled.
So, a trust declaration is worthless if it is not made in a
responsible manner. To be responsible, the person giving trust has
to be informed about possible conflicts in which the trusted person
may be involved, in order to take part in the debates (trials) and
possibly withdraw his trust declaration. Also, we shall automate
the act of disagreeing with the people who choose the other side,
while giving it a more anonymous form (with a concept of conflict
of opinion with someone about a problem not directly involving him
but only someone else), and making complaints automatically
symmetric.
However, a user may not accept that anyone "trusting him" gets
informed about conflicts he may be directly involved in, as this
would constitute a breach of privacy, while he can accept to take
responsibility for his opinions about somebody else's troubles.
First, the spread of information about one's own conflicts can be
a breach of privacy in itself (especially in the case of a small
conflict, not a serious one, that might just be a test and can be
solved easily between a few people); only serious conflicts, where
many people agree with the complaint, should be made public.
Second, the mechanism of involvement brings a risk to the privacy
of the information of which pseudos belong to the same user: if a
user owns two pseudos A and A', then a spy could discover this fact
by declaring trust to A while his friend makes a foolish (testing)
complaint against A'. The immediate, automatic involvement of those
directly trusting A, as the first invited participants in the
trial, would be a way of revealing that A and A' are the same user.
For this reason, we need to develop the following:
How to manage the multiplicity of pseudos per user in the
computation of indirect trust, to respect privacy
The idea is to let a user taking part in a conflict choose which of
the people trusting him he will invite to join the debate, in the
hope of getting their support, with the risk of losing that trust
instead of gaining the support.
I first imagined this choice to be made in advance, while the
invitation during the conflict would be done automatically as
determined by this prior choice :
Each user Y can see the list of trust declarations made by anybody
else X (displayed on Y's board in the form of some pseudo x of X)
to any one of his own pseudos y. For each such declaration, user Y
can choose to receive or not receive it. If he chooses to receive
it, then it is first received by the pseudo y to which X addressed
this declaration, but then Y can also choose to apply the reception
of this trust declaration by X to whichever other pseudos (y',
y''...) he chooses.
Then, indirect trust from a user A to a pseudo y is defined by the
existence of a chain of trust:
A → B → ... → W → X → y
for any users A, B, ..., W, X, disregarding which pseudos of each
user are involved and whether these trust declarations are marked
as received or not; only the last trust link X → y needs to carry
reception of this trust declaration from X to y, by the owner Y of
pseudo y.
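Here is a minimal sketch in Python of this definition, under the simplifying assumptions that declarations between intermediate users have already been merged into a user-level graph and that "reception" is recorded as a set of (declaring user, pseudo) pairs; the function and parameter names are illustrative.

    from collections import deque

    def indirectly_trusts(user_a, pseudo_y, user_trust, declarations_to, received):
        """Indirect trust from user A to pseudo y.

        user_trust:      dict user -> set of trusted users (pseudos of one user merged)
        declarations_to: dict pseudo -> set of users who declared trust to it
        received:        set of (declaring_user, pseudo) pairs that the pseudo's
                         owner marked as received
        Only the last link X -> y must be received; intermediate links need not be."""
        final_links = {x for x in declarations_to.get(pseudo_y, set())
                       if (x, pseudo_y) in received}
        if not final_links:
            return False
        if user_a in final_links:
            return True
        # Breadth-first search through the user-level trust graph.
        seen, queue = {user_a}, deque([user_a])
        while queue:
            u = queue.popleft()
            for v in user_trust.get(u, set()):
                if v in final_links:
                    return True
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False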
Anyway, except for the last link of the chain, pseudos of the same
user can be conflated, and trust needs no confirmation of
acceptance, because:
For each user inside the chain, say B in the chain A → B → C, it
will not matter that he received trust from an unknown A to one of
his pseudos b while he declared his trust to C under another pseudo
b', because if he intervenes in the conflict (participates in the
discussion) he will do it under neither pseudo b nor b', but under
a third pseudo b''; only A might know that b and b'' belong to the
same user, while only C might know that b' and b'' belong to the
same user; they should not share this information (even if they
share it between themselves they still should not publish it), so
that the information that b and b' belong to the same user remains
private.
(Note: can Y be the same as another user in the chain, for example
Y=W? Does it even matter whether other users in the chain from Y to
y, such as X, are real, or whether they are virtual, created by Y?
I think it makes no significant difference: it only has the effect
of delaying the time when someone trusting Y gets involved, and
will know that he is invited due to his trust in W(=Y), and may or
may not guess that W is a direct party to the conflict (W=Y)...
only a little later than if Y had accepted to receive this trust to
y right from the start.)
But there is another solution
That is, to let someone taking part in a trial (= the online debate
about a conflict) take the initiative of choosing whom he will
invite to join, among the people who trust him.
I had first ignored this possibility, and instead considered
invitations (to participate in the trial) to be automatically sent
at regular time intervals to any people who trust those who took
part in that trial on each side, as long as such further people
exist on both sides (otherwise there is a winner, who remains
trusted, while the loser has lost his chains of trust). Without
such automation, this process would risk lasting forever without a
solution, one side being a group of hackers preserving the chain of
mistaken trust to them by never challenging it through invitation
of members of that chain, but inviting instead an endless list of
fictional users (accounts of people who do not exist but are
automatically generated by malicious software) to "support" them.
Finally, I consider another solution against this risk, by the
following method:
- Using a system of certification of the real identity of users
(as I will describe later), to make a reliable count of the real
number of supporters on each side
- Comparing the number of supporters on each side: the side that
  has more supporters (or rather, significantly more, such as >
  1.5 times the number of supporters of the other side) is
  (temporarily) considered "right", and the side with fewer
  supporters is considered "wrong" and has the responsibility to
  invite more of the people still trusting them to take part in
  the debate.
The real identities of the supporters of each side are known by
their host but not published; only their number is published.
More precisely, each host publishes the number of supporters on
each side among its own users.
You may ask: what if a host is malicious and publishes a number of
supporters of a side that does not match the real number of its
users actually supporting that side?
Reply: if the number of supporters of a side published by a given
host gets big and a suspicion is raised about its reality, this
number can be verified by the method of online voting, in which
all users of that host will be requested to take part.
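As a small illustration of how the published per-host counts might be aggregated, here is a sketch in Python; the 1.5 threshold comes from the text above, while the function name and the tie handling are assumptions.

    def provisional_verdict(counts_by_host, threshold=1.5):
        """Aggregate the supporter counts published by each host and decide which
        side is (temporarily) considered "right".
        counts_by_host: dict host -> (supporters_of_side_1, supporters_of_side_2),
        each host counting only its own certified users."""
        total_1 = sum(c1 for c1, _ in counts_by_host.values())
        total_2 = sum(c2 for _, c2 in counts_by_host.values())
        if total_1 > threshold * total_2:
            return "side 1"
        if total_2 > threshold * total_1:
            return "side 2"
        return None   # no clear majority yet: both sides keep inviting supporters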
Complexification by different types of trust - the case of
memberships
There will be standard trust, but other kinds of trust will be
handled in parallel according to the same (or a different) logic.
So, different kinds of trust are classified by logical structure
(format of declarations with their list of options, kind of
computation on the graph, kind of interaction between these data
and debating spaces) and by label (meaningful for humans but just a
label for computers).
One possible kind (logical structure) of trust, is memberships (or
is there a better word ?). Any user can create a (label of)
membership. Examples of memberships can be "having
the True Faith", "being a genuine scientist" (= not a crank),
or many other purposes where people need to identify the genuine
members of their group. It is another graph of trust which is
computed the same way, with
- Only users who choose to be members
- Trust declarations between users for this membership, which are
  separate from basic trust (but usually imply it).
A membership can be public or private. If it is public, it gives
non-members the possibility to view this trust towards people from
the viewpoint of some existing member (who accepted this).
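As an illustration of "classified by logical structure and label", here is a sketch in Python of a membership as a separate trust graph restricted to its members; all names are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class Membership:
        """One kind of trust of the "membership" logical structure."""
        label: str                 # e.g. "genuine scientist"; just a label for computers
        public: bool = False       # public memberships can be viewed by non-members
        members: set = field(default_factory=set)    # users who chose to be members
        trust: dict = field(default_factory=dict)    # member -> set of trusted members

        def declare(self, x, y):
            # Membership trust is declared only between members; it is kept
            # separate from basic trust (though it usually implies it).
            if x in self.members and y in self.members:
                self.trust.setdefault(x, set()).add(y)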
How to give values when trust declarations are mixed with
complaints
A first approach would be to consider the following definitions:
- Indirect trust = existence of a chain of trust (and maybe of what
  length?) from oneself to the given pseudo.
- Indirect distrust = existence of a chain of trust from oneself
  to someone who complained against that pseudo (together with
  information about the reason of distrust).
But things may be more complicated. For example, what can happen
if we have the chains of trust
A→ B → C
↓
D → E
but E distrusts B, and C distrusts D? If we accept the indirect
trust from A to C as valid then this eliminates D from A's trust,
and thus makes the indirect trust from A to E invalid; however we
have no more reason to reach this conclusion than to accept the
indirect trust to E as valid and thus reject the trust to B as
invalid. So we need more subtle definitions.
But how likely is this kind of graph of trust to happen in a large
population? If C is in fact honest then there are high chances that
other chains of trust from A to C confirm his honesty. However, if
E is mad (or, for example, a fake user created by D), he could have
created conflicts with everybody in order to affect other chains of
trust from A to C as well, extending the same kind of configuration
to parallel branches in the graph of trust.
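To make the ambiguity of the example above explicit, here is a small sketch in Python computing which pseudos A ends up indirectly trusting under each of the two readings; the helper name is illustrative.

    def reachable_avoiding(trust, start, excluded):
        """Pseudos reachable from start by chains of trust avoiding 'excluded'."""
        seen, stack = set(), [start]
        while stack:
            u = stack.pop()
            for v in trust.get(u, set()):
                if v not in excluded and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    trust = {"A": {"B", "D"}, "B": {"C"}, "D": {"E"}}
    # Reading 1: accept the chain A -> B -> C, so C's complaint eliminates D (and E).
    print(reachable_avoiding(trust, "A", excluded={"D"}))   # {'B', 'C'}
    # Reading 2: accept the chain A -> D -> E, so E's complaint eliminates B (and C).
    print(reachable_avoiding(trust, "A", excluded={"B"}))   # {'D', 'E'}
    # Neither reading is better justified than the other, hence the need
    # for more subtle definitions.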
Sides and complaints
Roughly:
A complaint opens a dispute between opposed sides in an online
forum. Members of the chains of trust that lead to either side can
join and choose their side in the dispute, until one side "wins",
as the losers lose the trust they had received from others before.
In detail: a complaint has to start from:
- either an existing private forum where the person was involved,
which will be transformed into the debate space
- or anything where someone made an action, that will be the object
of the complaint; then the debate will be created with reference to
it.
First step is warning.
It has an explanatory text or message, which is the start of the
debate, that the accused person will see (linked to) on his board
(like any signal of a new message, but marked with this warning).
This has no public effect on trust, but is there to give a chance
for any misunderstanding to be resolved between the people
concerned before any other people get involved.
The warning can be cancelled by the person who made it if the
problem is resolved; otherwise, after some delay (one day or one
week?), it will become possible to switch to the next step, making
the complaint public.
Second step is open conflict
This has a publicly visible effect on the trust information about
the person.
Both people in conflict will become the respective centers of a
pair of opposed sides. At first, each side has one member: the
complainer and the accused person.
These first two people, who have a dispute, will each be the head
of their own "side".
More people, namely those who had made a trust declaration to
anyone involved in the dispute, may also join the dispute (if
invited by the one involved) and choose their "side".
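A minimal sketch in Python of the two-step lifecycle described above (warning, then open conflict); the exact delay is still undecided in the text, and all names are assumptions.

    from enum import Enum

    class ComplaintState(Enum):
        WARNING = 1        # seen only by the accused; no public effect on trust
        OPEN_CONFLICT = 2  # publicly visible effect on the trust information

    def can_open_conflict(state, days_since_warning, min_delay_days=1):
        """The warning can be escalated to an open conflict only after some delay
        (one day or one week? the exact delay is left open), unless cancelled."""
        return state is ComplaintState.WARNING and days_since_warning >= min_delay_days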
Pairs of opposed sides modify the trust computation, in the
following way.
For each pair of opposed sides, each user can choose to be either
neutral (by default everyone is neutral) or supporter of (only) one
of both.
Suppose there is only one pair of opposed sides (if there are
several, the definitions below would have a computational
complexity exponential in the number of such pairs, but there may
be a more computable approach as well).
The one graph of normal trust will be replaced by 2 graphs of trust
to be computed independently: one, G, the graph of trust relative
to the first side, which excludes the supporters of the other side;
the other, G', which excludes the supporters of the first side and
is the graph of trust relative to the second side.
Now, to the question a user A may ask in the network, whether he
indirectly trusts B, 2 possible answers might be considered,
depending on which side is right:
- A indirectly trusts B by G, or
- A indirectly trusts B by G'
Then the question is whether to choose between them, or whether we
may be interested in knowing both answers. Of course, supporters of
each side can look at the graph of trust relative to their own side.
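Here is a sketch in Python of this per-side computation, assuming the list of supporters of each side is known; the function names are illustrative.

    def reachable_excluding(trust, start, excluded_supporters):
        """Indirect trust computed on the graph excluding one side's supporters."""
        seen, stack = set(), [start]
        while stack:
            u = stack.pop()
            for v in trust.get(u, set()):
                if v not in excluded_supporters and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    def indirect_trust_both_sides(trust, a, b, side1_supporters, side2_supporters):
        """For one pair of opposed sides, answer "does A indirectly trust B?"
        under G (excluding side 2's supporters) and G' (excluding side 1's)."""
        by_g  = b in reachable_excluding(trust, a, side2_supporters)
        by_g2 = b in reachable_excluding(trust, a, side1_supporters)
        return by_g, by_g2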
....
Previous drafts of ideas, maybe to be discarded
If there are N pairs of opposed sides, it would not be rational to
compute another graph for each of the 2^N ways of taking sides. A
possible idea is, for each pair of opposed sides, to have 2 graphs
of trust as if nobody took any other side, and also a synthesis
into one graph or relation, for use by another server busy
computing another pair of opposed sides. But to approximate the
taking into account of one opposition when computing another
opposition, we will consider as invalid any trust declaration
between supporters of opposed sides.
The invitations to the forum of the trusters (which means, the
pseudos y such that C(y,x) = Trust) (or a subset of this set,
decided how??) of supporters of the involved sides will happen
progressively (at which rhythm?) depending on whether participants
consider that they need to extend the conflict, or whether they are
numerous enough and just need time to debate.
The conflict can run through the following states, to be chosen by
the heads of sides: weak (without effect on trust, just for
debating and clarifying problems), then medium (with this effect on
trust), then strong (with a public note on the profiles, visible by
anyone in neither G nor G').
But we can also consider the possibility for a user to support a
side weakly or strongly, in the same spirit (this complicates the
trust calculations, as sides will have strong supporters and weak
supporters).
(Eventually, if the dispute becomes big, each side may need to
structure itself by a membership for writing a document, for
example a wiki, of arguments for the defence of this side.)
(Here is a previously considered algorithm, possibly
worthless)
Compute the preorder relation T1 generated by trust declarations
between users.
For each user x, compute the set of pseudos z indirectly distrusted
by x, that is, Z(x,z) if there exists a pseudo y with T1(x,y) such
that y made a strong complaint against z.
Then consider the indirect trust defined by the chains of trust
from x that avoid the users indirectly distrusted by x in this way;
that is, a new relation T2(x,y), read as "clear indirect trust by x
to y".
So, when x sees the pseudo y, the status of y is displayed, be it
T1(x,y), T2(x,y), and the text of the complaint against y if
Z(x,y).
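Here is a sketch in Python of that algorithm, assuming the graphs fit in memory; the transitive closure is computed naively (one traversal per user), so this is only illustrative, not efficient.

    def closure_from(graph, start):
        """All nodes reachable from start by chains of trust (including start)."""
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in graph.get(u, set()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    def compute_T1_Z_T2(trust, strong_complaints):
        """trust: user -> set of trusted users; strong_complaints: user -> set of
        pseudos complained against. Returns, for each user x, the sets T1(x)
        (indirect trust), Z(x) (indirect distrust) and T2(x) (clear indirect trust)."""
        users = set(trust) | {v for vs in trust.values() for v in vs}
        T1 = {x: closure_from(trust, x) for x in users}
        Z = {x: {z for y in T1[x] for z in strong_complaints.get(y, set())}
             for x in users}
        T2 = {}
        for x in users:
            # Chains of trust from x avoiding the users indirectly distrusted by x.
            restricted = {u: {v for v in vs if v not in Z[x]}
                          for u, vs in trust.items() if u not in Z[x]}
            T2[x] = closure_from(restricted, x)
        return T1, Z, T2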
Complexification by network of independent servers
A problem is for a server to handle tens of thousands of users
without saturating the server's resources (CPU and memory). It
would be possible to keep the result in a rather compact form made
of the list of equivalence classes of users (or pseudos), so that
two users A and B are equivalent if there are chains of trust both
from A to B and from B to A, i.e. each is trustworthy from the
point of view of the other.
Another problem is to extend the graph to millions of users by
combining the trust declarations from different servers.
In short:
The trust system will work as a combination of the data of the local
graph of trust between users of the same host (working inside the
"black box" of the host, therefore with no need of electronic
signature) and the web of trust between different hosts (with
electronic signatures).
In detail:
The site publishes (with signature) a subset of the ordered set
obtained by quotienting the set of users by the equivalence
relation of the preorder generated by the trusting relation.
Hopefully this ordered set consists of only one element, so that
nothing of the internal structure of the site is publicly revealed.
Each server will first build the map of trust relations among its
own users. So, for the subgraph of trust between its users, it
computes equivalence classes and publishes them for other servers,
without saying how many people are in each class, but only as the
list of abstract ids of the classes containing users who received
trust from a user of another server (the classes that receive no
trust from outside are known internally but stay invisible in the
published map).
So it publishes a list of abstract ids together with the relation
of indirect trust (by internal chains of trust) between classes,
and of trust from its own classes to the classes of other servers.
Then, when a user A wants to know the trust to a user B, A's server
asks B's server which class B belongs to or, if B is not in a
published class, which classes trust B, and then uses the union of
the graphs of all inter-site trust and local trust.
Declarations of any trust or anything else between members of
different sites are published as "a member of this class here
trusts a member of that class of that other site".
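Here is a rough sketch in Python of the data a host might publish under these rules; it is a simplification (only classes receiving external trust are listed, and class sizes are never revealed), and all field names are assumptions.

    def build_published_map(local_class_edges, classes_trusted_from_outside,
                            external_trust_edges):
        """local_class_edges: set of (class_id, class_id) pairs of indirect trust
        between this host's classes (computed from internal chains of trust).
        classes_trusted_from_outside: class ids that received at least one trust
        declaration from a user of another server (the only ones made visible).
        external_trust_edges: set of (class_id, other_server, other_class_id)
        for declarations from local users to users of other servers.
        Returns the map to be signed and published; class contents stay private."""
        visible = set(classes_trusted_from_outside)
        return {
            "classes": sorted(visible),                   # abstract ids only, no sizes
            "internal_trust": sorted((a, b) for a, b in local_class_edges
                                     if a in visible and b in visible),
            "external_trust": sorted(external_trust_edges),
        }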
Maybe not useful: computing the shortest path between 2 points?
If N is the number of users, we can define for each user two tables
with slightly more than sqrt(N) entries, made of
1) the people he indirectly trusts,
2) the people that indirectly trust him,
by chains of trust no longer than a certain number. Each table
gives the parent and the length of the chain from the considered
user. Then, to find some shortest trust chain between 2 users, we
just need to see (if one is not already in the other's list) which
names appear in both tables. But most of the time, what will be
needed is not the whole trust chain but its first and last
elements. So, we can put in the table of user x, for each user y
that has a trust chain of length no longer than () with x, what the
end elements near x and y are, for the chain of minimal length.
Unfortunately, it may not be unique.
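A sketch in Python of the table-based idea: each user keeps a bounded-depth table of the people he indirectly trusts and of the people who indirectly trust him, each entry with its parent and distance, and a shortest chain is found by intersecting the two tables. The function names and the use of plain breadth-first search are assumptions.

    from collections import deque

    def bounded_table(graph, start, max_len):
        """BFS table: node -> (parent, distance), for chains of length <= max_len."""
        table, queue = {start: (None, 0)}, deque([start])
        while queue:
            u = queue.popleft()
            d = table[u][1]
            if d == max_len:
                continue
            for v in graph.get(u, set()):
                if v not in table:
                    table[v] = (u, d + 1)
                    queue.append(v)
        return table

    def shortest_chain_length(trust, reverse_trust, a, b, max_len):
        """Length of a shortest trust chain from a to b, found by intersecting a's
        forward table with b's backward table; None if none exists within bounds."""
        forward = bounded_table(trust, a, max_len)           # people a indirectly trusts
        backward = bounded_table(reverse_trust, b, max_len)  # people who indirectly trust b
        common = set(forward) & set(backward)
        if not common:
            return None
        return min(forward[m][1] + backward[m][1] for m in common)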
A possible further development about credit
The following quantities can be computed about a user y for the
account of a user x:
One is the affordability, that is, how much y could afford to pay
to x at a given time, as defined in the theory of credit, when not
including the trust reserved credit as a credit. Note that this
affordability relation is transitive, in the sense that the
affordability of x towards z is always higher than or equal to the
minimum of the one of x towards y and the one of y towards z.
The other is the reliability, which is the same as affordability
except that it includes the trust reserved credit among the credits
(so it is higher than the affordability). This quantity is also
equal to how much x can obtain to be paid back for damages done to
him by y once y is complained against, provided x did not make
mistakes in his own trust declarations.
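The precise definition of affordability belongs to the theory of credit referenced above; as an illustration only, here is one simple model in Python consistent with the stated transitivity property: the affordability of x towards z is taken as the maximum, over chains of credit from x to z, of the minimum credit along the chain (a "widest path" computation).

    import heapq

    def affordability(credit, x, z):
        """Bottleneck value from x to z on the credit graph.
        credit: dict a -> dict b -> amount of credit a gave to b.
        Returns 0.0 if z is unreachable from x."""
        best = {x: float("inf")}
        heap = [(-float("inf"), x)]          # max-heap on the bottleneck value
        while heap:
            neg, u = heapq.heappop(heap)
            value = -neg
            if value < best.get(u, 0.0):
                continue                     # outdated heap entry
            if u == z:
                return value
            for v, c in credit.get(u, {}).items():
                bottleneck = min(value, c)
                if bottleneck > best.get(v, 0.0):
                    best[v] = bottleneck
                    heapq.heappush(heap, (-bottleneck, v))
        return 0.0

    # Example: x gave 50 of credit to y and y gave 30 to z, so z can afford
    # to pay back min(50, 30) = 30 to x through y.
    print(affordability({"x": {"y": 50}, "y": {"z": 30}}, "x", "z"))   # 30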
For concrete details about how trials can be processed, see among
the comments about Google Wave.