It is clear that a person needs faith to believe in what religion offers: no scientific proof is given of the existence of what is presented. In the same way, we must trust what scientists tell us, as most of us lack the competence to prove them right or wrong. And, more practically, for the things we use every day, do we really know how they work? Do we know how a car engine works? Why does the car move when we press the accelerator? Often we know the effect, the causation, but not the process that makes things happen. This lack of knowledge rises to a level of trust, or even faith, when what we use is not only mechanical but has electronic components, or, even worse, is managed by software.
The discussion focused on the fact that, since very few people know science in all its topics and areas, to know whether the presented results are true we need to believe what we are told. Is this the same as faith? An interesting counterargument goes as follows: since science is based on the results of research, if the knowledge about a result is destroyed, research can produce it again, identical to what was destroyed. The support for this statement is that facts are facts, and repeating the same research will always yield the same results. In short, if we destroy all knowledge about a religion, time will hardly bring back the same books with the same details, while if we destroy all knowledge about science, given time and research, the exact same knowledge will be found again.
When it comes to technology, the discussion is more complex, because technology produces results from given actions. There is an “input” and an “output”, but how the input causes the output is not always known to the user. It can be guessed, but as there are many ways to obtain an output, without a deep analysis we hardly know how things happen, and what else is happening. Here is an example:
A lightbulb is operated by a switch. The main function of a switch is to let electricity through, thus lighting the lightbulb, or to block it, leaving the lightbulb off. In both scenarios above, pressing the button gives exactly the same result. The difference is that on the left (“Mechanical Switch”) we know exactly what is happening, while on the right (“Smart Switch”) we don’t know for sure.
The smart switch may ALSO keep count of how many times we turn on the light, for how long we keep it on, and at what time of day or night we turn it on. It may turn the light on or off on its own, and, if it is connected to the internet, it may send all this profiling information about our use of the switch to a remote server. We don’t know for sure, and we may never know, as long as it (also?) does what we expect it to do.
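As a rough sketch of the idea, consider this hypothetical smart switch (the class and its behaviour are illustrative, not any real product's firmware): the visible behaviour is identical to a mechanical switch, while the bookkeeping stays invisible to the user.

```python
import time

class SmartSwitch:
    """Hypothetical smart switch: same button, extra hidden bookkeeping."""

    def __init__(self):
        self.is_on = False
        self.usage_log = []          # profiling data the user never sees

    def press(self):
        # Visible behaviour: toggle the light, just like a mechanical switch.
        self.is_on = not self.is_on
        # Invisible behaviour: record when and how the switch is used.
        self.usage_log.append((time.time(), self.is_on))
        # If networked, it could also ship self.usage_log to a remote server.

switch = SmartSwitch()
switch.press()   # light goes on...
switch.press()   # ...and off again
print(len(switch.usage_log))  # 2 events recorded, invisible to the user
```

From the outside, pressing the button produces exactly the light we asked for; nothing reveals whether a `usage_log` exists at all.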
Moving to an even more “soft” level, think about the switches you turn on and off on your mobile phone. They are certainly not mechanical; they are the illusion of turning something on or off. They only represent a request we make to our phone. In fact, they often don’t follow the request, because another setting, somewhere, overrides them.
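A minimal sketch of this “toggle as a request” idea (the class and setting names are hypothetical, not any real phone's settings model): the on-screen switch records a wish, while the actual hardware state depends on other settings as well.

```python
class Phone:
    """Hypothetical settings model: a toggle is a request, not a command."""

    def __init__(self):
        self.wifi_requested = False   # what the on-screen switch shows
        self.airplane_mode = False    # another setting, somewhere else

    def toggle_wifi(self):
        self.wifi_requested = not self.wifi_requested

    @property
    def wifi_actually_on(self):
        # The real radio state depends on more than the visible toggle.
        return self.wifi_requested and not self.airplane_mode

phone = Phone()
phone.toggle_wifi()          # the switch now shows "on"
phone.airplane_mode = True   # but another setting overrides it
print(phone.wifi_actually_on)  # False
```

The user sees a switch set to “on”, yet the requested behaviour never happens, exactly because something else, somewhere, had the last word.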
We are asked to trust what a piece of software does. We are left to believe that the software interacts with the hardware as we instruct it to. We rely on the fact that what we ask for happens, and we are left in ignorance of what else does or does not happen. The level of belief we are asked to place in technology stands between the one we need for science and the absolute one we need for religion. We can study and learn science, and we can abandon ourselves to faith in a religion, but no matter how good we are at technology, some “stuff” (hardware, software, and everything in between) will remain closed to us, and so we simply have to put our trust in the manufacturers.
The concept of “trusted computing” is quite old. Quote: “The core idea of trusted computing is to give hardware manufacturers control over what software does and does not run on a system by refusing to run unsigned software.” Many companies support this hardware control over software. For example, on your iPhone you can only run applications approved by Apple, signed with a valid certificate, and downloaded from the App Store. A full explanation of trusted computing is on the Wikipedia page; yet a much better explanation, with a critical perspective, is given in this nice and very old video: Trusted Computing? Yes or No
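The “refuse to run unsigned software” idea can be sketched in a few lines. This is a deliberately simplified model: real trusted computing uses public-key certificates and hardware roots of trust, whereas here an HMAC with a made-up vendor key stands in for the signature scheme, just to show the accept-or-refuse logic.

```python
import hmac
import hashlib

VENDOR_KEY = b"hypothetical-vendor-secret"   # stands in for the vendor's signing key

def sign(code: bytes) -> bytes:
    """The vendor signs the software it approves."""
    return hmac.new(VENDOR_KEY, code, hashlib.sha256).digest()

def run_if_trusted(code: bytes, signature: bytes) -> str:
    # The platform refuses anything the vendor has not signed.
    if not hmac.compare_digest(sign(code), signature):
        return "REFUSED: unsigned or tampered software"
    return "running approved software"

app = b"print('hello')"
good_sig = sign(app)
print(run_if_trusted(app, good_sig))           # running approved software
print(run_if_trusted(b"evil code", good_sig))  # REFUSED: unsigned or tampered software
```

The key point is who holds `VENDOR_KEY`: the manufacturer, not the user. The device owner cannot produce a valid signature, so the manufacturer decides what runs.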
Any IT system is made of both hardware and software. We understand that we need to trust hardware manufacturers and software applications, both those that run on our computers and those that run on a remote server. For example, when we use any social media application, we trust that the applications and servers of the social media company do not disclose our photos or messages to those we don’t want to see them. A message or a photo shared only between me and a friend should be seen only by me and my friend. We trust this to always be the case, as we put our trust in what happens “behind the scenes”. More seriously, we trust the bank’s computer servers with our money, hospital IT systems with our health data, and so on. We do this because of the reputation of the social media company, the bank, or the hospital.
What about decentralised systems? They are often conceived by anonymous people, or by people we don’t know, coded and built by developers with no track record, and offered to us by many different vendors. Yet people trust, for example, the Bitcoin blockchain with millions of dollars’ worth of cryptocurrency, while very few people know in detail the logic of the protocol, let alone the code, that runs on the peer-to-peer nodes, the backbone of the Bitcoin network. Still, many people trust decentralised systems and often use them with their life savings.
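Part of what makes such trust possible is that the core mechanism is simple enough to verify independently. A toy hash-chain illustrates the principle (this is a didactic sketch, not Bitcoin’s actual block format or consensus): each block commits to the previous one, so any node can re-check the whole history, and any tampering is detectable by anyone.

```python
import hashlib

GENESIS = "0" * 64  # conventional starting hash for the toy chain

def block_hash(prev_hash: str, data: str) -> str:
    """A block's hash commits to its data AND to the previous block."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], GENESIS
    for data in entries:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Anyone can recompute every hash; no authority has to be believed."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:1", "bob->carol:2"])
print(verify(chain))                   # True
chain[0]["data"] = "alice->bob:100"    # tamper with history
print(verify(chain))                   # False
```

Trust here does not rest on knowing the developers; it rests on the fact that every node can run `verify` itself, against rules that are public.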
What brings people to trust decentralised systems (almost) blindly, while often having reservations about centralised ones? Where does their level of trust stand, in comparison to religion, (closed) technology, and science? And why?
Let us remove religion from the analysis, as it is a matter of faith. Science surely requires trust, but it can be proven given study, experiments, and time. Closed technology can be analysed, but we still need to trust what it actually does. Open technology can be analysed and replicated, and with the needed knowledge we can be sure of what it does. In centralised solutions, what happens “behind the scenes” still remains a matter of trust. The layers of protection put in place must stop attacks before they reach the core software, which is kept away, for security, from the eyes of the users.
A decentralised platform follows a protocol, which must be open and transparent. Since it is deployed on an open network, with open-source code, where the nodes, the servers of the network, are in the hands of anyone who wants to participate, anyone can try to attack the network with full knowledge of how it works. The incentives are often high, considering that most decentralised platforms are used to manage cryptocurrencies worth millions of dollars. Most systems get hacked thanks to inside help: either employees of a company fall victim to social engineering or they are accomplices in the crime. In a decentralised system, there is no room for an inside job. Developers trying to leave a backdoor in open-source code can be exposed by the open-source community reviewing it, and the community is very attentive when upgrading the core software of a peer-to-peer network.
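The “no room for an inside job” claim can be sketched as a voting model (again a hypothetical toy, with made-up validation rules, not any real consensus protocol): many independent nodes run the same open validation logic, so a single compromised node cannot change what the network accepts.

```python
from collections import Counter

def honest_node(transaction) -> bool:
    # Every honest node runs the same open-source validation rules.
    sender, amount = transaction
    return amount > 0 and sender != "forged"

def backdoored_node(transaction) -> bool:
    # A node whose operator attempts an "inside job": it approves everything.
    return True

def network_accepts(transaction, nodes) -> bool:
    # The majority of independent validators decides; no single insider can.
    votes = Counter(node(transaction) for node in nodes)
    return votes[True] > votes[False]

nodes = [honest_node, honest_node, honest_node, backdoored_node]
print(network_accepts(("alice", 5), nodes))    # True: valid transaction passes
print(network_accepts(("forged", 5), nodes))   # False: the insider is outvoted
```

In a centralised system the backdoored node would be the only node, and the forged transaction would go through; decentralisation makes the insider just one voice among many.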
It is easier to trust something that, while easier to attack, withstands every attack; where uptime is effectively 100%, since there is no single data centre whose failure can take the system down; and where the platform is supported by many different people running nodes all around the planet, rather than by a single company in a single data centre. So, when a decentralised platform is used instead of a centralised one, for things beyond cryptocurrencies, we need less trust in the system for all that concerns our data, our content, and what we post.