pNetwork Team
May 15, 2018


Hello José,

Thank you for your comments. The approach you suggest is indeed one alternative to the one followed by Oraclize.

That same method was explored by our company in its early days, following the approach Orisi was proposing. However, we encountered a few limitations and pitfalls that led us in a different direction.

Here are some thoughts and what we learnt along the way:

  • the Oraclize system doesn’t run on (cryptoeconomic) incentives: a company develops and offers it, charging a small fee in exchange for the usage of the service. Implementing a system where individuals get paid a small fee to act as oracles is an option, but it still presents elements of uncertainty, as a) the service would most likely end up being extremely expensive for the smart contracts requesting data and b) the system needs to account for external incentives (e.g. individuals could have a stronger incentive to provide wrong or tampered data than to behave honestly).
  • another problem of a system like the one you suggest is trust, as such an architecture requires trusting anonymous individuals. Since the data smart contracts receive can be very valuable (for example, a payment can be triggered based on it), that trust should be reduced to a minimum. As you suggest, certifying the individuals acting as oracles would help decrease the likelihood of malicious behaviour. The question is: what does it mean in practice to certify them? How is it done and who does it? One answer might be a “certification authority” that identifies individuals and decides, based on some parameters, whether an individual can be certified or not; this requires individuals to give up their anonymity and still requires trust in the certifying entity. Another option is to implement a reputation system; again, in practice this approach raises its own pitfalls and limitations, since reputation systems are still an undefined and complex topic. In our case, the first step was to remove anonymity, run the project as a company and build a reputation in the community as such. Going further, we wanted anyone to be able to independently verify our track record and independently “certify” our service, while reducing the trust placed in Oraclize as much as we could. That’s when we introduced the concept of authenticity proofs (a simplified sketch of the idea follows this list).
  • finally, the consensus problem. Having an oracle network implies that consensus needs to be reached, among all the oracles in the network, on every piece of data the network delivers to the calling smart contract. The problem is, how do we define such a consensus algorithm? An algorithm that suits one smart contract doesn’t necessarily suit another. Besides, for some kinds of data it is not possible to reach consensus at all (a random number, for example), as the data-fetching is not necessarily a deterministic process. The solution we implemented is to simply deliver the data, while leaving each smart contract free to request a single result or to request multiple results from multiple sources and aggregate them independently (see the aggregation sketch after this list).
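
Since authenticity proofs come up in the point above, here is a minimal Python sketch of the general idea, assuming the party that fetched the data signs it with a key whose public half is published out of band. The names (attach_proof, verify_proof) are hypothetical, and Oraclize’s actual authenticity proofs (TLSNotary, Ledger, Android) are considerably more involved than a plain signature; this is only meant to show why an independently verifiable proof reduces the trust placed in the oracle.

```python
# Simplified stand-in for an authenticity proof: the party that fetched the
# data attaches a signature that anyone can verify offline. Oraclize's real
# proofs bind the response to the TLS session or to trusted hardware rather
# than to a single key held by the oracle operator.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Hypothetical oracle key pair; in practice the public key would be published
# out of band so that verifiers do not have to trust the delivery channel.
oracle_key = Ed25519PrivateKey.generate()
oracle_pubkey = oracle_key.public_key()


def attach_proof(payload: bytes) -> tuple:
    """Return the payload together with a signature acting as its 'proof'."""
    return payload, oracle_key.sign(payload)


def verify_proof(payload: bytes, proof: bytes, pubkey: Ed25519PublicKey) -> bool:
    """Anyone can check the proof independently of the oracle operator."""
    try:
        pubkey.verify(proof, payload)
        return True
    except InvalidSignature:
        return False


payload, proof = attach_proof(b'{"ETH/USD": 706.32}')  # placeholder payload
assert verify_proof(payload, proof, oracle_pubkey)
```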
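
And here is an illustrative sketch, not Oraclize code, of the aggregation model described in the last point: the oracle layer only delivers individual results, and the consuming contract decides how many sources to query and how to combine them. The fetch_source_* functions and the median rule are assumptions made for the example.

```python
# Each consumer chooses its own aggregation policy over independent sources;
# the delivery layer stays agnostic about what "consensus" means.
from statistics import median
from typing import Callable, Sequence


# Hypothetical fetchers standing in for independent price APIs.
def fetch_source_a() -> float:
    return 706.10


def fetch_source_b() -> float:
    return 707.45


def fetch_source_c() -> float:
    return 705.90


def aggregate(sources: Sequence[Callable[[], float]],
              combine: Callable[[Sequence[float]], float] = median) -> float:
    """Query every source and combine the results with a caller-chosen rule.

    A price feed might take the median to blunt a single bad source, while a
    random-number request cannot be meaningfully averaged at all, which is why
    the aggregation policy belongs to the consuming contract, not the network.
    """
    results = [fetch() for fetch in sources]
    return combine(results)


price = aggregate([fetch_source_a, fetch_source_b, fetch_source_c])
print(price)  # 706.10 with these placeholder values (the median of the three)
```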

While the alternative you propose is attractive on paper, it faces a variety of unsolved issues. Oraclize proposes a solution that is a good compromise, as it enables connecting with the external world without compromising the main features of blockchain protocols. Let’s not forget that external data is inherently a matter of trust: our approach lets every single application define the trust lines it needs to open, while moving the burden of integrating with the blockchain away from the original data source.

More on this here (https://www.youtube.com/watch?v=7uQdEBVu8Sk&index=13&list=PLtR8irJykqD-_Xt8Dpjk8g5pwx9E9GmGJ).

Does this make sense to you? Do you have any feedback on the matter?
