There are two types of trust. Intrinsic trust is based on the physical reality of the object. I trust that a rock will fall to the ground when I drop it. This is based on trust in gravity and the property that most rocks are heavier than air. More interesting to the software world is feedthrough trust, which comes from my faith in the designers of a system. This is an amalgam of my faith in their competence, their incentive to treat the consumer well, and their level of accountability for mistakes. I call this feedthrough trust because the trust relationship is truly established between the system creators and myself, not between the system and myself.
Hardware designers try to build as much intrinsic trust as possible. They want their FPUs to have the property of spitting out proper calculations as surely as a rock has the property of being denser than air. Software is a little hazier. There is certainly infrastructural software which should have this property, but day to day we use mostly consumer-facing software. Consumer-facing software does not have this property, and in many ways it can’t.
In a software industry where most revenue is built on incremental upgrades, consumer lock-in becomes important. Whether it is Word on Windows or AAC on the iPod, there are a variety of incentives for companies which do not match the needs/desires of the customer. In this way, we as consumers will always feel that software companies have something else up their sleeves which undermines some of our sense of feedthrough trust. Companies can alleviate this somewhat through branding, but it does not change the underlying incentive structure.
Complicating the trust issue are the time dynamics of business. I might give my online information to Plaxo, for example, agreeing to and trusting their terms of service. If five years from now Plaxo is bought by Company X, my personal information moves along with it, but Company X may or may not respect the agreement I had with Plaxo (it needn’t, since Plaxo would no longer be a legal entity).
In Microsoft’s Trustworthy Computing whitepaper, Mundie notes that computing systems must reach the level of trust we have in our electrical and financial systems. This is an interesting comparison, since electricity is a highly regulated and slow-moving industry. Research in renewable energy sources has been stalled for decades, but the industry is, as noted, stable and trustworthy. This is not where the software industry is right now.
What is possible is for our infrastructural software around privacy and security to become stable and ‘trustworthy’, but this will not make for a scam-free world. Take Mundie’s own example of our financial system: while I trust that when I transfer money between my bank account and a fund it will work properly in either direction, the existence of a trustworthy financial infrastructure does not preclude the possibility of people selling me scammy junk bonds or Nigerian millions. Similarly, even when software infrastructure becomes ‘trustworthy’, there will be those who write scammy consumer-facing applications which do harm. This harm will span the gamut from the relatively benign ‘lock-in’ to outright fraud.
The problem is that human intention will always feed through in human-designed systems, and we have no technology that will ever make all humans trustworthy.