Absalom wrote:
    peragrin wrote:
        We can't build a trinary computer yet.
Quick note: we can and have built trinary computers; it's just that they don't offer many advantages (they do offer some, which is why they see use in specialty applications: flash memory, for example).
Several have been built, in fact, but they were largely experiments. Balanced ternary (each place value being -1, 0, or 1) allows a particularly elegant way of handling signed values, and ternary can encode larger numbers with a given number of signals, so it might someday provide a benefit in scaling up things like vector processing (SIMD units in consumer processors are handling 512-bit vectors... that's a lot of signals to get where they need to be, when they need to be there, and equivalent to around 324 ternary signals).
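To make the "elegant signed values" point concrete, here's a minimal sketch of balanced ternary conversion (not taken from any particular machine's design): negating a number is just flipping every digit, with no separate sign bit or two's-complement trickery needed.

```python
def to_balanced_ternary(n):
    """Convert an integer to balanced ternary digits (-1, 0, 1), least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # remainder is 0, 1, or 2
        if r == 2:         # represent 2 as -1 plus a carry into the next place
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

def from_balanced_ternary(digits):
    """Reconstruct the integer from balanced ternary digits."""
    return sum(d * 3**i for i, d in enumerate(digits))

# 8 = -1*1 + 0*3 + 1*9
print(to_balanced_ternary(8))   # [-1, 0, 1]
# Negation is digit-wise: flip every trit and you get -8.
print(from_balanced_ternary([-d for d in to_balanced_ternary(8)]))  # -8
```

(The 324 figure above is just ceil(512 × log₂/log₃): you need 324 trits to cover every value 512 bits can hold.)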
Number base has nothing whatsoever to do with AI, though, and the neurons of the brain are far too big and hot for quantum interactions to be relevant. Figuring out how the various network structures do their thing is still an active area of research, but the success of computer models shows that only simple, local behaviors are required at the neuron level. The computational problem is simply scale: networks that can do useful things involve an enormous number of interactions between neurons, which is a poor fit for typical general-purpose von Neumann-style machines, with a big memory store that does nothing on its own and a small number of processors doing sequential work on tiny fragments of it at a time.
On the threat of AI, the main one I see is from them doing exactly what people tell them to, especially in the short-sighted, increasingly disconnected-from-reality financial sector. An AI optimizing high-frequency trades for immediate profits will do just that; it won't have any comprehension of, or reason to consider, the implications its actions have for the economy as a whole, people's lives, etc. A bunch of AIs competing, cooperating, and manipulating each other as they do exactly what we built them to do could drive things wildly out of control before any human knows anything odd is going on.
What I'm not afraid of is AIs becoming self-aware and taking over the world. They aren't going to turn into a human in a box; an AI will be more alien than anything we might meet from another star system. We are the product of billions of years of selection for hard-wired survival instincts: motivations to gain and defend territory, acquire sustenance, protect against competing life forms, and so on. An AI won't even have a reason to desire power or self-preservation unless we give it one.
As for automation: if the Luddites had had their way, we wouldn't have cheap, high-quality clothing. The idea that we should deliberately choose inefficient production methods that require the majority of human minds to toil in drudgery for half their waking lives is either short-sighted or simply insane.