Minicon 47 recap
Last weekend was Minicon 47, and it was another interesting weekend spent among SF fans and authors. This year I was on two panels. The first, on Friday night, was “Failing the Turing Test.” My co-panelists were Ted Chiang, Aaron Vander Giessen (M), Andy Exley, Howard L. Davidson, and Jason Wittman. Ted Chiang (in case you don’t know) was the writer Guest of Honor. I enjoyed being on this panel quite a bit. We had a lively discussion on whether the Turing test is still useful (yes) and whether it has been passed yet (no; chatbots really don’t count, since the humans involved usually aren’t aware they are being tested). I mentioned some ideas about the ethics of AI: if you have an entity you acknowledge as intelligent, what sort of rights should it have? Ted pointed out that voting is problematic: “What if it replicated itself 10 million times?”
The second panel, on Saturday night, was “What Is Intelligence?” My co-panelists were Ted Chiang, Jason Wittman, Marissa Lingen (M), and Martin Summerton. Ted talked a bit about intelligence taking different forms, and I mentioned Blindsight by Peter Watts as a good example of a book dealing with different kinds of intelligence. (Ted agreed.) Ted brought up transcranial direct-current stimulation (tDCS) as an interesting example of mental augmentation that is going on right now. At the very end, Ted mentioned that he wished we had been able to talk more about the ethical implications of super-intelligence. He noted that we don’t expect dogs to have much in the way of ethics, we expect children to have a bit more, and adults more still. So would we expect a super-intelligent entity to have even more? (With great power comes great responsibility.) After the panel I had a chance to chat with Ted for a bit (he’s a really nice guy). I thought the idea was quite interesting and seemed reasonable. While we might expect higher ethics, that is, of course, no guarantee that any given entity will have them, just as adult humans vary wildly in their grasp of ethics. There is also the problem that an AI could hold a very different kind of ethics: “mine iron!” might be its idea of the highest ethical goal.
In a later panel, Ted gave an interesting definition of SF vs. fantasy. If the basis of the story operates via the scientific method, that is, if it is reproducible without special circumstances, then the story is SF even if it may appear to be fantasy. For example, Ted’s story “Seventy-Two Letters” has golems animated by slips of paper with Hebrew names written upon them. This might appear to be fantasy, but the difference is that anyone can write out the names and animate a golem. Thus (in that universe) it is a verifiable and reproducible result; no special status of “wizard” is needed. I thought this was an interesting definition.