@neurobashing No human knows if any other human is alive on the inside. We're making a choice to treat other humans as being the same as us because we know ourselves and assume that others similar to ourselves must be much the same. What counts as "the same as us" is a fuzzy match. If DNA is what counts, then AI is not in our moral community, no problem. If the ability to think like a human is what counts, then AI is in our moral community & people don't like expanding moral communities like that
@neurobashing All that said, I like approaches to AI that stick to concepts that are concrete and visible (behaviorally sentient) & are honest about the bullshit epistemologies. Kierkegaard, while choosing wrong & choosing faith, is at least explicit that it is an arbitrary choice. Statements about people's qualia, or a machine's, are also arbitrary choices, i.e. the weakest sort of epistemological grounding you can have.
@neurobashing Anyhow, we don't know if torturing a bot causes the bot pain (as suggested by distress texts), but we can ask whether a person who tortures a bot is themselves likely to behave worse. Questions that are entirely about behaviors are likely to be more productive.