Episode 49: AGI Alignment and Safety
The Theory of Anything - A podcast by Bruce Nielson and Peter Johansen - Tuesdays
Is Elon Musk right that Artificial General Intelligence (AGI) research is like 'summoning the demon' and should be regulated? In episodes 48 and 49, we discussed how our genes 'align' our interests with their own using carrots and sticks (pleasure/pain) as well as attention and perception. If our genes can run an alignment and safety 'program' on a General Intelligence (i.e., a Universal Explainer) like us, what's to stop us from doing the same to the future Artificial General Intelligences (AGIs) we create? But even if we can, should we?

"I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it's probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don't do something very foolish. With artificial intelligence we are summoning the demon." --Elon Musk