Smart enough to write a program smarter than itself?


But they are still drawing on a resource, namely practice; they learn from experience. Their advantage lies in their ability to practice very, very fast. Video games lend themselves well to that sort of thing, but is it possible to practice general reasoning in the same fashion?
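As a purely illustrative sketch of what "practice very, very fast" means, here is a toy Python loop in which an agent learns the game of Nim by playing a couple hundred thousand simulated games. Everything in it (the game, the tabular learner, the constants) is a hypothetical example of mine, not a description of any particular system:

```python
import random
from collections import defaultdict

# Toy illustration: an agent "practicing" Nim in simulation.
# Rules: players alternate removing 1-3 stones from a pile;
# whoever takes the last stone wins.

Q = defaultdict(float)            # Q[(pile, action)] -> estimated value
ALPHA, EPSILON, PILE = 0.1, 0.1, 21

def legal(pile):
    return [a for a in (1, 2, 3) if a <= pile]

def choose(pile):
    if random.random() < EPSILON:                     # explore sometimes
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda a: Q[(pile, a)])

def play_one_game():
    """Agent moves first; opponent plays randomly.
    Returns the agent's (pile, action) moves and +1 win / -1 loss."""
    pile, moves = PILE, []
    while True:
        a = choose(pile)
        moves.append((pile, a))
        pile -= a
        if pile == 0:
            return moves, +1                          # agent took the last stone
        pile -= random.choice(legal(pile))            # random opponent
        if pile == 0:
            return moves, -1                          # opponent took the last stone

def train(episodes):
    for _ in range(episodes):
        moves, reward = play_one_game()
        for pile, a in moves:                         # update toward the game's outcome
            Q[(pile, a)] += ALPHA * (reward - Q[(pile, a)])

train(200_000)    # a laptop runs this in a few seconds
# Optimal Nim play leaves the opponent a multiple of 4:
print({pile: max(legal(pile), key=lambda a: Q[(pile, a)]) for pile in range(1, 9)})
```

The point of the sketch is the episode count: a simulated game costs microseconds, so the agent accumulates more "experience" in a minute than a human player could in a lifetime. That speed is exactly what goes missing when the environment is the physical world.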

It's harder to iterate rapidly when learning requires acting in the physical world, or understanding the psychology of humans. You'd need a high-fidelity simulator, which would itself take a lot of work to develop. And even then, the AGI wouldn't discover anything that humans and AGIs don't already know about the universe, because nobody could build properties they don't know about into the simulation.

The one thing an AGI might gain by sitting around and putting billions of cycles into thinking would be new branches of philosophy and mathematics. And some of those might lead to methods for idealized formal reasoning, in the same way game theory does.
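To make the game-theory comparison concrete, here is a small hypothetical sketch: deriving the pure-strategy Nash equilibrium of the Prisoner's Dilemma from nothing but the payoff matrix, i.e., formal reasoning that requires no new data about the world. The payoff numbers are the standard textbook ones; the code is my own illustration:

```python
import itertools

# Game theory as "idealized formal reasoning": enumerate the pure-strategy
# Nash equilibria of the Prisoner's Dilemma from the payoff matrix alone.

COOPERATE, DEFECT = 0, 1
# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    (COOPERATE, COOPERATE): (-1, -1),
    (COOPERATE, DEFECT):    (-3,  0),
    (DEFECT,    COOPERATE): ( 0, -3),
    (DEFECT,    DEFECT):    (-2, -2),
}

def is_nash(r, c):
    """Neither player gains by unilaterally switching actions."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

equilibria = [(r, c) for r, c in itertools.product((0, 1), repeat=2) if is_nash(r, c)]
print(equilibria)   # [(1, 1)]: mutual defection, though mutual cooperation pays more
```

This is the sense in which armchair mathematics can still pay off: the equilibrium concept, once discovered, applies to any interaction with this payoff structure, no simulator required.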

But do our own past advances in these areas amount to superintelligence relative to the generations of humans who came before those discoveries?
