Is knowledge inherently dangerous?  A decade ago I read a book by Frank Herbert called The White Plague.  It changed me.  Bill Joy read it too, and it led him to write “Why the Future Doesn't Need Us.”  The premise is simple.  A biotechnologist has his family blown up in an IRA attack.  He decides to make war on the world.  So, for less than $200,000, he builds a basement biotechnology lab (I researched it, and it can be done) and designs an airborne virus that targets women.  He wants the world to feel as he does, bereft.  He succeeds.

The question for Joy and me after reading this became:  is specific knowledge, or will specific knowledge become, too dangerous for society to let an individual hold unsupervised?  Will we need to control everyone who knows too much?  I once had an advanced physics instructor (I almost became a physicist because it was easy and fun) who designed custom nukes.  (BTW, you can do a lot with nukes.  You can vary the radiation output, make a weapon that is mostly blast with little radiation, shape the charge so it blows in a single direction, and select what form the energy yield takes.)  He was a controlled person.  Why?  Because he knew too much.  Will it be the same with nanotech, biotech, and AI tech?  My gut tells me yes. [John Robb's Radio Weblog]
