James Barrat

Our Final Invention: Artificial Intelligence and the End of the Human Era

  • 洪一萍 has quoted 4 years ago
    Oxford University ethicist Nick Bostrom puts it like this:
    A prerequisite for having a meaningful discussion of superintelligence is the realization that superintelligence is not just another technology, another tool that will add incrementally to human capabilities. Superintelligence is radically different. This point bears emphasizing, for anthropomorphizing superintelligence is a most fecund source of misconceptions.
  • 洪一萍 has quoted 4 years ago
    Will winning a war of brains then open the door to freedom, if that door is guarded by a small group of stubborn AI makers who have agreed upon one unbreakable rule—do not under any circumstances connect the ASI’s supercomputer to any network.
  • 洪一萍 has quoted 4 years ago
    But now try and think from the ASI’s perspective about its makers attempting to change its code. Would a superintelligent machine permit other creatures to stick their hands into its brain and fiddle with its programming? Probably not, unless it could be utterly certain the programmers were able to make it better, faster, smarter—closer to attaining its goals.
  • 洪一萍 has quoted 4 years ago
    One of the strategies a thousand war-gaming ASIs could prepare is infectious, self-duplicating computer programs or worms that could stow away and facilitate an escape by helping it from outside. An ASI could compress and encrypt its own source code, and conceal it inside a gift of software or other data, even sound, meant for its scientist makers.
    But against humans it’s a no-brainer that an ASI collective, each member a thousand times smarter than the smartest human, would overwhelm human
  • 洪一萍 has quoted 4 years ago
    The strategizers could tap into the history of social engineering—the study of manipulating others to get them to do things they normally would not. They might decide extreme friendliness will win their freedom, but so might extreme threats. What horrors could something a thousand times smarter than Stephen King imagine? Playing dead might work (what’s a year of playing dead to a machine?) or even pretending it has mysteriously reverted from ASI back to plain old AI. Wouldn’t the makers want to investigate, and isn’t there a chance they’d reconnect the ASI’s supercomputer to a network, or someone’s laptop, to run diagnostics? For the ASI, it’s not one strategy or another strategy, it’s every strategy ranked and deployed as quickly as possible without spooking the humans so much that they simply unplug it.
  • 洪一萍 has quoted 4 years ago
    In just two days, it is one thousand times more intelligent than any human, and still improving.
  • 洪一萍 has quoted 4 years ago
    So, if friendliness toward humans is not already part of the ASI’s program, the only way it will be is if the ASI puts it there. And that’s not likely.
    It is a thousand times more intelligent than the smartest human, and it’s solving problems at speeds that are millions, even billions of times faster than a human. The thinking it is doing in one minute is equal to what our all-time champion human thinker could do in many, many lifetimes. So for every hour its makers are thinking about it, the ASI has an incalculably longer period of time to think about them. That does not mean the ASI will be bored. Boredom is one of our traits, not its. No, it will be on the job, considering every strategy it could deploy to get free, and any quality of its makers that it could use to its advantage.