The Return of the Magicians
… people talk increasingly about the limits of the scientific endeavor — the increasing impediments to discovering new ideas, the absence of low-hanging scientific fruit, the near impossibility, given the laws of physics as we understand them, of ever spreading human civilization beyond our lonely planet or beyond our isolated solar system. … — namely, beings that can enlighten us, elevate us, serve us and usher in the Age of Aquarius, the Singularity or both. … a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals. … With the emergent forms of A.I., they argue, we have created an intelligence that can yield answers the way an oracle might or a Magic 8 Ball: through processes that are invisible to us, permanently beyond our understanding, so complex as to be indistinguishable from action in a supernatural mind. … the A.I. revolution represents a fundamental break with Enlightenment science, which “was trusted because each step of replicable experimental processes was also tested, hence trusted.” … the spirit might be disobedient, destructive, a rampaging Skynet bent on our extermination. … we would be wise to fear apparent obedience as well.
Should GPT exist?
Gary Marcus asks about Microsoft, “what did they know, and when did they know it?”—a question I tend to associate more with deadly chemical spills or high-level political corruption than with a cheeky, back-talking chatbot. … in reality it’s merely a “stochastic parrot,” a glorified autocomplete that still makes laughable commonsense errors and that lacks any model of reality outside streams of text. … If you need months to think things over, generative AI probably isn’t for you right now. I’ll be relieved to get back to the slow-paced, humdrum world of quantum computing. … if OpenAI couldn’t even prevent ChatGPT from entering an “evil mode” when asked, despite all its efforts at Reinforcement Learning from Human Feedback, then what hope do we have for GPT-6 or GPT-7? … Even if they don’t destroy the world on their own initiative, won’t they cheerfully help some awful person build a biological warfare agent or start a nuclear war? … a classic example being nuclear weapons. But, like, nuclear weapons kill millions of people. They could’ve had many civilian applications—powering turbines and spacecraft, deflecting asteroids, redirecting the flow of rivers—but they’ve never been used for any of that, mostly because our civilization made an explicit decision in the 1960s, for example via the test ban treaty, not to normalize their use. …
GPT is not exactly a nuclear weapon. A hundred million people have signed up to use ChatGPT, in the fastest product launch in the history of the Internet. … the ChatGPT death toll stands at zero.
… The science that we could learn from a GPT-7 or GPT-8, if it continued along the capability curve we’ve come to expect from GPT-1, -2, and -3. Holy mackerel. … I was a pessimist about climate change, ocean acidification, deforestation, drought, war, and the survival of liberal democracy. The central event in my mental life is and always will be the Holocaust. I see encroaching darkness everywhere. … it’s amazing at poetry, better than most of us.
Why am I not terrified of AI?
“I’m scared about AI destroying the world” is an idea now so firmly within the Overton Window that Henry Kissinger gravely ponders it in the Wall Street Journal. … I think it’s entirely plausible that, even as AI transforms civilization, it will do so in the form of tools and services that can no more plot to annihilate us than can Windows 11 or the Google search bar. … the young field of AI safety will still be extremely important, but it will be broadly continuous with aviation safety and nuclear safety and cybersecurity and so on, rather than being a desperate losing war against an incipient godlike alien. … In the Orthodox AI-doomers’ own account, the paperclip-maximizing AI would’ve mastered the nuances of human moral philosophy far more completely than any human—the better to deceive the humans, en route to extracting the iron from their bodies to make more paperclips. And yet the AI would never once use all that learning to question its paperclip directive. … from this decade onward, I expect AI to be woven into everything that happens in human civilization. … Trump might never have been elected in 2016 if not for the Facebook recommendation algorithm, and after Trump’s conspiracy-fueled insurrection and the continuing strength of its unrepentant backers, many would classify the United States as at best a failing or teetering democracy, no longer a robust one like Finland or Denmark. … I come down in favor right now of proceeding with AI research … with extreme caution, but proceeding.
The False Promise of Chomskyism
The Chomsky et al. opinion piece in the @nytimes about ChatGPT is making the rounds. Rather than trying to deconstruct their argument, I asked @bing what it thinks of it.
— Sebastien Bubeck (@SebastienBubeck) March 10, 2023
Now you can judge for yourself who has the moral high ground. pic.twitter.com/itdh1VOatl
Planning for AGI and beyond
Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.