To AI or Not to AI

By Paula Labrot

Not even their creators can understand, predict, or reliably control Artificial Intelligence (AI).

On March 29, while I was teaching the brightest techies of the nation at IIT Mandi in India, a historic, international event occurred. Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, Microsoft and Google engineers, and thousands of other prominent signatories joined the Future of Life Institute in calling for a moratorium on the training of any AI (artificial intelligence) system more powerful than GPT-4. That’s a lot of heavyweights making that declaration. There are enough concerns about the future that these high-rolling, high-tech leaders, some of them founders of OpenAI itself, the company behind ChatGPT, want to call a halt, regroup, and chart a safe pathway for the future of humanity alongside the disruptive development of artificial intelligence programs.

The Future of Life Institute states, “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, ‘Advanced AI could represent a profound change in the history of life on Earth; it should be planned for and managed with commensurate care and resources.’ Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or control.

“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to non-elected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence states, ‘At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.’ We agree. That point is now.

“Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

“In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should, at a minimum, include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

“Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”

Sigal Samuel of Vox’s Future Perfect lays out a good case for the moratorium idea. She writes, “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart and replace us? Should we risk loss of control of our civilization? In other words: We don’t have to build robots that will steal our jobs and maybe kill us.”

About the Future of Life Institute
Established in 2014 to steer transformative technology towards benefiting life and away from extreme large-scale risks, the Future of Life Institute is made up of a core team, a Board, and a group of External Advisors. Collectively, they represent a diverse range of expertise that comes to the Institute from governance institutions, industry and academia, and a variety of disciplines, including behavioral sciences, medicine, machine learning, engineering, law, design, and the arts.

Bill Gates Dissents
Gates is a huge proponent of AI technology, believing it can help the world by reducing some of its glaring inequities, from climate change to malaria worldwide. “Clearly, there are huge benefits to these things… what we need to do is identify the tricky areas…. I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop,” Gates told Reuters. “The age of AI has begun.… AI will change the way people work, learn, travel, get health care, and communicate with each other,” he wrote on his blog, claiming that the technology will help teach disadvantaged children, assist doctors working in poorer countries, and fight climate change, although he did not explain exactly how it would handle the latter task.

He’s right about one thing. There is so much money invested in and to be made from AI, and so much power to be consolidated, that it’s hard to imagine everyone would agree to a moratorium.

Is It Really Dangerous?
Potentially... yes.
Jason Koebler, reporting on vice.com, wrote, “A user of the new open-source, autonomous AI project Auto-GPT asked it to try to ‘destroy humanity,’ ‘establish global dominance,’ and ‘attain immortality.’ The AI, called ChaosGPT, complied and tried to research nuclear weapons, recruit other AI agents to help it do research, and sent tweets trying to influence others.”

There are a lot of versions of the destructive power of AI, all leading to the question of whether this technology will serve us, enslave us or, ultimately, kill us. And then there is the greed factor, and the kind of polluted individuals in Turkey and Russia who have already developed and sold personal drones that can be launched to target specific individuals using facial recognition, the kind of weapons that really worry people at the Future of Life Institute.

Can AI Development be Controlled?
The genie is definitely out of the bottle. AI safety expert Roman Yampolskiy and Otto Barten, director of the Existential Risk Observatory, agree that “A sensible place to start would be for AI tech companies to increase the number of researchers investigating the topic beyond the roughly 100 people available today. Ways to make the technology safe, or to reliably and internationally regulate it, should both be looked into thoroughly and urgently by AI safety researchers, AI governance scholars, and other experts. As for the rest of us, reading up on the topic, starting with books such as ‘Human Compatible’ by Stuart Russell and ‘Superintelligence’ by Nick Bostrom, is something everyone, especially those in a position of responsibility, should find time for.”

At the school where I just taught, IIT Mandi, I met some of the brightest minds in India. There are twenty-three of these institutes in India. In a country of 1.4 billion people, one million students qualify to apply for a place each year. Ten thousand are admitted, an acceptance rate of just one percent. Can you imagine the level of focus and ability I was privileged to work with? They are so bright and so tech-savvy.

So, my guess about whether AI development can be controlled is... no. Not with the kinds of young minds I’ve just worked with. I don’t think there is any way of really controlling this. But I am with Elon on this issue. We need to put our best minds on this problem… and now!

We need to stop dicking around with crappy tribal thinking and create good examples for our youth of humanity’s higher-level thinking and emotional behavior. We have to stop letting ourselves be jerked around by social media trolls, “influencers” and “trend-thinking.” We have to strengthen self-reliance, independent thought and communication skills in ourselves and our children. We have to get our heads out of the sand, return to high standards of education and prepare our children for their amazing future. They are more than capable.

We have to be and stay smarter than the machines. Now!

Vamos a ver!
