Golunoid.ru News







Science and Technology
2023-03-29 12:36:31

Musk and Wozniak called for a halt in the training of AI systems more powerful than GPT-4


The Future of Life Institute, a nonprofit organization, published an open letter in which SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, philanthropist Andrew Yang, and about a thousand other artificial intelligence researchers called for an "immediate suspension" of the training of AI systems "more powerful than GPT-4."

The letter says artificial intelligence systems with "human-competitive intelligence" could pose "serious risks to society and humanity." It calls on labs to suspend training for six months.

"Powerful artificial intelligence systems should be developed only when we are confident that their effects will be positive and their risks will be manageable," the letter's authors emphasize.

More than 1,125 people signed the appeal, including Pinterest co-founder Evan Sharp, Ripple co-founder Chris Larsen, Stability AI CEO Emad Mostaque, and researchers from DeepMind, Harvard, Oxford, and Cambridge. Artificial intelligence heavyweights Yoshua Bengio and Stuart Russell also signed the letter. Russell has likewise called for a halt in the development of advanced AI until independent experts have developed, implemented, and tested common safety protocols for such AI systems.

The letter details the potential risks to society and civilization posed by human-competitive AI systems, in the form of economic and political disruption.

Here is the full text of the letter:

"Artificial intelligence systems that compete with humans can pose serious risks to society and humanity, as extensive research has shown and as recognized by leading artificial intelligence laboratories. As stated in the widely endorsed Asilomar AI Principles, advanced AI can trigger profound changes in the history of life on Earth, and should be developed and managed with commensurate care and resources. Unfortunately, this level of planning and management does not exist, even though in recent months artificial intelligence laboratories have become bogged down in an uncontrolled race to develop and deploy ever more powerful "digital minds" that no one-even their creators-can understand, cannot predict or reliably control.

Modern artificial intelligence systems are becoming competitive with humans at general tasks, and we must ask ourselves: Should we allow machines to flood our information channels with propaganda and fakes? Should we automate away all jobs, including the fulfilling ones? Should we develop "non-human minds" that could eventually outnumber, outsmart, obsolete, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected technology leaders. Powerful artificial intelligence systems should be developed only when we are confident that their effects will be positive and their risks will be manageable. This confidence must be well founded and grow with the magnitude of the potential effects of such systems. OpenAI's recent statement on artificial general intelligence says that "at some point it may be important to get an independent assessment before embarking on training future systems, and for the most advanced efforts to agree to limit the rate of growth of computation used to create new models." We agree. That moment is now.

So we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause must be public and verifiable, and it must include all key players. If such a pause cannot be enacted quickly, governments should step in and impose a moratorium.

Artificial intelligence laboratories and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, protocols that will be rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause in AI development in general, but simply a step back from the dangerous race toward ever-larger, unpredictable black-box models with ever greater capabilities.

Artificial intelligence research and development must be refocused on making today's powerful state-of-the-art AI systems more accurate, safe, interpretable, transparent, reliable, consistent, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to greatly accelerate the development of robust AI governance systems. At a minimum, these efforts should include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real content from generated content and track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a prosperous future with AI. Having succeeded in building powerful systems, we will come to an "AI summer" in which we reap the rewards, develop these systems for the benefit of all, and give society a chance to adapt to them. Society has hit pause on other technologies with potentially catastrophic effects; we can do the same here."

Major artificial intelligence labs such as OpenAI have not yet responded to the letter. 

"The letter isn't perfect, but its spirit is right: we need to slow down until we better understand all the risks," said Gary Marcus, professor emeritus at New York University, who signed the letter. - "AI systems can do serious harm... The big players are becoming increasingly secretive about what they do."

OpenAI unveiled GPT-4 on March 14. The model can interpret not only text but also images, and it now recognizes schematic drawings, including hand-drawn ones.
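As an illustration of what such multimodal input looks like in practice, here is a minimal, hypothetical sketch of sending an image alongside text through the OpenAI Python SDK (v1.x); the model name and image URL below are placeholder assumptions, not details from the article:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A single chat message can mix text and image parts when the target
# model accepts image input; both values below are placeholders.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this hand-drawn diagram."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sketch.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```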

After announcing the new language model, OpenAI declined to publish the research materials underlying it. Members of the AI community criticized this decision, noting that it undermines the company's spirit as a research organization and makes it harder for others to reproduce its work, as well as to develop defenses against the threats posed by AI systems.



Tags: AI, neural network, Elon Musk, ChatGPT




Encyclopaedic reference
Elon Musk is an American entrepreneur, engineer, and billionaire. He is the founder, CEO, and chief engineer of SpaceX; an investor, CEO, and product architect at Tesla; the founder of The Boring Company; and a co-founder of Neuralink and OpenAI.
A neural network is a mathematical model, as well as its software or hardware implementation, built on the principles of organization and functioning of biological neural networks, the networks of nerve cells of a living organism (a minimal code sketch follows these definitions).
ChatGPT is a neural network: an artificial-intelligence chatbot developed by OpenAI that operates in a dialogue mode and supports queries in natural languages.
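To make the neural network definition above concrete, here is a minimal, self-contained sketch of a single artificial neuron, the building block such networks are composed of (an illustrative example, not tied to any system mentioned in the article):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    passed through a sigmoid activation, loosely analogous to a nerve
    cell firing in response to incoming signals."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: three input signals, three learned weights, one bias.
print(neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7], 0.2))  # ~0.14
```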
