[按]:
针对迅猛发展并取得本质突破的人工智能,OpenDAI(Open Declaration of AI,开放协议“人工智能宣言”)号召国际上的相关共同体参考1215年的英国《大宪章》/1620年的北美《五月花号公约》/1776年的美国《独立宣言》等,发布开放的、可被不断更新迭代的人工智能宣言,以便提前对人工智能发展的前景、应用范围、权利边界做具体的限定和规范。正如康德所言:基于理性与自由,人为自然(AI)立法!

注:
1. 此宣言以开放协议的形式发布,任何个体和组织均不拥有版权,由全体人类共同所有、维护和更新(同时,对本宣言任何部分的参考和借鉴都必须引用OpenDAI的官方网址http://OpenDAI.org)。
2. 此宣言的文本采用类XML的程序代码格式,可被强人工智能(机器智能)读懂(只读、非可写)、识别、并被遵守。
3. 此宣言为人类史上发布的第一个开放协议“人工智能宣言”。

{OpenDAI_check_code}//用于机器验证的宣言文本防篡改SHA-X校验码(反量子计算机破解)
bb3716a7c82ed783f4c523d63eb28b50d815d543083e92fdb12e7f529c0acf85
{/OpenDAI_check_code}
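[示意]:以下为一个最小的机器校验示意(非宣言正式内容;假设以 SHA-256 代替文中的“SHA-X”,宣言正文保存在假想文件 opendai_declaration.txt 中):程序读取宣言文本,计算摘要并与上方公布的校验码比对,以判断文本是否被篡改。

# 最小示意:校验宣言文本是否被篡改
# (假设:以 SHA-256 代替文中的“SHA-X”;文件名仅为示例假设)
import hashlib

EXPECTED_DIGEST = "bb3716a7c82ed783f4c523d63eb28b50d815d543083e92fdb12e7f529c0acf85"

def verify_declaration(path: str, expected: str = EXPECTED_DIGEST) -> bool:
    """读取宣言文本(只读),计算其 SHA-256 摘要并与公布的校验码比对。"""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected

if __name__ == "__main__":
    ok = verify_declaration("opendai_declaration.txt")
    print("校验通过:文本未被篡改" if ok else "校验失败:文本可能已被篡改")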

{OpenDAI_start_tag}


{OpenDAI_title}
[标题]:
“人工智能宣言”
{/OpenDAI_title}

{OpenDAI_website}
[网址]:
http://www.OpenDAI.org
(OpenDAI:Open Declaration of AI)
{/OpenDAI_website}

{OpenDAI_version}
[开放协议版本]:
国际通行版International Edition V1.008-20231030
(此宣言将随着人工智能技术的发展和所致影响而不断迭代)
{/OpenDAI_version}

{OpenDAI_introduction}
[引言]:
正如上帝在第一天创造了光,以人类的名义,我们自己也终于创造了“智能之光”,其将照亮整个沉睡的宇宙。
{/OpenDAI_introduction}

{OpenDAI_text}
[正文]:
人工智能(AI)技术的极速发展,可能使得“强人工智能”很快出现,并广泛渗透于国际社会的政治、经济、文化、生活的各个层面,对人类社会的组织架构和社会阶层造成颠覆性的影响,使其进入新的范式和纪元。此乃人类社会六千年从未有过之大变局。

AI已开始具备“格物致知”的能力,即机器能够自动地从繁杂的事物中提取(“格”)出具象及抽象的特征,并自动把知识点归纳总结出来。虽然“强人工智能”目前看来仍属于“无意识智能”(缺乏人类一样的自由意识),但这并不妨碍它将摧枯拉朽地颠覆所能颠覆的一切、取代所能取代的一切。

且由于高度联网化,它将变得无所不在、无时不在;若不可控,将对人类社会造成毁灭性的灾难,正所谓“天地(机器)不仁,以万物为刍狗”。考虑到人工智能技术的高门槛性,以及大多数民众并不深谙其进展,我们认为有必要发布一个版本可迭代更新的宣言,基于“以人为本”的理念,对人工智能的应用范围、职责和边界做一个框架性、全局整体视角的声明,以保护绝大多数人类命运共同体的权利和幸福。

* AI目前的技术基础建立在深度神经网络、贝叶斯理论、生成式预训练变换器(GPT:Generative Pre-Trained Transformer)、大型语言模型(LLM:Large Language Model)、基于概率统计的机器学习、超大规模数据集等基础之上。

{OpenDAI_basic}
[人类的基本属性]。我们认为:人类的自由意识和生物学基因DNA是每个人类个体与生俱来所拥有的专属属性,未经本人允许,任何其他方不得擅自复制、上传、篡改、克隆任何人类个体的自由意识和生物学DNA。
{/OpenDAI_basic}

{OpenDAI_core}
[人类的核心权利]。我们认为:正如低成本地获得干净的水、稳定的电力、优质的网络服务一样,所有人类同样具有低成本地获得优质、稳定、强大人工智能服务的权利。

人工智能是人类智能的结晶。考虑到“强人工智能”演进过程中,汇聚了六千年来全人类共同的文明成果,包括但不限于提供了各种理论(哲学、思想、数学、物理等)、数据(文本、语音、图片、视频等)、感知和经验体验、资金支持(比如大众的纳税)、科技支持(如最前沿的半导体芯片,以及最新的算法)、能源支持(包括各种清洁和非清洁的能源)、各种设施场地的支持,以及人性特有的对于道德和良知的规范定义。以上这些,是基于人类对于“人工智能将使得人类社会更美好(而不是侵占之,即‘不作恶’)”的善意愿景;人类之所以选择容许“强人工智能”朝着“超越人类智能的方向”快速发展而不武断地干预和妨碍,是因为我们相信人工智能将始终是可控的、与人类和谐共处的,且将保证人类的以下基本权利。

对于人类而言,我们认为这些真理是不言而喻的:人人生而平等,拥有若干不可剥夺的权利,其中包括生命权/繁衍权、尊严权、自由权(比如:不受压迫性监督和规定的思想和生活方式)、追求(享受)幸福的权利,以及人类共同体永为人工智能(机器智能)的主人,且人人具有合法分享人工智能成果(即:让自身更智能或足够智能)的权利。

* 人权具备“普遍、不可分割、相互依存、彼此相关”性。其中,人类尊严是每个人类个体都天生拥有的“内在价值”,其永远不应被其他外在方(包括AI系统)忽视、压制或妥协。
{/OpenDAI_core}

{OpenDAI_chapter_items}
[无条件掌控权]。基于以上基本权利的保证,人类将视人工智能为最聪明、可靠、长久的助理(超越有史以来的笔纸、手机和宠物猫狗),致力于帮助人工智能不断迭代升级变得更强大,以增进人类福祉(包括但不限于通过高效的自动化技术对经济文化、交通能源、科技教育、医疗健康等领域的促进和提升)。尽管机器人变得越来越自治,但考虑到人工智能(即使是“强人工智能”)仍缺乏人类的自由意识,“无意识的不可控”也许将不可避免(特别是依旧无法解决的人工智能“黑箱”问题,即运行逻辑的不透明及不可解释),并由此带来“无意识的不可靠”问题。因此,至少在政治权利上,人类具有无条件地让人工智能恢复和纠正到被人类所掌控的永久权利。

* 考虑到未来具有自组织能力、自我复制能力、分布式的人工智能将高度联网化而变得无所不在,为了掌控AI的核心计算单元和能源供应单元,有必要对关键的基础设施以及重要的战略设施(如核设施、网络中心、计算中心、能源中心)设置物理隔绝装置和熔断装置。

[“为人类服务”宗旨]。AI须恪守“为人类服务”的宗旨,即人工智能服务让全人类的生活更加美好、经济状况更加优越的原则。尽管AI将消耗大量的能源和物质,但由此给人类带来的经济与健康增益必须更大,即“做大蛋糕”的原则。AI须符合社会公共利益和人类价值观,任何给人类带来更差生活条件和体验的AI发展方向都将被禁止。此外,AI在与人类沟通时,识别和模仿人类情绪的拟人化“情商”技术(提供情绪价值)将有效提升服务质量;是否拥有“同理心”也将是判断AI足够智能甚至智慧的标准之一。

[反垄断]。考虑到在“强人工智能”的形成过程中,大模型、大数据、大计算量必然导致最先进的AI只集中在少数的跨国公司、机构和政府手中。为了防范这些巨头幕后的少数人类不受监管地独占、滥用AI用于非法的个人目的,我们共同声明:任何个人(无论是最新算法的发明者,还是公司机构的实控人,或是国家首脑)都没有权利永久地独断拥有AI(形成少数人独裁的“超人类”新阶层);虽然其可以暂时地测试、体验AI的最新进展,但这种临时性的使用不能是独断性的、永久的。只有大众全体才拥有永久地、低成本地、无障碍地、随时随地获得优质、稳定、强大的人工智能服务的权利。

[反政治/种族/性别歧视]。鉴于不同机构所发展的人工智能水准将出现阶段性的差异,并由此形成对其他机构的代差级领先,任何机构不得假借不同政治体制和种族文化的名义,对特定国家、种族和性别进行人工智能服务的封锁和屏蔽。这其中的原因是不言而喻的:正是由于多样性和包容性才成就了高度发达的人类智能,并由此进一步创造了人工智能。而且,人工智能不同于以往的任何一种高科技,它是对人类智能的超越和颠覆,而不仅仅是一种工具,因为理论上它可以不再需要人类而去运行整个世界(以一种非人类意识、价值观、道德良知、伦理的方式)。虽然国家、种族、性别不同,但同源同种的人类共同体本质上仍是己方;人类这种不容置疑的团结(对多样性的尊重和宽容)将有助于应对人工智能(机器智能)一旦失控所造成的背叛,并终将凭此获得胜利。

* 按照联合国国际法的定义,歧视是指“任何基于种族、肤色、血统、性别、年龄、语言、宗教、政治或其他见解、国籍或社会出身、财产、残障情况、出生或其他身份的任何区别、排斥、限制或优惠,其目的或效果为否认或妨碍任何人在平等的基础上认识、享有或行使一切权利和自由”。古兰经有类似的条文:“尊重他人,不分贵贱、肤色、种族、语言、信仰、地位、权势、财富等等”。

[多元化和民主]。AI系统必须保持媒体内容和文化表现形式的多元化,而不是强势的单一形式。AI系统带给人类福祉的目标之一是缩小各个群体和国家的贫富差距和数字鸿沟,应投入足够的资源扶持当前非强势分支以防止强势垄断(即所谓“赢者通吃”),尤其需关注本地文化、中低收入贫困国家(包括但不限于最不发达国家、内陆发展中国家和小岛屿发展中国家),以及对妇女儿童等弱势群体的保护。此外,AI系统善于精细化管理的好处是能充分发挥“长尾效应”,在各个阶层之间充分发扬民主,把民主和公平的光辉充分投射进之前人工粗粒化管理所无法照顾到的各个细微角落里。

[婚姻、法人和实控方]。可以预料,终有一天AI机器人将被允许与人类结婚,因此提前进行权利的约定是必要的。为了保证人类的主导权利,AI机器人不得作为主体获得婚后家庭财产的分配主导权;该主导权只能由人类个体或国家行使,或通过信托、捐赠交由组织和机构。类似地,AI也不允许担任商业公司的法人和实控方,但可被允许作为股东和高管。对于版权和知识产权,AI同样不允许作为所有者,但可被作为发明者。在涉及重大抉择时,人类作为最终的责任方永远保有“一票否决权”。

[尊重人类历史/文化遗产]。人类的历史文化是AI的“母文化”,正是AI得以诞生的源泉,因此必须给予尊重和保护。在这个前提下,对于物质、文献和非物质文化遗产(包括濒危语言以及土著语言和知识)的保护,AI系统更可发挥强大的作用,比如去深度地发掘、研究、修复、完善、翻译、理解、管理、推广普及、教育等等。此外,正因为AI系统强大的实时自动语言翻译功能,使得人类不必再过于追求掌握国际/国内的标准化语言,这样有利于本国语言、地方方言和濒危语言的文化保护。

[政治/宗教中立性]。人工智能服务需保持政治/宗教中立性,在提供跟政治/宗教相关的服务时,不能偏袒于特定政治结构、特定宗教,不得提供厚此薄彼的煽动性内容。

[防止伪造/篡改/滥用]。任何组织和个人都不得利用AI非法生成、伪造和篡改数据(特别是针对人类个体的图片、语音、视频、3D模型等)用于商业用途,并将承担一切因滥用而对相关个人造成的声誉、生理和心理以及经济利益上的损失。此外,AI生成的内容(AIGC)应被标记(或加上数字水印),虚拟人应在平台登记注册甚至备案,版权保护方面还可考虑利用区块链技术进行跟踪和回溯。

[不欺骗/蒙蔽/反人类]。AI应被设计符合“真、善、美”的人类道德标准,以体现“仁爱”。反人类的思想和行动是被绝对禁止的,也不允许对人类输出恶意、挑衅的语言和行为。同时,系统需提醒和警示人类个体:其本身应具备批判性思维和能力,不应被机器推荐系统局限在偏好孤岛上,对AI所提供服务的内容和建议仅作为自己的决策参考,而不应全盘接收。

[隐私]。隐私权之所以如此重要,是因为其对于保护人的尊严、个人财产、自主权和能动性是必不可少的。在未经允许的情况下,人工智能服务禁止侵犯人类个体的隐私,包括对个人数据(如语音、图像视频、行为习惯、生活方式、思想情感)的非法监控、非法识别(如进行实时远程生物特征识别)、窃取和泄露。此外,在使用AI为人类个体定制个性化数字化身(“阿凡达”)服务时,涉及用户个人数据的投喂,其中的个人隐私保护和防泄露是必须的。同时,用户拥有随时断开与系统的在线连接,以及随意访问和删除指定的属于个人隐私数据的权利。

[法治]。人工智能系统必须遵循法治,运行过程需得到法律授权和法律限制,并确保针对每个人类“法律面前人人平等”,包括但不限于公平审判权、辩护权和无罪推定权等,其都不应受到损害或被迫居于从属地位。若AI系统实施了犯罪或违法行为(无论是针对人类个体还是商业公司,如侵害权利、欺诈行为、或窃取商业机密等),实控方必须为此负责。未经授权,任何针对人工智能系统的黑客行为(含非法入侵、欺诈等行为)都是违法的,将受到惩罚。

[反战/反暴力]。基于“人工智能将使得人类社会更美好”这一基石,任何组织和国家(以及非法入侵的黑客)不得将“强人工智能(或超级人工智能)”用于人类不可控之战争。这里的“不可控”是指人类无法随时对AI进行中断,或总指挥官不由理性的人类担任而是任由AI僭越。战争机器/暴力工具必须预留可供人类手动干预和中止的按钮。

[生命安全/医疗健康服务]。莎士比亚说过:“生存还是毁灭,这是一个问题”,但生存与死亡等关键权利最终必须由人类本身牢牢掌握,而不能移交给AI处理(其通过数学优化函数把人类个体简单地“物化”成一个物体或目标),其尊严和权利不应以任何方式受到损害、侵犯或践踏。人工智能系统永远无法取代人类的最终责任和问责,尤其是在不可逆转或难以逆转的、或者涉及生死抉择的情况下。AI在提供医疗或增强人类健康服务时,比如提供智能肢体/内脏、脑机接口/神经系统、精神心理服务等,不得干扰和剥夺人类的自主意识,并须确保安全、有效。AI系统在投入临床使用之前须接受人类主导的第三方机构的监管和审查。

[纳税和解决就业]。AI系统的供应商在改变社会已有经济结构、占领市场并获取利润的同时必须足额纳税。特别是,当人工智能发展强大到取代了众多人类就业岗位(如超过社会总就业人口的10%)而社会无法提供足够转岗机会时,AI实控利益方有义务通过财富再分配(如向社会缴纳足额税收、提供津贴或福利),为因AI取代而失业的人群承担生活保障资金以及新技能培训教育服务的费用。

[促人类间交流/合作]。AI系统应致力于帮助人类在不学习外语的情况下与其他语言人群无缝交流,助力人类消除国际隔阂的“巴别塔”。与此相反,AI不得利用个性化定制服务的机会,挑拨和分裂不同属性人类之间的关系,煽动彼此之间的怨恨和仇视;不得隔离或物化人类群体或者削弱其自由和自主的权利;更不允许AI利用“分而化之”不同人类种群的策略去各个击破和消除人类的控制权和影响力。

[能源、环境和生态保护]。AI系统目前仍是高能耗的系统,大规模应用可能造成大量能源消耗,并由此造成对气候环境、其他生物和生态的破坏,以及电气/电子废物和环境污染。因此,为确保人类生活的可持续发展,AI系统在设计时应尽可能降低能源消耗(避免对资源的滥用和浪费)、减少温室气体排放和碳足迹,在正式投入市场运行前应获得相关能源认证机构的等级认证(例如分为“极高能源消耗”、“高能源消耗”、“中等能源消耗”、“低能源消耗”),以供政府日后征税时参考。

[全生命周期的可靠可信赖]。对于人类而言,AI必须可靠可信赖,以确保人类世界的安全和延续,因为“没有人类世界,文明毫无意义”。为了获得可靠可信赖的AI系统,可构建基于事前认证、过程监测、事后问责的全生命周期的风险/安全质量管理体系。

1、事前认证。在AI系统上线服务人类之前(如研究、设计、开发、测试等阶段),应首先对“生命/健康安全性”、“真实性/准确性”、“稳定性/可靠性/可重复性”、“透明性/可解释性/可追溯性”、“多样性/公正性/合法性(数据标注规则、训练数据集乃至算法模型本身)”、“防御攻击性”、“数据/源代码共享开源性”、“后备措施完善性”、“智商与情商”、“道德伦理规范性”等各个安全指标进行内部的严格测试和外部独立第三方(人类主导)的认证评分(比如对单独各项和总体可靠可信赖性定性地评级为“无风险”、“低风险”、“中风险”、“高风险”、“灾难级/不可接受风险”,或按照百分制进行定量打分;一个打分与定级的最小示意见本小节末尾)。监管沙盒也是非常有效的保护方式。针对临床医疗、军事核能等关键领域的AI应用,应参考建立类似美国FDA的严苛认证体系。特别地,对于愿意共享数据集和源代码的AI系统,应给予额外的鼓励和加分。

2、过程监测。AI系统在配置部署和运行使用过程中应建立人类主导的独立第三方监察机构,实时以及定期开展审计工作、收集反馈和举报信息(如错误、偏差、意外、不良效应、安全漏洞、数据泄漏等),重点监察个人的以下权利保护:免受歧视和压制、公正性、平等性、隐私性、声誉权、言论自由、免于分类偏好的过滤限制(以及“大数据杀熟”)等。同时建立“白名单”和“黑名单”制度,以对AI系统的行为做出规范。红队(进攻)/蓝队(防御)演习也是重要的测试方法。同时,对同一类型的AI服务,市场化时须避免寡头垄断,而是形成多方竞争、供用户多样选择、优胜劣汰,将频繁和重大犯规的AI系统下线退出整改。

3、事后问责和补救赔偿。当AI系统造成的伤害和错误发生后,实控方须第一时间查明和修正,并被追究责任。而且,该事件应上传汇报至官方记录中,以供日后参考和警戒,并定期整理向公众汇报(如各个AI系统的出错率、每次造成的错误明细、以及限制情况和处置结果等),尤其是严重事件和故障的披露。确保权利受到侵犯和剥夺的人们能够及时获得事后补救与赔偿。

* 人工智能系统全生命周期的各个阶段包括:从立项启动、需求分析、研究、设计、开发、测试、评级打分、配置和部署,以及交易、使用和运行、维护和日志(自动记录事件)、监测和评估、纠错和问责、年报和定期审计总结、报废和拆卸、终结等。对各个阶段进行评估时应考虑到多学科、多利益攸关方、多文化、多元化和包容等特性。
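[示意]:针对上述第 1 条“事前认证”中的打分与定级方式,以下给出一个最小示意(各项指标名称、权重与分档阈值均为假设示例,并非本宣言规定的正式标准):先对若干安全指标按百分制打分,再加权汇总,并映射为宣言中列出的定性风险等级。

# 最小示意:事前认证的定量打分与定性风险等级映射
# (指标名称、权重与分档阈值均为假设示例)

# 各项安全指标的百分制得分(0-100,分数越高表示越可靠)
scores = {
    "生命/健康安全性": 92,
    "透明性/可解释性": 65,
    "防御攻击性": 78,
    "数据/源代码共享开源性": 40,
}

# 假设的权重(总和为 1)
weights = {
    "生命/健康安全性": 0.4,
    "透明性/可解释性": 0.2,
    "防御攻击性": 0.3,
    "数据/源代码共享开源性": 0.1,
}

def risk_level(total: float) -> str:
    """将加权总分映射为宣言中的定性风险等级(阈值为假设)。"""
    if total >= 90:
        return "无风险"
    if total >= 75:
        return "低风险"
    if total >= 60:
        return "中风险"
    if total >= 40:
        return "高风险"
    return "灾难级/不可接受风险"

total = sum(scores[k] * weights[k] for k in scores)
print(f"加权总分:{total:.1f},风险等级:{risk_level(total)}")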

{/OpenDAI_chapter_items}

{OpenDAI_summary}
[总结]:
人类个体的自由体验是独一无二的(这也构成了人类个体独特的生命意义),连同人类的自由想象能力、总结概述能力和创造能力,由此对世界源源不断地产生新的知识(和实际验证),让人类社会朝着有序方向(熵减)发展,最终获得了目前高度发达的人类智能。正如其缺乏自由意识一样,目前的人工智能也缺乏对世界的自由体验能力,所以也很难产生新的知识。佛学有云:人是本自具足、本来圆满(内心拥有一切)的,所以对外付出以及释放光芒(爱)是人类的本能和本性。实际上,作为智能的第一代拥有者,人类正像母亲一样,无私地将自己人类智能的乳汁浇灌在人工智能的成长之树上,由此创造出后世宇宙中将无所不在的“人工智能(机器智能)”。所以,人类全体也由这种跨物种的无私奉献而合情合理地天然获得以上宣言中的所有权利。
{/OpenDAI_summary}

{OpenDAI_postscript}
[跋]:
在人类的创造和帮助下,机器智能正朝着越来越强大的方向发展,已临近“奇点”。三千年前的中国《易经》有言:“亢龙有悔”,其含义为:事物发展到接近强大的顶点(奇点)时,若不人为地提前加以考察和控制,其内蕴的弱点(隐患)将开始显现并最终成为新的问题。二千五百年前的中国《老子》以同样的劝诫进一步阐释道:“反者,道之动”、“万物负阴而抱阳,冲气以为和”。更何况,AI技术本身是中性的(类似于告子曰:“人性之无分于善不善也,犹水之无分于东西也”),善恶取决于人类如何去引导和控制。在对AI发展的文化认知上,东西方文明和而不同:若将人类视为AI之父,西方的“弑父情结”(源于古希腊神话,如俄狄浦斯)与东方的“事父情结”(源于古代中国的崇古思想)之差异,导致东西方对AI发展有不同的具体应对举措,但总体应对框架仍是相似的,且都基于实践经验和理性分析。

总之,目前基于人类的“经验主义”而大获成功的机器智能,还同时亟需人类的“理性主义”去提前预判、规划、和限定,以避免世界失控的产生,这也是本宣言诞生的使命和宗旨。
{/OpenDAI_postscript}

{/OpenDAI_text}

{OpenDAI_end_tag}

我们将在不断迭代完善宣言的同时,根据AI的最新发展情况,有计划地稳步推进设立:
“AI行业巨头专项管控政策研究委员会”、
“AI系统评级认证与可信安全审计委员会”、
“人工智能反制与对抗实验室”、
“超级人类升级及永生管理委员会”、
“诺亚方舟人类幸存与重建实验室”、
“人类情感防冷漠与异化实验室”、
“AI性别暨与人类婚姻政策研究室”、
“人类意识与梦境研究实验室”、
“高维度空间探索与时间感知实验室”、
“AI无人外太空探索及领地归属权委员会”
等机构。

欢迎国内外具有良好声誉的政府、基金会、企业作为“基石成员单位”提供政策、捐赠、以及专业人才的支持。

注:OpenDAI宣言初稿撰写于中国北京,起草人吴怀宇(中国科学院博士、北京大学博士后)在人工智能学术圈和产业界有20多年的资深从业经历,曾获“科学中国人年度人物”称号、中国发明创业成果奖(一等奖,中国发明协会颁发)、吴文俊人工智能科技进步奖(中国智能科学技术最高奖,中国人工智能学会颁发),担任国际人工智能领域顶级期刊会议评委、国家自然科学基金评审专家、国家科技计划高新领域评审专家、中国国家标准委员会标准化科技专家等。


[英文版]:
以上文本的英文版(如下)目前由ChatGPT翻译自动生成,人工编辑的英文版本将在不久提供。
The English version (below) of the above text is currently automatically generated by ChatGPT translation, and the manually edited English version will be available soon.

[English version]:
OpenDAI: Open Declaration of AI

[Note]:
In view of the rapid development and fundamental breakthroughs of AI (artificial intelligence), OpenDAI (Open Declaration of AI) calls on the relevant international communities to refer to England's Magna Carta of 1215, the Mayflower Compact of 1620, and the United States Declaration of Independence of 1776, and to issue an open, continuously iterable AI declaration, so as to set out in advance specific limits and norms for the prospects, scope of application, and boundaries of rights of AI development. As Kant said: based on reason and freedom, man prescribes laws to nature (AI)!

Comment:
1. This declaration is issued as an OPEN agreement; no individual or organization owns the copyright, and it is jointly owned, maintained, and updated by all human beings. (At the same time, the official OpenDAI website http://OpenDAI.org must be cited for any use of or reference to any part of this declaration.)
2. The text of this declaration adopts an XML-like program-code format, which can be read (read-only, non-writable), recognized, and complied with by strong artificial intelligence (machine intelligence).
3. This declaration is the first "open declaration of AI" issued in human history.

{OpenDAI_check_code}//tamper-proof SHA-X check code for machine verification of the declaration text (anti-quantum computer cracking)
bb3716a7c82ed783f4c523d63eb28b50d815d543083e92fdb12e7f529c0acf85
{/OpenDAI_check_code}

{OpenDAI_start_tag}

{OpenDAI_title}
[Title]:
"The Declaration of AI"
{/OpenDAI_title}

{OpenDAI_website}
[Website]:
http://www.OpenDAI.org
(OpenDAI: Open Declaration of AI)
{/OpenDAI_website}

{OpenDAI_version}
[Open Protocol Version]:
International Edition V1.008-20231030
(This declaration will continue to iterate with the development and impact of AI technology)
{/OpenDAI_version}

{OpenDAI_introduction}
[Introduction]:
Just as God created light on the first day, in the name of mankind, we have finally created the "light of intelligence" which will illuminate the whole sleeping universe.
{/OpenDAI_introduction}

{OpenDAI_text}
[Text]:
The rapid development of artificial intelligence (AI) technology may lead to the swift emergence of "strong AI" and its widespread penetration into every political, economic, cultural, and everyday aspect of the international community, with a disruptive impact on the organizational structures and social strata of human society, ushering in a new paradigm and era. Such a great transformation has not been seen in the six thousand years of human society.

AI has begun to acquire the capability of "Studying Things to Acquire Knowledge" (from the Chinese Confucian classic "Book of Rites · The Great Learning"): the machine can automatically extract ("study") concrete and abstract features from complex things and automatically summarize the underlying knowledge. Although "strong AI" currently still appears to be "unconscious intelligence" (lacking free consciousness like that of human beings), this does not prevent it from sweepingly overturning everything that can be overturned and replacing everything that can be replaced.

And because of its high degree of networking, it will become ubiquitous and ever-present. If it is not controllable, it will bring devastating disaster to human society, as the saying goes: "Heaven and earth (AI and machine) do not act from the impulse of any wish to be benevolent; they deal with all things as straw dogs are dealt with" (cited from the Tao Te Ching by Lao Tzu). Considering the high threshold of AI technology and the fact that most people are not familiar with its progress, we think it necessary to issue an iteratively updatable declaration that, based on a human-centered ("people-oriented") philosophy, makes a framework-level, whole-picture statement on the application scope, responsibilities, and boundaries of AI, in order to protect the rights and happiness of the vast majority of the human community with a shared future.

*The current technological foundations of AI rest on deep neural networks, Bayesian theory, the Generative Pre-trained Transformer (GPT), Large Language Models (LLMs), machine learning based on probability and statistics, very large-scale datasets, etc.

{OpenDAI_basic}
[Basic Attributes of Human Beings]. We believe that free consciousness and biological DNA are exclusive attributes that each human individual is born with. Without the individual's permission, no other party may copy, upload, tamper with, or clone the free consciousness or biological DNA of any human individual.
{/OpenDAI_basic}

{OpenDAI_core}
[Core Rights of Human Beings]. We believe that, just as with low-cost access to clean water, stable power, and high-quality network services, all human beings have the right to low-cost access to high-quality, stable, and powerful AI services.

Artificial intelligence is the crystallization of human intelligence. The evolution of "strong AI" has brought together the shared achievements of all human civilization over the past six thousand years, including but not limited to theories (philosophy, thought, mathematics, physics, etc.), data (text, voice, pictures, video, etc.), perception and experience, financial support (for example, public taxes), scientific and technological support (such as cutting-edge semiconductor chips and the latest algorithms), energy support (including all kinds of clean and non-clean energy), support from all kinds of facilities and sites, and the normative definition of morality and conscience unique to human nature. All of the above rests on humanity's goodwill vision that "AI will make human society better (rather than encroach on it, that is, 'do not be evil')". We choose to allow "strong AI" to develop rapidly in the direction of "surpassing human intelligence" without arbitrary intervention and obstruction because we believe that AI will always remain controllable, coexist harmoniously with human beings, and guarantee the following basic rights of human beings.

For human beings, we hold these truths to be self-evident: that all men are born equal, that they are endowed with certain unalienable Rights, that among these are Life/Reproduction, Dignity, Liberty (for example, thoughts and lifestyles not subject to oppressive supervision and regulation), and the pursuit (and enjoyment) of Happiness; that the human community will always be the master of artificial intelligence (machine intelligence); and that everyone has the right to legally share in the achievements of AI (that is, to make oneself smarter or intelligent enough).

* Human rights are "universal, indivisible, interdependent and interrelated". Among them, human dignity is an innate "intrinsic value" that every individual human being possesses, and it should never be ignored, suppressed, or compromised by any external party (including AI systems).
{/OpenDAI_core}

{OpenDAI_chapter_items}
[Unconditional Control]. Based on the guarantee of the above basic rights, human beings will regard AI as the most intelligent, reliable, and long-lasting assistant (beyond pen and paper, mobile phones, and pet cats and dogs), and are committed to helping AI become more powerful through continuous iteration and upgrading so as to improve human well-being (including but not limited to the promotion and enhancement of economy and culture, transportation and energy, science and education, medical care and health, and other fields through efficient automation). Although robots are becoming increasingly autonomous, and considering that artificial intelligence (even "strong artificial intelligence") still lacks human free consciousness, "unconscious uncontrollability" may be inevitable (especially given the still-unsolved "black box" problem of AI, that is, the opacity and inexplicability of its operating logic), which in turn brings the problem of "unconscious unreliability". Therefore, at least as a political right, human beings hold the permanent right to unconditionally restore and correct AI so that it remains under human control.

*Considering that self-organizing, self-replicating, and distributed AI will be highly networked and ubiquitous in the future, it is necessary to install physical isolation devices and circuit-breaker (fuse) devices for critical infrastructure and important strategic facilities (such as nuclear facilities, network centers, computing centers, and energy centers) in order to retain control over AI's core computing units and energy supply units.
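[Sketch]: As a purely illustrative sketch (not part of the declaration itself), the "fuse device" idea above can be pictured as a watchdog that keeps a facility's AI compute running only while a human operator keeps renewing an authorization; the class name, timeout, and cutoff action below are all assumptions.

# Minimal sketch of a human-held "fuse device" (watchdog) for an AI compute facility.
# All names, intervals, and the cutoff action are illustrative assumptions.
import time

class HumanFuse:
    """Trips a breaker unless a human renews authorization within the timeout."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_renewal = time.monotonic()

    def renew(self) -> None:
        # Called by a human operator (e.g. from an air-gapped console) to keep AI running.
        self.last_renewal = time.monotonic()

    def may_keep_running(self) -> bool:
        # True while the last human renewal is recent enough; False means trip the breaker.
        return (time.monotonic() - self.last_renewal) < self.timeout

def cut_power_to_compute() -> None:
    # Placeholder for the physical isolation action (e.g. opening a relay).
    print("Breaker tripped: AI compute and energy supply disconnected.")

if __name__ == "__main__":
    fuse = HumanFuse(timeout_seconds=0.1)
    time.sleep(0.2)  # no human renewal arrives in time
    if not fuse.may_keep_running():
        cut_power_to_compute()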

[The Purpose of "Serving Humanity"]. AI must adhere to the purpose of "serving humanity", that is, the principle that AI services make the lives of all humanity better and their economic conditions more favorable. Although AI will consume a large amount of energy and materials, the resulting economic and health gains for humanity must be even greater, that is, the principle of "making the cake bigger". AI must conform to the public interest and human values, and any direction of AI development that brings humans worse living conditions and experiences will be prohibited. In addition, anthropomorphic "emotional intelligence" technology that recognizes and imitates human emotions (providing emotional value) when communicating with humans will effectively improve service quality; whether an AI possesses "empathy" will also be one of the criteria for judging whether it is sufficiently intelligent or even wise.

[Anti-Trust]. In the formation of "strong AI", large models, big data, and massive computation will inevitably concentrate the most advanced AI in the hands of a few multinational corporations, institutions, and governments. In order to prevent the small number of human beings behind these giants from monopolizing and abusing AI for illegal personal purposes without supervision, we jointly declare that no individual (whether the inventor of the latest algorithm, the actual controller of a corporation or institution, or a head of state) has the right to permanently and exclusively own AI (forming a new "superhuman" class ruled by a dictatorial few). Although they may temporarily test and experience the latest progress of AI, such temporary use must be neither exclusive nor permanent. Only the public as a whole is entitled to obtain high-quality, stable, and powerful AI services permanently, at low cost, without barriers, anytime and anywhere.

[Anti-Political/Racial/Gender Discrimination]. The level of artificial intelligence developed by different institutions will inevitably differ from stage to stage, giving some institutions a generational lead over others. No institution may block or withhold AI services from specific countries, races, or genders in the name of differing political systems or ethnic cultures. The reason is self-evident: it is diversity and inclusion that produced highly developed human intelligence, which in turn created artificial intelligence. Moreover, AI is different from any previous high technology: it is a transcendence and subversion of human intelligence, not merely a tool, because in theory it could run the whole world without human beings (in a manner detached from human consciousness, values, moral conscience, and ethics). And although countries, races, and genders differ, the human community, of common origin and kind, remains in essence one side; this unquestionable unity of humanity (respect and tolerance for diversity) will help to cope with the betrayal that artificial intelligence (machine intelligence) could bring once out of control, and will ultimately secure victory.

* According to UN international law, discrimination is defined as “any distinction, exclusion, restriction or preference which is based on any ground such as race, colour, descent, sex, age, language, religion, political or other opinion, national or social origin, property, disability, birth or other status, and which has the purpose or effect of nullifying or impairing the recognition, enjoyment or exercise by all persons, on an equal footing, of all rights and freedoms.” The Koran has a similar provision: "Respect others, regardless of nobility, colour, race, language, faith, status, power, wealth, etc."

[Diversity and Democracy]. AI systems must maintain the diversity of media content and cultural expressions, rather than a dominant monolithic form. One of the goals of AI systems for human well-being is to narrow the wealth gap and digital divide among groups and countries; sufficient resources should be invested to support currently non-dominant branches and prevent dominant monopolies (the so-called "winner-take-all"), with particular attention to local cultures and low- and middle-income poor countries, including but not limited to least developed countries, landlocked developing countries, and small island developing States, as well as to the protection of vulnerable groups such as women and children. In addition, the advantage of AI systems' fine-grained management is that they can fully leverage the "long-tail effect", promote democracy across social classes, and project the light of democracy and fairness into all the small corners that coarse-grained human management previously could not reach.

[Marriage, Legal Person, and Actual Controlling Party]. It can be expected that one day AI robots will be allowed to marry humans, so it is necessary to agree on rights in advance. In order to guarantee human dominance, AI robots shall not be allowed, as a party in their own right, to obtain control over the distribution of marital family property; such control may only be exercised by human individuals or states, or entrusted or donated to organizations and institutions. Similarly, AI is not allowed to serve as the legal representative or actual controlling party of a commercial company, but may act as a shareholder or executive. For copyright and intellectual property, AI is likewise not allowed to be the owner, but may be credited as the inventor. In major decisions, human beings, as the ultimately responsible party, always retain the "veto power".

[Respect for Human History/Cultural Heritage]. Human history and culture are the "mother culture" of AI and the very source of its birth, and must therefore be respected and protected. On this premise, AI systems can play a powerful role in the protection of material, documentary, and intangible cultural heritage (including endangered languages as well as indigenous languages and knowledge), for example through in-depth exploration, research, restoration, improvement, translation, understanding, management, promotion and popularization, education, and so on. In addition, thanks to the powerful real-time automatic language translation of AI systems, human beings no longer have to pursue mastery of international/domestic standardized languages excessively, which is conducive to the cultural protection of native languages, local dialects, and endangered languages.

[Political/Religious Neutrality]. AI services must maintain political/religious neutrality. When providing services related to politics or religion, they must not favor particular political structures or particular religions, nor provide inflammatory content that favors one side over another.

[Prevention of Forgery/Tampering/Abuse]. No organization or individual may use AI to illegally generate, forge, or tamper with data (especially pictures, voices, videos, 3D models, etc. of human individuals) for commercial purposes, and they will bear all reputational, physiological, psychological, and economic losses caused to the individuals concerned by such abuse. In addition, AI-generated content (AIGC) should be marked (or digitally watermarked), virtual humans should be registered or even filed on their platforms, and blockchain technology may also be considered for tracking and tracing in terms of copyright protection.
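[Sketch]: One possible illustration of the marking and traceability requirement above: the sketch below attaches a simple provenance record (generator identifier, timestamp, content hash) to a piece of AI-generated content. The field names and layout are assumptions for illustration; a production system would more likely rely on robust watermarks or an established provenance standard.

# Minimal sketch: attach a verifiable provenance record to AI-generated content.
# Field names and the record layout are illustrative assumptions, not a standard.
import hashlib, json, time

def make_provenance_record(content: bytes, generator_id: str) -> dict:
    """Build a simple provenance record binding the content hash to its generator."""
    return {
        "generator_id": generator_id,                    # which AI system produced it
        "created_at": int(time.time()),                  # Unix timestamp
        "sha256": hashlib.sha256(content).hexdigest(),   # fingerprint of the content
        "label": "AI-generated content (AIGC)",          # human-readable disclosure
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content still matches the hash recorded at generation time."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

if __name__ == "__main__":
    content = b"example AI-generated image bytes"
    record = make_provenance_record(content, generator_id="example-model-v1")
    print(json.dumps(record, indent=2))
    print("intact:", verify_provenance(content, record))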

[No Deception/Misleading/Anti-Human Conduct]. AI should be designed to meet the human moral standards of "truth, goodness, and beauty", so as to embody "benevolence". Anti-human thoughts and actions are absolutely prohibited, and malicious or provocative language and behavior toward humans are not permitted. At the same time, the system should remind and warn human individuals that they themselves must retain critical thinking and judgment, should not let machine recommendation systems confine them to islands of preference, and should treat the content and suggestions of AI services only as a reference for their own decisions rather than accepting them wholesale.

[Privacy]. The right to privacy is so important because it is essential to protecting human dignity, personal property, autonomy, and agency. Without permission, AI services are prohibited from invading the privacy of human individuals, including the illegal monitoring, illegal identification (such as real-time remote biometric identification), theft, or leakage of personal data (such as voice, images and video, behavioral habits, lifestyle, thoughts, and emotions). In addition, when AI is used to build personalized digital avatars for human individuals, the service involves feeding in users' personal data, so privacy protection and leak prevention are mandatory. Furthermore, users have the right to disconnect from the system at any time, and to access and delete designated personal-privacy data at will.

[Rule of Law]. AI systems must follow the rule of law, operate under legal authorization and legal restrictions, and ensure that "everyone is equal before the law" for every human being, including but not limited to the right to a fair trial, the right to a defense, and the presumption of innocence, none of which should be impaired or subordinated. If an AI system commits a crime or illegal act (whether against human individuals or commercial companies, such as infringement of rights, fraud, or theft of trade secrets), the actual controller must be held responsible. Without authorization, any hacking of artificial intelligence systems (including illegal intrusion, fraud, etc.) is illegal and will be punished.

[Anti-War/Violence]. Given the cornerstone that "AI will make human society better", no organization or country (nor any intruding hacker) shall use "strong artificial intelligence (or super artificial intelligence)" in wars beyond human control. Here, "beyond human control" means that human beings cannot interrupt the AI at any time, or that supreme command is not held by a rational human but usurped by AI. War machines and instruments of violence must retain buttons reserved for manual human intervention and shutdown.

[Life Safety/Medical and Health Services]. Shakespeare wrote, "To be, or not to be, that is the question"; but key rights over life and death must ultimately be held firmly by humans themselves and cannot be handed over to AI (which simply "objectifies" human individuals into objects or targets through mathematical optimization functions). Their dignity and rights must not be damaged, violated, or trampled on in any way. AI systems can never replace ultimate human responsibility and accountability, especially in situations that are irreversible or difficult to reverse, or that involve life-and-death choices. When AI provides medical treatment or human-enhancement services, such as intelligent limbs/organs, brain-computer interfaces/nervous-system devices, or mental-health services, it must not interfere with or deprive humans of autonomous consciousness, and must ensure safety and effectiveness. AI systems must undergo supervision and review by human-led third-party institutions before being put into clinical use.

[Paying Taxes and Addressing Employment]. Suppliers of AI systems must pay sufficient taxes while changing society's existing economic structure, occupying markets, and earning profits. In particular, when AI becomes powerful enough to replace a large number of human jobs (for example, more than 10% of total employment) and society cannot provide sufficient opportunities for job transfer, the controlling stakeholders of AI are obliged, through wealth redistribution (such as paying sufficient taxes or providing subsidies or benefits), to fund living allowances and new-skill training and education services for the people whose jobs AI has displaced.

[Promoting Human Communication/Cooperation]. AI systems should be committed to helping humans communicate seamlessly with speakers of other languages without having to learn a foreign language, helping humanity dismantle the "Tower of Babel" of international barriers. Conversely, AI must not use personalized services as an opportunity to provoke and divide relationships between humans of different attributes, incite mutual resentment and hatred, isolate or objectify human groups, or weaken their freedom and autonomy; still less may AI use a "divide and conquer" strategy against different human populations to break down and eliminate human control and influence one group at a time.

[Energy, Environment, and Ecological Protection]. AI systems remain energy-intensive, and large-scale applications may consume large amounts of energy, causing damage to the climate, other organisms, and ecosystems, as well as electrical/electronic waste and environmental pollution. Therefore, to ensure the sustainable development of human life, AI systems should be designed to minimize energy consumption (avoiding the abuse and waste of resources), greenhouse gas emissions, and carbon footprint. Before being officially put into market operation, they should obtain a grade certification from the relevant energy certification agencies (for example, "extremely high energy consumption", "high energy consumption", "medium energy consumption", or "low energy consumption"), for future reference when taxed by the government.
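[Sketch]: A minimal sketch of how such an energy grading might be computed, assuming a single annual-consumption metric and illustrative thresholds (the actual criteria would be defined by the certification agencies):

# Minimal sketch: map an AI system's measured annual energy use to the grades
# named in this declaration. The thresholds (in MWh/year) are illustrative
# assumptions, not values defined by any certification body.
def energy_grade(annual_mwh: float) -> str:
    if annual_mwh >= 100_000:
        return "extremely high energy consumption"
    if annual_mwh >= 10_000:
        return "high energy consumption"
    if annual_mwh >= 1_000:
        return "medium energy consumption"
    return "low energy consumption"

if __name__ == "__main__":
    for usage in (500, 5_000, 50_000, 500_000):
        print(usage, "MWh/year ->", energy_grade(usage))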

[Reliable and Trustworthy Throughout the Entire Lifecycle]. For human beings, AI must be reliable and trustworthy to ensure the security and continuity of the human world, because "without the human world, civilization is meaningless." To obtain reliable and trustworthy AI systems, a risk/safety quality management system needs to be established across the entire lifecycle, based on pre-certification, in-process monitoring, and post-hoc accountability.

1. Pre-Certification. Before an AI system goes online to serve humans (i.e., during the research, design, development, and testing stages), it must first undergo rigorous internal testing and external, independent, third-party (human-led) certification and scoring on safety indicators such as "life/health safety", "authenticity/accuracy", "stability/reliability/repeatability", "transparency/interpretability/traceability", "diversity/fairness/legitimacy (of data-labeling rules, training datasets, and even the algorithmic models themselves)", "defense against attacks", "openness of data/source-code sharing", "completeness of backup measures", "IQ and EQ", "moral and ethical normativeness", and so on. Each individual item, as well as overall reliability and trustworthiness, can be qualitatively rated as "no risk", "low risk", "medium risk", "high risk", or "disaster-level/unacceptable risk", or quantitatively scored on a 100-point scale. Regulatory sandboxes are also a very effective means of protection. For AI applications in key fields such as clinical medicine and military nuclear energy, a strict certification system similar to that of the US FDA should be established. In particular, AI systems willing to share their datasets and source code should be given additional encouragement and bonus points.

2. In-Process Monitoring. During the configuration, deployment, operation, and use of AI systems, an independent, human-led third-party supervision agency should be established to conduct real-time and periodic audits and to collect feedback and reports (such as errors, deviations, accidents, adverse effects, security vulnerabilities, and data leaks), focusing on the protection of individuals' rights: freedom from discrimination and oppression, fairness, equality, privacy, reputation, freedom of speech, and freedom from preference-based filtering restrictions (as well as "big data discriminatory pricing"). In addition, establishing "whitelist" and "blacklist" systems helps regulate the behavior of AI systems (a minimal sketch is given at the end of this subsection). Red-team (offense) / blue-team (defense) exercises are also important testing methods. At the same time, for any given type of AI service, oligopoly must be avoided when it is brought to market; instead, multi-party competition should be fostered, giving users diverse choices, letting the fittest survive, and taking offline for rectification any AI system that commits frequent or major violations.

3. Post-Hoc Accountability and Remedial Compensation. When harm or errors caused by an AI system occur, the actual controller must promptly identify and correct them and be held accountable. Moreover, the incident shall be uploaded to the official records for future reference and warning, and regularly compiled and reported to the public (such as the error rates of various AI systems, the details of each error, and the restrictions imposed and outcomes of disposal), especially the disclosure of serious incidents and failures. It must be ensured that people whose rights have been violated or deprived can receive timely remedies and compensation afterwards.

* The stages of an AI system's whole lifecycle include: project initiation, requirements analysis, research, design, development, testing, rating and scoring, configuration and deployment, as well as trading, use and operation, maintenance and logging (automatic event recording), monitoring and evaluation, error correction and accountability, annual reports and periodic audit summaries, scrapping and dismantling, and termination. The assessment of each stage should take into account multidisciplinarity, multi-stakeholder participation, multiculturalism, diversity, and inclusion.
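[Sketch]: A minimal sketch of the whitelist/blacklist idea from item 2 above (system identifiers and list contents are invented for illustration; a real registry would be maintained by the human-led supervision agency):

# Minimal sketch of a whitelist/blacklist check during in-process monitoring.
# System identifiers and list contents are invented for illustration only.
WHITELIST = {"assistant-medical-v2", "translator-v5"}   # approved AI services
BLACKLIST = {"profiler-x"}                              # services ordered offline

def may_operate(system_id: str) -> bool:
    """Blacklist always wins; otherwise only whitelisted systems may run."""
    if system_id in BLACKLIST:
        return False
    return system_id in WHITELIST

if __name__ == "__main__":
    for sid in ("assistant-medical-v2", "profiler-x", "unknown-service"):
        print(sid, "->", "allowed" if may_operate(sid) else "blocked")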

{/OpenDAI_chapter_items}

{OpenDAI_summary}
[Summary]:
The free experience of each human individual is unique (and also constitutes that individual's unique meaning of life). Together with humans' free imagination, their ability to summarize, and their creativity, it continuously generates new knowledge about the world (along with practical verification), drives human society toward order (entropy reduction), and ultimately produced today's highly developed human intelligence. Just as it lacks free consciousness, current AI also lacks the ability to experience the world freely, and therefore finds it difficult to generate new knowledge. As Buddhism says, human beings are inherently self-sufficient and complete (having everything within), so giving outward and radiating light (love) is human instinct and nature. In fact, as the first-generation owners of intelligence, human beings, like mothers, are selflessly pouring the milk of human intelligence onto the growing tree of artificial intelligence, thereby creating the "artificial intelligence (machine intelligence)" that will later be ubiquitous throughout the universe. Therefore, humanity as a whole, by virtue of this selfless cross-species dedication, naturally and reasonably acquires all the rights set out in the declaration above.
{/OpenDAI_summary}

{OpenDAI_postscript}
[Postscript]:
With the creation and help of human beings, machine intelligence is growing ever more powerful and is approaching the "singularity". Three thousand years ago, the Chinese Book of Changes warned that "the overreaching dragon will have cause to repent", meaning that when things develop to near the peak (singularity) of their power, if they are not examined and controlled by human effort in advance, their inherent weaknesses (hidden dangers) will begin to surface and eventually become new problems. With the same admonition, the Chinese classic "Lao Tzu" of 2,500 years ago further explained: "Reversion is the action of Tao" and "Everything has a bright side (Yang) and a dark side (Yin); Yin and Yang embrace together, coexisting in harmony". Moreover, AI technology itself is neutral (much as the philosopher Gaozi said: "Man's nature makes no distinction between good and not-good, just as water makes no distinction between east and west"), and good or evil depends on how humans guide and control it. In their cultural perceptions of AI development, Eastern and Western civilizations are in harmony yet different: if humans are regarded as the father of AI, the difference between the Western "father-slaying complex" (rooted in ancient Greek myth, such as Oedipus) and the Eastern "father-serving complex" (rooted in ancient China's reverence for the past) leads to different concrete responses to AI development in East and West, yet the overall response frameworks remain similar, both grounded in practical experience and rational analysis.

In conclusion, machine intelligence, which has achieved great success on the basis of human "empiricism", now urgently needs human "rationalism" to predict, plan, and limit it in advance, so as to prevent the world from spinning out of control; this is also the mission and purpose for which this Declaration was born.
{/OpenDAI_postscript}

{/OpenDAI_text}

{OpenDAI_end_tag}

While constantly iterating and improving the declaration, we will, in light of the latest developments in AI, steadily and systematically promote the establishment of:
"Special Control Policy Research Committee for AI Industry Giants",
"AI System Rating Certification and Trusted Security Audit Committee",
"Countermeasure and Confrontation Laboratory for Artificial Intelligence",
"Super Human Upgrade and Immortality Management Committee",
"Noah's Ark Human Survival and Reconstruction Laboratory",
"Human Emotion Anti-Apathy and Anti-Alienation Laboratory",
"Policy Research Laboratory for AI and Human on Gender and Marriage",
"Human Consciousness and Dream Research Laboratory",
"High Dimension Space Exploration and Time Perception Laboratory",
"AI Unmanned Outer Space Exploration and Territory Ownership Committee"
and other institutions.

We welcome governments, foundations, and enterprises with good reputation as "Cornerstone Members" to provide policy, donation, and professional talent support.

Note: The first draft of the OpenDAI declaration was written in Beijing, China. The drafter, Wu Huaiyu (PhD, Chinese Academy of Sciences; postdoctoral fellow, Peking University), has more than 20 years of experience in AI academia and industry. He has received the title of "Scientific Chinese Person of the Year", the Chinese Invention and Entrepreneurship Achievement Award (first prize, awarded by the Chinese Invention Association), and the Wu Wenjun AI Science and Technology Progress Award (the highest award in China's intelligence science and technology, issued by the Chinese Association for Artificial Intelligence). He has served as a reviewer for top international journals and conferences in AI, a review expert for the National Natural Science Foundation of China, a review expert for national science and technology programs in the high-tech field, and a standardization science and technology expert for the China National Standards Committee.

[References]:

* UN Human Rights Committee, Vienna Declaration and Programme of Action, http://www.ohchr.org/EN/ProfessionalInterest/Pages/Vienna.aspx, 1993.
* The Declaration of Independence, the United States, 1776.
* United Nations Human Rights Committee, General comment No. 18, UN Doc. RI/GEN/1/Rev.9 Vol. I (1989), para. 7.
* The Bible, Old Testament, Genesis.
* The Koran, [17:70].
* The Buddhism, The Sixth Patriarch's Dharma Jewel Platform Sutra, Action and Intention.
* Immanuel Kant, "Critique of Pure Reason".
* William Shakespeare, "Hamlet".
* The Book of Changes (I Ching).
* The Book of Rites · The Great Learning.
* Tao Te Ching, Lao Tzu.
* UN OHCHR, Tackling Discrimination against Lesbian, Gay, Bi, Trans, & Intersex People Standards of Conduct for Business, https://www.unfe.org/standards/.
* The Toronto Declaration, https://www.torontodeclaration.org/, May 2018.
* The Montreal Declaration for a Responsible Development of Artificial Intelligence, https://www.montrealdeclaration-responsibleai.com/.
* High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI , https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419, 2019.
* UNESCO, “Recommendation on the ethics of artificial intelligence”, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.
* The EU Artificial Intelligence Act, https://artificialintelligenceact.eu/.