Forwarded Announcement: [Cutting-Edge Lecture Series] Lecture 44: The Sphinx's Sincerity? Cognitive Centrality and Digital Accelerationism Risks in Generative AI
Date: Wednesday, April 23, 2025
Time: 14:00 – 16:00 (GMT+08)
Venue: Cisco Webex online meeting room
Sign-up Link: https://forms.gle/Gzv1geFZWCL5bxf17
*The Institute will send the online meeting link to successfully registered participants on Monday, April 21.
*Please provide accurate personal information when registering; the Institute reserves the right to verify eligibility and confirm admission.
*The organizers reserve the right to change the time and venue of the event; any changes will be announced by email.
Speaker: Wu Jing (Professor and Deputy Chair, Department of Philosophy, Nanjing Normal University, China; Director of the "Digital and Humanities" Research Center)
Moderator: Hsien-hao Liao (Director, Institute for Advanced Studies in Humanities and Social Sciences, National Taiwan University; Distinguished Professor, Department of Foreign Languages and Literatures)
Abstract:
The technological foundation of Artificial General Intelligence (AGI) is built on a mechanism that pursues the generation of universal knowledge. Current mainstream large language models (LLMs) attempt to reduce humanity's language-based knowledge systems to the learning and imitation of massive datasets, ultimately forming a new knowledge architecture centered on digital algorithms.
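As a rough illustration of the mechanism sketched above (the notation here is ours, not the speaker's): mainstream LLMs are trained on an autoregressive objective, fitting parameters $\theta$ to minimize the negative log-likelihood of each token given the tokens that precede it, $\mathcal{L}(\theta) = -\sum_{t} \log p_{\theta}(x_t \mid x_{<t})$. The "knowledge" such a model acquires is therefore whatever statistical regularities of the training corpus reduce this loss, which is the sense in which it learns by imitating massive datasets.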
However, these models exhibit clear technical contradictions: although they can handle many tasks (e.g., writing and translation), they frequently suffer from "learning fast but forgetting faster" (over-reliance on training data), "generalizing from skewed samples" (data bias), and "mechanical regurgitation" (lack of genuine understanding). More critically, their practical performance degrades over time, with marked decline in complex scenarios that require integrated judgment (e.g., medical diagnosis, legal consultation); this makes many commercial applications unsustainable and reinforces the market monopolies of tech giants.
The deeper implications of this technological trajectory warrant vigilance: as humans delegate the complexity of language and experience to algorithms, we risk not only excessive dependence on technical systems but also the homogenization of knowledge production. Behind the façade of algorithmic neutrality lies the replication of the logic of capital expansion: standardized technical solutions displacing diverse local knowledge.
Although the rapid advance of LLMs has been hailed as the hallmark of a new phase of the information age, their development faces unresolved challenges and limitations, including such practical bottlenecks as excessive energy consumption and unsustainable costs. Rather than blindly pursuing technological breakthroughs, we should build a multi-stakeholder governance framework that guarantees algorithmic transparency, so that algorithms serve the diverse development of human civilization.
Organizer: Institute for Advanced Studies in Humanities and Social Sciences, National Taiwan University
Co-organizer: ASE Cultural and Educational Foundation