A conceptual critique of the limits of artificial intelligence through Gödel's incompleteness theorems
Publisher
İTÜ Lisansüstü Eğitim Enstitüsü
Abstract
This thesis examines the debates over whether the human mind can be represented by algorithmic systems, using a historical-conceptual method of analysis that extends from the crisis in the late nineteenth-century quest for mathematical certainty to the artificial intelligence debates of the twenty-first century. Within this scope, the interactions among logic, the philosophy of mathematics, cognitive science, and artificial intelligence are examined in detail. Georg Cantor's set theory and Bertrand Russell's formulation of Russell's Paradox shook the pursuit of mathematical certainty and produced a deep rupture in the foundations of modern logic. In this atmosphere of crisis, efforts to secure the consistency of formal systems emerged under the leadership of thinkers such as Frege, Peano, and Hilbert, and Whitehead and Russell's Principia Mathematica became the most comprehensive expression of this hope. These efforts sought not only to secure the foundations of mathematics but also raised broader philosophical questions about the nature of knowledge. Against this intellectual background, Kurt Gödel's Incompleteness Theorems of 1931 brought about a major paradigm shift by showing that no sufficiently rich, consistent formal system can be completed from within itself. Gödel's theorems raised fundamental questions not only about the limits of mathematical systems but also about the nature of knowledge production. Alan Turing's theory of computation and the "Halting Problem" carried these limits into the technical domain, while Alonzo Church's lambda calculus reinforced the bounded nature of algorithmic solutions. These contributions showed that formal systems and computational methods are limited not only mathematically but also conceptually, inaugurating a new era in the understanding of logic and mathematics. From the 1950s onward, expectations for artificial intelligence grew, sustained by the belief that machines could attain human-like cognitive capacities. During the "AI Winter" periods of the 1970s and 1980s, however, the failure to develop conscious machines seriously undermined this optimism, and the issue began to be reconsidered in philosophical and epistemological terms beyond its technical limits. In this context, the view gained prominence that artificial intelligence should be evaluated not only by its functional success but also by its capacity to produce conscious experience. Conscious experience here means not merely information processing or the production of external responses but the capacity for subjective awareness, sensation, and first-person experience. This form of consciousness, generally explained through the concept of "qualia," is regarded as a property beyond the reach of artificial systems because it is inaccessible from a third-person point of view. Within this intellectual transformation, John R. Lucas's 1961 article Minds, Machines and Gödel marks an important turning point. Drawing on Gödel's theorems, Lucas argued that the human mind cannot be reduced to algorithmic models. Roger Penrose carried this view further, proposing that the mind must be explained not only at the logical but also at the quantum-physical level. In The Emperor's New Mind (1989) and Shadows of the Mind (1994), he argued that conscious experience cannot be explained by algorithmic computation and that specific physical structures are required for consciousness to arise. John Searle's "Chinese Room" thought experiment of 1980 added a strong linguistic and semantic dimension to these debates.
Searle argued that a system able to manipulate symbols cannot thereby grasp meaning, and thus that artificial intelligence lacks the capacity for "understanding." By emphasizing the distinction between syntax and semantics, this argument offered a fundamental critique of the ability of artificial systems to produce consciousness. Similarly, Thomas Nagel's "What Is It Like to Be a Bat?" (1974) and Frank Jackson's "Mary's Room" thought experiment (1986) showed that phenomenal consciousness cannot be grasped from a third-person perspective. These arguments indicate that consciousness is not merely a matter of information but a reality grounded in subjective experience, and is therefore irreducible. In this intellectual climate, David Chalmers introduced the "Hard Problem of Consciousness" in 1995, stressing that no matter how thoroughly neural processes are described, such descriptions cannot explain why and how conscious experience arises. Chalmers's approach moved the problem of consciousness beyond functional models, placed it on ontological and philosophical ground, and deepened the debate. This thesis brings these theoretical approaches together in a historical chronology in order to reassess the limits of artificial intelligence within the framework of Gödel's theorems. The historical significance of Penrose's line of thought is broadened through the contributions of Searle, Nagel, Jackson, and Chalmers, carrying the discussion to a more comprehensive conceptual depth. Using a historical-conceptual method of analysis, the study traces the turning points in the history of thought and their conceptual repercussions, while comparatively evaluating the literatures of logic, mathematics, philosophy, and cognitive science to develop an interdisciplinary perspective. The thesis's original contribution lies in treating the approaches that defend the irreducibility of the human mind to algorithmic models together with both their logical and their phenomenological foundations, and in analyzing them holistically within an interdisciplinary context spanning mathematical logic, philosophy, and artificial intelligence research. All of these historical and theoretical discussions prepare a rich intellectual ground for contemporary approaches to understanding the limits of artificial intelligence. The conclusion briefly addresses several theoretical approaches prominent in current artificial intelligence debates and raises thought-provoking questions. In particular, Global Workspace Theory (GWT) offers an important theoretical framework for debates on artificial consciousness by proposing that consciousness arises through system-wide information sharing and attention mechanisms. In addition, the question of whether digital twin technology may one day model not only physical processes but also subjective experiences provides a thought-provoking perspective on the concepts of artificial consciousness and selfhood. These approaches are discussed in connection with the thesis's central line of argument, in a way that extends the boundaries of debates on the algorithmic representation of consciousness. The future trajectory of these theories and technologies further indicates that not only the functional but also the phenomenological dimensions of artificial intelligence must be examined. Accordingly, the discussions presented in the thesis aim to provide a conceptual foundation for future interdisciplinary research by placing the relation between consciousness and algorithmic representation at the center. Strengthening the interaction between philosophical inquiry and technical models will contribute to a more holistic treatment of debates on artificial intelligence and consciousness.
By bringing the historical perspective together with current technological developments, this approach also reinforces the significance of the thesis's arguments within the contemporary scientific and intellectual environment.
This thesis addresses the debates concerning whether the human mind can be represented by algorithmic systems, employing a historical-conceptual method of analysis that spans from the late nineteenth-century crisis in the quest for mathematical certainty to the twenty-first-century discussions on artificial intelligence. Within this framework, the relationships established among the disciplines of logic, philosophy of mathematics, cognitive science, and artificial intelligence are analyzed in detail. Developments such as Georg Cantor's set theory and Bertrand Russell's formulation of the Russell Paradox destabilized the pursuit of mathematical certainty, producing a rupture in the very foundations of modern logic. This atmosphere of crisis led, under the leadership of thinkers such as Frege, Peano, and Hilbert, to attempts to secure the consistency of formal systems. Whitehead and Russell's monumental work Principia Mathematica emerged as the most comprehensive expression of this hope. Building upon this intellectual background, Kurt Gödel's 1931 Incompleteness Theorems produced a major paradigm shift by demonstrating that no consistent formal system rich enough to express arithmetic can be complete, nor can such a system establish its own consistency from within. Gödel's theorems not only exposed the intrinsic limits of mathematical systems but also raised fundamental questions concerning the very nature of knowledge production. Alan Turing's theory of computation and the "Halting Problem" translated these limitations into the technical domain, while Alonzo Church's lambda calculus reinforced the restricted nature of algorithmic solutions. These contributions collectively revealed that formal systems and computational methods are constrained not only mathematically but also conceptually, thereby inaugurating a new era in the understanding of logic and mathematics. From the 1950s onwards, expectations regarding artificial intelligence intensified, sustained by the belief that machines could attain human-like cognitive capacities. However, this optimism was severely challenged during the so-called "AI Winter" periods of the 1970s and 1980s, when the persistent inability to develop genuinely conscious machines forced a reconsideration of the issue at philosophical and epistemological levels beyond technical limitations. In this context, the view that artificial intelligence should be evaluated not only in terms of its functional success but also in terms of its capacity to generate conscious experience gained prominence. The notion of conscious experience, as referred to here, involves not merely the processing of information or the production of external responses, but rather the capacity for subjective awareness, sensation, and first-person experiential perspective. This form of consciousness, frequently conceptualized through the term qualia, is defined precisely by its inaccessibility from an external standpoint and is therefore widely regarded as a quality unattainable by artificial systems. Another critical perspective within this intellectual transformation was articulated by John R. Lucas in his 1961 article Minds, Machines and Gödel. Lucas employed Gödel's theorems to argue that the human mind cannot be reduced to algorithmic models. This position was later extended by Roger Penrose, who contended that the mind must be explained not only on logical grounds but also at the quantum-physical level.
In The Emperor's New Mind (1989) and Shadows of the Mind (1994), Penrose asserted that conscious experience cannot be accounted for by algorithmic computation and that specific physical structures are necessary for the emergence of consciousness. By situating Gödel's results within a broader ontological and physical context, Penrose emphasized the irreducibility of mind to algorithmic formalisms and thereby challenged the adequacy of purely computational approaches. John Searle's Chinese Room thought experiment (1980) introduced a powerful linguistic and semantic dimension into these debates. Searle argued that a system capable of manipulating symbols could not thereby grasp their meaning, and thus that artificial intelligence lacks genuine understanding. This argument underscores the distinction between syntax and semantics, offering a fundamental critique of the capacity of artificial systems to generate consciousness. Similarly, Thomas Nagel's essay What Is It Like to Be a Bat? (1974) and Frank Jackson's thought experiment Mary's Room (1986) argued that phenomenal consciousness—subjective experience—cannot be comprehended from a third-person perspective. These arguments highlight the claim that consciousness is not merely a matter of information but is inherently experiential in nature, pointing to its irreducibility within objective or computational frameworks. In this intellectual climate, David Chalmers introduced the concept of the "hard problem of consciousness" in 1995, emphasizing that however extensively neural processes may be described, such accounts cannot explain why and how conscious experience arises. Chalmers' perspective deepened the discussion of consciousness beyond the scope of purely functional models, anchoring the issue in ontological and philosophical foundations. His framework foregrounded the gap between structural or causal accounts of cognition and the qualitative character of lived experience, thereby reshaping debates at the intersection of philosophy of mind and artificial intelligence. This thesis brings together all these theoretical approaches in a historical chronology in order to rethink the limits of artificial intelligence within the framework of Gödel's theorems. Alongside the historical significance of the Penrose line of thought, the contributions of Searle, Nagel, Jackson, and Chalmers are incorporated, thereby extending the discussion to a more comprehensive conceptual depth. This study traces the decisive turning points in the history of thought and their conceptual repercussions through a historical-conceptual method of analysis, while simultaneously developing an interdisciplinary perspective by comparatively evaluating the literatures of logic, mathematics, philosophy, and cognitive science. In doing so, the thesis examines not only the theoretical content of these works but also the intellectual orientations they represent and the interdisciplinary interactions that they embody. The originality of this thesis lies in its integrated analysis of approaches that defend the irreducibility of the human mind to algorithmic models, considering both their logical and phenomenological foundations within a broader interdisciplinary framework encompassing mathematical logic, philosophy, and artificial intelligence research. 
In line with this orientation, the conclusion section of the study briefly addresses several theoretical approaches that have recently gained prominence in contemporary discussions of artificial intelligence, accompanied by thought-provoking questions. In particular, Global Workspace Theory (GWT) provides a significant theoretical framework for debates on artificial consciousness by suggesting that consciousness emerges through global information sharing and attention mechanisms within the system. Furthermore, the question of whether digital twin technology may one day model not only physical processes but also subjective experiences opens a thought-provoking perspective on the concepts of artificial consciousness and selfhood. These approaches, discussed in relation to the central line of argument in the thesis, expand the boundaries of debates on the algorithmic representation of consciousness. At the same time, the possible future trajectories of these theories and technologies indicate the necessity of investigating not only the functional but also the phenomenological dimensions of artificial intelligence. Accordingly, the discussions presented in this thesis aim to establish a conceptual foundation for future interdisciplinary research by situating the relation between consciousness and algorithmic representation at the center of inquiry. Strengthening the interaction between philosophical explorations and technical models will contribute to more comprehensive approaches to debates on artificial intelligence and consciousness. Moreover, by bringing together historical perspectives with contemporary technological developments, such approaches reinforce the significance of the arguments advanced in this study within the current scientific and intellectual environment. In this manner, the thesis seeks not only to reconstruct historical debates on formal systems and the limits of algorithmic thought but also to contribute to ongoing reflections on the epistemological and phenomenological dimensions of artificial intelligence and consciousness.
Description
Thesis (Master's) -- İstanbul Teknik Üniversitesi, Lisansüstü Eğitim Enstitüsü, 2025
Subject
science and technology; Gödel, Kurt; history of mathematics
