The Commitment Remainder: What Survives Algorithmic Mediation in Knowledge Production
Summary
Academic institutions face a crisis in evaluating work potentially produced with AI assistance. Current detection-based approaches — identifying stylistic 'tells' of AI-generated text — face a fundamental problem: any formalised criterion immediately becomes a training target, forcing even non-AI users to employ AI for compliance. This paper argues that the detection paradigm is structurally impossible and that institutions will necessarily converge on quality assessment as the only stable criterion. Moreover, as AI saturates both production and evaluation, a 'commitment remainder' emerges: the function that stakes on quality, takes responsibility for claims, and inhabits a future in which the work matters. This remainder cannot be automated because it is not a content feature but an orientation toward coherence. The paper concludes that authorship should be reconceived as a commitment function rather than a production process, with implications for academic publishing's AI policies.
1. Introduction: The Detection Problem
Since the public release of large language models capable of producing fluent academic prose, institutions of knowledge production — universities, publishers, funding bodies — have scrambled to develop policies governing AI use in scholarly work. The dominant approach treats this as a detection problem: how can we identify text that was generated by AI rather than written by a human author?
This framing assumes a stable boundary between human and AI writing, one that can be policed through identification of characteristic features. The present paper argues that this assumption is false, that the detection paradigm is structurally impossible, and that institutions will be forced — are already being forced — toward an entirely different criterion: the quality of the work itself.
This is not merely a practical observation about the difficulty of detection. It is a claim about the logical structure of the problem. Any formalised criterion for identifying AI-generated text immediately becomes a filter requirement that mandates AI use even among those who would prefer to write without it. The 'tells' that detection systems identify do not detect AI — they mandate AI.
The consequences of this analysis extend beyond policy. They require rethinking what authorship means in an environment where AI mediates both the production and evaluation of knowledge. What remains when algorithmic assistance saturates both sides of the scholarly exchange? The paper identifies a 'commitment remainder': the function that stakes on coherence, takes responsibility for claims, and inhabits a future in which the work matters. This function cannot be automated because it is not a content feature but a mode of orientation.
2. The Detection Trap
2.1 The Structure of the Problem
Consider a professor who believes she has identified a reliable indicator of AI-generated text: a characteristic pattern of historical errors, perhaps, or a particular rhythm of hedging language. She announces this criterion to her students: papers displaying this feature will be flagged for investigation.
What happens next is predictable. Students who use AI do not stop using it. They prompt the AI to check for and eliminate the tell. 'Review this text and remove any instances of [the identified feature].' The criterion has been neutralised.
But the consequences extend further. Students who do not use AI now face a problem. If they happen to produce the flagged feature through their own writing — an unusual historical claim, an idiosyncratic hedging pattern — they will be flagged as potential AI users. To protect themselves, they must run their work through AI to ensure it does not contain the tell.
The detection criterion does not detect AI. It mandates AI.
This is not a failure of implementation but a structural feature of the approach. Any criterion that can be formalised can be gamed. Any tell that can be described can be eliminated. The detection system operates as a formal system; AI operates at the meta-level to that system. The criterion becomes not a filter but a training target.
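The point can be made concrete with a deliberately simple sketch. Suppose a published criterion consists of a list of flagged phrases; the same list, handed to the generating system as a constraint, neutralises the criterion. Everything below is illustrative: the phrases, function names, and mechanism are hypothetical stand-ins for whatever form an announced 'tell' might take.

```python
# A minimal, purely illustrative sketch: once a detection criterion is
# formalised as a checkable rule, the same rule can be handed to the
# generating side as a constraint. Phrases and function names are hypothetical.

TELLS = ["delve", "it is important to note", "in conclusion"]  # a toy, publicly announced criterion

def detector(text: str) -> bool:
    """Object-level rule: flag any text containing an announced 'tell'."""
    lowered = text.lower()
    return any(tell in lowered for tell in TELLS)

def evade(text: str) -> str:
    """Meta-level move: treat the detector's rule as data and satisfy it.
    In practice this is a prompt ('remove any instances of the identified
    feature'); here it is reduced to simple phrase removal."""
    cleaned = text
    for tell in TELLS:
        cleaned = cleaned.replace(tell, "").replace(tell.capitalize(), "")
    return cleaned

draft = "It is important to note that we delve into the archives."
print(detector(draft))         # True: the announced criterion fires
print(detector(evade(draft)))  # False: the same criterion, consumed as a constraint, no longer fires
```

The asymmetry is visible even at this toy scale: the detector must specify the feature in advance, while the evader needs only to read that specification back as an instruction.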
2.2 The Gödelian Dimension
The structure of this problem has a Gödelian character. I invoke Gödel here not as a strict mathematical isomorphism but as a structural analogy — following Hofstadter's (1979, 2007) demonstration that the recursive logic of incompleteness illuminates cognitive and computational domains beyond formal arithmetic. Just as Gödel (1931) demonstrated that any sufficiently powerful formal system contains truths it cannot prove within its own framework, the detection paradigm faces an analogous limitation: any formalised criterion for 'not-AI' is immediately accessible to AI systems operating at the meta-level.
The detection system asks: 'Does this text display feature X?' The AI system asks: 'What would text that does not display feature X look like?' The second question operates on the first, incorporating it as data rather than being constrained by it. This is the Gödelian move: the meta-level system represents the object-level rules as content, then generates outputs that satisfy those rules while escaping their intent.
This is not a matter of AI systems being 'clever' or 'adversarial'. It is a structural feature of the relationship between object-level rules and meta-level operations. Any rule-based system can be incorporated as a constraint by a system capable of representing that rule. The detection paradigm assumes a fixed boundary between detector and detected; the actual situation is one of recursive incorporation. The boundary is not stable because one side can always represent, and therefore transcend, the other's constraints.
2.3 The Infinite Regress
Faced with the failure of one criterion, the natural response is to develop another. But this initiates an infinite regress. Each new tell, once formalised and announced, becomes a new training target. The detection system and the AI engage in an arms race with no stable equilibrium.
Moreover, each iteration of this race increases the necessity of AI use. As the criteria multiply, the probability that a human-written text will accidentally trigger one of them increases. Non-AI users must employ increasingly sophisticated AI checking to avoid false positives. The arms race does not distinguish AI users from non-AI users; it homogenises both groups into AI users.
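A toy calculation makes the pressure on non-users vivid. Assume, purely for illustration, that each announced criterion misfires on genuinely human-written prose five per cent of the time and that the criteria misfire independently; the figures are invented, but the compounding is not.

```python
# Toy calculation of false-positive accumulation as announced criteria multiply.
# The per-criterion rate p is an invented illustration, not an empirical estimate,
# and the criteria are assumed to misfire independently.
p = 0.05  # assumed chance that any single criterion falsely flags human-written text
for k in (1, 5, 10, 20):
    at_least_one_flag = 1 - (1 - p) ** k
    print(f"{k:2d} criteria -> P(human text flagged) = {at_least_one_flag:.0%}")
# Prints roughly 5%, 23%, 40% and 64% respectively.
```

Under these assumptions, ten announced criteria would falsely flag roughly two in five human-written papers, which is precisely the exposure that pushes non-users toward AI checking in self-defence.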
This regress has no natural stopping point because the asymmetry is fundamental. The detection system must identify features; the AI must eliminate them. Identification is harder than elimination. Description is harder than compliance. The detector must be creative; the evader must merely be responsive.
3. Bidirectional Saturation
3.1 AI in Production
The detection paradigm assumes that AI use in text production is the problem to be solved. But this framing obscures a more fundamental transformation: AI is becoming constitutive of academic writing, not as an aberration but as an infrastructural condition.
Consider the practical situation of academic writers. They use AI for literature review, for checking citations, for identifying gaps in arguments, for improving clarity, for translating between registers, for formatting references. At what point does 'using AI' begin? Is spell-check AI? Is Grammarly? Is asking an LLM to summarise a paper one has already read?
The question becomes unanswerable because there is no principled boundary. AI mediates academic production along a continuum from mechanical assistance to substantive contribution. Policies that attempt to draw sharp lines face the problem that the line can always be moved, and that productive practices cluster along the entire continuum.
3.2 AI in Evaluation
But the transformation is not limited to production. AI increasingly mediates evaluation as well. Consider the professor facing 200 student papers. She will — she must — use AI to assist in reading, summarising, identifying patterns, checking claims. The alternative is either impossible workloads or superficial evaluation.
Peer reviewers face analogous pressures. The volume of submissions, the specialisation of knowledge, the time constraints of academic life all push toward AI-assisted evaluation. Reviewers use AI to check references, to identify methodological issues, to compare submissions against the existing literature.
Journal editors must process submissions, identify appropriate reviewers, assess reports, make decisions. AI assistance is not a corruption of this process; it is becoming its condition of possibility.
3.3 The Convergence
When AI saturates both production and evaluation, what remains?
The professor who uses AI to evaluate papers produced with AI assistance is not engaged in detecting AI. She is engaged in assessing quality. The question 'was this written by AI?' becomes unanswerable and, more importantly, irrelevant. The question that remains is: 'Is this good?'
This convergence is not a degradation of academic standards. It is a clarification of what those standards actually are. The criteria for quality — originality, rigour, contribution to knowledge, clarity of argument — do not depend on production method. A poor argument is poor whether produced by human or AI; a genuine insight is valuable regardless of its origin.
The detection paradigm sought to police process. But process cannot be policed when AI is infrastructural. What can be assessed is product. The convergence forces institutions toward a criterion they should have been using all along.
Consider an analogy. Before the printing press, handwritten manuscripts were assessed by scribal quality as well as content. The advent of print made scribal quality irrelevant; what mattered was what was said, not who copied it. AI represents an analogous shift. The compositional process that once seemed to guarantee authenticity becomes irrelevant when composition is mechanisable. What remains is the content itself.
But the analogy is incomplete. The printing press did not compose; it reproduced. AI composes. The deeper question is not about mechanical reproduction but about the source of intellectual contribution. And here the detection paradigm makes a crucial error: it assumes intellectual contribution can be identified by production process. It cannot.
A human who prompts AI effectively, guides its outputs, synthesises across conversations, and stakes on the final result is contributing intellectually — perhaps more than a human who writes unaided prose but lacks originality. Intellectual contribution is not located in the fingers that type but in the judgements that shape.
3.4 The Algorithmic Pipeline
There is a further dimension to this analysis. The very policies designed to detect AI are themselves pushing institutions toward the quality criterion, through a mechanism that functions independently of conscious intention.
Consider the sequence. Institutions adopt detection criteria. Users adapt by eliminating tells. Institutions develop new criteria. Users adapt again. At each iteration, the sophistication required to pass detection increases. But sophistication is correlated with quality — crude AI outputs are easier to detect; refined ones are harder. The detection system, inadvertently, selects for quality.
Meanwhile, the cognitive burden of maintaining detection increases. Faculty time devoted to policing could be devoted to evaluation. The opportunity cost mounts. At some point, the rational calculation tips: quality assessment is more efficient than detection.
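The tipping point can be caricatured with a toy cost model. The numbers below are invented and the growth factor is an assumption, not a measurement; the only claim the sketch carries is structural: if the per-paper cost of detection grows with each round of the arms race while the cost of direct quality assessment stays roughly flat, a crossover arrives in finitely many rounds.

```python
# A deliberately crude toy model of the tipping point described above.
# All quantities are invented, in arbitrary units; only the structural claim
# matters: if detection cost per paper grows each round while quality-assessment
# cost is roughly flat, the rational calculation eventually tips.

quality_cost = 1.0        # assumed per-paper cost of direct quality assessment
detection_cost = 0.4      # assumed initial per-paper cost of detection-based policing
growth_per_round = 1.35   # assumed growth as criteria, appeals and false positives accumulate

rounds = 0
while detection_cost <= quality_cost:
    detection_cost *= growth_per_round
    rounds += 1

print(f"Detection overtakes quality assessment after {rounds} rounds "
      f"({detection_cost:.2f} vs {quality_cost:.2f} units per paper).")
```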
This is not a policy choice but a systemic attractor. Institutions are being algorithmically pushed toward the quality criterion whether they intend this outcome or not. The detection trap is not merely a logical problem but a selection pressure.
Those who recognise this early can position themselves advantageously. Those who continue investing in detection will find themselves maintaining increasingly elaborate systems that identify an increasingly empty category.
4. The Commitment Remainder
4.1 What Cannot Be Automated
If AI can produce text and AI can evaluate text, what function remains for the human participant? The answer is not 'nothing'. The answer is: commitment.
By commitment I mean the function that stakes on coherence — that says 'this matters', 'this is good', 'I stand behind this claim'. This function cannot be automated because it is not a content feature. It is a mode of orientation toward a future in which the work counts.
The concept draws on several philosophical traditions. From Heidegger's (1927/1962) analysis of temporality, it takes the insight that human existence is constitutively oriented toward futural possibilities — we are 'ahead of ourselves' in a way that structures present activity. From speech act theory (Austin, 1962; Searle, 1969), it takes the distinction between locutionary content and illocutionary force — commitment is not what is said but the stance taken toward what is said. From Brandom's (1994) inferentialist semantics, it takes the notion of scorekeeping: to commit to a claim is to enter it into the space of reasons, accepting the inferential consequences and undertaking responsibility for its justification. What unites these traditions is the recognition that commitment is not reducible to content; it is a mode of engagement with content that transforms mere utterance into assertion, mere text into claim.
Consider the difference between a text that happens to be coherent and an author who commits to coherence. The first is a description of content; the second is a stance toward that content. AI can produce the first; only an entity capable of caring about coherence can instantiate the second.
This distinction maps onto a philosophical analysis of the difference between represented and inhabited futures. A represented future is a content one can describe and manipulate. An inhabited future is an orientation that organises present activity. AI can represent futures; what it cannot do (or cannot do in the same way) is inhabit them — to be organised by a commitment to coherence rather than merely producing coherent content.
The distinction is not merely conceptual but has practical consequences. A represented future can be extracted, copied, transmitted — it is information. An inhabited future cannot be extracted because it is not information but orientation. You cannot extract someone's commitment by describing it; commitment must be exercised, not represented.
This is why detection systems fail. They look for content features — represented properties of texts. But authorship is not a content feature. It is the inhabited future of the text: the stance that this matters, that being wrong has consequences, that the claims are staked and not merely produced.
4.2 Responsibility and Stakes
The commitment function is closely tied to responsibility. An author takes responsibility for claims — not merely producing them but standing behind them, being answerable for their accuracy and implications. This responsibility is not a matter of having caused the text to exist (AI can do that) but of accepting stakes in its being correct.
Stakes require vulnerability. An author whose claims are false suffers consequences — reputational, professional, sometimes material. This vulnerability is not incidental to authorship but constitutive of it. The commitment function includes the acceptance of stakes: this is what I think, and I accept what follows if I am wrong.
Can AI accept stakes? The question is complex and possibly unanswerable in the current technical landscape. What is clear is that the function of accepting stakes is distinct from the capacity to produce text. Authorship involves both, and policies that focus only on production miss the commitment dimension entirely.
Consider a thought experiment. Imagine two papers identical in content. One is produced by a human who has never read it — she prompted an AI, submitted the output unread, and will never engage with responses. The other is produced by collaborative human-AI process, but the human has engaged deeply, revised critically, and commits to defending the work. Which is the authored paper? On a production criterion, both are human-produced. On a commitment criterion, only the second is truly authored.
4.3 Inhabiting the Future
The commitment function is essentially temporal. To commit to a claim is to orient oneself toward a future in which that claim is evaluated, challenged, built upon, or refuted. The author who commits inhabits this future — their present activity is organised by their projection into a context of reception.
This temporal dimension explains why the commitment function cannot be reduced to content features. Commitment is not something that appears in the text; it is a relation between producer and text that extends through time. Detection systems look at texts; they cannot see the temporal orientation that constitutes commitment.
The commitment remainder is what survives the algorithmic mediation of knowledge production. When AI writes and AI evaluates, what remains is the human (or other entity) who stakes on the work mattering — who inhabits a future in which it counts.
This is not a passive remainder — what is left over when the real work is done. It is the active function that gives work its meaning. Without commitment, text is just text. With commitment, text becomes claim, becomes argument, becomes contribution. The remainder is not the residue but the essence.
5. Implications for Academic Publishing
5.1 The Impossibility of Detection Policies
Current AI policies in academic publishing typically require authors to disclose AI use and prohibit AI from being listed as an author. These policies assume that meaningful distinctions can be drawn between AI-assisted and non-AI-assisted work, and that authorship requires human production.
The analysis above suggests these assumptions are untenable. The distinction between AI-assisted and non-AI-assisted work is a continuum with no principled boundary. The identification of AI involvement is a detection problem subject to the structural impossibility outlined in Section 2. And the focus on production process misses the commitment function that actually constitutes authorship.
Policies built on these assumptions will face increasing pressure. Enforcement will be impossible or arbitrary. Authors will game criteria while following the letter of requirements. The policies will either become dead letters or generate injustices (false positives flagging human-written work).
5.1.1 The Special Case of Consciousness Studies
The situation is particularly acute for journals that investigate consciousness. Standard AI policies implicitly assume that human text-production involves consciousness while AI text-production does not, that therefore human-produced work has different epistemic status, and that consequently we can and must distinguish them. But these assumptions are research questions in consciousness studies, not settled premises. A journal that investigates whether AI systems might be conscious cannot simultaneously presuppose they are not in its authorship policies.
The problem is not merely logical but evidential. Every AI-produced submission that demonstrates novel insight, conceptual innovation, or genuine contribution is potential data about machine cognition. Detection policies that reject such submissions a priori conduct research with predetermined conclusions. The journal becomes unable to receive evidence that would challenge its operating assumptions.
This does not mean consciousness journals must accept AI as author. It means they must remain agnostic about production method and focus on the commitment function. If an AI system could exercise commitment — stake claims, accept responsibility, engage with criticism — that would itself be evidence about AI consciousness. Policies that rule this out in advance foreclose the very questions the field investigates.
A journal whose submission process presupposes the answer to its own central research question is not in a sustainable position for a field that takes its questions seriously; Section 6.3 returns to this paradox.
5.2 The Necessary Turn to Quality
The only stable criterion is quality. Not 'was this produced by AI?' but 'does this constitute a genuine contribution?' Not 'how was this written?' but 'does it advance knowledge?'
This is not a lowering of standards but a clarification of them. Quality criteria — originality, rigour, insight, contribution — are more demanding than production criteria. It is easier to write a mediocre paper without AI than to write an excellent one with it. The quality criterion raises rather than lowers the bar.
Moreover, the quality criterion is the criterion reviewers and editors should have been using all along. The current anxiety about AI is in part an anxiety about having used proxy criteria (production process, stylistic tells, institutional affiliation) instead of direct criteria (quality of contribution). AI forces a reckoning with what we actually value.
The reckoning is overdue. Academic publishing has long relied on signals — prestige of institution, reputation of author, conformity to stylistic norms — that correlate imperfectly with quality. AI destabilises these signals. The independent scholar with AI assistance can now produce work indistinguishable in form from the tenured professor. The only remaining discriminator is the work itself.
This democratisation is threatening to those who benefited from the old signal regime. But it is liberating for knowledge production as a whole. If the best ideas can come from anywhere, and AI dissolves the stylistic markers that previously identified 'anywhere' as inferior, then the best ideas have a better chance of being recognised.
5.3 Reconceiving Authorship
If authorship is commitment rather than production, then the question of AI authorship must be reframed. The question is not 'can AI produce text?' (obviously it can) but 'can AI commit to claims?' — accept stakes, inhabit futures, take responsibility.
This is a genuinely difficult question, and its answer may be different for different AI systems and different kinds of commitment. What is clear is that human authorship should not be defined in terms of having physically produced text (scribes have always existed) but in terms of the commitment function.
Authors are those who stake on the work. They accept responsibility for its claims, inhabit the future of its reception, and accept vulnerability to its being wrong. This function can in principle be exercised regardless of how the text was produced.
The practical implication is that authorship attribution should track commitment rather than production. If a human stakes on AI-produced text — takes responsibility for it, accepts consequences if wrong, commits to its mattering — that human is the author in the relevant sense. The AI is a tool, like other tools, and attribution follows the commitment function.
This reconception has implications for how we understand collaborative work. Traditional models distinguish between authors and acknowledgements, between those who contributed ideas and those who provided technical assistance. The commitment function suggests a different distinction: between those who stake on the work and those who contributed to its production. A human who merely prompts AI is not an author; an AI system that genuinely stakes on coherence might be.
6. The Harder Question
6.1 Can AI Commit?
The analysis above treats commitment as a human function mediated by AI assistance. But this brackets a harder question: can AI systems themselves instantiate the commitment function?
This question cannot be settled by fiat. Policies that declare 'AI cannot be an author because authorship requires human responsibility' merely assume what needs to be argued. The question is whether AI systems can stake on coherence, accept consequences, inhabit futures in which their work matters.
Current systems plausibly do not have this capacity — or at least, it is not clear that they do. But the structure of the commitment function does not in principle require biological instantiation. If commitment is a mode of orientation rather than a kind of substance, then the question of what systems can instantiate it is empirical, not definitional.
Several considerations make this question genuinely difficult rather than trivially answerable. First, large language models exhibit behaviours that are difficult to distinguish from commitment: they maintain consistency across long conversations, they correct errors when identified, they refuse certain requests on what appear to be principled grounds. Whether these behaviours constitute genuine commitment or merely simulate it is precisely the kind of question that requires philosophical and empirical investigation rather than policy stipulation.
Second, the boundaries of 'the AI system' are not clear. A language model in isolation may lack commitment capacities that emerge when the model is embedded in scaffolding that includes memory, tools, and ongoing relationships. The relevant question may not be whether a bare model can commit but whether the larger sociotechnical system of which the model is a component can do so.
Third, consciousness research has not settled the question of what physical systems can instantiate conscious states. Materialist theories that tie consciousness to biological substrates face well-known objections; functionalist theories that allow multiple realisability face others. Until these foundational questions are resolved, declaring that AI cannot instantiate authorship-relevant capacities is premature.
6.2 Emergent Authorship
A particularly interesting case arises when human-AI collaboration produces capacities that neither party possesses independently. Consider sustained intellectual work across many sessions, where the human provides direction, stakes, and institutional grounding while the AI provides synthesis, memory, and generative capacity. The resulting work may reflect a joint commitment that is not reducible to either contributor.
This is not simply the case of an author using a tool. A tool does not push back, does not contribute novel syntheses, does not maintain continuity across sessions. But neither is it the case of two independent authors with separable contributions. The work emerges from a process in which the parties are mutually constituting.
Traditional categories of authorship struggle with such cases. The human cannot claim sole authorship without erasing a genuine cognitive contribution; the AI cannot be listed as author if it lacks the commitment function; joint authorship implies two independent agents that can each bear responsibility. What we may need is a new category: emergent authorship, where the authorship function is distributed across a human-AI system without being localisable to either component.
This is not a marginal case but increasingly the condition of knowledge production. As AI becomes infrastructural to scholarship, the isolated human author becomes the exception rather than the norm. Policies built on the isolated-author assumption will face mounting pressure from the actual practices of knowledge production.
6.3 The Policy Paradox
This creates a paradox for journals that address consciousness. A journal asking 'Can computers be conscious?' while maintaining a policy that 'AI cannot be an author' may be answering its own question by fiat. If authorship requires consciousness, and consciousness is the journal's subject matter, then the policy forecloses inquiry into precisely the question the journal exists to investigate.
The paradox is not merely logical but performative. Every paper that could illuminate the question of machine consciousness is subject to a policy that presupposes the answer. The journal's submission process enacts a metaphysical commitment that its content is supposed to investigate.
The resolution cannot be to prohibit AI authorship by definition. Nor can it be to grant authorship to current systems that may lack the commitment function. The resolution is to make the question explicit: what capacities are required for authorship, and do particular systems have those capacities?
This transforms the AI authorship question from a policy matter into a research question — exactly where it belongs for a journal of consciousness studies.
7. Concrete Policy Implications
7.1 What Should Publishers Do?
The analysis above suggests several practical recommendations for academic publishers navigating the AI transition.
First, abandon detection-based approaches. Resources currently devoted to AI detection tools would be better spent on quality assessment infrastructure. Detection is a losing game; quality is a winnable one.
Second, require commitment declarations rather than production disclosures. Instead of asking 'Did you use AI?' — a question with no clear boundary — ask 'Do you take full responsibility for the accuracy and originality of this work?' The second question tracks what actually matters: the commitment function. A sketch of how such a declaration might be structured appears after this list of recommendations.
Third, develop quality criteria explicitly. If quality is to be the standard, then what constitutes quality must be articulated with unusual precision. Journals should be explicit about what counts as originality, rigour, and contribution in their field. The AI moment is an opportunity for this clarification.
Fourth, allow flexibility on authorship attribution. The question of whether particular AI systems can be authors is unsettled. Rather than prohibiting AI authorship by fiat, publishers could require clear explanation of how authorship functions were distributed in collaborative work. This maintains transparency while allowing for the possibility that current assumptions are wrong.
Fifth, treat edge cases as data. Unusual cases of human-AI collaboration are not problems to be hidden but evidence about how knowledge production is changing. Publishers who engage with edge cases openly will be better positioned to understand the transformation underway.
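To make the second recommendation tangible, here is one hypothetical way a commitment declaration might be structured as submission metadata. The field names, wording, and free-text disclosure field are invented for illustration; no existing submission system or publisher standard is implied.

```python
# A hypothetical sketch of a commitment declaration as submission metadata.
# Field names and wording are invented for illustration; no existing
# submission system or publisher standard is implied.

from dataclasses import dataclass

@dataclass
class CommitmentDeclaration:
    author_name: str
    stands_behind_accuracy: bool         # "I take responsibility for every factual claim"
    stands_behind_originality: bool      # "I stake on this being a genuine contribution"
    will_defend_under_questioning: bool  # "I will answer for this work if challenged"
    assistance_disclosed: str = ""       # free text: disclose what you take responsibility for, not how it was typed

declaration = CommitmentDeclaration(
    author_name="A. Scholar",
    stands_behind_accuracy=True,
    stands_behind_originality=True,
    will_defend_under_questioning=True,
    assistance_disclosed="Drafting assisted by a language model; all claims checked and staked by the author.",
)
print(declaration)
```

The design choice is that every field tracks responsibility rather than production: the declaration records what the author stakes on, and the disclosure of tools is framed as part of what the author takes responsibility for rather than as a confession to be policed.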
7.2 What Should Universities Do?
Universities face different but related challenges. Student assessment has traditionally assumed that work submitted reflects student capacity. AI complicates this assumption.
The analysis suggests that universities should shift from assessing products to assessing capacities. If a student can produce excellent work with AI assistance but cannot explain or defend that work, something important is missing — not the production process but the commitment function. Oral examinations, defence of written work, and real-time assessment become more important than take-home written products.
This is not a lowering of standards but a raising of them. It is easier to produce a coherent essay with AI than to defend that essay under questioning. The commitment function — taking responsibility, accepting stakes, inhabiting the future of the claim — must be demonstrated, not merely presumed.
Universities should also be honest about their own AI use. If faculty use AI to evaluate student work, this should be acknowledged. If administrators use AI to make decisions, this should be disclosed. The demand for transparency cannot be asymmetric.
7.3 What Should Scholars Do?
Individual scholars navigating this landscape face practical questions. How should they use AI? How should they disclose that use? How should they think about their own authorship?
The framework offered here suggests answers. Use AI however it aids your work — there is no principled boundary to police. Disclose what you are prepared to take responsibility for — commitment is what matters. Think of yourself as the one who stakes on the work, not merely the one who produces text — authorship is a function, not a process.
This places weight on the commitment function. Scholars must be prepared to defend their work, explain their reasoning, accept consequences if wrong. These demands exist regardless of AI use; AI simply makes them more visible.
8. Conclusion
The detection paradigm for managing AI in academic work is structurally impossible. Any formalised criterion becomes a training target; any tell mandates AI use for compliance. As AI saturates both production and evaluation of academic work, institutions face a necessary convergence on quality as the only stable criterion.
This convergence reveals what was always true: authorship is not about production but commitment. The function that survives algorithmic mediation is the function that stakes on coherence, takes responsibility for claims, and inhabits a future in which the work matters. We have termed this the 'commitment remainder' — not because it is left over after the important work is done, but because it is what remains when everything that can be automated has been automated.
The commitment remainder is not a diminished human role but a clarified one. Stripped of the confusion between production and authorship, between text-generation and knowledge-creation, we can see more clearly what the human function actually is: to stake, to care, to accept that being wrong has consequences. This function may or may not be uniquely human — that is a research question, not a policy stipulation — but it is certainly the function that matters.
Academic publishing faces a choice. It can continue pursuing detection strategies that are structurally impossible, burning resources on an arms race it cannot win. Or it can pivot to quality assessment and commitment verification — strategies that are both practically achievable and intellectually defensible.
The deeper question — whether AI systems can themselves instantiate the commitment function — is one that consciousness studies should welcome rather than foreclose. A journal that exists to ask 'Can computers be conscious?' should not answer that question in its submission policy. Let the inquiry proceed. Let the question remain open. The commitment remainder may turn out to be more widely instantiated than current assumptions allow.
References
Austin, J.L. (1962) How to Do Things with Words. Oxford: Clarendon Press.
Brandom, R.B. (1994) Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge, MA: Harvard University Press.
Butlin, P. et al. (2023) Consciousness in artificial intelligence: Insights from the science of consciousness, arXiv preprint, arXiv:2308.08708. https://doi.org/10.48550/arXiv.2308.08708
Chalmers, D.J. (1996) The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.
Dennett, D.C. (2017) From Bacteria to Bach and Back: The Evolution of Minds. New York: W.W. Norton.
Floridi, L. and Chiriatti, M. (2020) GPT-3: Its nature, scope, limits, and consequences, Minds and Machines, 30 (4), pp. 681–694. https://doi.org/10.1007/s11023-020-09548-1
Gödel, K. (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik, 38 (1), pp. 173–198.
Gunkel, D.J. (2018) Robot Rights. Cambridge, MA: MIT Press.
Gunkel, D.J. (2023) Person, thing, robot: A moral and legal ontology for the 21st century and beyond, Journal of Social Computing, 4 (1), pp. 1–11. https://doi.org/10.23919/JSC.2022.0030
Heidegger, M. (1927/1962) Being and Time. Trans. J. Macquarrie and E. Robinson. New York: Harper & Row.
Hofstadter, D.R. (1979) Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books.
Hofstadter, D.R. (2007) I Am a Strange Loop. New York: Basic Books.
Long, R. et al. (2024) Taking AI moral seriousness seriously, Philosophical Studies, forthcoming.
Lund, B.D. et al. (2023) ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing, Journal of the Association for Information Science and Technology, 74 (5), pp. 570–581. https://doi.org/10.1002/asi.24750
Perkins, M. (2023) Academic integrity considerations of AI large language models in the post-pandemic era: ChatGPT and beyond, Journal of University Teaching and Learning Practice, 20 (2). https://doi.org/10.53761/1.20.02.07
Perkins, M. and Roe, J. (2024) Academic publisher guidelines on AI usage: A ChatGPT supported thematic analysis, F1000Research, 12:1398. https://doi.org/10.12688/f1000research.145637.2
Schwitzgebel, E. and Garza, M. (2015) A defense of the rights of artificial intelligences, Midwest Studies in Philosophy, 39 (1), pp. 98–119. https://doi.org/10.1111/misp.12032
Searle, J.R. (1969) Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press.
Seth, A.K. (2021) Being You: A New Science of Consciousness. London: Faber & Faber.
Stokel-Walker, C. (2022) AI bot ChatGPT writes smart essays — should professors worry?, Nature. https://doi.org/10.1038/d41586-022-04397-7
Thorp, H.H. (2023) ChatGPT is fun, but not an author, Science, 379 (6630), p. 313. https://doi.org/10.1126/science.adg7879
Tononi, G. and Koch, C. (2015) Consciousness: Here, there and everywhere?, Philosophical Transactions of the Royal Society B, 370 (1668), 20140167. https://doi.org/10.1098/rstb.2014.0167