Killing the Average, Empowering the Minority: Generative AI, Social Media Patterns and the Structural Crisis of Academic Work

Authors

HASANÇEBİ, D.

DOI:

https://doi.org/10.63556/tisej.2026.1781

Keywords:

Knowledge production regime, academic capitalism, generative AI, original thought, automation of academic labour

Abstract

This article rejects the dominant framing of generative AI in academia as merely a problem of plagiarism, laziness and cheating, and instead reads it as a symptom and accelerator of a structural crisis in the regime of academic labour. It asks two related questions: through which mechanisms does generative AI transform an already metricised and standardised academic production system, and why can this transformation not be adequately captured within the narrow frame of “ethics violations”? Drawing on conceptual analysis and a targeted reading of recent empirical and methodological studies, the article argues that generative AI automates the bulk of “average academic work” – literature summarising, template filling, low-risk repetition – while acting as a powerful multiplier for a small minority of researchers who possess genuine theoretical depth and original insight. At the same time, the spread of synthetic data and synthetic participants raises new questions about what still counts as empirical research when “data” can be generated without encounter with the world. Situating these developments within audit culture, algorithmic infrastructures and the attention economy of social media, the article contends that the real risk of generative AI is not simply individual misconduct but a further algorithmic colonisation of academic subjectivity itself. It concludes by sketching institutional implications for evaluation criteria, the cautious use of synthetic data and the protection of slow, risky and original work.


Published

20-03-2026

How to Cite

HASANÇEBİ, D. (2026). Ortalamayı Öldürmek, Azınlığı Güçlendirmek: Üretken Yapay Zeka, Sosyal Medya Örüntüleri ve Akademik Çalışmanın Yapısal Krizi. Üçüncü Sektör Sosyal Ekonomi Dergisi, 61(1), 1159–1176. https://doi.org/10.63556/tisej.2026.1781

Issue

Section

Research Article
