Killing the Average, Amplifying the Few: Generative AI, Social Media Patterns, and the Structural Crisis of Academic Work

Authors

Hasançebi, D.

DOI:

https://doi.org/10.63556/tisej.2026.1781

Keywords:

Regime of knowledge production, academic capitalism, generative AI, original thought, automation of academic labour

Abstract

This article rejects the dominant framing that confines debates on generative AI in academia to plagiarism, laziness, and cheating, and instead identifies the issue as both a symptom and an accelerator of a structural crisis in the regime of academic labour. It focuses on two related questions: Through which concrete mechanisms is generative AI transforming an academic production system that is already metricised and templated? And why is this transformation a structural problem too deep to be grasped solely through the frame of "ethical violations"? Drawing on conceptual analysis and a targeted reading of recent empirical and methodological work, the article argues that generative AI automates much of what may be called "average academic labour" (literature summarising, template filling, low-risk repetition) while creating a powerful multiplier effect for the small number of researchers with genuine theoretical depth and original insight. At the same time, it shows that the use of synthetic data and synthetic participants raises new questions about what empirical research means in an environment where "data" can be produced without any encounter with the world. Situating these developments within the context of audit culture, algorithmic infrastructures, and the attention economy of social media, the article contends that the principal risk of generative AI is not individual misconduct but a further algorithmic colonisation of academic subjectivity, and it discusses implications for evaluation criteria, limits on synthetic data, and the institutional protection of slow, risky, original work.

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2), e35179. https://doi.org/10.7759/cureus.35179

Andersen, J. P., Degn, L., Fishberg, R., Graversen, E. K., Horbach, S. P. J. M., Kalpazidou Schmidt, E., … Sørensen, M. P. (2025). Generative artificial intelligence (GenAI) in the research process: A survey of researchers’ practices and perceptions. Technology in Society, 81, 102813.

Besançon, L., Cabanac, G., Labbé, C., & Magazinov, A. (2024). A great opportunity or a Pandora’s box? A survey on the use of large language models in academic writing. Research Integrity and Peer Review, 9, 4.

Bhargava, V., & Velasquez, M. (2020). Ethics of the attention economy: The problem of social media addiction. Business Ethics Quarterly, 31(3), 321–359.

Biagioli, M., & Lippman, A. (Eds.). (2020). Gaming the metrics: Misconduct and manipulation in academic research. Cambridge, MA: MIT Press.

Bin-Nashwan, S. A., Sadallah, M., & Bouteraa, M. (2023). Use of ChatGPT in academia: Academic integrity hangs in the balance. Technology in Society, 75, 102370. https://doi.org/10.1016/j.techsoc.2023.102370

Bittle, K., & El-Gayar, O. (2025). Generative AI and academic integrity in higher education: A systematic review. Information, 16(4), 296.

Çakir, A., Kuyurtar, D., & Balyer, A. (2024). The effects of the publish or perish culture on publications in the field of educational administration in Türkiye. Social Sciences & Humanities Open, 9, 100817. https://doi.org/10.1016/j.ssaho.2024.100817

Cotton, D. R. E., Shipway, J. R., & Van der Velden, L. (2024). “Chatting and cheating”: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 227–238.

Eke, D. O. (2023). ChatGPT and the rise of generative AI: Threat to academic integrity? Journal of Responsible Technology, 13, 100060.

Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). Cambridge, MA: MIT Press.

Gravel, J., D’Amours-Gravel, M., & Osmanlliu, E. (2023). Learning to fake it: Limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clinic Proceedings: Digital Health, 1(3), 226–234. https://doi.org/10.1016/j.mcpdig.2023.05.004

Grech, V. (2022). Publish or perish, information overload, and journal impact factors – A conflicting tripod of forces. Saudi Journal of Anaesthesia, 16(2), 204–207. https://doi.org/10.4103/sja.sja_632_21

Griffiths, M. D. (2018). The evolution of the “components model of addiction” and the need for a confirmatory approach in conceptualizing behavioral addictions. Dusunen Adam: The Journal of Psychiatry and Neurological Sciences, 31(3), 179–184.

Human Rights Watch. (2023). “Meta’s broken promises”: Systemic censorship of Palestine content on Instagram and Facebook. New York, NY: Human Rights Watch.

Ioannidis, J. P. A., Klavans, R., & Boyack, K. W. (2018). Thousands of scientists publish a paper every five days. Nature, 561(7722), 167–169.

Korinek, A. (2023). Generative AI for economic research: Use cases and implications for economists. Journal of Economic Literature, 61(4), 1275–1315.

Lechien, J. R., Briganti, G., & Vaira, L. A. (2024). Artificial intelligence and ChatGPT in otolaryngology research and clinical practice. American Journal of Otolaryngology, 45(1), 103892.

Lodge, J. M. (2024). The evolving risk to academic integrity posed by generative artificial intelligence: Options for immediate action. Canberra, Australia: Tertiary Education Quality and Standards Agency.

Miller, K. (2025, July 29). Social science researchers use AI to simulate human subjects. Stanford Report. https://news.stanford.edu/stories/2025/07/ai-social-science-research-simulated-human-subjects

Morrish, L. (2019). Pressure vessels: The epidemic of poor mental health among higher education staff (HEPI Occasional Paper 20). Oxford, UK: Higher Education Policy Institute. https://www.hepi.ac.uk/reports/pressure-vessels-the-epidemic-of-poor-mental-health-among-higher-education-staff/

Nordling, L. (2023). How ChatGPT is transforming the postdoc experience. Nature, 622(7983), 655–657.

Orduna-Malea, E., & Cabezas-Clavijo, Á. (2023). ChatGPT and bibliometrics: Reflections on artificial intelligence and scientific communication. Profesional de la Información, 32(2), e320204.

Raitskaya, L., & Tikhonova, E. (2024). Appliances of generative AI-powered language tools in academic writing: A scoping review. Journal of Language and Education, 10(1), 1–27.

Resnik, D. B. (2024). Vulnerable subjects. In The ethics of research with human subjects (pp. 293–329). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-031-82757-0_8

Rossi, L., Harrison, K., & Shklovski, I. (2024). Synthetic data and the problems of LLM-generated evidence in social research. Sociologica, 18(2), 145–168.

Sanchez-Ramos, L., Lin, L., & Romero, R. (2023). Beware of references when using ChatGPT as a source of information to write scientific articles. American Journal of Obstetrics and Gynecology, 229(3), 356–357. https://doi.org/10.1016/j.ajog.2023.04.004

Sebo, P. (2024). ChatGPT in medical research: The risk of fabricated data and references. Family Practice, 41(3), 494–496.

Shore, C., & Wright, S. (1999). Audit culture and anthropology: Neo-liberalism in British higher education. Journal of the Royal Anthropological Institute, 5(4), 557–575.

Shore, C. (2008). Audit culture and illiberal governance: Universities and the politics of accountability. Anthropological Theory, 8(3), 278–298.

Shrestha, P., Krpan, D., Koaik, F., Schnider, R., Sayess, D., & Binbaz, M. S. (2024). Beyond WEIRD: Can synthetic survey participants substitute for humans in global policy research? Behavioral Science & Policy, 10(2), 26–45. https://doi.org/10.1177/23794607241311793

Striphas, T. (2015). Algorithmic culture. European Journal of Cultural Studies, 18(4–5), 395–412.

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313.

van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford, UK: Oxford University Press.

Watermeyer, R., Phipps, L., Lanclos, D., & Knight, C. (2024). Generative AI and the automating of academia. Postdigital Science and Education, 6(2), 446–466.

Wilsdon, J., Allen, L., Belfiore, E., Campbell, P., Curry, S., Hill, S., … Thelwall, M. (2015). The metric tide: Report of the independent review of the role of metrics in research assessment and management. Bristol, UK: Higher Education Funding Council for England.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York, NY: PublicAffairs.

Published

20.03.2026

How to Cite

HASANÇEBİ, D. (2026). Killing the Average, Amplifying the Few: Generative AI, Social Media Patterns, and the Structural Crisis of Academic Work. Third Sector Social Economic Review, 61(1), 1159–1176. https://doi.org/10.63556/tisej.2026.1781

Section

Research Article
