Explainable Artificial Intelligence to mitigate the lack of transparency and legitimacy in Internet moderation
DOI: https://doi.org/10.1590/s0103-4014.202438111.020
Keywords: Digital humanities, Automated content moderation, Explainable AI, Freedom of expression on the internet, Ethics in Artificial Intelligence
Abstract
The massive use of Artificial Intelligence in internet content moderation is a reality of our times. However, it raises a series of questions, both about the appropriateness of using opaque automated systems and about whether platforms alone should make decisions that previously fell to the State. In this context, the use of "black-box" AI has come to be seen as a threat to freedom of expression. On the other hand, keeping content that promotes online abuse is equally harmful to this fundamental right. Against this backdrop, this study summarizes the main problems identified in the literature regarding the current paradigm, assesses the answers that new technologies offer, and proposes a path toward a new moderation paradigm that is fair and ethical, in which both the State and social media platforms play a relevant role. This path involves adopting explainable AI combined with transparent and legitimate criteria defined by society.
License
Copyright (c) 2024 Thomas Palmeira Ferraz, Caio Henrique Dias Duarte, Maria Fernanda Ribeiro, Gabriel Goes Braga Takayanagi, Alexandre Alcoforado, Roseli de Deus Lopes, Mart Susi
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Estudos Avançados does not sign copyright transfer agreements with its contributors and therefore does not hold the copyright of the articles it publishes. Anyone interested in reproducing articles published in the journal must obtain the author's consent and properly credit the journal.