        "*": "<tr><td colspan=\"2\" class=\"diff-lineno\" id=\"mw-diff-left-l1\">Line 1:</td>\n<td colspan=\"2\" class=\"diff-lineno\">Line 1:</td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange\">Software engineering has long been a critical field of study, focusing on optimization problems and techniques for producing high-quality code. However, the advent of Artificial Intelligence (AI) has significantly altered the landscape of software development. The emphasis has shifted from traditional code-centric paradigms toward usability, user experience, and the emergence of knowledge-driven, code-agnostic development environments.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange\"></ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange\">The paradox in modern AI development lies in the critical need to identify and control code quality and characteristics, despite the inherent complexity and opacity of many AI paradigms. AI systems, particularly those employing deep learning, often operate through multiple layers of abstraction, making the underlying processes and decision-making mechanisms non-transparent. This factor can obscure the quality and reliability of the code, leading to potential issues in performance, ethics, and compliance. Given this complexity, there is an \u201cmoral\u201d obligation for designers to revisit fundamental questions about code evaluation and modeling. This involves not just understanding the technical aspects but also ensuring/evaluating that the AI systems align with ethical standards and societal values.</ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange\"></ins></div></td></tr>\n<tr><td colspan=\"2\" class=\"diff-side-deleted\"></td><td class=\"diff-marker\" data-marker=\"+\"></td><td class=\"diff-addedline diff-side-added\"><div><ins class=\"diffchange\">The work on AI evaluation is not new; it has been a topic of research for over four decades. For instance, [1] provides a foundational analysis on how to evaluate AI research by extracting insights and setting goals at each stage of development. While such analysis may appear trivial at first glance, it offers critical insights into the foundational drivers of a concept\u2019s necessity and reveals the underlying value systems it implicitly adheres to. This type of analysis is often underreported or entirely absent in the documentation and discussion of AI systems. Metrics such as task-specific accuracy, precision, and false negative rate are frequently highlighted as benchmarks for model performance [2][3]. However, the methodological foundations and contextual relevance of these metrics are occasionally communicated in detail. On top of that, in certain instances [4], additional tuning is applied not to improve the model\u2019s inherent performance, but rather to enhance the perceived accuracy as experienced by end users\u2014introducing an additional layer of abstraction that may obscure the system's actual behavior. 
While these benchmarks serve as valuable indicators of success in controlled environments, they are insufficient for assessing the complexities and uncertainties of real-world deployments, particularly when systems are not tested with real data or with concepts relevant to their intended use case. In a similar vein, data represents another dimension of evaluation, especially through the use of benchmark datasets. Although such datasets enable the assessment of model accuracy against well-defined, discrete tasks [5][6][7][8], they frequently fall short in demonstrating, or even investigating, the model’s correctness and reliability in real-world decision-making contexts.
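The gap between benchmark accuracy and deployment behavior can be illustrated with a minimal, hypothetical sketch (the toy model, data splits, and numbers below are invented and do not correspond to any of the cited datasets): a model that scores perfectly on a narrow, curated test split can still perform no better than chance on shifted, deployment-like inputs, and the benchmark score alone gives no indication of this.

<syntaxhighlight lang="python">
# Purely illustrative sketch of why a high benchmark score need not imply real-world
# correctness. The toy "model", data splits, and numbers are invented for this example.
import random

random.seed(0)


def evaluate_accuracy(model, dataset):
    """Benchmark-style evaluation: fraction of correct predictions on a fixed, labelled split."""
    return sum(1 for x, y in dataset if model(x) == y) / len(dataset)


def toy_model(x: int) -> int:
    # Behaves perfectly on the narrow, curated benchmark distribution (inputs < 10),
    # but only guesses on anything outside it.
    return x % 2 if x < 10 else random.randint(0, 1)


benchmark_split = [(x, x % 2) for x in range(10)]         # well-defined, discrete task
deployment_like = [(x, x % 2) for x in range(100, 200)]   # shifted, "real-world" inputs

print("benchmark accuracy: ", evaluate_accuracy(toy_model, benchmark_split))   # 1.0
print("deployment accuracy:", evaluate_accuracy(toy_model, deployment_like))   # roughly 0.5
</syntaxhighlight>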
While performance-based evaluation methods remain prevalent, they offer a limited perspective on AI system assessment. The dominance of this approach can be attributed to two primary factors: its effectiveness as a marketing strategy that emphasizes performance metrics, and its close alignment with the economic considerations that typically drive enterprise decision-making. However, this narrow focus on performance may overlook other crucial aspects of AI system evaluation, such as ethical considerations, societal impact, and long-term sustainability. A growing body of literature has raised concerns regarding the widespread and accelerated deployment of AI algorithms and platforms in diverse aspects of everyday life [9][10][11][12]. Numerous research efforts have proposed strategies to mitigate the associated social and economic impacts [13][14][15][16], while standardization bodies [17] and international organizations [18][19][20][21] have issued frameworks and ethical guidelines aimed at supporting responsible AI deployment. However, these initiatives are largely advisory in nature, lacking enforceability, and often fall short of establishing binding rules or regulatory mechanisms for the development and use of AI systems.

A more comprehensive assessment framework is necessary to ensure that AI systems are not only high-performing but also ethically sound and socially responsible.

[1] Cohen, P. R., & Howe, A. E. (1988). How evaluation guides AI research: The message still counts more than the medium. ''AI Magazine'', ''9''(4), 35-35.

[2] Wei, J., Karina, N., Chung, H. W., Jiao, Y. J., Papay, S., Glaese, A., ... & Fedus, W. (2024). Measuring short-form factuality in large language models. ''arXiv preprint arXiv:2411.04368''.

[3] Paul, S., & Chen, P. Y. (2022, June). Vision transformers are robust learners. In ''Proceedings of the AAAI Conference on Artificial Intelligence'' (Vol. 36, No. 2, pp. 2071-2081).

[4] Kocielnik, R., Amershi, S., & Bennett, P. N. (2019, May). Will you accept an imperfect AI? Exploring designs for adjusting end-user expectations of AI systems. In ''Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems'' (pp. 1-14).

[5] Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009, June). ImageNet: A large-scale hierarchical image database. In ''2009 IEEE Conference on Computer Vision and Pattern Recognition'' (pp. 248-255). IEEE.

[6] Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., ... & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In ''Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V'' (pp. 740-755). Springer International Publishing.

[7] Harish, B. S., Kumar, K., & Darshan, H. K. (2019). Sentiment analysis on IMDb movie reviews using hybrid feature extraction method.

[8] Asghar, N. (2016). Yelp dataset challenge: Review rating prediction. ''arXiv preprint arXiv:1605.05362''.

[9] Al-kfairy, M., Mustafa, D., Kshetri, N., Insiew, M., & Alfandi, O. (2024, September). Ethical challenges and solutions of generative AI: An interdisciplinary perspective. In ''Informatics'' (Vol. 11, No. 3, p. 58). Multidisciplinary Digital Publishing Institute.

[10] Wei, M., & Zhou, Z. (2022). AI ethics issues in real world: Evidence from AI incident database. ''arXiv preprint arXiv:2206.07635''.

[11] Baldassarre, M. T., Caivano, D., Fernandez Nieto, B., Gigante, D., & Ragone, A. (2023, September). The social impact of generative AI: An analysis on ChatGPT. In ''Proceedings of the 2023 ACM Conference on Information Technology for Social Good'' (pp. 363-373).

[12] Padhi, I., Dognin, P., Rios, J., Luss, R., Achintalwar, S., Riemer, M., ... & Bouneffouf, D. (2024, August). Comvas: Contextual moral values alignment system. In ''Proc. Int. Joint Conf. Artif. Intell.'' (pp. 8759-8762).

[13] Mbiazi, D., Bhange, M., Babaei, M., Sheth, I., & Kenfack, P. J. (2023). Survey on AI Ethics: A Socio-technical Perspective. ''arXiv preprint arXiv:2311.17228''.

[14] Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. ''Information Fusion'', ''99'', 101896.

[15] Shavit, Y., Agarwal, S., Brundage, M., Adler, S., O’Keefe, C., Campbell, R., ... & Robinson, D. G. (2023). Practices for governing agentic AI systems. ''Research Paper, OpenAI''.

[16] Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., de Prado, M. L., Herrera-Viedma, E., & Herrera, F. (2023). Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. ''Information Fusion'', ''99'', 101896.

[17] Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020, October). IEEE 7010: A new standard for assessing the well-being implications of artificial intelligence. In ''2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)'' (pp. 2746-2753). IEEE.

[18] UNESCO. (2021). ''Recommendation on the Ethics of Artificial Intelligence''. United Nations Educational, Scientific and Cultural Organization. <nowiki>https://www.unesco.org/en/artificial-intelligence/recommendation-ethics</nowiki>

[19] International Organization for Standardization & International Electrotechnical Commission. (2022). ''ISO/IEC 22989:2022 – Artificial intelligence — Artificial intelligence concepts and terminology''. ISO. <nowiki>https://www.iso.org/standard/74296.html</nowiki>

[20] International Organization for Standardization & International Electrotechnical Commission. (2023). ''ISO/IEC 23894:2023 – Artificial intelligence — Guidance on risk management''. ISO. <nowiki>https://www.iso.org/standard/77608.html</nowiki>

[21] Council of Europe. (2024). ''Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law''. Strasbourg, France.