AI-Driven Automation of Code Review Processes: Enhancing Software Quality and Reducing Human Error

Evgenii Lvov

Citation: Evgenii Lvov, "AI-Driven Automation of Code Review Processes: Enhancing Software Quality and Reducing Human Error", Universal Library of Innovative Research and Studies, Volume 02, Issue 02.

Copyright: This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In contemporary software engineering, expert code review practices are undergoing profound reconsideration under the influence of generative artificial intelligence technologies. In 2024–2025, a qualitatively new, exponential stage of integrating large language models (LLMs) into the software development life cycle (SDLC) has been observed, radically changing the balance between development speed, quality assurance, and the security of software systems. The aim of the study is to provide a comprehensive assessment of the effectiveness of using AI to automate code review processes, to analyze how such technological interventions modify software quality metrics, and to identify latent risks conditioned by the human factor. The focal point is the productivity paradox: accelerated code writing with AI assistants turns the review and deployment stages into a bottleneck, so that the team's actual throughput decreases. Based on quantitative indicators, it is demonstrated that the introduction of AI correlates with a 7.2% decrease in delivery stability and an increase in architectural technical debt, while developers themselves subjectively interpret what is happening as an increase in their own productivity. Particular emphasis is placed on a comparative analysis of traditional static application security testing (SAST) tools and LLM agents, on identifying specific vulnerabilities induced by neural network models (including the impact of politically charged triggers on code security), and on examining the cognitive effects of AI use for experienced software engineers. It is shown that experts may lose up to 19% of their working time when involving AI in solving complex tasks, owing to the need for additional verification and correction of contextual model hallucinations.
The article proposes a scientifically grounded typology of AI-generated errors and formulates recommendations for transitioning to agentic workflows in which AI functions not only as a generator of code fragments but also as an interactive verifier of developer intentions, operating in close human–machine synergy.
Keywords: Digital Transformation, Organizational Management, Information Technologies, Management Efficiency, Digital Platforms, Business Processes, Data Analysis, Innovative Development.

DOI: https://doi.org/10.70315/uloap.ulirs.2025.0202006