<p>This article examines how AI-powered systems facilitate auditing, what risks emerge in AI-assisted audits, and how these new risks can be addressed. The paper studies the impact of cognitive computing on audit risk. AI-powered software is capable of self-learning: it can identify patterns in data and codify them into predictions, rules, and decisions. This self-learning ability is both a benefit and a source of insecurity. Although self-learning makes the audit process more efficient and calculations more accurate by improving the algorithm, eliminating errors, and reducing risks, it also creates new, previously unknown threats. We identified inherent limitations of cognitive-based technologies and the risks that using AI systems poses to the audit process. We also proposed a complex security model that can reduce the uncertainty of AI-enabled audits and provide insight into future research opportunities.</p>