Review Code

This guide explains how CloudAEye performs code reviews.

To perform code reviews in VS Code, refer to the Code Review guide for VS Code.

Overview

CloudAEye provides automated code reviews directly within your pull request workflow. It analyzes your code for potential bugs, logical errors, and security vulnerabilities, and suggests actionable fixes.

Prerequisites

Step 1: Register

Sign up with CloudAEye SaaS.

Step 2: Set Up Code Review

Set up Code Review by following the Getting Started guide.

CloudAEye Code Review Commands

Getting Started

To begin analyzing your code, use the following command in your PR timeline:


@cloudaeye /inspect

/inspect runs 10 targeted categories, including 8 code quality checks and 2 foundational security checks covering authentication and access control as well as injection protection. It is intentionally scoped to catch the most common issues early, without the full depth of /review.

This command identifies:

  • Logic errors
  • Edge case handling
  • Code clarity
  • Input validation
  • Concurrency safety
  • Naming consistency
  • Error handling
  • Code signature
  • Auth and access
  • Injection protection

Performing a Full Review

After reviewing and addressing the issues identified during inspection, you can trigger a comprehensive review using:


@cloudaeye /review

This command:

  • Re-evaluates the updated code for potential bugs and logical errors
  • Identifies security vulnerabilities aligned with OWASP standards across web applications, LLM and generative AI apps, agentic applications, and MCP servers, with actionable remediation guidance
  • Runs pull request (PR) checklists when the feature is enabled, ensuring all required review steps and quality gates are completed before merge
  • Executes custom rules when enabled, enforcing repository-specific policies and validations during the review process
  • Runs configured linters when enabled, automatically checking code for style, syntax, and quality issues as part of the review process
  • Provides final recommendations before merge

Recommended workflow:

  1. Run @cloudaeye /inspect to detect issues early
  2. Fix the identified problems in your code
  3. Run @cloudaeye /review to complete the review process

Additionally, you can automate your pull request (PR) review workflow by following the instructions provided here. You can also point CloudAEye to the existing documentation of your tech stack, allowing it to use that context to perform more accurate and relevant reviews of your PRs.

Risk Score Guidelines

Risk Score | Risk Type | Description
5.0 | Critical | Immediate system-level security risk
4.0 | High | Significant security risk requiring urgent attention
3.0 | Medium | Notable security concern requiring attention
2.0 | Low | Minor security issue that should be addressed
1.0 | Info | Potential security improvement
0.0 | None | No security impact
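As an illustration, the table above maps directly to a small triage helper. The function names and the merge-gating threshold below are assumptions for the sketch, not part of CloudAEye's API:

```python
# Sketch: map a CloudAEye risk score to its risk type, per the table above.
# The score levels come from the table; the gating policy (block merges at
# High or above) is an illustrative assumption.

RISK_TYPES = {
    5.0: "Critical",
    4.0: "High",
    3.0: "Medium",
    2.0: "Low",
    1.0: "Info",
    0.0: "None",
}

def risk_type(score: float) -> str:
    """Return the risk type for a score, rounding down to the nearest level."""
    for level in sorted(RISK_TYPES, reverse=True):
        if score >= level:
            return RISK_TYPES[level]
    return "None"

def should_block_merge(score: float, threshold: float = 4.0) -> bool:
    """Example gating policy: block merges for High (4.0) findings or worse."""
    return score >= threshold
```

A team could run such a helper in CI to fail a pipeline when a report contains findings above its chosen threshold.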

Bug Report

CloudAEye provides details about potential bugs that have been found in a PR code change. The report includes the following details:

  • Bug Priority
  • Bug Details
  • Where this bug was found
  • Possible Fix

Code Review - Bug Report

Security Report

CloudAEye provides details about potential security issues that have been found in a PR code change. The report includes the following details:

  • Security Issue Type
  • Security Issue Details
  • Where this Security Issue was found
  • Possible Fix

Code Review - Security Report

Categories used for security vulnerabilities

- Broken Authentication
- Sensitive Data Exposure
- Broken Access Control
- Injection
- Cross-Site Scripting (XSS)
- Security Misconfiguration
- XML External Entities (XXE)
- Insecure Deserialization
- Insufficient Logging & Monitoring

LLM Security Report

CloudAEye provides details about potential security issues and vulnerabilities introduced by the use of large language models (LLMs) in applications. The report includes the following details:

  • Security Issue Priority
  • Security Issue Details
  • Where this Security Issue was found
  • Possible Fix

Code Review - Security Report

Categories used for LLM security alerts

- Output Handling and Agency Control
- Agency and Autonomy Issues
- Sensitive Information Protection
- Injection Attack
- Over-Reliance Assessment
- Cost and Resource Management

AI Code Report

CloudAEye scans your PR code changes and provides details about potential code issues found. These include:

  • Issue Priority
  • Issue Details
  • Possible Fix

Code Review - Security Report

Example from a GitLab environment.

Code Review - AI Security Report

Security Vulnerabilities

CloudAEye provides deep, policy-driven OWASP-aligned security coverage.

Security & Compliance

These checks align with OWASP Top 10 and enterprise security expectations without slowing developers down.

  • No Secrets in Code: Secrets, API keys, or credentials are not exposed.
  • Injection Protection: Code is safeguarded against SQL/NoSQL/command injection.
  • Authentication & Authorization: Access controls are implemented correctly, without bypass paths.
  • Sensitive Data Handling: PII or sensitive data is not leaked in logs or request parameters.
  • Safe Deserialization: Deserialization logic avoids insecure patterns.
  • Security Misconfiguration: Overly permissive or default security settings that make your app easier to attack are not used.
  • XML External Entity (XXE): XML parsers block external entity references that attackers could otherwise use to steal sensitive files or make unauthorized requests.
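The Injection Protection check above flags SQL built by string concatenation. A minimal sketch of the flagged pattern and its parameterized replacement, using Python's standard sqlite3 module (the table and data are illustrative, not CloudAEye requirements):

```python
import sqlite3

# Illustrative schema and data for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'a@example.com')")

def find_user_unsafe(email: str):
    # Flagged pattern: user input concatenated into the SQL text
    # lets an attacker rewrite the query.
    return conn.execute(
        f"SELECT id FROM users WHERE email = '{email}'"
    ).fetchall()

def find_user_safe(email: str):
    # Safe pattern: a parameterized query keeps input out of the SQL text.
    return conn.execute(
        "SELECT id FROM users WHERE email = ?", (email,)
    ).fetchall()
```

A classic payload such as `' OR '1'='1` matches every row through the unsafe version but nothing through the safe one.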

LLM and Gen AI Apps Security

This makes CloudAEye well suited for teams building AI-native products.

For RAG-based LLM applications such as chatbots, content generation, and Q&A systems without autonomous agents.

  • Prompt Injection: User prompts do not alter the LLM’s behavior or output in unintended ways.
  • Sensitive Information Disclosure: Sensitive data is not included in prompts to external LLM APIs without redaction.
  • Improper Output Handling: Outputs generated by large language models are validated, sanitized, and handled properly before they are passed downstream to other components and systems.
  • Excessive Agency: Damaging actions cannot be performed in response to unexpected, ambiguous, or manipulated outputs from an LLM.
  • System Prompt Leakage: System prompts or instructions used to steer the behavior of the model are not inadvertently leaked.
  • Vector and Embedding Weaknesses: Vector and embedding vulnerabilities are not present in systems that use Retrieval-Augmented Generation (RAG) with large language models (LLMs).
  • Misinformation: Code handles situations where LLMs produce false or misleading information that appears credible.
  • Unbounded Consumption: The LLM application does not allow users to conduct excessive, uncontrolled inference, which could lead to denial of service (DoS), economic losses, model theft, and service degradation.
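The Sensitive Information Disclosure check above expects redaction before prompts reach an external LLM API. A minimal sketch of such a redaction pass, assuming illustrative patterns for emails and AWS-style access keys (a real deployment would cover its own sensitive-data inventory):

```python
import re

# Illustrative redaction pass run before a prompt leaves the application.
# The two patterns below are examples only.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),
]

def redact(prompt: str) -> str:
    """Mask sensitive values before sending the prompt to an external LLM API."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

The same pass can be applied to retrieved RAG context, not just the user's prompt, since both end up in the request to the model provider.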

Agentic Security

CloudAEye supports OWASP Top 10 for Agentic Applications.

For autonomous AI agents that make decisions, use tools, and execute actions independently.

  • Memory Poisoning: Injection of malicious data into persistent agent memory that could corrupt behavior across sessions or agents is prevented.
  • Tool Misuse: Adversarial tool misuse through chaining, privilege escalation, or execution of unintended actions is prevented.
  • Privilege Compromise: Attackers can’t exploit implicit trust relationships between agents, tools, memory contexts, or task transitions to execute actions beyond intended permissions.
  • Resource Overload: Attackers can’t exploit open-ended goals, long planning chains, or looping delegation to consume compute, memory, storage, or API credits.
  • Cascading Hallucination: Hallucinations are prevented and can’t cascade into widespread misinformation, faulty decisions, or unsafe actions.
  • Intent Breaking & Goal Manipulation: Manipulation of agent goals or planning logic that could cause unintended actions is prevented.
  • Repudiation & Untraceability: Code has proper traceability to reduce the risk of repudiation.
  • Overwhelming Human-in-the-Loop: Agents cannot flood users with requests, obscure critical decisions, or exploit approval fatigue.
  • Unexpected RCE & Code Attacks: Exploit of code-generation features or embedded tool access to escalate actions into remote code execution (RCE), local misuse, or exploitation of internal systems is prevented.
  • Agent Communication Poisoning: No injection of malicious content into inter-agent messages or shared communication channels, corrupting coordination, triggering undesired workflows, or manipulating agent responses.
  • Rogue Agents in Multi-Agent Systems: Malicious, unauthorized, or compromised agents can’t embed themselves in a multi-agent system (MAS), influencing workflows, exfiltrating data, or sabotaging operations.
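Two of the guardrails above, Tool Misuse and Resource Overload, can be sketched as an allow-list plus a per-run call budget. The class, tool names, and limits here are illustrative assumptions, not CloudAEye APIs:

```python
# Sketch of two agentic guardrails: an allow-list that blocks tool misuse,
# and a per-run call budget that bounds resource consumption.

class ToolGuard:
    def __init__(self, allowed: set, max_calls: int):
        self.allowed = allowed          # tools this agent may invoke
        self.max_calls = max_calls      # hard budget for one agent run
        self.calls = 0

    def invoke(self, tool: str, fn, *args):
        if tool not in self.allowed:
            # Blocks privilege escalation via unexpected tool chaining.
            raise PermissionError(f"tool {tool!r} is not allow-listed")
        if self.calls >= self.max_calls:
            # Stops looping delegation from burning compute or API credits.
            raise RuntimeError("call budget exhausted for this agent run")
        self.calls += 1
        return fn(*args)
```

In practice the budget and allow-list would come from per-agent configuration rather than constants, so each agent's authority stays explicit and auditable.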

MCP Server

For building Model Context Protocol servers that integrate with AI tools.

Definition & Documentation:

  • MCP Server Metadata: Server has a clear, descriptive name and purpose that helps users understand what it does.
  • Tool Definitions: Each tool includes a clear explanation of what it does and when to use it.
  • Resource Definitions: Data sources have clear descriptions and specify what format data is returned in.
  • Prompt Definitions: Reusable prompts include clear instructions about what information they need.
  • Parameter Descriptions: Every input field has a description explaining what values are valid and what format to use.
  • Internal vs Client Descriptions: Documentation focuses on how to use features, not internal technical details.
  • Naming Conventions: Tools, resources, and prompts use clear, descriptive names that make their purpose obvious.

Tool Parameter Handling:

  • Type Annotations: All inputs specify what type of data they expect (text, numbers, lists, etc.).
  • Optional vs Required: Clear indication of which inputs are mandatory and which have default values.
  • Validation Constraints: User inputs are checked for valid values, proper formats, and safe ranges.
  • Hidden Parameters: Sensitive information like API keys are injected securely, never exposed as user inputs.
  • Complex Types: Complex inputs use well-defined structures instead of generic formats.
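The parameter-handling checks above (type annotations, optional vs required, validation constraints) can be sketched with only the Python standard library. The tool parameters and ranges below are illustrative; real MCP servers typically declare these in a JSON Schema per tool:

```python
from dataclasses import dataclass, field

@dataclass
class SearchParams:
    # Required: no default value, so omitting it raises a TypeError.
    query: str
    # Optional: safe defaults keep callers from guessing.
    limit: int = 10
    tags: list = field(default_factory=list)

    def __post_init__(self):
        # Validation constraints: reject empty strings and unsafe ranges.
        if not self.query.strip():
            raise ValueError("query must be a non-empty string")
        if not 1 <= self.limit <= 100:
            raise ValueError("limit must be between 1 and 100")
```

Note that nothing like an API key appears in this parameter structure; per the Hidden Parameters check, secrets should be injected server-side rather than declared as tool inputs.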

Tool Call Return Optimization:

  • Async vs Sync: Tools use appropriate methods for fast vs slow operations (network calls, file operations).
  • Return Type Validation: Outputs follow consistent, validated structures.
  • Content Formatting: Different types of content (images, text, structured data) are returned in the correct format.
  • Response Size: Large datasets are paginated or filtered to prevent overwhelming responses.
  • Streaming Responses: Large data exports use incremental delivery for better performance.

Error Handling:

  • Error Handling: Error messages are clear and helpful, without exposing technical internals.
  • Internal Detail Masking: Detailed error information is logged for debugging while keeping user messages simple.
  • Graceful Degradation: When optional features fail, the system continues working with reduced functionality.
  • Timeout Handling: External operations have time limits and provide clear timeout messages.

Autonomous Tool Interactions:

  • Tool Safety: Destructive operations (like deletes) are clearly marked and require confirmation.
  • Tool Availability: Tools automatically enable/disable based on whether required dependencies are available.
  • Tool Dependencies: Clear documentation of which tools depend on others.
  • Rate Limiting: Protection against excessive requests to backend resources.
  • Idempotency: Operations that can be safely retried are marked as such.

Sampling and Elicitation:

  • Prompt Injection Protection: User inputs are properly separated from system instructions to prevent manipulation.
  • LLM Response Validation: Responses from AI models are validated against expected formats before use.
  • Client Authenticity: Server doesn't blindly trust client responses without validation.
  • Elicitation Patterns: Complex requirements are broken into clear, step-by-step prompts.
  • Context Management: Chat history is managed appropriately for context.
  • Destructive Operation Elicitation: Destructive actions use interactive prompts to verify user intent.
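The Destructive Operation Elicitation check above can be sketched as a delete tool that refuses to act without explicit confirmation. The function name and confirmation flow are illustrative assumptions; in a real MCP server the prompt would go through the protocol's elicitation mechanism rather than a plain callback:

```python
def delete_dataset(name: str, confirm) -> str:
    """Ask the user before a destructive action; abort unless confirmed."""
    # `confirm` is any callable that shows a prompt and returns the reply.
    answer = confirm(f"Really delete dataset {name!r}? Type 'yes' to proceed.")
    if answer.strip().lower() != "yes":
        return f"aborted: {name} was not deleted"
    # ... the actual deletion would happen here ...
    return f"deleted: {name}"
```

Requiring a typed confirmation rather than a default-accept button also counters the approval-fatigue risk noted in the agentic checks above.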