
OWASP Top 10 (2021): What Every Indian SaaS Developer Needs to Know

A practical walkthrough of the OWASP Top 10 for Indian SaaS developers and CTOs. Real-world examples for each category, what scanners catch vs what needs manual testing, and how the list maps to pentest scope.

Abhinay & Rathnakara GN
Cyber Secify

You’ve seen “OWASP Top 10” show up in a pentest report, a client security questionnaire, or a compliance checklist. Maybe all three. It gets referenced constantly, but most dev teams have never actually walked through all ten categories and asked: does this apply to us?

This post does exactly that. We’ll cover each of the ten categories in the OWASP Top 10 (2021) with a practical example from the kind of SaaS apps we test every month, whether your scanner will catch it, and what it actually takes to fix it.

If you’re building a SaaS product in India and selling to customers who care about security (enterprise buyers, regulated industries, international markets), this is the list your app gets tested against.

Why OWASP Top 10 Matters for Indian SaaS

The OWASP Top 10 is not a compliance standard. It’s an awareness document. But it has become the de facto checklist that pentest firms, auditors, and enterprise buyers use to evaluate web application security.

When a customer asks for a “pentest report covering OWASP Top 10,” they’re asking whether your app has been tested for these ten categories of vulnerabilities. When an ISO 27001 auditor reviews your security testing evidence, they expect to see OWASP Top 10 referenced in the methodology. The OWASP Web Security Testing Guide (WSTG) provides the detailed test cases behind each category.

For Indian SaaS companies selling to US or European enterprise customers, OWASP Top 10 coverage in your pentest report is table stakes. It’s not optional.

The 10 Categories: What Each Means for Your SaaS App

A01: Broken Access Control

What it is: Users can act outside their intended permissions. This was the #1 category in 2021, up from #5 in 2017. It covers everything from IDOR (insecure direct object references) to missing function-level access checks.

What it looks like in your app: Your multi-tenant SaaS lets User A from Company X access Company Y’s data by changing an ID in the URL or API call. Or a regular user discovers they can hit /admin/users and it returns the full user list because authorization is only enforced in the frontend, not the API.

Scanner or manual? Manual. Scanners cannot understand your authorization model. They don’t know that User A shouldn’t see User B’s data. A pentester creates multiple test accounts with different roles and tenants, then systematically tests every endpoint for horizontal and vertical privilege escalation.

Fix it: Enforce authorization checks server-side on every request. Don’t rely on hiding UI elements. Use indirect references (UUIDs mapped to internal IDs per session) where possible. Deny by default.
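To make "deny by default" concrete, here's a minimal Python sketch (the data model is hypothetical; the point is that the tenant check runs server-side on every lookup, and a missing record looks identical to a wrong-tenant record, so attackers can't enumerate which IDs exist):

```python
# Sketch of server-side, deny-by-default object access in a
# multi-tenant app. DOCUMENTS stands in for your real datastore.
class Forbidden(Exception):
    pass

DOCUMENTS = {
    "doc-1": {"tenant_id": "company-x", "body": "X's data"},
    "doc-2": {"tenant_id": "company-y", "body": "Y's data"},
}

def get_document(doc_id, current_user):
    doc = DOCUMENTS.get(doc_id)
    # Deny by default: a nonexistent document and another tenant's
    # document fail the same way, on every request, regardless of
    # what the frontend shows or hides.
    if doc is None or doc["tenant_id"] != current_user["tenant_id"]:
        raise Forbidden(doc_id)
    return doc
```

The same check belongs in every endpoint that takes an object ID, not just the ones you think are sensitive.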


A02: Cryptographic Failures

What it is: Formerly called “Sensitive Data Exposure.” This covers weak or missing encryption for data in transit and at rest, hardcoded secrets, weak hashing algorithms, and improper certificate validation.

What it looks like in your app: Your app stores passwords with MD5 or SHA-1 instead of bcrypt/argon2. API keys are committed to your Git repo. Internal services communicate over plain HTTP. User PII sits unencrypted in your database, and your backup bucket is publicly accessible over HTTP.

Scanner or manual? Both. Scanners can detect missing TLS, weak cipher suites, and exposed secrets in responses. But finding hardcoded credentials in source code, weak key derivation, or improper certificate pinning in mobile apps requires manual review.

Fix it: Use TLS 1.2+ everywhere, including internal services. Hash passwords with bcrypt or argon2. Encrypt PII at rest. Rotate secrets regularly and never commit them to version control. Run trufflehog or gitleaks in your CI pipeline.
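bcrypt and argon2 live in third-party packages, but Python's standard library ships scrypt, another memory-hard KDF, which is enough to sketch the pattern: a random salt per password, a deliberately slow hash, and a constant-time comparison on verify.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique salt per password
    # scrypt parameters here (n=2**14, r=8, p=1) are a common
    # baseline, not a tuned recommendation.
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

Swap in bcrypt or argon2 from their respective libraries in production; the salt-slow-hash-constant-time-compare shape stays the same.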


A03: Injection

What it is: SQL injection, NoSQL injection, OS command injection, LDAP injection. Untrusted data is sent to an interpreter as part of a command or query without proper validation or escaping.

What it looks like in your app: Your search feature builds SQL queries by concatenating user input: SELECT * FROM products WHERE name LIKE '%${userInput}%'. An attacker inputs '; DROP TABLE products; -- and your database is gone. Or your app shells out to a system command using user-supplied filenames without sanitization.

Scanner or manual? Both. Automated scanners (Burp, OWASP ZAP) are good at finding classic SQL injection with error-based or time-based detection. But second-order injection (where the payload is stored and executed later in a different context) and NoSQL injection against MongoDB queries require manual testing.

Fix it: Use parameterized queries or prepared statements. Always. For every database interaction. Use ORMs correctly (they protect you only if you don’t bypass them with raw queries). Validate and sanitize all input on the server side.
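Here's the parameterized version of that search query, sketched with Python's built-in sqlite3 (the same placeholder idea applies to any driver): the driver binds user input as data, never as SQL text, so the DROP TABLE payload stays a harmless search string.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.executemany("INSERT INTO products VALUES (?)",
                 [("widget",), ("gadget",)])

def search(term: str) -> list:
    # The ? placeholder keeps `term` out of the SQL text entirely,
    # so it can never change the query's structure.
    rows = conn.execute(
        "SELECT name FROM products WHERE name LIKE ?",
        (f"%{term}%",),
    ).fetchall()
    return [r[0] for r in rows]
```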


A04: Insecure Design

What it is: This is new in 2021. It addresses flaws in the design itself, not the implementation. A perfectly coded feature can still be insecure if the design didn’t account for abuse scenarios.

What it looks like in your app: Your referral system gives credits for every new signup, but doesn’t prevent a user from creating fake accounts to farm credits. Your password reset flow sends a 4-digit OTP with no rate limiting, making brute-force trivial (10,000 combinations, even at 1 request/second that’s under 3 hours). Your file-sharing feature lets users generate public links with no expiration.

Scanner or manual? Manual only. No scanner can evaluate business logic. This is the category where experienced pentesters earn their keep, thinking through abuse scenarios, edge cases, and race conditions that the development team didn’t consider.

Fix it: Threat model during design, not after deployment. Use abuse case stories alongside user stories. Ask “how would someone exploit this?” for every feature that involves money, access, or data sharing.
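As an illustration of designing an abuse limit in from the start, here's a minimal in-memory rate limiter for that OTP flow: five attempts per account per five-minute window. A real deployment would back this with Redis or the database so it survives restarts and works across instances.

```python
import time

MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300
_attempts: dict = {}  # account_id -> list of attempt timestamps

def check_otp_allowed(account_id: str, now: float = None) -> bool:
    """Return False once an account exceeds 5 OTP tries in 5 minutes."""
    now = time.time() if now is None else now
    # Keep only attempts inside the sliding window.
    recent = [t for t in _attempts.get(account_id, [])
              if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        _attempts[account_id] = recent
        return False
    recent.append(now)
    _attempts[account_id] = recent
    return True
```

With this in place, a 4-digit OTP goes from a sub-3-hour brute force to effectively unguessable within its validity window.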


A05: Security Misconfiguration

What it is: Default credentials, unnecessary features enabled, overly permissive cloud configurations, missing security headers, verbose error messages that leak stack traces.

What it looks like in your app: Your staging environment is publicly accessible with default admin credentials. Your S3 bucket policy allows s3:GetObject for *. Django debug mode is on in production, showing full stack traces with file paths and environment variables. Your API returns detailed error messages like "error": "column 'password_hash' not found in table 'users'".

Scanner or manual? Mostly scannable. Tools like ScoutSuite (cloud), Nuclei (web), and security header checkers can flag most misconfigurations. But context matters: a pentester evaluates whether a specific configuration is actually exploitable in your environment, not just theoretically risky.

Fix it: Automate configuration checks in CI/CD. Strip debug output in production. Review cloud IAM policies quarterly. Use security headers (CSP, HSTS, X-Frame-Options). Remove default accounts and sample applications from production deployments.
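One way to keep those headers from drifting is to apply them in a single middleware-style function rather than per route. A minimal sketch (the header values are common starting points, not a universal policy; a real CSP in particular needs tuning per app):

```python
# Baseline security headers applied to every response.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "Content-Security-Policy": "default-src 'self'",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
}

def apply_security_headers(response_headers: dict) -> dict:
    merged = dict(response_headers)
    for name, value in SECURITY_HEADERS.items():
        # setdefault: a route that deliberately sets its own value
        # (e.g. a relaxed CSP for one page) is not clobbered.
        merged.setdefault(name, value)
    return merged
```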


A06: Vulnerable and Outdated Components

What it is: Using libraries, frameworks, or dependencies with known vulnerabilities. This includes direct dependencies and transitive ones (dependencies of dependencies).

What it looks like in your app: Your package.json pins an old version of lodash with a prototype pollution vulnerability. Your Java app uses Log4j 2.14 (the Log4Shell version). Your Docker base image hasn’t been updated in 18 months and has 47 known CVEs. You’re running an outdated WordPress plugin on your marketing site that has a public exploit.

Scanner or manual? Mostly scannable. SCA tools (Snyk, Dependabot, Trivy) do this well. They compare your dependency tree against CVE databases. Manual effort is needed to evaluate whether a CVE is actually exploitable in your specific usage context; not every vulnerability in a dependency is reachable from your code.

Fix it: Run SCA tools in CI. Set up Dependabot or Renovate for automated dependency PRs. Maintain a software bill of materials (SBOM). Have a process for evaluating and patching critical CVEs within 48 hours.


A07: Identification and Authentication Failures

What it is: Formerly “Broken Authentication.” Covers weak password policies, credential stuffing, session fixation, missing MFA, and improper session management.

What it looks like in your app: Your app accepts passwords like “12345678” because the policy only checks length, not complexity. Login attempts aren’t rate-limited, so attackers can try thousands of passwords per minute. Sessions don’t really end at logout (the JWT stays valid because nothing server-side revokes it). Your “remember me” token is a predictable value.

Scanner or manual? Both. Scanners can check password policies and session cookie flags. But testing for account enumeration (does the login page say “invalid email” vs “invalid password”?), session fixation, token predictability, and MFA bypass requires manual testing.

Fix it: Enforce strong password policies (length > complexity). Implement rate limiting and account lockout. Use secure session tokens with proper expiration. Add MFA, especially for admin accounts. Don’t reveal whether an email exists during login or registration.
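Here's a sketch of the "don't reveal whether an email exists" rule. Two details are easy to miss: the error message must be identical for both failure modes, and the server should run the hash even for unknown emails so response timing doesn't leak the answer either. The `_verify` helper and PBKDF2 parameters here are illustrative, not a recommendation over bcrypt/argon2.

```python
import hashlib
import hmac

GENERIC_ERROR = "Invalid email or password"

# Dummy record hashed for unknown emails, so both paths cost the same.
_DUMMY = {"salt": b"\x00" * 16,
          "hash": hashlib.pbkdf2_hmac("sha256", b"dummy",
                                      b"\x00" * 16, 100_000)}

def _verify(password: str, record: dict) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 100_000)
    return hmac.compare_digest(candidate, record["hash"])

def login(email: str, password: str, users: dict) -> str:
    record = users.get(email, _DUMMY)
    ok = _verify(password, record)  # always runs, known email or not
    if email not in users or not ok:
        return GENERIC_ERROR  # identical message for both failure modes
    return "ok"
```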


A08: Software and Data Integrity Failures

What it is: New in 2021. This covers code and infrastructure that doesn’t protect against integrity violations: insecure CI/CD pipelines, unsigned updates, deserialization of untrusted data.

What it looks like in your app: Your CI/CD pipeline pulls dependencies without verifying checksums. A compromised npm package in your supply chain injects a cryptominer. Your app deserializes user-supplied JSON/YAML without validation, allowing object injection. Auto-update mechanisms download updates over HTTP without signature verification.

Scanner or manual? Mostly manual. Scanners can detect some deserialization issues, but supply chain risks, CI/CD pipeline weaknesses, and integrity verification gaps require architectural review. This is increasingly important: the SolarWinds and Codecov breaches were both integrity failures.

Fix it: Verify checksums and signatures for all dependencies. Use lockfiles (package-lock.json, Gemfile.lock). Sign your releases. Review CI/CD pipeline permissions (principle of least privilege). Validate all deserialized data against a strict schema.
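Checksum verification is a small function worth wiring into your pipeline. A sketch against a pinned SHA-256 (signature verification with GPG or Sigstore is the stronger version of the same idea, since a checksum only helps if the attacker can't also change the published checksum):

```python
import hashlib
import hmac

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    # Compare the artifact's digest against the value pinned in your
    # lockfile or release notes; fail the build on any mismatch.
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, expected_sha256)
```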


A09: Security Logging and Monitoring Failures

What it is: Insufficient logging of security-relevant events, or logging that exists but nobody monitors. Without detection, attackers operate unnoticed for weeks or months.

What it looks like in your app: Failed login attempts aren’t logged. Your application logs don’t capture who accessed what data. You have CloudWatch running but no alerts configured for anomalous patterns. When a customer asks “was my data accessed?”, you can’t answer because there’s no audit trail.

Scanner or manual? Manual. No scanner can verify your logging coverage or monitoring effectiveness. A pentester checks whether their test activities (failed logins, access control bypass attempts, injection payloads) show up in your logs. If they don’t, that’s a finding.

Fix it: Log all authentication events (success and failure), access control failures, input validation failures, and admin actions. Include enough context (who, what, when, from where) to reconstruct an incident. Set up alerts for anomalous patterns. Retain logs for at least 90 days (longer if compliance requires it).
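A sketch of the "who, what, when, from where" rule as a structured (JSON-lines) security log, which keeps events machine-parseable so your alerting can actually query them. Field names here are illustrative, not a standard:

```python
import datetime
import json
import logging

logger = logging.getLogger("security")

def log_security_event(event: str, user: str, resource: str,
                       source_ip: str, outcome: str) -> str:
    # One JSON object per line: who, what, when, from where, result.
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event": event,
        "user": user,
        "resource": resource,
        "source_ip": source_ip,
        "outcome": outcome,
    }
    line = json.dumps(record)
    logger.info(line)
    return line
```

Call it from the auth layer (`log_security_event("login_attempt", ...)`) and from every access-control denial, then alert on patterns like repeated failures from one IP.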


A10: Server-Side Request Forgery (SSRF)

What it is: The application fetches a remote resource based on a user-supplied URL without validating the destination. Attackers use this to access internal services, cloud metadata endpoints, or other systems behind the firewall.

What it looks like in your app: Your app has a “preview URL” feature that renders a screenshot of any URL. An attacker supplies http://169.254.169.254/latest/meta-data/iam/security-credentials/ and gets your AWS IAM credentials. Or they supply an internal URL like http://localhost:8080/admin to access internal admin panels through your server.

Scanner or manual? Mostly manual. Scanners can test for basic SSRF against known metadata endpoints, but blind SSRF (where the response isn’t returned to the user), SSRF via redirects, and SSRF through PDF generators or image processors require manual exploration.

Fix it: Validate and sanitize all user-supplied URLs. Block requests to internal IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, and localhost). Use allowlists instead of blocklists where possible. Disable HTTP redirects in server-side HTTP clients. On AWS, use IMDSv2 to require session tokens for metadata access.
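A sketch of that validation in Python: resolve the hostname first (so a DNS record pointing at 169.254.169.254 is caught too), then check the resolved address against internal ranges with the ipaddress module. This doesn't close the resolve-then-connect (DNS rebinding) race; for that, your HTTP client should connect to the exact IP you checked.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False  # blocks file://, gopher://, etc.
    if parsed.hostname is None:
        return False
    try:
        # Resolve before checking, so DNS tricks are caught; note
        # gethostbyname returns one IPv4 address, and production code
        # should check every address from getaddrinfo (incl. IPv6).
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved)
```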

How OWASP Top 10 Maps to Pentest Scope

When you commission a penetration test, the OWASP Top 10 gives structure to the engagement. Here’s how the categories map to what a pentester actually does:

OWASP Category | Pentest Activity | Effort
A01: Broken Access Control | Multi-role, multi-tenant testing across all endpoints | High (manual)
A02: Cryptographic Failures | TLS analysis, password storage review, secrets scanning | Medium
A03: Injection | Fuzzing all input points, testing query construction | Medium (automated + manual)
A04: Insecure Design | Business logic abuse testing, threat modeling review | High (manual)
A05: Security Misconfiguration | Configuration audit, header checks, cloud review | Low-Medium (automated)
A06: Vulnerable Components | Dependency scanning, version fingerprinting | Low (automated)
A07: Authentication Failures | Auth flow testing, session management, MFA bypass | Medium (manual)
A08: Integrity Failures | CI/CD review, deserialization testing, supply chain check | Medium (manual)
A09: Logging Failures | Verify detection of test activities in logs | Low (manual)
A10: SSRF | URL input testing, metadata endpoint probing | Medium (manual)

Notice the pattern: the categories that cause the most damage (A01, A04, A07) are the ones that need manual testing. Automated scanners handle the low-hanging fruit (A05, A06), but the vulnerabilities that lead to data breaches require a human tester who understands your application’s context.

This is why a “vulnerability scan” is not the same as a “penetration test.” A scan covers maybe 3-4 of these categories well. A proper pentest covers all 10. For more on this distinction, read our breakdown of what VAPT actually involves.

OWASP Top 10 and the API Layer

If your SaaS is API-first (and most modern SaaS products are), the OWASP Top 10 for web applications is only half the picture. OWASP also publishes a separate API Security Top 10 that targets API-specific risks like broken object-level authorization (BOLA), mass assignment, and unrestricted resource consumption.

A thorough pentest covers both. We’ve written a detailed walkthrough of the OWASP API Top 10 and what it means for your product.

What to Do With This

If you’re building a SaaS product and haven’t had a pentest that explicitly covers OWASP Top 10, here’s the move:

  1. Run the free stuff first. Set up Dependabot for A06. Add security headers for A05. Switch to parameterized queries for A03. These are engineering hygiene items you can fix this week.

  2. Get a pentest for the manual stuff. A01 (access control), A04 (business logic), A07 (auth), and A10 (SSRF) need a human tester who will create test accounts, map your authorization model, and try to break it. No scanner does this.

  3. Use the report for compliance. A pentest report that references OWASP Top 10 methodology satisfies requirements for ISO 27001, SOC 2, and most enterprise security questionnaires.

Our Startup Pentest plan (INR 74,999) covers OWASP Top 10 testing for a single scope (web app or API) with 7-day delivery. The Growth Pentest plan (INR 1,79,999) covers 2 scopes and includes SOC 2 + ISO 27001 audit prep. Both include a detailed report mapping findings to OWASP categories.

Want to see what the output looks like? Request a sample report. You can also see our full penetration testing services for scope and methodology details.

Frequently Asked Questions

What is the OWASP Top 10?

The OWASP Top 10 is a standard awareness document listing the ten most critical security risks to web applications. Published by the Open Worldwide Application Security Project (OWASP), it is updated every few years based on real-world data from hundreds of organizations and thousands of applications.

Is OWASP Top 10 compliance mandatory in India?

OWASP Top 10 is not a regulation, so there is no legal mandate. However, most compliance frameworks used in India (ISO 27001, SOC 2, RBI cyber security guidelines, SEBI CSCRF) expect web applications to be tested against OWASP Top 10 categories. Pentest reports that reference OWASP Top 10 are accepted as evidence during audits.

Can automated scanners cover the full OWASP Top 10?

No. Automated scanners are effective for about 4 of the 10 categories (misconfiguration, known vulnerable components, injection patterns, and some cryptographic issues). Categories like broken access control, business logic flaws, and SSRF require manual testing by a skilled pentester.

How much does an OWASP Top 10 pentest cost in India?

At Cyber Secify, the Startup Pentest plan covering OWASP Top 10 testing for a single scope is INR 74,999 with 7-day delivery. The Growth Pentest plan covering 2 scopes with SOC 2 + ISO 27001 audit prep is INR 1,79,999.
