Research-backed developer blog

Python security, static analysis, and AI code review that developers can actually use

Framework guides, benchmarks, incident-style research, and CI workflows for Python teams that want lower-noise AppSec and better AI-generated code review.

Coverage
19

Articles across Python security, AI code risk, dead code, CI hardening, and workflow changes.

Topics
5

Curated discovery paths so readers can scan by job-to-be-done instead of publish date.

Formats
5

Guides, benchmarks, case studies, comparisons, and research pieces with clear next steps.

Start here

Explore the blog by job, not just by publish date

Use these paths when you already know what you need: compare scanners, review AI-generated code, understand framework-specific signal, or dig into proof-heavy benchmarks.

Article library

Showing all blog articles

19 articles

Choose a path above or use the filters below to narrow the article list.


Topic

Format

Framework

Featured article

Start with this article

Guide • AI Code Security

AI Coding Agent Security Checklist: 12 Controls Before Agents Open PRs

AI coding agents can edit files, run commands, add dependencies, call tools, and open pull requests. Use this security checklist before you let Claude Code, Cursor, Codex, Copilot, Devin, or other agents work in production repositories.

Why this is worth your time

Useful if your team has moved from AI autocomplete to AI coding agents and needs concrete guardrails before those agents touch production repos.

AI coding agent security is different from AI autocomplete security because agents can take actions: edit files, run commands, install packages, call tools, and produce PRs.

The highest-risk controls are not exotic: restrict repo trust, review agent instructions and tool config, gate dependency changes, scan diffs, and block removed auth or validation before merge.

Do not rely on the agent that wrote the code to be the only reviewer of that code. Use deterministic local and CI checks before human approval.
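
To make "gate dependency changes" concrete, here is a minimal sketch of the kind of deterministic pre-merge check the checklist argues for. The file names, allowlist, and CLI shape are illustrative assumptions, not the article's implementation; in CI you would compare the PR branch's requirements against the base branch.

```python
# Hypothetical pre-merge gate: fail the check if an agent PR adds
# dependencies that are not on an explicit allowlist.
# File names and the allowlist are illustrative.
import sys
from pathlib import Path

ALLOWED_NEW_PACKAGES = {"httpx", "pydantic"}  # example allowlist

def package_names(requirements_path: str) -> set[str]:
    """Extract bare package names from a requirements.txt-style file."""
    names = set()
    for raw in Path(requirements_path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith(("#", "-")):
            continue
        # Drop environment markers, extras, and version specifiers.
        name = line.split(";")[0]
        for sep in ("==", ">=", "<=", "~=", "!=", ">", "<", "["):
            name = name.split(sep)[0]
        names.add(name.strip().lower())
    return names

def main(base_requirements: str, pr_requirements: str) -> int:
    added = package_names(pr_requirements) - package_names(base_requirements)
    blocked = added - ALLOWED_NEW_PACKAGES
    if blocked:
        print(f"Blocked new dependencies: {sorted(blocked)}")
        return 1
    return 0

if __name__ == "__main__":
    # Usage: python dep_gate.py base-requirements.txt pr-requirements.txt
    sys.exit(main(sys.argv[1], sys.argv[2]))
```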

May 16, 2026 • 15 min read
Read article

Library

Keep exploring

Guide • CI Hardening

GitHub Actions Security and GitLab CI Security: Static Analysis for CI/CD

CI/CD YAML should be reviewed like privileged code because it controls tokens, secrets, artifacts, release jobs, and deployment paths.

What you'll get

Use this if you want to treat CI/CD workflow files like privileged code and catch high-risk GitHub Actions or GitLab CI patterns before merge.
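
As a taste of what "static analysis for CI/CD" means in practice, here is a minimal sketch (assuming PyYAML and GitHub Actions workflow files) that flags one well-known risky pattern: a `pull_request_target` workflow that checks out the PR head. The heuristic is deliberately narrow and is not the guide's actual ruleset.

```python
# Hypothetical workflow scanner: flag jobs that combine the privileged
# pull_request_target trigger with a checkout of the untrusted PR head.
import sys
import yaml  # pip install pyyaml

def checks_out_pr_head(step: dict) -> bool:
    uses = str(step.get("uses", ""))
    ref = str(step.get("with", {}).get("ref", ""))
    return uses.startswith("actions/checkout") and "github.event.pull_request.head" in ref

def scan_workflow(path: str) -> list[str]:
    data = yaml.safe_load(open(path)) or {}
    # PyYAML parses a bare `on:` key as the boolean True.
    triggers = data.get("on", data.get(True, {}))
    if isinstance(triggers, str):
        triggers = [triggers]
    findings = []
    if "pull_request_target" in triggers:
        for job_name, job in (data.get("jobs") or {}).items():
            for step in job.get("steps", []) or []:
                if checks_out_pr_head(step):
                    findings.append(
                        f"{path}: job '{job_name}' checks out the PR head under pull_request_target"
                    )
    return findings

if __name__ == "__main__":
    problems = [finding for p in sys.argv[1:] for finding in scan_workflow(p)]
    print("\n".join(problems) or "No risky checkout patterns found")
    sys.exit(1 if problems else 0)
```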

May 12, 2026 • 8 min
Read
Research • AI Code Security

Why AI-Generated Python Code Is Insecure in 2026 (And What Static Analysis Actually Catches)

Veracode's late-2025 and Spring 2026 GenAI Code Security Reports place the AI-generated code vulnerability rate around 45 percent across 100+ models, with Python around 38 percent in the October 2025 language breakdown.

What you'll get

Useful if you are trying to convince a team, a security reviewer, or a budget owner that AI-generated Python code needs deterministic gates, not just another reviewer.

May 9, 2026 • 11 min
Read
Checklist • AI Code Security

AI Code Review for Security: A PR Checklist for Auth, Tenant Isolation, Validation, and Secrets

AI code review security should check both newly introduced vulnerabilities and security controls that disappeared from the diff.

What you'll get

Useful if your team uses Cursor, Claude Code, Copilot, Codex, or other AI coding tools and needs a practical PR security checklist instead of another generic AI security overview.
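
One checklist item, flagging security controls that disappeared from the diff, can be approximated in a few lines. The marker list below is an illustrative assumption, not the article's checklist; tune it to your codebase.

```python
# Minimal sketch: flag removed lines in a unified diff that mention
# an auth or validation marker. Markers are illustrative, not exhaustive.
import re
import sys

REMOVED_CONTROL_MARKERS = [
    r"@login_required",
    r"@permission_required",
    r"verify=True",
    r"csrf",
    r"validate",
]

def removed_controls(diff_text: str) -> list[str]:
    """Return removed diff lines that mention an auth/validation marker."""
    hits = []
    for line in diff_text.splitlines():
        if line.startswith("-") and not line.startswith("---"):
            if any(re.search(marker, line, re.IGNORECASE) for marker in REMOVED_CONTROL_MARKERS):
                hits.append(line)
    return hits

if __name__ == "__main__":
    # Usage: git diff main... | python check_diff.py
    findings = removed_controls(sys.stdin.read())
    for finding in findings:
        print(f"Removed control: {finding}")
    sys.exit(1 if findings else 0)
```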

May 2, 2026 • 12 min
Read
Research • Dead Code Detection

I Tested Dead-Code Detection by Sending Cleanup PRs to Mature OSS Repos

The useful test for dead-code detection is not whether a scanner prints findings; it is whether maintainers accept the cleanup.

What you'll get

A proof-led explanation of dead-code false positives, maintainer review, and the difference between benchmark signal and real cleanup work.

Apr 30, 2026 • 10 min
Read
Comparison • AI Code Security

Best AI Code Security Tools in 2026 Compared

The market is mixing together three different jobs: AI review assistance, traditional SAST, and AI-specific regression detection.

What you'll get

Useful if you are trying to choose the right security workflow for AI-generated code instead of buying the loudest new category label.

Apr 20, 2026 • 14 min
Read
Guide • Python Static Analysis • Flask

Flask Security Scanning: What Static Analysis Actually Catches in 2026

Flask security issues rarely come from Flask itself. They come from the raw Python and library calls Flask apps make around requests, templates, files, and subprocesses.

What you'll get

Rounds out the Django / FastAPI / Flask framework guides and gives Python teams a cleaner way to evaluate scanner coverage framework by framework.
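
For a flavor of what the guide covers, here is a short sketch of two Flask-adjacent patterns scanners typically flag, shown next to safer equivalents. Route names, paths, and the mitigation details are illustrative, not taken from the article.

```python
# Illustrative Flask routes: a path-traversal-prone file download and a
# shell-injection-prone subprocess call, each with a safer version inline.
import subprocess
from pathlib import Path
from flask import Flask, abort, request, send_file

app = Flask(__name__)
UPLOAD_DIR = Path("/srv/uploads")  # illustrative upload location

@app.route("/download")
def download():
    filename = request.args.get("file", "")
    # Risky: send_file(UPLOAD_DIR / filename) allows ../../ traversal.
    # Safer: resolve the path and confirm it stays inside UPLOAD_DIR.
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR.resolve()):
        abort(400)
    return send_file(target)

@app.route("/ping")
def ping():
    host = request.args.get("host", "localhost")
    # Risky: subprocess.run(f"ping -c 1 {host}", shell=True) enables injection.
    # Safer: pass an argument list and avoid the shell entirely.
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True, text=True)
    return result.stdout
```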

Apr 16, 2026 • 8 min
Read
Research • AI Code Security

Slopsquatting in Python: What 205,474 Hallucinated Package Names Mean for Your Supply Chain

LLMs invent Python packages that don't exist. Attackers register them. Academic research shows 43% of hallucinated names recur on every re-run of the same prompt — turning a model quirk into a repeatable attack surface. Here's what the peer-reviewed data says, and how to catch hallucinated imports at PR time.

What you'll get

This is the sharpest current writeup on hallucinated Python imports and why they turn into a repeatable supply-chain problem.
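
A rough sketch of the "catch hallucinated imports at PR time" idea: collect top-level imports from changed files and flag anything that is neither stdlib nor declared as a dependency. The requirements parsing and name matching are simplified assumptions (import names and distribution names do not always match, and first-party packages would need excluding).

```python
# Hypothetical PR-time check: flag imports that are neither stdlib nor
# declared in requirements.txt, so a human verifies they exist on PyPI.
import ast
import re
import sys
from pathlib import Path

def top_level_imports(source: str) -> set[str]:
    """Collect top-level module names from import statements."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

def declared_packages(requirements: str) -> set[str]:
    """Naive requirements.txt parse: keep the bare package name per line."""
    pkgs = set()
    for line in Path(requirements).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pkgs.add(re.split(r"[=<>\[~; ]", line, maxsplit=1)[0].lower())
    return pkgs

if __name__ == "__main__":
    stdlib = set(sys.stdlib_module_names)  # available on Python 3.10+
    declared = declared_packages("requirements.txt")
    for path in sys.argv[1:]:
        unknown = {
            name for name in top_level_imports(Path(path).read_text())
            if name not in stdlib and name.lower() not in declared
        }
        for name in sorted(unknown):
            print(f"{path}: import '{name}' is not stdlib and not declared; verify it on PyPI")
```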

Apr 11, 2026 • 10 min
Read
Guide • Developer Workflow • VS Code

`python.linting` Is Deprecated in VS Code: What Python Teams Should Use Now

The old `python.linting.*` settings are deprecated; VS Code now expects dedicated tool extensions.

What you'll get

Use this if you are migrating off the deprecated settings and want a direct path from editor setup into security scanning.

Mar 21, 2026 • 5 min
Read
Case Study • Dead Code Detection

3 Merged PRs: Dead Code We Found in Black, Flagsmith, and pypdf

We ran Skylos on popular open source Python projects, submitted pull requests to remove dead code, and all three were merged by maintainers. Here's what we found, how the LLM verification agent worked, and what the maintainers said.

What you'll get

Proof beats theory. This is the strongest evidence that the dead-code signal survives real maintainer review.

Mar 17, 2026 • 5 min
Read
Guide • Python Static Analysis • FastAPI

FastAPI Security Scanning: 8 Vulnerability Patterns Static Analysis Catches

FastAPI's async-first design and Pydantic validation prevent some bugs but introduce others. Here are 8 real vulnerability patterns in FastAPI applications — from SSRF in background tasks to Pydantic validation bypass — and how to detect them with static analysis.

What you'll get

Use this if your team ships async Python APIs and wants concrete FastAPI vulnerability patterns, not generic SAST advice.
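
To give a feel for one of the eight patterns, here is a minimal sketch of SSRF via a background task, with a simplified host-allowlist mitigation. The endpoint, payload, and allowlist are assumptions for illustration, not the article's code.

```python
# Illustrative FastAPI endpoint: a user-supplied webhook URL fetched in a
# background task is an SSRF sink; a host allowlist is one simple guard.
from urllib.parse import urlparse

import httpx
from fastapi import BackgroundTasks, FastAPI, HTTPException

app = FastAPI()
ALLOWED_WEBHOOK_HOSTS = {"hooks.example.com"}  # illustrative allowlist

def deliver_webhook(url: str, payload: dict) -> None:
    # Runs after the response is sent, so failures are easy to miss.
    httpx.post(url, json=payload, timeout=5.0)

@app.post("/webhooks")
def register_webhook(url: str, background: BackgroundTasks):
    host = urlparse(url).hostname or ""
    # Risky: passing `url` straight through lets callers point the server
    # at internal addresses (169.254.169.254, localhost, and so on).
    if host not in ALLOWED_WEBHOOK_HOSTS:
        raise HTTPException(status_code=400, detail="webhook host not allowed")
    background.add_task(deliver_webhook, url, {"event": "registered"})
    return {"status": "scheduled"}
```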

Mar 14, 2026 • 14 min
Read
Guide • Python Static Analysis • Django

Django Security Scanning: What Static Analysis Actually Catches in 2026

Django's ORM prevents SQL injection — until your code uses raw(), .extra(), or cursor.execute(). Here are 7 real vulnerability patterns in Django applications, which tools detect each one, and how to test them yourself.

What you'll get

The Django guide is the clearest framework-specific entry point for teams comparing Python security scanners.
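
The escape hatches named in the teaser are easy to show in a few lines. The model and queries below are hypothetical; the point is the difference between interpolating input into SQL and passing it as parameters.

```python
# Illustrative Django code: the ORM escape hatches where SQL injection
# reappears, and the parameterized versions scanners should accept.
from django.db import connection
from myapp.models import Invoice  # hypothetical model

def find_invoices_unsafe(customer_name: str):
    # Flagged: user input interpolated into raw SQL.
    return Invoice.objects.raw(
        f"SELECT * FROM myapp_invoice WHERE customer = '{customer_name}'"
    )

def find_invoices_safe(customer_name: str):
    # Parameterized raw() keeps the input out of the SQL string.
    return Invoice.objects.raw(
        "SELECT * FROM myapp_invoice WHERE customer = %s", [customer_name]
    )

def count_invoices_safe(customer_name: str) -> int:
    # cursor.execute() is also fine when parameters are passed separately.
    with connection.cursor() as cur:
        cur.execute(
            "SELECT COUNT(*) FROM myapp_invoice WHERE customer = %s",
            [customer_name],
        )
        return cur.fetchone()[0]
```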

Mar 13, 2026 • 12 min
Read
Comparison • Python Static Analysis

Best Python Static Analysis Tools Compared: Bandit vs Vulture vs Skylos (2026)

A side-by-side comparison of Python static analysis tools for security, dead code, framework awareness, speed, and CI/CD integration.

What you'll get

This is the fastest way to understand where Bandit, Vulture, and Skylos differ before you install anything.

Mar 6, 2026 • 6 min
Read
Benchmark • Python Static Analysis

We Scanned 9 Popular Python Libraries for Security and Dead Code. Here's What We Found.

We ran static analysis on FastAPI, Flask, Pydantic, Rich, Requests, httpx, Click, Starlette, and tqdm. The results: 1,800 security findings, 4,195 quality issues, and 730 pieces of dead code across 9 widely used Python packages.

What you'll get

This is the broadest benchmark in the library set and the best top-level proof page for signal and tradeoffs.

Mar 5, 2026 • 6 min
Read
Benchmark • Dead Code Detection • Flask

Finding Dead Code in Flask (71k Stars): Skylos vs Vulture Benchmark

We ran Skylos and Vulture on the Flask repository. Skylos found all 7 dead items with 12 false positives. Vulture found 6 but produced 260 false positives. Here's the full breakdown with real output.

What you'll get

This benchmark shows what framework awareness changes on a real Flask codebase instead of a toy example.

Feb 27, 2026 • 6 min
Read
Guide • Dead Code Detection

Dead Code in Python Isn't Just Tech Debt — It's a Security Liability

Every unused function in your Python codebase is attack surface you don't need. Here's how dead code creates real security risks, why it gets worse with AI-generated code, and how to detect and remove it systematically.

What you'll get

Start here if you want the strategic reason dead code belongs in an AppSec workflow, not just a cleanup backlog.

Feb 15, 2026 • 8 min
Read
Research • AI Code Security

How AI-Generated PRs Are Overwhelming Code Review (and How to Fix It)

AI generates code faster than teams can review it. Here's why the AI PR flood is breaking code review, and how to automate security and quality gates without turning senior engineers into lint bots.

What you'll get

Good starting point if your team’s problem is review overload, not just individual vulnerabilities.

Feb 1, 2026 • 5 min
Read
Research • AI Code Security

AI-Generated Python Code Is Shipping Vulnerabilities (2026 Data)

LLMs write code fast but introduce security flaws. Here's why AI-generated Python code fails security checks, the most common vulnerability patterns from Copilot, Claude, and Cursor, and how to detect them with static analysis.

What you'll get

Use this for the broad executive argument that AI-generated Python code needs a security verification layer.

Jan 23, 2026 • 10 min
Read
Research • Python Static Analysis

Why Python SAST Tools Drown Teams in False Positives (and What Actually Works)

Static Application Security Testing is supposed to catch vulnerabilities before they ship. In practice, noisy SAST results often get ignored. Here's why, and how taint analysis and framework awareness fix it.

What you'll get

Best read for teams trying to understand why SAST noise happens before they compare tools.
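
A tiny illustration of the argument, under assumed Flask routes: a pattern-only rule flags every `subprocess` call, while a taint-aware check cares only when untrusted input actually reaches the sink.

```python
# Illustration of why taint awareness cuts noise: one subprocess call is
# constant and harmless, the other is fed by request input.
import subprocess
from flask import Flask, request

app = Flask(__name__)

def rotate_logs():
    # Constant arguments: a pattern match on "subprocess" flags this,
    # but no untrusted data flows in, so the finding is noise.
    subprocess.run(["logrotate", "/etc/logrotate.conf"], check=True)

@app.route("/archive")
def archive():
    # request.args is a taint source; passing it into a shell command
    # is the finding actually worth surfacing.
    name = request.args.get("name", "")
    subprocess.run(f"tar czf /backups/{name}.tar.gz /data/{name}", shell=True)
    return "ok"
```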

Jan 21, 2026 • 5 min
Read