Architecture Deep Dive

How We Virtualize Python Code

PyVMProtect does not package, bundle, or merely obfuscate your code. We compile your logic into a custom, non-standard instruction set executed by a proprietary runtime environment.

Windows x64 · Python 3.11 · 3.12 · 3.13
Step-by-Step

The Compilation Pipeline

From plaintext source to native binary in three deterministic stages.

1. Parse
2. Virtualize
3. Compile
source_logic.py · Python

import hashlib
import requests

def validate_license(key: str, machine_id: str) -> bool:
    SECRET = "f9a2b1-internal-2026"
    endpoint = "https://api.yourtool.com/v2/verify"
    digest = hashlib.sha256(
        f"{key}:{machine_id}:{SECRET}".encode()
    ).hexdigest()
    r = requests.post(endpoint, json={"hash": digest})
    return r.status_code == 200
CPython bytecode (standard)
124 LOAD_FAST     'key'
116 STORE_FAST    'digest'
142 CALL_FUNCTION 1
PyVMProtect opcodes (this build)
0x4F PVMP_LOAD r0
0x12 PVMP_STORE r1
0x8B PVMP_CALL dispatch+0x44
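The three opcodes above are only a glimpse of the custom instruction set; the real semantics are proprietary. As a rough sketch of what "a non-standard instruction set executed by a custom runtime" means in practice, here is a minimal stack VM using the same three mnemonics. The semantics, operand encoding, and the `0x44` handler below are invented for illustration and are not PyVMProtect's actual design:

```python
# Invented opcode numbers matching the sample listing above.
PVMP_LOAD, PVMP_STORE, PVMP_CALL = 0x4F, 0x12, 0x8B

def run(program, dispatch):
    """Execute a list of (opcode, operand) pairs on a tiny stack VM."""
    stack, regs = [], {}
    for op, arg in program:
        if op == PVMP_LOAD:        # push a constant onto the stack
            stack.append(arg)
        elif op == PVMP_STORE:     # pop the top of stack into register `arg`
            regs[arg] = stack.pop()
        elif op == PVMP_CALL:      # pop two values, call handler `arg`
            b, a = stack.pop(), stack.pop()
            stack.append(dispatch[arg](a, b))
    return regs

dispatch = {0x44: lambda a, b: a + b}   # echoes "dispatch+0x44" above
program = [
    (PVMP_LOAD, 40),
    (PVMP_LOAD, 2),
    (PVMP_CALL, 0x44),
    (PVMP_STORE, "r0"),
]
```

A disassembler that does not know this opcode numbering, or the handler table behind the dispatch offsets, recovers nothing useful from the raw bytes.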

Strings encrypted at build time

Every string constant is replaced with encrypted ciphertext using a per-build key before anything else happens. Your API keys and endpoints are gone from this point forward. They only exist in memory for the exact moment they're needed.
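The actual cipher and key-handling scheme are proprietary; the sketch below only illustrates the general idea of build-time string replacement, using a repeating-key XOR as a deliberately simple stand-in (a real build would use an authenticated cipher):

```python
import os

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: encryption and decryption are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Build time: a fresh per-build key; only the ciphertext ships in the binary.
BUILD_KEY = os.urandom(16)
cipher = xor_stream(b"https://api.yourtool.com/v2/verify", BUILD_KEY)

# Runtime: decrypt at the exact moment of use, then let it go out of scope.
endpoint = xor_stream(cipher, BUILD_KEY).decode()
```

The plaintext endpoint never appears in the shipped artifact; a `strings` scan of the binary sees only ciphertext that differs on every build.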

Binary Diversity

Every build produces a different binary.

Each build generates a fresh opcode mapping: the same Python instruction is assigned a different number every time.

A disassembler written for Build A produces garbage on Build B. Reversing a single build isn't just hard; it's pointless.

Instruction      Build A   Build B
LOAD_FAST        0x4F      0xA2
STORE_FAST       0x12      0x7E
CALL_FUNCTION    0x8B      0x31
JUMP_ABSOLUTE    0x3C      0xC4
LOAD_CONST       0xD1      0x58
RETURN_VALUE     0x07      0xF9
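How the mapping generator works internally is not public; a minimal sketch of the concept is a seeded random permutation over the available byte values, so that every build seed yields a distinct but internally consistent numbering (the seed scheme and instruction list here are assumptions for illustration):

```python
import random

BASE_OPS = ["LOAD_FAST", "STORE_FAST", "CALL_FUNCTION",
            "JUMP_ABSOLUTE", "LOAD_CONST", "RETURN_VALUE"]

def build_mapping(build_seed: int) -> dict:
    # Derive a fresh, collision-free opcode numbering from the build seed.
    rng = random.Random(build_seed)
    numbers = rng.sample(range(256), len(BASE_OPS))
    return dict(zip(BASE_OPS, numbers))

build_a = build_mapping(1)   # one numbering
build_b = build_mapping(2)   # a completely different numbering
```

The same seed always reproduces the same mapping (so a build is self-consistent), while any two seeds almost certainly disagree on every opcode.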
Obfuscation

Control Flow Flattening

We destroy linear execution. Reverse engineers see a labyrinth instead of an algorithm.

Original Source (Reversible) · Vulnerable

def verify_license(key):
    # Linear execution flow
    if len(key) != 16:
        return False
    secret = "MASTER_KEY_2026"
    if check_hash(key, secret):
        grant_access()
        return True
    else:
        return False
Virtualized Flow (Decompiler Output) · Secure

void pyvmp_dispatcher() {
    int state = 0x4F2A;
    while (true) {
        // Switch-based finite state machine
        switch (state ^ vm_context->key) {
            case 0x11A2:
                vm_push(reg_A);
                state = 0x9B1C;
                break;
            case 0x9B1C:
                if (vm_cmp()) state = 0x22F1;
                else state = 0x88A3;
                break;
            case 0x22F1:
                /* Encrypted block */
                break;
            default:
                trigger_anti_debug();
                break;
        }
    }
}
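The same flattening idea can be shown in pure Python: the linear `verify_license` becomes one loop driven by an opaque state variable, with readable branches replaced by state-constant transitions. This is an illustrative sketch, not PyVMProtect's output; `check_hash` is passed in as a parameter and the `grant_access()` side effect is omitted for brevity:

```python
def verify_license_flat(key, check_hash, secret="MASTER_KEY_2026"):
    # Flattened control flow: one dispatcher loop, opaque state constants.
    state = 0x4F2A
    result = None
    while True:
        if state == 0x4F2A:                     # entry: length check
            state = 0x11A2 if len(key) == 16 else 0x88A3
        elif state == 0x11A2:                   # hash check
            state = 0x22F1 if check_hash(key, secret) else 0x88A3
        elif state == 0x22F1:                   # success path
            result = True
            state = 0x9B1C
        elif state == 0x88A3:                   # failure path
            result = False
            state = 0x9B1C
        elif state == 0x9B1C:                   # exit
            return result
```

Even in this toy form, the branch structure of the algorithm no longer appears in the control-flow graph; every block is a sibling under one loop, and only the constants encode the original logic.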
Memory Protection

Active Runtime Defense

Protection that monitors the process memory while the code executes.

Anti-Debug

Checks for attached debuggers continuously

Memory Integrity

Hashes .text section, detects any patch

HWID Lock

Validates CPU + disk serial at startup

Ephemeral Strings

Secrets exist in memory for <1ms

Runtime Integrity

CRC32 checks on code pages every few ms

Injection Watchdog

Detects foreign DLLs and remote threads
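The runtime integrity check is native code, but the core idea is simple enough to sketch in Python: snapshot a CRC32 per code page at startup, then periodically re-hash and compare. Everything below (page contents, helper names) is invented for illustration:

```python
import zlib

def snapshot_crc(pages: list[bytes]) -> list[int]:
    # Record a CRC32 baseline per code page at startup.
    return [zlib.crc32(p) for p in pages]

def integrity_ok(pages: list[bytes], baseline: list[int]) -> bool:
    # Re-hash on a timer; any patched byte changes its page's CRC.
    return [zlib.crc32(p) for p in pages] == baseline

# Stand-in "code pages" (NOP sled and INT3 padding, purely illustrative).
pages = [b"\x90" * 4096, b"\xcc" * 4096]
baseline = snapshot_crc(pages)
```

A single patched byte in any page fails the comparison, which is what turns a debugger's software breakpoint (an `0xCC` write into the code) into a detectable event.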

Performance

What it costs to be protected

Measured on Python 3.13 / Windows x64. Each fixture is run 5 times; figures below are medians. Overhead = compiled time vs. plain CPython work time, both measured as subprocess wallclock minus the Python startup baseline.
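The methodology above can be sketched as a short harness: median of 5 subprocess runs, minus a do-nothing interpreter baseline. The fixture command below is a placeholder, not one of the actual benchmark fixtures:

```python
import statistics
import subprocess
import sys
import time

def wallclock_ms(args: list[str], runs: int = 5) -> float:
    # Median subprocess wallclock over several runs, in milliseconds.
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(args, check=True)
        samples.append((time.perf_counter() - t0) * 1000)
    return statistics.median(samples)

# Startup baseline: an interpreter that does nothing.
baseline = wallclock_ms([sys.executable, "-c", "pass"])

# Work time = fixture wallclock minus the startup baseline.
work = wallclock_ms([sys.executable, "-c", "sum(range(10**6))"]) - baseline
```

Subtracting the baseline from both the protected and the plain run is what isolates the VM's runtime overhead from ordinary interpreter startup cost.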

Binary size (small module)
~268 KB
Baseline output for a compact module (~80 LOC). Most of the size is the fused VM interpreter, integrity machinery, and injection watchdog. Grows with module complexity.
Cold-start overhead
10–25 ms
One-time cost per process: load module, validate integrity, run initial anti-debug probes. Amortized over the process lifetime.
Runtime overhead (mixed business logic)
1.1–1.8×
Best case on mixed logic + JSON workloads. String-heavy or WinAPI-heavy code typically runs at 2–4×. See benchmark table below.
Compute loops (excluded from VM)
~0% overhead
Tight math loops are excluded from virtualization via configuration. They run at native CPython speed. Only the logic worth protecting enters the VM.
Fixture                                                    .pyd size   CPython   Compiled   Overhead   Cold start
Mixed (business logic + JSON + ctypes)                     269 KB      563 ms    1,182 ms   +110%      18 ms
Windows API (ctypes round-trips)                           268 KB      442 ms    1,215 ms   +175%      14 ms
Strings (tokenize, dict, sort, hash)                       268 KB      421 ms    1,846 ms   +339%      11 ms
Compute (mandelbrot, primes, matmul; hot loops excluded)   268 KB      406 ms    404 ms     ~0%        5 ms

Overhead measured with AST mutation and bytecode mutation active, anti-debug probes disabled to avoid timing interference. C extensions (NumPy, Pandas, SciPy) are never virtualized and run at native speed. The "1.1–1.8×" figure applies to the Mixed fixture only; exclude compute-heavy loops via @vm_skip or the Cython hot-path to keep those at native speed.
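`@vm_skip` is named above, but its import path isn't documented here; the sketch below assumes a `pyvmprotect` module and falls back to a no-op decorator so the snippet runs anywhere. The hot-loop example is illustrative:

```python
try:
    from pyvmprotect import vm_skip   # import path is an assumption
except ImportError:
    def vm_skip(fn):                  # no-op fallback for unprotected runs
        return fn

@vm_skip
def mandelbrot_iterations(c: complex, limit: int = 100) -> int:
    # Tight numeric loop: excluded from the VM, runs at native CPython speed.
    z = 0j
    for n in range(limit):
        z = z * z + c
        if abs(z) > 2:
            return n
    return limit
```

The decorator marks the function so the compiler leaves its body out of the virtualized instruction stream; only callers that cross into protected logic pay the VM toll.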

What's not supported yet

Feature                                 Status
async / await                           Not supported
Generators (yield)                      Not supported
Context managers (with)                 Not supported
macOS / Linux targets                   Not supported — on the roadmap
Free-threaded Python 3.13t (no-GIL)     Not supported — deferred
Code-signed .pyd output                 Not supported — planned before GA
Hardware HWID lock                      Coming Q3 2026

See it in action.

Upload a .py file and get back a protected .pyd in under 90 seconds. Free during beta. No credit card.

Join for Free Back to Overview