TIL SCHÜNEMANN

sacrificing accessibility to avoid getting web-scraped

LLMs have taken the world by storm and need ever-increasing amounts of training data to improve. Copyright laws get broken, content gets aggressively scraped, and even if you deleted your original work, it might still show up because it was cached or archived at some point.

Now, if you subscribe to the idea that your content shouldn't be used for training, you don't have much say. I wondered how I personally would mitigate this on a technical level.

et tu, caesar?

In my linear algebra class we discussed the Caesar cipher[1] as a simple encryption algorithm: every letter gets shifted by n positions in the alphabet. If you know (or guess) the shift, you can recover the original text, so brute force or character-frequency heuristics break it easily.
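As a refresher, here is a minimal sketch in Python (the shift value is arbitrary):

import string

def caesar(text: str, shift: int) -> str:
    # Shift each letter by `shift` positions, wrapping around the alphabet
    lower, upper = string.ascii_lowercase, string.ascii_uppercase
    table = str.maketrans(
        lower + upper,
        lower[shift:] + lower[:shift] + upper[shift:] + upper[:shift],
    )
    return text.translate(table)

assert caesar("hello", 3) == "khoor"
assert caesar(caesar("hello", 3), -3) == "hello"  # shifting back decrypts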

But we can apply this substitution more generally to a font! A font contains a cmap (character map), which maps codepoints to glyphs. A codepoint identifies the character or symbol, and the glyph is its visual shape. We scramble the font's codepoint-to-glyph mapping and transform the text with the inverse of that scramble, so it stays intact for our readers. The page displays correctly, but the inspected (or scraped) HTML stays scrambled. Theoretically, you could even apply a different scramble to each request.
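Conceptually, with a plain dict standing in for the font's cmap (no font tooling yet), the round trip looks like this:

import random
import string

random.seed(42)

# Stand-in for a cmap: each letter maps to a glyph name
glyphs = {c: f"glyph_{c}" for c in string.ascii_lowercase}

# Scramble the mapping: the letter "a" may now render some other glyph
shuffled = list(glyphs.values())
random.shuffle(shuffled)
scrambled = dict(zip(glyphs, shuffled))

# Inverse: which letter must the HTML contain so that the
# scrambled font still renders the original glyph?
inverse = {}
for letter, glyph in glyphs.items():
    for new_letter, new_glyph in scrambled.items():
        if new_glyph == glyph:
            inverse[letter] = new_letter
            break

plaintext = "hello"
served = "".join(inverse[c] for c in plaintext)       # gibberish in the HTML
rendered = "".join(scrambled[c][-1] for c in served)  # what the font draws
assert rendered == plaintext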

This works as long as scrapers don't run OCR on rendered pages to handle edge cases like this, which I don't think would be feasible at scale.

I also tested whether ChatGPT could decode a ciphertext if I told it that a substitution cipher was used, and after some back and forth, it gave me this result: One day Alice went down a rabbit hole, and found herself in Wonderland, a strange and magical place filled with...

...which, funnily enough, didn't resemble the original text at all! This might be because the training corpus contains Alice and Bob[2] as the standard party names in encryption examples.

The code I used for testing:
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "bs4",
#     "fonttools",
# ]
# ///
import random
import string
from typing import Dict

from bs4 import BeautifulSoup
from fontTools.ttLib import TTFont


def scramble_font(seed: int = 1234) -> Dict[str, str]:
    random.seed(seed)
    font = TTFont("src/fonts/Mulish-Regular.ttf")

    # Pick a Unicode cmap (Windows BMP preferred)
    cmap_table = None
    for table in font["cmap"].tables:
        if table.isUnicode() and table.platformID == 3:
            cmap_table = table
            break
    cmap = cmap_table.cmap

    # Filter codepoints for a-z and A-Z
    codepoints = [cp for cp in cmap.keys() if chr(cp) in string.ascii_letters]
    glyphs = [cmap[cp] for cp in codepoints]
    shuffled_glyphs = glyphs[:]
    random.shuffle(shuffled_glyphs)

    # Create new mapping; merge so non-letter codepoints
    # keep their original glyphs
    scrambled_cmap = dict(zip(codepoints, shuffled_glyphs, strict=True))
    cmap_table.cmap = {**cmap, **scrambled_cmap}

    # Build the inverse: which character to put in the HTML so the
    # scrambled font renders the original glyph
    translation_mapping = {}
    for original_cp, original_glyph in zip(codepoints, glyphs, strict=True):
        for new_cp, new_glyph in scrambled_cmap.items():
            if new_glyph == original_glyph:
                translation_mapping[chr(original_cp)] = chr(new_cp)
                break

    font.save("src/fonts/Mulish-Regular-scrambled.ttf")
    return translation_mapping


def scramble_html(html: str, translation_mapping: Dict[str, str]) -> str:
    def apply_cipher(text):
        return "".join(translation_mapping.get(c, c) for c in text)

    # Parse the HTML
    soup = BeautifulSoup(html, "html.parser")

    # Apply the cipher only to text within main, skipping code and headings
    skip_tags = {"code", "h1", "h2"}
    for main in soup.find_all("main"):
        for elem in main.find_all(string=True):
            if elem.parent.name not in skip_tags:
                elem.replace_with(apply_cipher(elem))

    return str(soup)
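To wire the two functions together, a build step along these lines would work (the HTML path is a placeholder of my own; only the font paths come from the script above):

mapping = scramble_font(seed=1234)

with open("dist/index.html") as f:  # hypothetical output of the site build
    scrambled = scramble_html(f.read(), mapping)

with open("dist/index.html", "w") as f:
    f.write(scrambled)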

drawbacks

There is no free lunch, and this method comes with major drawbacks: everything that reads the raw text instead of the rendered glyphs gets the scrambled version. Screen readers announce gibberish, so accessibility is sacrificed (hence the title). Search engines index gibberish, too. Copy-paste, in-page search, reader modes, and browser translation all break, and any client that doesn't load the custom font sees nonsense.

On the plus side, you read this article using my own scrambled font. Take this, web scrapers!

footnotes

You can click on the footnote index to jump back: